Tag Archives: google cloud

Skip the setup: run code directly from Google Cloud’s documentation

Posted by Abby Carey, Developer Advocate


Long gone are the days of looking for documentation, finding a how-to guide, and questioning whether the commands and code samples actually work.

Google Cloud recently added a Cloud Shell integration to every documentation page.

This new functionality lets you test code in a preprovisioned virtual machine instance while learning about Google Cloud services. Running commands and code from the documentation cuts down on context switching between the documentation and a terminal window to run the commands in a tutorial.

This gif shows how Google Cloud’s documentation uses Cloud Shell, letting you run commands in a quickstart within your Cloud Shell environment.


If you’re new to developing on Google Cloud, this lowers the barrier to entry for trying Google Cloud services and APIs. After completing billing verification for your Google Cloud account, you can test services that have a free tier, like Pub/Sub and Cloud Vision, at no charge.

  1. Open a Google Cloud documentation page (like this Pub/Sub quickstart).
  2. Sign into your Google account.
  3. In the top navigation, click Activate Cloud Shell.
  4. Select your project or create one if you don’t already have one. You can select a project by running the gcloud config set project command or by using this drop-down menu:
    image showing how to select a project
  5. Copy, paste, and run your commands.
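For the Pub/Sub quickstart above, the commands you’d copy into Cloud Shell look roughly like this; the project, topic, and subscription names are placeholders, and the commands assume an active Google Cloud project:

```shell
# Set the active project (replace my-project with your project ID).
gcloud config set project my-project

# Create a topic and a subscription attached to it.
gcloud pubsub topics create my-topic
gcloud pubsub subscriptions create my-sub --topic my-topic

# Publish a message, then pull it back to confirm delivery.
gcloud pubsub topics publish my-topic --message "Hello, Pub/Sub"
gcloud pubsub subscriptions pull my-sub --auto-ack
```

Because Cloud Shell comes with the Cloud SDK preinstalled and already authenticated, these commands run as-is, with no local setup.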

If you want to test something a bit more adventurous, try deploying a containerized web application or getting started with BigQuery.

A bit about Cloud Shell

If you’ve been developing on Google Cloud, chances are you’ve already interacted with Cloud Shell in the Cloud Console. Cloud Shell is a ready-to-go, online development and operations environment. It comes preinstalled with common command-line tools, programming languages, and the Cloud SDK.

Just like in the Cloud Console, your Cloud Shell terminal stays open as you navigate the site. As you work through tutorials within Google Cloud’s documentation, the Cloud Shell terminal stays on your screen. This helps with progressing from two connected tutorials, like the Pub/Sub quickstart and setting up a Pub/Sub Proxy.

Having a preprovisioned environment set up by Google eliminates the age-old question of “Is my machine the problem?” when you eventually run these commands locally.

What about code samples?

While Cloud Shell is useful for managing your Google Cloud resources, it also lets you test code samples. If you’re using Cloud Client Libraries, you can customize and run sample code in Cloud Shell’s built-in code editor: the Cloud Shell Editor.

Cloud Shell Editor is Cloud Shell’s built-in, browser-based code editor, powered by the Eclipse Theia IDE platform. To open it, click the Open Editor button from your Cloud Shell terminal:

Image showing how to open Cloud Shell Editor

Cloud Shell Editor has rich language support and debuggers for Go, Java, .NET, Python, Node.js and more, integrated source control, local emulators for Kubernetes, and other features. With the Cloud Shell Editor open, you can walk through a client library tutorial like Cloud Vision’s Detect labels guide, running terminal commands and code from one browser tab.

Open up a Google Cloud quickstart and give it a try! This could be a game-changer for your learning experience.


Driving success for Australian retail with digitalisation

In spite of significant disruption to the retail sector, we’ve seen firsthand how companies have accelerated their digital transformation journeys to differentiate themselves in a highly competitive market.

In fact, the Google Cloud Retail Digital Pulse looked at digital maturity across Australian retail and found the segment to be steadily advancing towards digital resilience. Business sentiment, however, is still mixed, and the market shows a split between those who have embraced digital and are thriving, and those who have not and are struggling.

Among Australian respondents, almost a quarter (23.3 percent) said investment in digitalisation was driven by the desire to reduce costs and improve profitability, with 17.5 percent wanting to improve customer experience to drive revenue and/or increase KPI scores. Building customer data platforms, enhancing capabilities around marketing optimisation and the ability to drive personalisation are some priority use cases for Australian retailers. While challenges remain around budgets, inability to harness customer/operational data and lack of digital transformation roadmaps, more than half (58.3 percent) of respondents are looking to Cloud Services providers for help with digitalisation. 

We’re proud of the role we play in supporting the retail sector in Australia (and beyond) to enhance digital offerings and embrace the future of retail, empowering businesses to harness their data to achieve tangible business results and enhance customer relationships. 

Here’s a look at how some of our Australian customers are leveraging Google Cloud to transform their offering for customers: 

Country Road Group & David Jones: Unearthing data for digital growth

When COVID-19 forced the closure of storefronts, the Country Road Group and David Jones marketing teams sought to demonstrate the value of digital for both engaging customers and driving sales. After a 50 percent increase in requests for data analytics, pulling reports from multiple dashboards and structuring them to be meaningful and relevant became extremely time-intensive.

Looking to Cloud Services providers for help with digitalisation, Country Road Group & David Jones engaged MightyHive to transform marketing for the business. The provider leveraged BigQuery as the business’s data source, connecting with Looker to better explore, share, and visualise the company's supplier and campaign data. 

Consolidating multiple disparate data sources into just three dashboards has minimised the volume of manual reporting, saving the team a full day per week. Looker has also delivered more comprehensive insights to inform the future growth of the business. 

Hanes: Data drives enhanced consumer experience

Hanes Australasia is home to some of Australia’s best-known apparel and lifestyle brands, including Bonds, Bras N Things, and Sheridan. It’s among the 40.8 percent of retailers who adopted technology for marketing optimisation. 

Hanes recognised that data was key to understanding consumer behaviour, preferences, and to driving revenue from its ecommerce investments. The company implemented Google Cloud services—including scalable and serverless BigQuery data warehousing, the Firebase mobile development platform, Cloud Functions to build and connect cloud services, and Pub/Sub event ingestion and delivery—to deliver on these opportunities. 

The business can now collect detailed in-store transaction data along with on-site transaction and customer event data, that is streamed in near-real time into Google Cloud. This data provides a wealth of information that can be transformed into actionable insights for marketing optimisation, and to help support wholesale partners.

JB Hi-Fi: Personalisation drives transaction value

With 39.8 percent of Australian respondents leveraging technology for product discovery and search, JB Hi-Fi is one example of how personalisation can be a powerful driver of success.

Previously, JB Hi-Fi’s buying team would manually recommend products to visitors, a time-consuming process that meant recommendations of three or four associated products represented only a fraction of the more than 50,000 products available on its website.

After deploying Recommendations AI, JB Hi-Fi found the average transaction value (ATV) for products recommended increased, when compared to manual processes. Furthermore, monthly average revenue from recommended products increased when compared to manually curated products, and the conversion rate for products offered on the JB Hi-Fi home page also improved. 

The adoption of Recommendations AI has also enabled JB Hi-Fi to give its customers a more personalised online experience, matching the personalised, expert experience delivered to customers in-store.

Continuing momentum in a 'post-COVID' world

Beyond the noise and challenges of COVID-19, retailers are still navigating what the ‘new normal’ looks like for them and how to manage the blurring of online and in-store interactions with customers. As ecommerce continues as the driving engine of growth, digital transformation remains central to retailers’ long-term success. Commitment to a digital strategy, and investment to accelerate the journey to digital resilience, will be key. At the same time, retailers have a great opportunity to further build on their digital foundations to differentiate themselves in the market.

Google Cloud is committed to leading the digital transformation of Australia’s retail sector. We’re continuing to expand our capabilities for merchants, offering tools and solutions designed specifically for the retail industry. Our aim is to empower partners with a scalable platform of innovation, digitisation and efficiency to ultimately give our retail customers the tools they need to thrive.


Our new animated series brings data centers to life

If you rely on the internet to search for the answer to a burning question, access work documents or stream your favorite TV show, you may have wondered how you can get the content you want so easily and quickly. You can thank a data center for that. 

Which may make you wonder: What exactly is a data center, and what is its purpose?

Google’s Discovering Data Centers series of short animated videos has the answers. As host of this series, I invite you to join us and learn about these expansive, supercomputer-filled warehouses that we all rely on, yet may know little about.

A loop of an animated video showing a data center campus surrounded by trees, blue sky, power lines, and wind turbines. Three small bubbles appear over the data center with images in each: a computer server to represent storage, wires to represent the power supply, and a fan to represent the cooling infrastructure.

Each video in this series helps peel back the layers on what makes data centers so fascinating: design, technology, operations and sustainability. There are times you click Start on Google Maps, edit a Google Doc or watch a YouTube video on how to fix something. By watching this series, you’ll better understand how Google’s data centers get you and billions of other users like you to that content quickly, securely and sustainably. 

Discovering Data Centers will help you understand: 

  • How data centers play a critical role in organizing your and the world’s information.
  • Data center design and how data centers are built to be sustainable. 
  • Our core principles, which show you can depend on us to be available 24/7. 

As the second season of our series gets underway, upcoming topics include: 

  • How hundreds of machines at a data center store data.
  • How our network allows data to travel through and between data centers within seconds. 
  • How encryption of data works to help secure every packet of data stored in our data centers.

To watch this series and see how data centers benefit you, visit our website. Check back monthly for new episodes where I’ll continue to reveal all the layers that make a data center hum. 

Click through the images below to read episode descriptions and take a peek at the engineering marvels that are today’s data centers.


Our Grace Hopper subsea cable has landed in the UK

Last year, we announced a new subsea cable — named Grace Hopper after the computer science pioneer — that will run between the United States, the United Kingdom and Spain. The cable will improve the resilience of the Google network that underpins our consumer and enterprise products. The 16-fibre-pair, Google-funded cable will connect New York (United States) to Bude (United Kingdom) and Bilbao (Spain).

Today, the Grace Hopper cable has landed in Bude, Cornwall. 

Many people around the world use Google products every day to stay in touch with friends and family, travel from point A to point B, find new customers or export products to new markets. As our first Google-funded cable to the U.K., Grace Hopper is part of our ongoing investment in the country, supporting users who rely on our products and customers using our tools to grow their business.

We know that technology is only becoming more important for the U.K. economy. The amount technology contributes to the U.K. economy has grown on average by 7% year on year since 2016. And U.K.-based venture capital investment is ranked third in the world, reaching a record high of $15 billion in 2020, despite the challenging conditions from the COVID-19 pandemic. What’s more, 10% of all current U.K. job vacancies are in tech roles, and the number of people employed in the tech sector has grown 40% in two years. With this in mind, improving the diversity and resilience of Google’s network is crucial to our ability to continue supporting one of the U.K.’s most vital sectors, as well as its long-term economic success.  

Grace Hopper represents a new generation of trans-Atlantic cable coming to the U.K. shores and is one of the first new cables to connect the U.S. and the U.K. since 2003. Moreover, with the ongoing pandemic fostering a new digital normal, Google-funded subsea cables allow us to plan and prepare for the future capacity needs of our customers, no matter where they are in the world. Grace Hopper will connect the U.K. to help meet the rapidly growing demand for high-bandwidth connectivity and services.


Grace Hopper buoy landing on the beach in Bude, Cornwall

Alongside Curie, Dunant, Equiano and Firmina, Grace Hopper is the latest cable to connect continents along the ocean floor with an additional layer of security beyond what’s available over the public internet. We’ve worked with established channels and experts for years to ensure that Grace Hopper will be able to achieve better reliability in global communications, and free flows of data.

Following a successful Bilbao landing earlier in September, Grace Hopper also marks our first-ever Google-funded route to Spain, taking a path distinct from our existing cables, such as Dunant, which connects the U.S. and France, and Havfrue, which links the U.S. and Denmark. The cable will use novel “fibre switching,” which allows us to better move traffic around outages for increased reliability. Once complete, Grace Hopper will carry traffic quickly and securely between the continents, increasing capacity and powering Google services like Meet, Gmail and Google Cloud.

Grace Hopper will use this new switching architecture to provide optimum levels of network flexibility and resilience to adjust to unforeseen failures or traffic patterns. The multi-directional switching architecture is a significant breakthrough for uncertain times, and will more tightly integrate the upcoming Google Cloud region in Madrid into our global infrastructure. 

With the landing of the Grace Hopper cable in Cornwall, we look forward to supporting the next great U.K. tech innovations.


An easier way to move your App Engine apps to Cloud Run

Posted by Wesley Chun (@wescpy), Developer Advocate, Google Cloud


An easier yet still optional migration

In the previous episode of the Serverless Migration Station video series, developers learned how to containerize their App Engine code for Cloud Run using Docker. While Docker has gained popularity over the past decade, not everyone has containers integrated into their daily development workflow; some prefer "containerless" solutions while knowing that containers can be beneficial. Today's video is for you, showing how you can still get your apps onto Cloud Run even if you don't have much experience with Docker, containers, or Dockerfiles.

App Engine isn't going away; Google has expressed long-term support for legacy runtimes on the platform, so those who prefer source-based deployments can stay where they are. This is an optional migration: moving to Cloud Run is for those who want to explicitly adopt containerization.

Migrating to Cloud Run with Cloud Buildpacks video

So how can apps be containerized without Docker? The answer is buildpacks, an open-source technology that makes it fast and easy for you to create secure, production-ready container images from source code, without a Dockerfile. Google Cloud Buildpacks adheres to the buildpacks open specification and allows users to create images that run on all GCP container platforms: Cloud Run (fully-managed), Anthos, and Google Kubernetes Engine (GKE). If you want to containerize your apps while staying focused on building your solutions and not how to create or maintain Dockerfiles, Cloud Buildpacks is for you.

In the last video, we showed developers how to containerize a Python 2 Cloud NDB app as well as a Python 3 Cloud Datastore app. We targeted those specific implementations because Python 2 users are more likely to be using App Engine's ndb or Cloud NDB to connect with their app's Datastore while Python 3 developers are most likely using Cloud Datastore. Cloud Buildpacks do not support Python 2, so today we're targeting a slightly different audience: Python 2 developers who have migrated from App Engine ndb to Cloud NDB and who have ported their apps to modern Python 3 but now want to containerize them for Cloud Run.

Developers familiar with App Engine know that a default HTTP server is provided and started automatically; however, if special launch instructions are needed, users can add an entrypoint directive in their app.yaml files, as illustrated below. When those App Engine apps are containerized for Cloud Run, developers must bundle their own server and provide startup instructions, which is the purpose of the ENTRYPOINT directive in the Dockerfile, also shown below.

Starting your web server with App Engine (app.yaml) and Cloud Run with Docker (Dockerfile) or Buildpacks (Procfile)


In this migration, there is no Dockerfile. While Cloud Buildpacks does the heavy lifting of determining how to package your app into a container, it still needs to be told how to start your service. This is exactly what a Procfile is for, represented by the last file in the image above. As specified, your web server will be launched in the same way as in app.yaml and the Dockerfile above; these config files are deliberately juxtaposed to expose their similarities.
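As a sketch of that juxtaposition, the same launch instruction appears in all three configurations; the gunicorn server and the main:app entry point below are illustrative assumptions, not the only option:

```
# app.yaml (App Engine): optional entrypoint directive
runtime: python39
entrypoint: gunicorn -b :$PORT main:app

# Dockerfile (Cloud Run with Docker): final instruction
ENTRYPOINT ["gunicorn", "-b", ":8080", "main:app"]

# Procfile (Cloud Run with Cloud Buildpacks): a single web process line
web: gunicorn -b :$PORT main:app
```

In each case the instruction answers the same question, how to start the web server, which is why swapping the Dockerfile for a Procfile is such a small change.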

Other than this swapping of configuration files and the expected lack of a .dockerignore file, the Python 3 Cloud NDB app containerized for Cloud Run is nearly identical to the Python 3 Cloud NDB App Engine app we started with. Cloud Run's build-and-deploy command (gcloud run deploy) will use a Dockerfile if present but otherwise selects Cloud Buildpacks to build and deploy the container image. The user experience is the same, only without the time and challenges required to maintain and debug a Dockerfile.
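The deploy step itself might look like the sketch below; the service name and region are placeholders, and the command assumes you are in the app's source directory with no Dockerfile present:

```shell
# Build and deploy from source. With no Dockerfile in the current
# directory, gcloud falls back to Cloud Buildpacks to build the image.
gcloud run deploy my-service --source . --region us-central1 --allow-unauthenticated
```

The same command works for Dockerfile-based apps, which is why the user experience is identical either way.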

Get started now

If you're considering containerizing your App Engine apps without having to know much about containers or Docker, we recommend you try this migration on a sample app like ours before considering it for yours. A corresponding codelab leading you step-by-step through this exercise is provided in addition to the video which you can use for guidance.

All migration modules, their videos (when available), codelab tutorials, and source code, can be found in the migration repo. While our content initially focuses on Python users, we hope to one day also cover other legacy runtimes so stay tuned. Containerization may seem foreboding, but the goal is for Cloud Buildpacks and migration resources like this to aid you in your quest to modernize your serverless apps!

The Google Cloud Startup Summit is coming on September 9, 2021

Posted by Chris Curtis, Startup Marketing Manager at Google Cloud


We’re excited to announce our first-ever Google Cloud Startup Summit will be taking place on September 9, 2021.

We hope you will join us as we bring together our startup community, including startup founders, CTOs, VCs and Google experts, to provide behind-the-scenes insights and inspiring stories of innovation. To kick off the event, we’ll be bringing in X’s Captain of Moonshots, Astro Teller, for a keynote focused on innovation. We’ll also have exciting technical and business sessions, with Google leaders, industry experts, venture investors and startup leaders. You can see the full agenda here for more details on the sessions.

We can’t wait to see you at the Google Cloud Startup Summit at 10am PT on September 9! Register to secure your spot today.

Containerizing Google App Engine apps for Cloud Run

Posted by Wesley Chun (@wescpy), Developer Advocate, Google Cloud


An optional migration

Serverless Migration Station is a video mini-series from Serverless Expeditions focused on helping developers modernize their applications running on a serverless compute platform from Google Cloud. Previous episodes demonstrated how to migrate away from the older, legacy App Engine (standard environment) services to newer Google Cloud standalone equivalents like Cloud Datastore. Today's product crossover episode differs slightly from that by migrating away from App Engine altogether, containerizing those apps for Cloud Run.

There's little question the industry has been moving towards containerization as an application deployment mechanism over the past decade. However, Docker and the use of containers weren't available to early App Engine developers until its flexible environment arrived years later. Fast forward to today, when developers have many more options to choose from on an increasingly open Google Cloud. Google has expressed long-term support for App Engine, and users do not need to containerize their apps, so this is an optional migration. It is primarily for those who have decided to add containerization to their application deployment strategy and want to explicitly migrate to Cloud Run.

If you're thinking about app containerization, the video covers some of the key reasons why you would consider it: you're not subject to traditional serverless restrictions like development language or use of binaries (flexibility); if your code, dependencies, and container build & deploy steps haven't changed, you can recreate the same image with confidence (reproducibility); your application can be deployed elsewhere or be rolled back to a previous working image if necessary (reusable); and you have plenty more options on where to host your app (portability).

Migration and containerization

Legacy App Engine services are available through a set of proprietary, bundled APIs. As you can surmise, those services are not available on Cloud Run. So if you want to containerize your app for Cloud Run, it must be "ready to go," meaning it has migrated to either Google Cloud standalone equivalents or other third-party alternatives. For example, in a recent episode, we demonstrated how to migrate from App Engine ndb to Cloud NDB for Datastore access.

While we've recently begun to produce videos for such migrations, developers can already access code samples and codelab tutorials leading them through a variety of migrations. In today's video, we have both Python 2 and 3 sample apps that have divested from legacy services, thus ready to containerize for Cloud Run. Python 2 App Engine apps accessing Datastore are most likely to be using Cloud NDB whereas it would be Cloud Datastore for Python 3 users, so this is the starting point for this migration.

Because we're "only" switching execution platforms, there are no changes at all to the application code itself. This entire migration is completely based on changing the apps' configurations from App Engine to Cloud Run. In particular, App Engine artifacts such as app.yaml, appengine_config.py, and the lib folder are not used in Cloud Run and will be removed. A Dockerfile will be implemented to build your container. Apps with more complex configurations in their app.yaml files will likely need an equivalent service.yaml file for Cloud Run — if so, you'll find this app.yaml to service.yaml conversion tool handy. Following best practices means there'll also be a .dockerignore file.

App Engine and Cloud Functions are source-based: Google Cloud automatically provides a default HTTP server like gunicorn. Cloud Run is a bit more "DIY" because users have to provide a container image, meaning bundling their own server. In this case, we'll pick gunicorn explicitly, adding it to the top of the existing requirements.txt file(s), as you can see in the screenshot below. Also illustrated is the Dockerfile, where gunicorn is started to serve your app as the final step. The only differences for the Python 2 equivalent Dockerfile are: a) it requires the Cloud NDB package (google-cloud-ndb) instead of Cloud Datastore, and b) it starts with a Python 2 base image.


The Python 3 requirements.txt and Dockerfile
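A minimal sketch of such a Dockerfile is below; it assumes a WSGI app object named app in main.py, and the base image and entry point are illustrative rather than prescriptive:

```
# Official Python base image (a Python 2 base image for the Python 2 app).
FROM python:3.9-slim
WORKDIR /app
# Install dependencies first so this layer is cached between builds;
# gunicorn sits at the top of requirements.txt as described above.
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .
# Final step: start gunicorn. Cloud Run routes requests to port 8080 by default.
ENTRYPOINT ["gunicorn", "-b", ":8080", "main:app"]
```

Note the explicit server launch in the last line, the piece App Engine previously handled for you.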

Next steps

To walk developers through migrations, we always "START" with a working app, then make the necessary updates that culminate in a working "FINISH" app. For this migration, the Python 2 sample app STARTs with the Module 2a code and FINISHes with the Module 4a code. Similarly, the Python 3 app STARTs with the Module 3b code and FINISHes with the Module 4b code. This way, if something goes wrong during your migration, you can always roll back to START, or compare your solution with our FINISH. If you are considering this migration for your own applications, we recommend you try it on a sample app like ours first. A corresponding codelab leading you step-by-step through this exercise is provided in addition to the video, which you can use for guidance.

All migration modules, their videos (when published), codelab tutorials, START and FINISH code, etc., can be found in the migration repo. We hope to also one day cover other legacy runtimes like Java 8 so stay tuned. We'll continue with our journey from App Engine to Cloud Run ahead in Module 5 but will do so without explicit knowledge of containers, Docker, or Dockerfiles. Modernizing your development workflow to using containers and best practices like crafting a CI/CD pipeline isn't always straightforward; we hope content like this helps you progress in that direction!

Cloud Covered: What was new in July on Google Cloud

Google Cloud has stayed busy over the summer with new programs, events, tools and products making their introductions in July. Here are the most popular posts last month from the Google Cloud blog. 

Making hiring more equitable 

Last month we announced Google Cloud’s new Autism Career Program, designed to help the expanding cloud industry hire and support more talented people with autism. Working with the Stanford Neurodiversity Project, the program will break down common barriers for candidates with autism, and train Google Cloud managers and others involved in hiring processes. We’ll work effectively and empathetically with autistic candidates to ensure our interview processes are structured to include reasonable accommodations like extended interview time, providing questions in advance or conducting the interview in writing rather than verbally. Stanford will also provide coaching and ongoing support to our new Google Cloud team members with autism. 


Expanding access to Google Cloud Next

Our blog announced the opening of registration for Google Cloud Next, taking place from October 12 through 14. Our global digital conference is designed to be open and flexible, with the freedom to create your own personalized experience. Tune in for live broadcasts and keynotes about our latest launches, and learn how customers and partners use Google Cloud to meet their business challenges. Interactive, digital experiences and on-demand sessions that align with your schedule and interests will also be available, including dedicated sessions and programming for our global audiences. Next ’21 is free this year, making the experience inclusive and accessible to all. Register to get informed, be inspired and expand your expertise. 


Connecting global customers to higher performance

To help customers and the public sector in India and across Asia Pacific accelerate their digital transformation, we announced the opening of our new Google Cloud region in Delhi National Capital Region (NCR). This new data center joins 25 existing Google Cloud regions on our connected network, helping our customers better serve their users with faster and stronger performance for their applications. The new region will protect against service disruptions and will offer key Google Cloud products like Compute Engine, Google Kubernetes Engine, Cloud Bigtable and BigQuery. We also announced two new submarine cable systems linking the Middle East with southern Europe and Asia. The Blue and Raman Submarine Cable Systems will help Google users and customers around the globe communicate with friends, family, and business partners. 


Securing the cloud with new solutions

Last month we also announced a wealth of new security products and solutions that bring together the best of Google, help businesses address critical security challenges and deliver a trusted cloud:

  • Cloud IDS will help customers detect and respond to network threats. 

  • Cloud Armor Adaptive Protection uses machine learning (ML) to detect and block attacks targeting applications.

  • Chronicle will integrate with Google Cloud’s analytics platforms Looker and BigQuery to help with reporting, compliance, security-driven data science and more.

  • Autonomic Security Operations will guide businesses through the journey of protecting their assets from modern-day threats.  

  • Risk Protection Program connects our Google Cloud customers to insurers with specialized cyber security programs. 

All these Google Cloud security solutions are designed to help businesses rethink, reshape and transform their security programs.


Helping IKEA Retail (Ingka Group) recommend the next best purchase 

Finally, we shared a story about how global retail giant IKEA experimented with Google Cloud Recommendations AI to deliver a more personalized shopping experience to their customers. Recommendations AI helped IKEA customers in two ways: 

  1. Customers found products that they liked and established their preferred choice among other options more quickly, reducing the number of clicks needed in their shopping journey and increasing IKEA’s clickthrough rate by 30%.

  2. Customers found attractive and complementary products that expanded purchases from a single product to an entire home furnishing solution, giving IKEA a 2% surge in average order value. 

Along the way, Google Cloud was there to support IKEA’s testing. With their strong business results, IKEA continues to explore more places in the customer journey to use the options provided by Recommendations AI, which now powers most of IKEA’s site recommendations.

That’s a wrap for July. Stay tuned to the Google Cloud blog for all things cloud.

Cloud NDB to Cloud Datastore migration

Posted by Wesley Chun (@wescpy), Developer Advocate, Google Cloud

An optional migration

Serverless Migration Station is a mini-series from Serverless Expeditions focused on helping users on one of Google Cloud's serverless compute platforms modernize their applications. The video today demonstrates how to migrate a sample app from Cloud NDB (or App Engine ndb) to Cloud Datastore. While Cloud NDB suffices as a current solution for today's App Engine developers, this optional migration is for those who want to consolidate their app code to using a single client library to talk to Datastore.

Cloud Datastore started as Google App Engine's original database but matured into its own standalone product in 2013. At that time, native client libraries were created for the new product so that non-App Engine apps, as well as second-generation App Engine apps, could access the service. Long-time developers have been using the original App Engine service APIs to access Datastore; for Python, this is App Engine ndb. While the legacy ndb service is still available, its limitations and lack of availability in Python 3 are why, in the preceding video in this series, we recommend users switch to standalone libraries like Cloud NDB.

While Cloud NDB lets users break free from proprietary App Engine services and upgrade their applications to Python 3, it also gives non-App Engine apps access to Datastore. However, Cloud NDB's primary role is a transition tool for Python 2 App Engine developers. Non-App Engine developers and new Python 3 App Engine developers are directed to the Cloud Datastore native client library, not Cloud NDB.

As a result, those with a collection of Python 2 or Python 3 App Engine apps as well as non-App Engine apps may be using completely different libraries (ndb, Cloud NDB, Cloud Datastore) to connect to the same Datastore product. Following the best practices of code reuse, developers should consider consolidating to a single client library to access Datastore. Shared libraries provide stability and robustness with code that's constantly tested, debugged, and battle-proven. Module 2 showed users how to migrate from App Engine ndb to Cloud NDB, and today's Module 3 content focuses on migrating from Cloud NDB to Cloud Datastore. Users can also go straight from ndb directly to Cloud Datastore, skipping Cloud NDB entirely.

Migration sample and next steps

Cloud NDB follows an object model identical to App Engine ndb and is deliberately meant to be familiar to long-time Python App Engine developers, while the Cloud Datastore client library feels more like working with a JSON document store. Their querying styles are also similar. You can compare and contrast them in the "diffs" screenshot below and in the video.
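To make the contrast concrete, here is a minimal sketch of the same operation, storing and querying "visit" records, in both libraries. The `Visit` model and fields here are hypothetical stand-ins modeled on the sample app, and both snippets assume a Google Cloud project and credentials are already configured:

```python
# --- Cloud NDB: ORM-style models, familiar to App Engine ndb users ---
from google.cloud import ndb

class Visit(ndb.Model):
    visitor = ndb.StringProperty()
    timestamp = ndb.DateTimeProperty(auto_now_add=True)

ndb_client = ndb.Client()
with ndb_client.context():  # all Datastore access happens inside a context
    Visit(visitor='1.2.3.4').put()
    visits = Visit.query().order(-Visit.timestamp).fetch(10)

# --- Cloud Datastore: schemaless entities, closer to a JSON document store ---
from datetime import datetime, timezone
from google.cloud import datastore

ds_client = datastore.Client()
entity = datastore.Entity(key=ds_client.key('Visit'))  # no model class needed
entity.update({
    'visitor': '1.2.3.4',
    'timestamp': datetime.now(timezone.utc),
})
ds_client.put(entity)

query = ds_client.query(kind='Visit')
query.order = ['-timestamp']
visits = list(query.fetch(limit=10))
```

Note how the Cloud Datastore version has no model classes or property types: entities are dict-like objects, and any schema is enforced by your own code rather than the library.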

The "diffs" between the Cloud NDB and Cloud Datastore versions of the sample app

All that said, this migration is optional and only useful if you wish to consolidate to a single client library. If your Python App Engine apps are stable with ndb or Cloud NDB, and you don't have any code using Cloud Datastore, there's no real reason to move unless Cloud Datastore has a compelling feature inaccessible from your current client library. If you are considering this migration and want to try it on a sample app before attempting it on your own, see the corresponding codelab and use the video for guidance.

It begins with the Module 2 code completed in the previous codelab/video; use your solution or ours as the "START". Both Python 2 (Module 2a folder) and Python 3 (Module 2b folder) versions are available. The goal is to arrive at the "FINISH" with an identical, working app but using a completely different Datastore client library. Our Python 2 FINISH can be found in the Module 3a folder, while Python 3's FINISH is in the Module 3b folder. If something goes wrong during your migration, you can always roll back to START, or compare your solution with our FINISH. We will continue our Datastore discussion in Module 6, as Cloud Firestore represents the next generation of the Datastore service.

All of these learning modules, corresponding videos (when published), codelab tutorials, START and FINISH code, etc., can be found in the migration repo. We hope to also one day cover other legacy runtimes like Java 8, so stay tuned. Up next in Module 4, we'll take a different turn and showcase a product crossover, showing App Engine developers how to containerize their apps and migrate them to Cloud Run, our scalable container-hosting service in the cloud. If you can't wait for Module 4 or 6, try out their respective codelabs or access the code samples in the table at the repo above. Migrations aren't always easy, and we hope content like this helps you modernize your apps.

Migrating from App Engine ndb to Cloud NDB

Posted by Wesley Chun (@wescpy), Developer Advocate, Google Cloud

Migrating to standalone services

Today we're introducing the first video showing long-time App Engine developers how to migrate Datastore access from the App Engine ndb client library to Cloud NDB. While the legacy App Engine ndb service is still available for Datastore access, new features and continuing innovation are going into Cloud Datastore, so we recommend Python 2 users switch to standalone product client libraries like Cloud NDB.

This video and its corresponding codelab show developers how to migrate the sample app introduced in a previous video, giving them hands-on experience performing the migration on a simple app before tackling their own applications. In the immediately preceding "migration module" video, we transitioned that app from App Engine's original webapp2 framework to Flask, a popular framework in the Python community. Today's Module 2 content picks up where Module 1 leaves off, migrating Datastore access from App Engine ndb to Cloud NDB.

Migrating to Cloud NDB opens the doors to other modernizations, such as moving to other standalone services that succeed the original App Engine legacy services, (finally) porting to Python 3, breaking up large apps into microservices for Cloud Functions, or containerizing App Engine apps for Cloud Run.

Moving to Cloud NDB

App Engine's Datastore matured into its own standalone product, Cloud Datastore, in 2013. Cloud NDB is the replacement client library designed for App Engine ndb users to preserve much of their existing code and user experience. Cloud NDB is available in both Python 2 and 3, meaning it can help expedite a Python 3 upgrade to the second-generation App Engine platform. Furthermore, Cloud NDB gives non-App Engine apps access to Cloud Datastore.

As you can see from the screenshot below, one key difference between the two libraries is that Cloud NDB provides a context manager, meaning you use the Python with statement, much as you would for opening files, but for Datastore access. Aside from moving code inside with blocks, no other changes are required of the original App Engine ndb app code that accesses Datastore. Of course, your mileage may vary depending on the complexity of your code, but the goal of the team is to provide as seamless a transition as possible as well as to preserve "ndb"-style access.
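A minimal before-and-after sketch of that change, using a hypothetical `Visit` model patterned on the sample app (the Cloud NDB version assumes a configured Google Cloud project and credentials):

```python
# Before: App Engine ndb (Python 2, first-generation App Engine only)
#
#   from google.appengine.ext import ndb
#
#   class Visit(ndb.Model):
#       visitor = ndb.StringProperty()
#       timestamp = ndb.DateTimeProperty(auto_now_add=True)
#
#   def store_visit(remote_addr):
#       Visit(visitor=remote_addr).put()   # no context needed

# After: Cloud NDB -- the model code is unchanged, but Datastore
# access now happens inside a client context.
from google.cloud import ndb

class Visit(ndb.Model):
    visitor = ndb.StringProperty()
    timestamp = ndb.DateTimeProperty(auto_now_add=True)

client = ndb.Client()

def store_visit(remote_addr):
    with client.context():                 # the key new requirement
        Visit(visitor=remote_addr).put()
```

The import changes and the with block are essentially the whole migration for simple apps; model definitions and query syntax carry over as-is.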

The "diffs" between the App Engine ndb and Cloud NDB versions of the sample app

Next steps

To try this migration yourself, hit up the corresponding codelab and use the video for guidance. This Module 2 migration sample "STARTs" with the Module 1 code completed in the previous codelab (and video). Use your solution or grab ours from the Module 1 repo folder. The goal is to arrive at the end with an identical, working app that operates just like the Module 1 app but uses a completely different Datastore client library. You can find this "FINISH" code sample in the Module 2a folder. If something goes wrong during your migration, you can always roll back to START, or compare your solution with our FINISH. Bonus content on migrating to Python 3 App Engine can also be found in the video and codelab, resulting in a second FINISH, the Module 2b folder.

All of these learning modules, corresponding videos (when published), codelab tutorials, START and FINISH code, etc., can be found in the migration repo. We hope to also one day cover other legacy runtimes like Java 8 and others, so stay tuned! Developers should also check out the official Cloud NDB migration guide which provides more migration details, including key differences between both client libraries.

Ahead in Module 3, we will continue the Cloud NDB discussion and present our first optional migration, helping users move from Cloud NDB to the native Cloud Datastore client library. If you can't wait, try out its codelab found in the table at the repo above. Migrations aren't always easy; we hope this content helps you modernize your apps and shows we're focused on helping existing users as much as new ones.