Tag Archives: cloud

Four areas of open source contributions from Cloud Databases

Open Cloud enables you to develop software faster, innovate more easily, and scale more efficiently—while also reducing technology risk. Google has a long history of leadership in open source, and today I want to look back at our open source work around databases over the past year.

Give developers the best tools to be efficient

Developers choose to build applications with managed database services on Google Cloud to benefit from velocity, scalability, security, and performance. To help you work most efficiently and deliver your best possible work, we provide tools and frameworks that fit your preferred development environments, whether you develop in the cloud or on premises. To make local testing, building, and continuous integration easier for our cloud-native databases, we released emulators for Cloud Spanner, Firestore, and Cloud Bigtable so that you can test your code wherever you develop it - without the need to create or re-create cloud infrastructure with every test run.
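
For example, here's a minimal sketch of pointing the Firestore Python client at a locally running emulator; the port, project ID, and collection name are placeholder assumptions, and the emulator itself is started separately with the gcloud CLI:

import os

# Assumes the Firestore emulator is already listening on localhost:8080
# (started separately via the gcloud emulators command group).
os.environ["FIRESTORE_EMULATOR_HOST"] = "localhost:8080"

from google.cloud import firestore

# Any project ID works against the emulator; no real credentials are needed.
db = firestore.Client(project="demo-project")
db.collection("greetings").add({"message": "hello from the emulator"})
print([doc.to_dict() for doc in db.collection("greetings").stream()])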

Another area where we are helping developers is with instrumentation of Cloud SQL for easier debugging and performance tuning. With Cloud SQL Insights, it is easier than ever to pinpoint underperforming SQL statements. That said, without additional instrumentation it can be cumbersome to identify the source code or microservice that issued a given SQL statement - let alone tie it to a client session and its context. So we released Sqlcommenter, an open source library that automatically adds this instrumentation as SQL comments to queries generated by popular ORMs like Hibernate, Django, SQLAlchemy, and others (repo, blog). We didn't stop there: we merged Sqlcommenter with OpenTelemetry (blog) to add SQL insights from instrumented queries back to OpenTelemetry traces.

Lastly, we want to broaden access to our differentiated offerings, like Spanner. The recently announced Spanner PostgreSQL interface allows organizations to access Spanner's industry-leading consistency and availability at scale using tools and skills from the popular PostgreSQL ecosystem. This new way of working with Spanner provides familiarity for developers and portability for administrators (blog). Learn more in the documentation or sign up for the preview today.

Provide connectivity that is simple and secure

Connecting to APIs and databases from an application running in the cloud should be simple and secure. That's why we recommend using IAM and Application Default Credentials when authenticating to other services. The Cloud SQL Proxy (repo) has been doing this - and setting up firewalls for you - for a while, by running a local client either inside your VM or in a GKE cluster. This year, we added libraries for Java (repo) and Python (repo) that provide similar functionality without the overhead of running an extra client such as the proxy.
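
As an illustration, here's a hedged sketch of connecting with the Cloud SQL Python Connector; the instance connection name, credentials, and database are placeholders:

# Hedged sketch using the Cloud SQL Python Connector (cloud-sql-python-connector).
from google.cloud.sql.connector import Connector

connector = Connector()
conn = connector.connect(
    "my-project:us-central1:my-instance",  # placeholder instance connection name
    "pg8000",                              # PostgreSQL driver
    user="my-user",
    password="my-password",
    db="my-db",
)
cursor = conn.cursor()
cursor.execute("SELECT NOW()")
print(cursor.fetchone())
conn.close()
connector.close()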

Cloud Spanner also offers an open source adapter for its new PostgreSQL interface (repo). This local proxy allows tools, starting with psql, to connect to a Spanner database using the PostgreSQL wire protocol.

Image 1: White pipes in datacenter

Manage cloud infrastructure with the tools of your choice

When it comes to provisioning, monitoring, and managing your cloud database services, flexibility and choice are important. We provide you with our Cloud Console, the gcloud CLI, and APIs, as well as our own Deployment Manager. That said, you may prefer different ways to manage cloud infrastructure - whether through interactive tools, scripts, or CI/CD pipelines that support GitOps or other controls, checks, and balances. Terraform is one very popular open tool, and we ensure that our cloud databases can be managed from it, as documented in this blog about creating Spanner instances with Terraform.
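
As a small illustration, a Spanner instance managed by Terraform might look roughly like the following sketch; the names and sizing are placeholders, not recommendations:

# Hedged sketch: a minimal Spanner instance declared in Terraform.
resource "google_spanner_instance" "example" {
  name         = "example-instance"      # placeholder
  config       = "regional-us-central1"
  display_name = "Example Instance"
  num_nodes    = 1
}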

If you manage the majority of your resources with Kubernetes, either directly or through package managers like Helm, then our Kubernetes Config Connector (KCC) might be for you. In a nutshell, KCC exposes Google Cloud services such as Cloud SQL, Spanner, and others as Custom Resources in Kubernetes, letting you create and reconcile cloud resources that live outside of Kubernetes just as you would native K8s objects.
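
For example, a Cloud SQL instance declared as a KCC custom resource might look roughly like this hedged sketch; the name, database version, and tier are placeholders:

# Hedged sketch: a Cloud SQL instance as a KCC custom resource.
apiVersion: sql.cnrm.cloud.google.com/v1beta1
kind: SQLInstance
metadata:
  name: example-sqlinstance        # placeholder
spec:
  databaseVersion: POSTGRES_13
  region: us-central1
  settings:
    tier: db-custom-1-3840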

Once you are managing cloud infrastructure with CI/CD, the next step is to extend that same mechanism to the objects within your databases, such as tables, indexes, and views. To that end, we have released a Liquibase extension for Cloud Spanner.

Help you to move data with confidence

Cloud journeys often involve moving data either in a lift and shift process or sometimes replatforming to a different database. Whatever your journey, we want to simplify the process and give you the confidence that your migration is successful.

For enterprise users with Oracle databases, we have several open source projects. First, we have the Optimus Prime database assessment tool (repo) that queries your database and collects information about schemas and historic performance to be analyzed for migration complexity and consolidation potential. Our own professional services teams have been using this toolset to plan migrations to Bare Metal Solution for Oracle.

Some Oracle users are looking for opportunities to transform their workloads to fit with their bigger strategy of modernizing applications with Kubernetes. For this group we developed and open sourced the El Carro Kubernetes operator for Oracle. This not only automates database lifecycle tasks for systems running on Kubernetes, but also exposes declarative APIs for these operations.

If your application supports replatforming from Oracle to PostgreSQL, then we have a toolset for schema conversion along with dataflow pipelines that will read the output of a change data capture job and load it into a PostgreSQL database. What a great use-case for Datastream - our new serverless change data capture service.

Another case of heterogeneous database migration is to move MySQL or PostgreSQL databases to Cloud Spanner. HarbourBridge helps with the evaluation and data migration, and our latest contribution was adding support for DynamoDB as a source database. Part of every heterogeneous migration should be to validate that the source and target data are matching - we have released the Data Validation Toolkit for that use-case. DVT can connect to a number of source and target databases and compare the data on each side - giving you the confidence that your migration did not miss or change any records.

Conclusion

Whether you are migrating existing databases or you are building your next application in the cloud - we want to make your journey as comfortable and seamless as possible. Open source projects play a big role in meeting you where you are and providing you with the connectivity options, language support, and tools you want for management and migrations.

By Bjoern Rost, Product Manager, Google Cloud Databases

Migrating App Engine push queues to Cloud Tasks

Posted by Wesley Chun (@wescpy), Developer Advocate, Google Cloud

Banner image that shows the Cloud Task logo

Introduction

The previous Module 7 episode of Serverless Migration Station gave developers an idea of how App Engine push tasks work and how to implement their use in an existing App Engine ndb Flask app. In this Module 8 episode, we migrate this app from the App Engine Datastore (ndb) and Task Queue (taskqueue) APIs to Cloud NDB and Cloud Tasks. This makes your app more portable and provides a smoother transition from Python 2 to 3. The same principle applies to upgrading other legacy App Engine apps from Java 8 to 11, PHP 5 to 7, and up to Go 1.12 or newer.

Over the years, many of the original App Engine services such as Datastore, Memcache, and Blobstore, have matured to become their own standalone products, for example, Cloud Datastore, Cloud Memorystore, and Cloud Storage, respectively. The same is true for App Engine Task Queues, whose functionality has been split out to Cloud Tasks (push queues) and Cloud Pub/Sub (pull queues), now accessible to developers and applications outside of App Engine.

Migrating App Engine push queues to Cloud Tasks video

Migrating to Cloud NDB and Cloud Tasks

The key updates being made to the application:

  1. Add support for Google Cloud client libraries in the app's configuration
  2. Switch from App Engine APIs to their standalone Cloud equivalents
  3. Make required library adjustments, e.g., add use of Cloud NDB context manager
  4. Complete additional setup for Cloud Tasks
  5. Make minor updates to the task handler itself

The bulk of the updates are in #3 and #4 above, and those are reflected in the following "diff"s for the main application file:

Screenshot shows primary differences in code when switching to Cloud NDB & Cloud Tasks

Primary differences switching to Cloud NDB & Cloud Tasks

With these changes implemented, the web app works identically to that of the Module 7 sample, but both the database and task queue functionality have been completely swapped to using the standalone/unbundled Cloud NDB and Cloud Tasks libraries… congratulations!
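
With the migration in place, enqueuing a push task goes through the Cloud Tasks client library. Here's a hedged sketch; the project, region, queue, handler route, and payload are placeholders, not the sample's exact values:

# Hedged sketch of enqueuing a push task with the Cloud Tasks client library.
from google.cloud import tasks_v2

client = tasks_v2.CloudTasksClient()
parent = client.queue_path("my-project", "us-central1", "default")
task = {
    "app_engine_http_request": {          # target an App Engine handler
        "http_method": tasks_v2.HttpMethod.POST,
        "relative_uri": "/trim",          # placeholder task handler route
        "body": b"oldest=1612345678",     # placeholder payload
    }
}
response = client.create_task(parent=parent, task=task)
print("Created task:", response.name)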

Next steps

To do this exercise yourself, check out our corresponding codelab which leads you step-by-step through the process. You can use this in addition to the video, which can provide guidance. You can also review the push tasks migration guide for more information. Arriving at a fully-functioning Module 8 app featuring Cloud Tasks sets the stage for a larger migration ahead in Module 9. We've accomplished the most important step here, that is, getting off of the original App Engine legacy bundled services/APIs. The Module 9 migration from Python 2 to 3 and Cloud NDB to Cloud Firestore, plus the upgrade to the latest version of the Cloud Tasks client library are all fairly optional, but they represent a good opportunity to perform a medium-sized migration.

All migration modules, their videos (when available), codelab tutorials, and source code, can be found in the migration repo. While the content focuses initially on Python users, we will cover other legacy runtimes soon so stay tuned.

How to use App Engine push queues in Flask apps

Posted by Wesley Chun (@wescpy), Developer Advocate, Google Cloud

Banner image that shows the Cloud Task logo

Introduction

Since its original launch in 2008, many of the core Google App Engine services such as Datastore, Memcache, and Blobstore, have matured to become their own standalone products: for example, Cloud Datastore, Cloud Memorystore, and Cloud Storage, respectively. The same is true for App Engine Task Queues with Cloud Tasks. Today's Module 7 episode of Serverless Migration Station reviews how App Engine push tasks work, by adding this feature to an existing App Engine ndb Flask app.

App Engine push queues in Flask apps video

That app is where we left off at the end of Module 1, migrating its web framework from App Engine webapp2 to Flask. The app registers web page visits, creating a Datastore Entity for each. After a new record is created, the ten most recent visits are displayed to the end-user. If the app only shows the latest visits, there is no reason to keep older visits, so the Module 7 exercise adds a push task that deletes all visits older than the oldest one shown. Tasks execute asynchronously outside the normal application flow.
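
Conceptually, enqueuing such a task with the bundled taskqueue API boils down to a single call. Here's a hedged sketch; the handler route and parameter name are placeholders, not the sample's exact code:

# Hedged sketch of enqueuing an App Engine (Python 2) push task.
from google.appengine.api import taskqueue

def enqueue_trim(oldest_timestamp):
    # Enqueue a task on the default push queue; App Engine will POST the
    # params to the /trim handler asynchronously, outside the request flow.
    taskqueue.add(url='/trim', params={'oldest': oldest_timestamp})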

Key updates

The following are the changes being made to the application:

  1. Add use of App Engine Task Queues (taskqueue) API
  2. Determine oldest visit displayed, logging and saving that timestamp
  3. Create task to delete old visits
  4. Update web page template to display timestamp threshold
  5. Log how many and which visits (by Entity ID) are deleted

Except for #4 which occurs in the HTML template file, these updates are reflected in the "diff"s for the main application file:

Screenshot of App Engine push tasks application source code differences

Adding App Engine push tasks application source code differences

With these changes implemented, the web app now shows the end-user which visits will be deleted by the new push task:

Screenshot of VisitMe example showing last ten site visits. A red circle around older visits being deleted

Sample application output

Next steps

To do this exercise yourself, check out our corresponding codelab which leads you step-by-step through the process. You can use this in addition to the video, which can provide guidance. You can also review the push queue documentation for more information. Arriving at a fully-functioning Module 7 app featuring App Engine push tasks sets the stage for migrating it to Cloud Tasks (and Cloud NDB) ahead in Module 8.

All migration modules, their videos (when available), codelab tutorials, and source code, can be found in the migration repo. While the content focuses initially on Python users, we will cover other legacy runtimes soon so stay tuned.

Exploring serverless with a nebulous app: Deploy the same app to App Engine, Cloud Functions, or Cloud Run

Posted by Wesley Chun (@wescpy), Developer Advocate, Google Cloud

Banner image that shows the App Engine, Cloud Functions, and Cloud Run logos

Introduction

Google Cloud offers three distinct ways of running your code or application in a serverless way, each serving different use cases. Google App Engine, our first Cloud product, was created to give users the ability to deploy source-based web applications or mobile backends directly to the cloud without needing to think about servers or scaling. Cloud Functions came later for scenarios where you may not have an entire app, great for one-off utility functions or event-driven microservices. Cloud Run is our latest fully-managed serverless product that gives developers the flexibility of containers along with the convenience of serverless.

Since all three are serverless compute platforms, users recognize that they share some similarities along with clear differences, and they often ask:

  1. How different is deploying code to App Engine, Cloud Functions, or Cloud Run?
  2. Is it challenging to move from one to another if I feel the other may better fit my needs?

We're going to answer these questions today by sharing a unique application with you, one that can be deployed to all three platforms without changing any application code. All of the necessary changes are done in configuration.

More motivation

Another common challenge for developers is learning how to use an additional Cloud product; consider this request, paraphrased from a user:

  1. I have a Google App Engine app
  2. I want to call the Cloud Translation API from that app

Sounds simple enough. This user went straight to the App Engine and Translation API documentation, where they were able to get started with the App Engine Quickstart to get their app up and going, then found the Translation API setup page and started looking into permissions needed to access the API. However, they got stuck at the Identity and Access Management (IAM) page on roles, overwhelmed by all the options with no clear path forward. In light of this, let's add a third question to the pair outlined earlier:

  3. How do you access Cloud APIs from a Cloud serverless platform?

Without knowing what that user was going to build, let's just implement a barebones translator, an "MVP" (minimum viable product) version of a simple "My Google Translate" Python Flask app using the Translation API, one of Google Cloud's AI/ML "building block" APIs. These APIs are backed by pre-trained machine learning models, giving developers with little or no background in AI/ML the ability to leverage the benefits of machine learning with only API calls.

The application

The app consists of a simple web page prompting the user for a phrase to translate from English to Spanish. The translated result is presented along with the original phrase and an empty form for a follow-up translation, if desired. While the majority of this app's deployments are in Python 3, there are still many users working on upgrading from Python 2, so some of those deployments are available to help with migration planning. Taking this into account, this app can be deployed (at least) eight different ways:

  1. Local (or hosted) Flask server (Python 2)
  2. Local (or hosted) Flask server (Python 3)
  3. Google App Engine (Python 2)
  4. Google App Engine (Python 3)
  5. Google Cloud Functions (Python 3)
  6. Google Cloud Run (Python 2 via Docker)
  7. Google Cloud Run (Python 3 via Docker)
  8. Google Cloud Run (Python 3 via Cloud Buildpacks)
The following is a brief glance at the files and which configurations they're for:

Screenshot of Nebulous serverless sample app files

Nebulous serverless sample app files

Diving straight into the application, let's look at its primary function, translate():

@app.route('/', methods=['GET', 'POST'])
def translate(gcf_request=None):
    local_request = gcf_request if gcf_request else request
    text = translated = None
    if local_request.method == 'POST':
        text = local_request.form['text'].strip()
        if text:
            data = {
                'contents': [text],
                'parent': PARENT,
                'target_language_code': TARGET[0],
            }
            rsp = TRANSLATE.translate_text(request=data)
            translated = rsp.translations[0].translated_text
    context = {
        'orig': {'text': text, 'lc': SOURCE},
        'trans': {'text': translated, 'lc': TARGET},
    }
    return render_template('index.html', **context)

Core component (translate()) of sample application
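
The module-level constants used above (TRANSLATE, PARENT, SOURCE, TARGET) aren't shown in the listing; a hedged sketch of what they might look like, with a placeholder project ID:

from google.cloud import translate

PROJECT_ID = 'my-project'  # placeholder; normally derived from the environment
TRANSLATE = translate.TranslationServiceClient()
PARENT = 'projects/{}'.format(PROJECT_ID)
# (language code, display name) pairs; TARGET[0] is the target language code
SOURCE, TARGET = ('en', 'English'), ('es', 'Spanish')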


Some key app components:
  • Upon an initial request (GET), an HTML template is rendered featuring a simple form with an empty text field for the text to translate.
  • The form POSTs back to the app, and in this case, grabs the text to translate, sends the request to the Translation API, receives and displays the results to the user along with an empty form for another translation.
  • There is a special "ifdef" near the top for Cloud Functions: unlike App Engine or Cloud Run, Cloud Functions doesn't run your code under a web framework, so it passes in its own request object, which the app accepts via the gcf_request parameter (see the sketch below).
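
That Cloud Functions shim can be as small as a one-line wrapper; a hedged sketch, where the function name is a placeholder:

# Hedged sketch of the Cloud Functions entry point: Cloud Functions passes
# its own request object, which we simply forward to translate().
def translate_gcf(request):
    return translate(gcf_request=request)
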
The app runs identically whether running locally or deployed to App Engine, Cloud Functions, or Cloud Run. The magic is all in the configuration. The requirements.txt file is used in all configurations, whether to install third-party packages locally or to direct the Cloud Build system to automatically install those libraries during deployment. Beyond requirements.txt, things start to differ:

  1. App Engine has an app.yaml file and possibly an appengine_config.py file.
  2. Cloud Run has either a Dockerfile (Docker) or Procfile (Cloud Buildpacks), and possibly a service.yaml file.
  3. Cloud Functions, the "simplest" of the three, has no configuration outside of a package requirements file (requirements.txt, package.json, etc.).
The following is what you should expect to see after completing one translation request:

Screenshot of My Google Translate (1990s Edition) in Incognito Window

"My Google Translate" MVP app (Cloud Run edition)

Next steps

The sample app can be run locally or on your own hosting server, but now you also know how to deploy it to each of Cloud's serverless platforms and what the subtle differences between them are, as well as what it takes to switch from one to another. For example, if your organization is moving to implement containerization into your software development workflow, you can migrate your existing App Engine apps to Cloud Run using Docker, or using Cloud Buildpacks if you don't want to think about containers or Dockerfiles. Lastly, you now know how to access Cloud APIs from these platforms.

The user described earlier was overwhelmed by all the IAM roles and options available because this level of detail is required to offer the most security options for accessing Cloud services, but when prototyping, the fastest on-ramp is to use the default service account that comes with Cloud serverless platforms. This helps you get a prototype working while you learn more about IAM roles and required permissions. Once you've progressed far enough to consider deploying to production, you can then follow the best practice of "least privilege" and create your own (user-managed) service accounts with the minimal permissions required for your application to function properly.

To dive in, the code and codelabs (free, self-paced, hands-on tutorials) for each deployment are available in its open source repository. An active Google Cloud billing account is required to deploy this application to each of our serverless platforms even though you can do all of them without incurring charges. More information can be found in the "Cost" section of the repo's README. We hope this sample app teaches you more about the similarities and differences between our platforms, shows you how you can "shift" applications comfortably between them, and provides a light introduction to another Cloud API. Also check out my colleague's post featuring similar content for Node.js.

An easier way to move your App Engine apps to Cloud Run

Posted by Wesley Chun (@wescpy), Developer Advocate, Google Cloud

Blue header

An easier yet still optional migration

In the previous episode of the Serverless Migration Station video series, developers learned how to containerize their App Engine code for Cloud Run using Docker. While Docker has gained popularity over the past decade, not everyone has containers integrated into their daily development workflow, and some prefer "containerless" solutions while still recognizing that containers can be beneficial. Well, today's video is just for you, showing how you can still get your apps onto Cloud Run, even if you don't have much experience with Docker, containers, or Dockerfiles.

App Engine isn't going away: Google has expressed long-term support for legacy runtimes on the platform, so those who prefer source-based deployments can stay where they are. This is an optional migration, meant for those who want to explicitly move to containerization.

Migrating to Cloud Run with Cloud Buildpacks video

So how can apps be containerized without Docker? The answer is buildpacks, an open-source technology that makes it fast and easy for you to create secure, production-ready container images from source code, without a Dockerfile. Google Cloud Buildpacks adheres to the buildpacks open specification and allows users to create images that run on all GCP container platforms: Cloud Run (fully-managed), Anthos, and Google Kubernetes Engine (GKE). If you want to containerize your apps while staying focused on building your solutions and not how to create or maintain Dockerfiles, Cloud Buildpacks is for you.

In the last video, we showed developers how to containerize a Python 2 Cloud NDB app as well as a Python 3 Cloud Datastore app. We targeted those specific implementations because Python 2 users are more likely to be using App Engine's ndb or Cloud NDB to connect with their app's Datastore while Python 3 developers are most likely using Cloud Datastore. Cloud Buildpacks do not support Python 2, so today we're targeting a slightly different audience: Python 2 developers who have migrated from App Engine ndb to Cloud NDB and who have ported their apps to modern Python 3 but now want to containerize them for Cloud Run.

Developers familiar with App Engine know that a default HTTP server is provided and started automatically; if special launch instructions are needed, users can add an entrypoint directive in their app.yaml files, as illustrated below. When those App Engine apps are containerized for Cloud Run, developers must bundle their own server and provide startup instructions, the purpose of the ENTRYPOINT directive in the Dockerfile, also shown below.

Starting your web server with App Engine (app.yaml) and Cloud Run with Docker (Dockerfile) or Buildpacks (Procfile)

In this migration, there is no Dockerfile. While Cloud Buildpacks does the heavy-lifting, determining how to package your app into a container, it still needs to be told how to start your service. This is exactly what a Procfile is for, represented by the last file in the image above. As specified, your web server will be launched in the same way as in app.yaml and the Dockerfile above; these config files are deliberately juxtaposed to expose their similarities.
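
For readers without the image, a hedged reconstruction of that juxtaposition; the main:app module path and ports are assumptions:

# app.yaml (App Engine)
entrypoint: gunicorn -b :$PORT main:app

# Dockerfile (Cloud Run with Docker) - final instruction
CMD exec gunicorn -b :$PORT main:app

# Procfile (Cloud Run with Cloud Buildpacks)
web: gunicorn -b :$PORT main:app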

Other than this swapping of configuration files and the expected lack of a .dockerignore file, the Python 3 Cloud NDB app containerized for Cloud Run is nearly identical to the Python 3 Cloud NDB App Engine app we started with. Cloud Run's build-and-deploy command (gcloud run deploy) will use a Dockerfile if present but otherwise selects Cloud Buildpacks to build and deploy the container image. The user experience is the same, only without the time and challenges required to maintain and debug a Dockerfile.

Get started now

If you're considering containerizing your App Engine apps without having to know much about containers or Docker, we recommend you try this migration on a sample app like ours before considering it for yours. A corresponding codelab leading you step-by-step through this exercise is provided in addition to the video which you can use for guidance.

All migration modules, their videos (when available), codelab tutorials, and source code, can be found in the migration repo. While our content initially focuses on Python users, we hope to one day also cover other legacy runtimes so stay tuned. Containerization may seem foreboding, but the goal is for Cloud Buildpacks and migration resources like this to aid you in your quest to modernize your serverless apps!

Containerizing Google App Engine apps for Cloud Run

Posted by Wesley Chun (@wescpy), Developer Advocate, Google Cloud

Google App Engine header

An optional migration

Serverless Migration Station is a video mini-series from Serverless Expeditions focused on helping developers modernize their applications running on a serverless compute platform from Google Cloud. Previous episodes demonstrated how to migrate away from the older, legacy App Engine (standard environment) services to newer Google Cloud standalone equivalents like Cloud Datastore. Today's product crossover episode differs slightly from that by migrating away from App Engine altogether, containerizing those apps for Cloud Run.

There's little question the industry has been moving towards containerization as an application deployment mechanism over the past decade. However, Docker and the use of containers weren't available to early App Engine developers until its flexible environment became available years later. Fast forward to today, where developers have many more options to choose from across an increasingly open Google Cloud. Google has expressed long-term support for App Engine, and users do not need to containerize their apps, so this is an optional migration. It is primarily for those who have decided to add containerization to their application deployment strategy and want to explicitly migrate to Cloud Run.

If you're thinking about app containerization, the video covers some of the key reasons why you would consider it: you're not subject to traditional serverless restrictions like development language or use of binaries (flexibility); if your code, dependencies, and container build & deploy steps haven't changed, you can recreate the same image with confidence (reproducibility); your application can be deployed elsewhere or be rolled back to a previous working image if necessary (reusability); and you have plenty more options on where to host your app (portability).

Migration and containerization

Legacy App Engine services are available through a set of proprietary, bundled APIs. As you can surmise, those services are not available on Cloud Run. So if you want to containerize your app for Cloud Run, it must be "ready to go," meaning it has migrated to either Google Cloud standalone equivalents or other third-party alternatives. For example, in a recent episode, we demonstrated how to migrate from App Engine ndb to Cloud NDB for Datastore access.

While we've recently begun to produce videos for such migrations, developers can already access code samples and codelab tutorials leading them through a variety of migrations. In today's video, we have both Python 2 and 3 sample apps that have divested from legacy services, thus ready to containerize for Cloud Run. Python 2 App Engine apps accessing Datastore are most likely to be using Cloud NDB whereas it would be Cloud Datastore for Python 3 users, so this is the starting point for this migration.

Because we're "only" switching execution platforms, there are no changes at all to the application code itself. This entire migration is completely based on changing the apps' configurations from App Engine to Cloud Run. In particular, App Engine artifacts such as app.yaml, appengine_config.py, and the lib folder are not used in Cloud Run and will be removed. A Dockerfile will be implemented to build your container. Apps with more complex configurations in their app.yaml files will likely need an equivalent service.yaml file for Cloud Run — if so, you'll find this app.yaml to service.yaml conversion tool handy. Following best practices means there'll also be a .dockerignore file.

App Engine and Cloud Functions are source-based, where Google Cloud automatically provides a default HTTP server like gunicorn. Cloud Run is a bit more "DIY" because users have to provide a container image, meaning we must bundle our own server. In this case, we'll pick gunicorn explicitly, adding it to the top of the existing requirements.txt required packages file(s), as you can see in the screenshot below. Also illustrated is the Dockerfile where gunicorn is started to serve your app as the final step. The only differences for the Python 2 equivalent Dockerfile are: a) require the Cloud NDB package (google-cloud-ndb) instead of Cloud Datastore, and b) start with a Python 2 base image.

Image of The Python 3 requirements.txt and Dockerfile

The Python 3 requirements.txt and Dockerfile
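
For readers without the screenshot, here's a hedged reconstruction of such a Dockerfile for the Python 3 app; the base image, module path, and structure are assumptions, not the sample's exact contents:

# Hedged sketch of a Dockerfile for the Python 3 Cloud Datastore app.
FROM python:3.9-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .
# Final step: start gunicorn to serve the app (Cloud Run supplies $PORT).
CMD exec gunicorn -b :$PORT main:app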

Next steps

To walk developers through migrations, we always "START" with a working app then make the necessary updates that culminate in a working "FINISH" app. For this migration, the Python 2 sample app STARTs with the Module 2a code and FINISHes with the Module 4a code. Similarly, the Python 3 app STARTs with the Module 3b code and FINISHes with the Module 4b code. This way, if something goes wrong during your migration, you can always rollback to START, or compare your solution with our FINISH. If you are considering this migration for your own applications, we recommend you try it on a sample app like ours before considering it for yours. A corresponding codelab leading you step-by-step through this exercise is provided in addition to the video which you can use for guidance.

All migration modules, their videos (when published), codelab tutorials, START and FINISH code, etc., can be found in the migration repo. We hope to also one day cover other legacy runtimes like Java 8 so stay tuned. We'll continue with our journey from App Engine to Cloud Run ahead in Module 5 but will do so without explicit knowledge of containers, Docker, or Dockerfiles. Modernizing your development workflow to using containers and best practices like crafting a CI/CD pipeline isn't always straightforward; we hope content like this helps you progress in that direction!

#IamaGDE: Diana Rodríguez Manrique

#IamaGDE series presents: Google Maps Platform

Welcome to #IamaGDE - a series of spotlights presenting Google Developer Experts (GDEs) from across the globe. Discover their stories, passions, and highlights of their community work.

Today, meet Diana Rodríguez: Maps, Web, Cloud, and Firebase GDE.

Google Developer Expert, Diana Rodríguez

Diana Rodríguez’s 20 years in the tech industry have been focused on community and making accessible content. She is a full-stack developer with experience in backend infrastructure, automation, and a passion for Python. A self-taught programmer, Diana also learned programming skills from attending meetups and being an active member of her local developer community. She is the first female Venezuelan GDE.

“I put a lot of myself into public speaking, workshops, and articles,” says Diana. “I want to make everything I do as open and transparent as possible.”

Diana’s first foray into working with Google Maps was in 2016, when she built an app that helped record institutional violence against women in Argentina. As a freelance developer, she uses the Google Maps Platform for her delivery services clients.

“I have plenty of clients who need not only location tracking for their delivery fleet, but also to provide specific routes,” says Diana.

“The level of interaction that’s been added to Maps has made it easier for me as a developer to work with direct clients,” says Diana, who uses the Plus Codes feature to help delivery drivers find precise locations on a map. “I’m a heavy user of plus codes. They give people in remote areas and underserved communities the chance to have location services, including emergency and delivery services.”

Getting involved in the developer community

Diana first became involved in the developer community 20 years ago, in 1999, beginning with a university user group. She attended her first Devfest in Bangkok in 2010 and has worked in multiple developer communities since then. She was a co-organizer of GDG Triangle and is now an organizer of GDG Durham in North Carolina. In 2020, she gave virtual talks to global audiences.

“It’s been great to get to know other communities and reach the far corners of the Earth,” she says.

Image of Diana Rodriguez

Favorite Google Maps Platform features and current projects

Diana is excited about the Places API and the Maps team’s continuous improvements. She says the Maps team keeps the GDEs up to date on all the latest news and takes their feedback very seriously.

“Shoutout to Claire, Alex, and Angela, who are in direct contact with us, and everyone who works with them; they have been amazing,” she says. “I look forward to showcasing more upcoming changes. What comes next will be mind-blowing, immersing people into location in a different way that is more interactive.”

Of the new features released in June 2020, which include Cloud-based maps styling and Local Context, Diana says, “Having the freedom to customize the experience a lot more is amazing.”

As a Maps GDE in 2021, Diana plans to continue working on open source tech projects that benefit the greater good, like her recently completed app for diabetes users, ScoutX, which notifies emergency contacts when a diabetic person's blood glucose values are too high or too low, in case they need immediate help.

She envisions an app that expands connectivity and geolocation tracking for hikers in remote areas, using LoRaWan technologies that can withstand harsh temperatures and conditions.

“Imagine you go to Yellowstone and get lost, with no GPS signal or phone signal, but there’s a tracking device connected to a LoRaWan network sending your location,” Diana says. “It’s much easier for rescue services to find you. Rack Wireless is working on providing satellite access, as well, and having precise latitude and longitude makes mapping simple.”

In the future, Diana sees herself managing a team that makes groundbreaking discoveries and puts technologies to use to help other people.

Follow Diana on Twitter at @cotufa82

Check out Diana’s projects on GitHub

For more information on Google Maps Platform, visit our website.

For more information on Google Developer Experts, visit our website.

Cloud NDB to Cloud Datastore migration

Posted by Wesley Chun (@wescpy), Developer Advocate, Google Cloud

An optional migration

Serverless Migration Station is a mini-series from Serverless Expeditions focused on helping users on one of Google Cloud's serverless compute platforms modernize their applications. The video today demonstrates how to migrate a sample app from Cloud NDB (or App Engine ndb) to Cloud Datastore. While Cloud NDB suffices as a current solution for today's App Engine developers, this optional migration is for those who want to consolidate their app code to using a single client library to talk to Datastore.

Cloud Datastore started as Google App Engine's original database but matured to become its own standalone product in 2013. At that time, native client libraries were created for the new product so that non-App Engine apps as well as App Engine second generation apps could access the service. Long-time developers have been using the original App Engine service APIs to access Datastore; for Python, this would be App Engine ndb. While the legacy ndb service is still available, its limitations and lack of availability in Python 3 are why we recommend users switch to standalone libraries like Cloud NDB, as covered in the preceding video in this series.

While Cloud NDB lets users break free from proprietary App Engine services and upgrade their applications to Python 3, it also gives non-App Engine apps access to Datastore. However, Cloud NDB's primary role is a transition tool for Python 2 App Engine developers. Non-App Engine developers and new Python 3 App Engine developers are directed to the Cloud Datastore native client library, not Cloud NDB.

As a result, those with a collection of Python 2 or Python 3 App Engine apps as well as non-App Engine apps may be using completely different libraries (ndb, Cloud NDB, Cloud Datastore) to connect to the same Datastore product. Following the best practices of code reuse, developers should consider consolidating to a single client library to access Datastore. Shared libraries provide stability and robustness with code that's constantly tested, debugged, and battle-proven. Module 2 showed users how to migrate from App Engine ndb to Cloud NDB, and today's Module 3 content focuses on migrating from Cloud NDB to Cloud Datastore. Users can also go straight from ndb directly to Cloud Datastore, skipping Cloud NDB entirely.

Migration sample and next steps

Cloud NDB follows an object model identical to App Engine ndb and is deliberately meant to be familiar to long-time Python App Engine developers, while use of the Cloud Datastore client library feels more like accessing a JSON document store. Their querying styles are also similar. You can compare and contrast them in the "diffs" screenshot below and in the video.

The diffs between the Cloud NDB and Cloud Datastore versions of the sample app

The "diffs" between the Cloud NDB and Cloud Datastore versions of the sample app

All that said, this migration is optional and only useful if you wish to consolidate to a single client library. If your Python App Engine apps are stable with ndb or Cloud NDB, and you don't have any code using Cloud Datastore, there's no real reason to move unless Cloud Datastore has a compelling feature inaccessible from your current client library. If you are considering this migration and want to try it on a sample app before committing to your own, see the corresponding codelab and use the video for guidance.

It begins with the Module 2 code completed in the previous codelab/video; use your solution or ours as the "START". Both Python 2 (Module 2a folder) and Python 3 (Module 2b folder) versions are available. The goal is to arrive at the "FINISH" with an identical, working app but using a completely different Datastore client library. Our Python 2 FINISH can be found in the Module 3a folder while Python 3's FINISH is in the Module 3b folder. If something goes wrong during your migration, you can always rollback to START, or compare your solution with our FINISH. We will continue our Datastore discussion ahead in Module 6 as Cloud Firestore represents the next generation of the Datastore service.

All of these learning modules, corresponding videos (when published), codelab tutorials, START and FINISH code, etc., can be found in the migration repo. We hope to also one day cover other legacy runtimes like Java 8, so stay tuned. Up next in Module 4, we'll take a different turn and showcase a product crossover, showing App Engine developers how to containerize their apps and migrate them to Cloud Run, our scalable container-hosting service in the cloud. If you can't wait for Module 4 or 6, try out their respective codelabs or access the code samples in the table at the repo above. Migrations aren't always easy, and we hope content like this helps you modernize your apps.

Migrating from App Engine ndb to Cloud NDB

Posted by Wesley Chun (@wescpy), Developer Advocate, Google Cloud

Migrating to standalone services

Today we're introducing the first video showing long-time App Engine developers how to migrate from the App Engine ndb client library that connects to Datastore. While the legacy App Engine ndb service is still available for Datastore access, new features and continuing innovation are going into Cloud Datastore, so we recommend Python 2 users switch to standalone product client libraries like Cloud NDB.

This video and its corresponding codelab show developers how to migrate the sample app introduced in a previous video and gives them hands-on experience performing the migration on a simple app before tackling their own applications. In the immediately preceding "migration module" video, we transitioned that app from App Engine's original webapp2 framework to Flask, a popular framework in the Python community. Today's Module 2 content picks up where that Module 1 leaves off, migrating Datastore access from App Engine ndb to Cloud NDB.

Migrating to Cloud NDB opens the doors to other modernizations, such as moving to other standalone services that succeed the original App Engine legacy services, (finally) porting to Python 3, breaking up large apps into microservices for Cloud Functions, or containerizing App Engine apps for Cloud Run.

Moving to Cloud NDB

App Engine's Datastore matured to become its own standalone product, Cloud Datastore, in 2013. Cloud NDB is the replacement client library designed for App Engine ndb users to preserve much of their existing code and user experience. Cloud NDB is available in both Python 2 and 3, meaning it can help expedite a Python 3 upgrade to the second generation App Engine platform. Furthermore, Cloud NDB gives non-App Engine apps access to Cloud Datastore.

As you can see from the screenshot below, one key difference between the two libraries is that Cloud NDB provides a context manager, meaning you use the Python with statement - in much the same way you open files - for Datastore access. Aside from moving code inside with blocks, however, no other changes are required of the original App Engine ndb code that accesses Datastore. Of course, YMMV (your mileage may vary) depending on the complexity of your code, but the team's goal is to provide as seamless a transition as possible while preserving "ndb"-style access.
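
Concretely, the change looks roughly like this hedged sketch, with a placeholder model rather than the sample's exact code:

from google.cloud import ndb

class Visit(ndb.Model):            # model code carries over unchanged
    visitor = ndb.StringProperty()

client = ndb.Client()

# App Engine ndb: Visit.query().fetch(10) could be called directly.
# Cloud NDB: the same call simply moves inside a client context.
with client.context():
    visits = Visit.query().fetch(10)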

The difference between the App Engine ndb and Cloud NDB versions of the sample app

The "diffs" between the App Engine ndb and Cloud NDB versions of the sample app

Next steps

To try this migration yourself, hit up the corresponding codelab and use the video for guidance. This Module 2 migration sample "STARTs" with the Module 1 code completed in the previous codelab (and video). Users can use their solution or grab ours in the Module 1 repo folder. The goal is to arrive at the end with an identical, working app that operates just like the Module 1 app but uses a completely different Datastore client library. You can find this "FINISH" code sample in the Module 2a folder. If something goes wrong during your migration, you can always rollback to START, or compare your solution with our FINISH. Bonus content migrating to Python 3 App Engine can also be found in the video and codelab, resulting in a second FINISH, the Module 2b folder.

All of these learning modules, corresponding videos (when published), codelab tutorials, START and FINISH code, etc., can be found in the migration repo. We hope to also one day cover other legacy runtimes like Java 8, so stay tuned! Developers should also check out the official Cloud NDB migration guide, which provides more migration details, including key differences between the two client libraries.

Ahead in Module 3, we will continue the Cloud NDB discussion and present our first optional migration, helping users move from Cloud NDB to the native Cloud Datastore client library. If you can't wait, try out its codelab found in the table at the repo above. Migrations aren't always easy; we hope this content helps you modernize your apps and shows we're focused on helping existing users as much as new ones.