Tag Archives: cloud

How can App Engine users take advantage of Cloud Functions?

Posted by Wesley Chun (@wescpy), Developer Advocate, Google Cloud

Introduction

Recently, we discussed containerizing App Engine apps for Cloud Run, with or without Docker. But what about Cloud Functions… can App Engine users take advantage of that platform somehow? Back in the day, App Engine was always the right decision because it was the only option. With Cloud Functions and Cloud Run joining the serverless product suite, that's no longer the case.

Back when App Engine was the only choice, developers selected it to host small, single-function apps. Other developers created huge monolithic apps for App Engine as well… because, again, it was the only option. Fast forward to today, where code follows more service-oriented or event-driven architectures: small apps can be moved to Cloud Functions to simplify the code and deployments, while large apps can be split into smaller components, each running on Cloud Functions.

Refactoring App Engine apps for Cloud Functions

Small, single-function apps can be seen as microservices, API endpoints "that do something," or utilities likely called as the result of some event in a larger multi-tiered application, say to update a database row or send a customer an email message. App Engine apps require some kind of web framework and routing mechanism, while their Cloud Functions equivalents are freed from much of those requirements. Refactoring these types of App Engine apps for Cloud Functions will likely require less overhead, ease maintenance, and allow common components to be shared across applications.

Large, monolithic applications are often made up of multiple pieces of functionality bundled together in one big package, such as requisitioning a new piece of equipment, opening a customer order, authenticating users, processing payments, performing administrative tasks, and so on. By breaking this monolith up into multiple microservices, each implemented as an individual function, each component can be reused in other apps, maintenance is eased because software bugs surface closer to their root origins, and developers won't step on each other's toes.

Migration to Cloud Functions

In this latest episode of Serverless Migration Station, a Serverless Expeditions mini-series focused on modernizing serverless apps, we take a closer look at this product crossover, covering how to migrate App Engine code to Cloud Functions. There are several steps you need to take to prepare your code for Cloud Functions:

  • Divest from legacy App Engine "bundled services," e.g., Datastore, Taskqueue, Memcache, Blobstore, etc.
  • Cloud Functions supports modern runtimes; upgrade to Python 3, Java 11, or PHP 7
  • If your app is a monolith, break it up into multiple independent functions. (You can also keep a monolith together and containerize it for Cloud Run as an alternative.)
  • Make appropriate application updates to support Cloud Functions

The first three bullets are outside the scope of this video and its codelab, so we'll focus on the last one. The changes needed for your app include the following:

  1. Remove unneeded and/or unsupported configuration
  2. Remove use of the web framework and supporting routing code
  3. For each of your functions, assign an appropriate name and accept the request object it will receive when it is called.

Regarding the last point, note that you can have multiple "endpoints" coming into a single function that processes the request path and calls other functions to handle those routes. If you have many functions in your app, separate functions for every endpoint become unwieldy; if your app is large enough, it may be better suited for Cloud Run. The sample app in this video and corresponding code sample has only one function, so a single endpoint for that function works perfectly fine here.
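As a quick illustration of that pattern, a single HTTP function can dispatch on the incoming request path (the handler names here are hypothetical, not from the sample):

# Hypothetical sketch: one Cloud Function serving several "endpoints" by
# dispatching on the request path and delegating to per-route handlers.
def handle_orders(request):
    return 'orders endpoint'

def handle_customers(request):
    return 'customers endpoint'

ROUTES = {
    '/orders': handle_orders,
    '/customers': handle_customers,
}

def api(request):
    handler = ROUTES.get(request.path)
    if handler is None:
        return 'Not found', 404
    return handler(request)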

This migration series focuses on our earliest users, starting with Python 2. Regarding the first point, the app.yaml file is deleted. Next, almost all Flask resources are removed except for the template renderer (the app still needs to output the same HTML as the original App Engine app). All app routes are removed, and there's no instantiation of the Flask app object. Finally, for the last step, the main function is renamed to the more appropriate visitme() and given a request object parameter.
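To make the result concrete, here is a hedged sketch of what a migrated entry point along these lines can look like (the model, template, and variable names are assumptions for illustration, not the sample's exact code):

# Hypothetical Cloud Functions entry point after the migration: no Flask app
# object or routes, just a named function that accepts the incoming request.
from google.cloud import ndb                      # Cloud NDB replaces App Engine ndb
from jinja2 import Environment, FileSystemLoader  # standalone template renderer

ds_client = ndb.Client()
env = Environment(loader=FileSystemLoader('templates'))

class Visit(ndb.Model):
    visitor = ndb.StringProperty()
    timestamp = ndb.DateTimeProperty(auto_now_add=True)

def visitme(request):
    """Register this visit and render the ten most recent ones."""
    with ds_client.context():
        Visit(visitor=request.headers.get('User-Agent', 'unknown')).put()
        visits = Visit.query().order(-Visit.timestamp).fetch(10)
    return env.get_template('index.html').render(visits=visits)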

    This "migration module" starts with the (Python 3 version of the) Module 2 sample app, applies the steps above, and arrives at the migrated Module 11 app. Implementing those required changes is illustrated by this code "diff:"

    Migration of sample app to Cloud Functions

    Next steps

    If you're interested in trying this migration on your own, feel free to try the corresponding codelab which leads you step-by-step through this exercise and use the video for additional guidance.

    All migration modules, their videos (when published), codelab tutorials, START and FINISH code, etc., can be found in the migration repo. We hope to also one day cover other legacy runtimes like Java 8 as well as content for the next-generation Cloud Functions service, so stay tuned. If you're curious whether it's possible to write apps that can run on App Engine, Cloud Functions, or Cloud Run with no code changes at all, the answer is yes. Hope this content is useful for your consideration when modernizing your own serverless applications!

Google Cloud projects: Tips and best practices

By Peter Jacobsen, Google Technical Writer

Least privilege

Always apply the principle of least privilege when you provide access to Google Cloud resources. The best practice is to grant only the most limited predefined roles or custom roles that meet your needs.

For more information, see Least privilege.

Google Cloud billing alerts

Set up Google Cloud billing alerts for your projects at specified intervals for early warning of usage patterns, and to help reduce costs.

For more information, see Create, edit, or delete budgets and budget alerts.

API quotas

API quotas protect the Google infrastructure from excessive API requests. Traffic is blocked when the level of requests reaches the daily API quota level or a per-user rate limit.

To avoid disruptions due to an API quota that's too low, set the quota for your app or API appropriately. Note that quota increases have a lead time of one month.
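If a traffic spike does hit a rate limit before a quota increase lands, client-side retries with exponential backoff can smooth things over. A minimal sketch using the google-api-core retry helper (the Translation API call is just a placeholder for any quota-limited request):

# Retry only quota/rate-limit errors (HTTP 429 -> ResourceExhausted), backing
# off exponentially from 1s up to 60s between attempts.
from google.api_core import exceptions, retry
from google.cloud import translate_v3 as translate

client = translate.TranslationServiceClient()

@retry.Retry(predicate=retry.if_exception_type(exceptions.ResourceExhausted),
             initial=1.0, maximum=60.0)
def translate_text(parent, text):
    return client.translate_text(request={
        'parent': parent,                 # e.g. 'projects/PROJECT_ID/locations/global'
        'contents': [text],
        'target_language_code': 'es',
    })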

For more information, see API Quotas.

Checklist for production-ready enterprise workloads

Use this checklist to set up scalable, production-ready enterprise workloads. Note that the checklist assumes that you're an administrator with control over your company's Google Cloud resources.

For more information, see Google Cloud setup checklist.

Google Workspace domain ownership of projects

Google Workspace domain ownership of your group's project lets you tie it to a Google Workspace account, rather than having it tied to a personal account.

For more information, see Best practices for planning accounts and organizations.

Identity-Aware Proxy (IAP)

IAP lets you hide your website until you’re ready for people to see it. IAP establishes a central authorization layer for apps accessed by HTTPS, so you can adopt an app-level access-control model rather than use network-level firewalls. When IAP protects an app or resource, only users who have the correct Identity and Access Management (IAM) role can access it through the proxy.

For more information, see Identity-Aware Proxy overview.

Cloud Build

Cloud Build can import source code from a variety of repositories or cloud storage spaces, execute a build to your specifications, and produce artifacts, such as Docker containers or Java archives. You can configure builds to fetch dependencies and run unit tests, static analysis, and integration tests.

For more information, see Cloud Build.

Useful Google Cloud tools and services

Google Cloud has many tools and services that can help you create and keep your projects in sync, such as:

  • Cloud Build: executes your builds on Google Cloud infrastructure.
  • Google Cloud Deploy: deploys releases continuously to Google Kubernetes Engine.
  • Container Registry: provides a single place for your team to manage Docker images and control access.
  • Artifact Registry: provides a single place for your organization to manage container images and language packages, such as Maven and npm.
  • Cloud Source Repositories: provides a single place for your team to store, manage, and track code.
  • Cloud Deployment Manager: automates the creation and management of Google Cloud resources.

Google Groups for management across projects

Google Groups can help you manage teams across projects, which includes the setup of the group access through IAM. Groups such as project teams, departments, or classmates can communicate and collaborate with Google Groups. If you want to invite a group to an event or share documents with a group, you can send a single email to everyone in the group.

For more information about how to set up a group, see Google Groups.

Watch for Google suggestions

Google provides many useful tips and suggestions for best practices within the context of your work. For example, if you go to a project that you haven't used in a while, you may get a warning like this one:

If you click the link, you see a page that tells you how to apply role recommendations to help you enforce the principle of least privilege to ensure that principals have only the permissions that they actually need. Google offers many suggestions for best practices such as this one, so watch for them as you work.

Here's an example of a useful in-console recommendation that you might see from your billing page:

If you click Learn more, you arrive at a Cloud billing checklist, which is part of a longer billing-specific checklist that you might find useful.

Here's another example found on the API & Services page:

If you click Edit settings, you arrive on a page where you can change the settings.

Prediction Framework, a time saver for Data Science prediction projects

Posted by Álvaro Lamas, Héctor Parra, Jaime Martínez, Julia Hernández, Miguel Fernandes, Pablo Gil

Acquiring high-value customers using predicted Lifetime Value, taking specific actions on users with a high propensity to churn, generating and activating audiences based on machine learning processed signals… all of these marketing scenarios require analyzing first-party data, performing predictions on that data, and activating the results in different marketing platforms like Google Ads as frequently as possible to keep the data fresh.

Feeding marketing platforms like Google Ads on a regular and frequent basis requires a robust, report-oriented, and cost-efficient ETL & prediction pipeline. These pipelines are very similar regardless of the use case, and it's easy to fall into reinventing the wheel every time, or manually copying and pasting structural code and increasing the risk of introducing errors.

Wouldn't it be great to have a common reusable structure and just add the specific code for each of the stages?

Here is where Prediction Framework plays a key role in helping you implement and accelerate your first-party data prediction projects by providing the backbone elements of the predictive process.

Prediction Framework is a fully customizable pipeline that simplifies the implementation of prediction projects. You only need the input data source, the logic to extract and process the data, and a Vertex AutoML model ready to use along with the right feature list; the framework takes care of creating and deploying the required artifacts. With a simple configuration, all the common artifacts of the different stages of this type of project are created and deployed for you: data extraction, data preparation (aka feature engineering), filtering, prediction, and post-processing, in addition to other operational functionality including backfilling, throttling (for API limits), synchronization, storage, and reporting.

The Prediction Framework was built to be hosted on the Google Cloud Platform. It uses Cloud Functions for all the data processing (extraction, preparation, filtering, and post-prediction processing); Firestore, Pub/Sub, and Cloud Scheduler for the throttling system and to coordinate the different phases of the predictive process; Vertex AutoML to host your machine learning model; and BigQuery as the final storage of your predictions.

Prediction Framework Architecture

To start using the Prediction Framework, prepare a configuration file with some environment variables about the Google Cloud project to use, the data sources, the ML model that makes the predictions, and the scheduler for the throttling system. In addition, add the custom queries for the data extraction, preparation, filtering, and post-processing steps to the deploy files. Then, deployment is done automatically using a deployment script provided by the tool.

Once deployed, all the stages will be executed one after the other, storing the intermediate and final data in the BigQuery tables:

  • Extract: on a timely basis, this step queries the transactions from the data source corresponding to the run date (scheduler or backfill run date) and stores them in a new table in the local project's BigQuery (a minimal sketch follows this list).
  • Prepare: immediately after the extract of the transactions for one specific date is available, the data is picked up from the local BigQuery and processed according to the specs of the model. Once the data is processed, it is stored in a new table in the local project's BigQuery.
  • Filter: this step queries the data stored by the prepare step, filters the required data, and stores it in the local project's BigQuery (for example, only taking new customers' transactions into consideration; what counts as a "new customer" is up to the instantiation of the framework for the specific use case).
  • Predict: once the new customers are stored, this step reads them from BigQuery and calls the Vertex API to make the predictions. Once the results are ready, they are stored in BigQuery within the target project.
  • Post_process: a formula can be applied to the AutoML batch results to tune the values or to apply thresholds. Once the data is ready, it is stored in BigQuery within the target project.
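To make the Extract step above a little more tangible, here is a minimal hedged sketch of the kind of parameterized query such a stage might run (project, dataset, table, and column names are placeholders; the real framework generates these artifacts from your configuration):

# Copy one run date's transactions into a staging table in BigQuery.
import datetime
from google.cloud import bigquery

bq = bigquery.Client()

def extract(run_date: datetime.date,
            source_table: str = 'my-project.source.transactions',
            dest_table: str = 'my-project.staging.transactions_extract'):
    job_config = bigquery.QueryJobConfig(
        destination=bigquery.TableReference.from_string(dest_table),
        write_disposition='WRITE_TRUNCATE',
        query_parameters=[
            bigquery.ScalarQueryParameter('run_date', 'DATE', run_date),
        ],
    )
    sql = ('SELECT * FROM `{}` '
           'WHERE DATE(transaction_ts) = @run_date'.format(source_table))
    bq.query(sql, job_config=job_config).result()  # wait for the job to finish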

One of the powerful features of the Prediction Framework is that it allows backfilling directly from the BigQuery user interface, so if you need to reprocess a whole period of time, it can be done in just four clicks.

In summary: Prediction Framework simplifies the implementation of first-party data prediction projects, saving time and minimizing errors of manual deployments of recurrent architectures.

For additional information and to start experimenting, you can visit the Prediction Framework repository on GitHub.

Knative applies to become a CNCF incubating project

Image of the Knative logo

In 2018, the Knative project was founded and released by Google, and was subsequently developed in close partnership with IBM, Red Hat, VMware, and SAP. The project provides a serverless experience layer on Kubernetes, providing the building blocks you need to build and deploy modern, container-based serverless applications. Over the last three years, Knative has become the most widely-installed serverless layer on Kubernetes. More recently, Knative 1.0 was released, reaching an important milestone that was made possible thanks to the contributions and collaboration of over 600 developers in the community.

Google has worked closely with key maintainers and partners on the evolution of Knative, including conformance definition and stability ahead of the 1.0 milestone. To enable the next phase of community-driven innovation in Knative, today we have submitted Knative to the Cloud Native Computing Foundation (CNCF) for consideration as an incubating project, which begins the process to donate the Knative trademark, IP, and code.

As a leader in serverless computing, we're committed to the future of Knative and to offering Knative 1.0-conformant Cloud Run and Cloud Run for Anthos products. Finding a home in the CNCF secures Knative's long-term future and encourages continuing and open innovation. This donation recognizes the adoption of and investment in Knative from the community, and will encourage further multi-vendor innovation, broader education, and training.

At Google, we believe that using open source comes with a responsibility to contribute, sustain, and improve the projects that help drive innovation and make better software. We are excited to see how developers will continue to build and innovate in serverless using Knative.

By Alexandra Bush and Edd Wilder-James, Google Open Source

Four areas of open source contributions from Cloud Databases

Open Cloud enables you to develop software faster, innovate more easily, and scale more efficiently—while also reducing technology risk. Google has a long history of leadership in open source, and today, I want to look back at our activities around open source projects, for databases, over the past year.

Give developers the best tools to be efficient

Developers choose to build applications with managed database services on Google Cloud to benefit from velocity, scalability, security, and performance. To enable you to be most efficient and deliver your best possible work, we deliver tools and frameworks that work with your preferred development environments, no matter if you develop in the cloud or on premises. To make local testing, building and continuous integration easier for our cloud-native databases, we released emulators for Cloud Spanner, Firestore, and Cloud Bigtable so that you can test your code wherever you develop it - without the need to create or re-create cloud infrastructure with every test run.
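For example, here's a minimal sketch of pointing the Firestore client library at a locally running emulator (the host, port, and project ID are placeholders):

# Start the emulator first, e.g.:
#   gcloud emulators firestore start --host-port=localhost:8080
# then point the client at it via the standard environment variable.
import os
os.environ['FIRESTORE_EMULATOR_HOST'] = 'localhost:8080'

from google.cloud import firestore

db = firestore.Client(project='demo-project')  # no real project or credentials needed
db.collection('greetings').document('hello').set({'message': 'hi from the emulator'})
print(db.collection('greetings').document('hello').get().to_dict())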

Another area where we are helping developers is with instrumentation of Cloud SQL for easier debugging and performance tuning. With Cloud SQL Insights it is easier than ever to pinpoint underperforming SQL statements. That said, without additional instrumentation, it can be cumbersome to identify the source code or microservice that issued that SQL - let alone tying a SQL statement to a client session and its context. So we released Sqlcommenter as an open source library that will automatically add this instrumentation as SQL comments in queries that are generated by popular ORMs like Hibernate, Django, Sqlalchemy, and others (repo blog). We didn’t stop there, but merged Sqlcommenter with OpenTelemetry (blog) to add SQL insights from instrumented queries back to OpenTelemetry traces.

Lastly, we want to broaden access to our differentiated offerings, like Spanner. The recently announced Spanner PostgreSQL interface allows organizations to access Spanner’s industry-leading consistency and availability at scale using tools and skills from the popular PostgreSQL ecosystem. This new way of working with Spanner provides familiarity for developers and portability for administrators. (blog) Learn more in the documentation or sign up for the preview today.

Provide connectivity that is simple and secure

Connecting to APIs and databases from an application running in the cloud should be simple and secure. That’s why we recommend using IAM and Application Default Credentials when authenticating to other services. The Cloud SQL Proxy (repo) has been doing this and also setting up firewalls for you for a while. It works by running a local client either inside your VM or a GKE cluster. This year, we added libraries for Java (repo) and Python (repo) that can provide similar functionality without the overhead of running an extra client such as the proxy.
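As an illustration, here's a minimal sketch using the Python connector mentioned above (the instance connection name, user, and database are placeholders):

# Build a SQLAlchemy engine whose connections are created by the Cloud SQL
# Python connector, authenticating with IAM instead of a stored password.
import sqlalchemy
from google.cloud.sql.connector import Connector

connector = Connector()

def getconn():
    return connector.connect(
        'my-project:us-central1:my-instance',  # instance connection name
        'pg8000',                              # PostgreSQL driver
        user='app-user',
        db='app-db',
        enable_iam_auth=True,
    )

pool = sqlalchemy.create_engine('postgresql+pg8000://', creator=getconn)
with pool.connect() as conn:
    print(conn.execute(sqlalchemy.text('SELECT 1')).scalar())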

Cloud Spanner also offers an open source adapter for its new PostgreSQL interface (repo). This local proxy allows tools, starting with psql, to connect to a Spanner database using the PostgreSQL wire protocol.

Image 1: White pipes in datacenter

Manage cloud infrastructure with the tools of your choice

When it comes to provisioning, monitoring, and managing your cloud database services, flexibility and choice are important. We provide you with our cloud console, gcloud cli, and APIs as well as our own Deployment Manager. That said, you may prefer different ways to manage cloud infrastructure - whether through interactive tools or scripts or embedded into CI/CD pipelines that support GitOps or other controls, checks, and balances. Terraform is one of those open tools that is very popular - and we ensure that our cloud databases can be managed from it as documented in this blog about creating Spanner instances with Terraform.

If you manage the majority of your resources with Kubernetes either directly or through package managers like Helm, then our Kubernetes Config Connector (KCC) might be for you. In a nutshell, KCC exposes Google Cloud services such as Cloud SQL, Spanner, and others as Custom Resources in Kubernetes. This allows you to create and reconcile cloud resources outside of Kubernetes just like K8s native objects.

Once you are managing cloud infrastructure with CI/CD, the next step is to extend that same mechanism to manage objects within your databases such as tables, indexes, and views. To that extent we have released a Liquibase extension for Cloud Spanner.

Help you to move data with confidence

Cloud journeys often involve moving data either in a lift and shift process or sometimes replatforming to a different database. Whatever your journey, we want to simplify the process and give you the confidence that your migration is successful.

For enterprise users with Oracle databases, we have several open source projects. First, we have the Optimus Prime database assessment tool (repo) that queries your database and collects information about schemas and historic performance to be analyzed for migration complexity and consolidation potential. Our own professional services teams have been using this toolset to plan migrations to Bare Metal Solution for Oracle.

Some Oracle users are looking for opportunities to transform their workloads to fit with their bigger strategy of modernizing applications with Kubernetes. For this group we developed and open sourced the El Carro Kubernetes operator for Oracle. This not only automates database lifecycle tasks for systems running on Kubernetes, but also exposes declarative APIs for these operations.

If your application supports replatforming from Oracle to PostgreSQL, then we have a toolset for schema conversion along with dataflow pipelines that will read the output of a change data capture job and load it into a PostgreSQL database. What a great use-case for Datastream - our new serverless change data capture service.

Another case of heterogeneous database migration is to move MySQL or PostgreSQL databases to Cloud Spanner. HarbourBridge helps with the evaluation and data migration, and our latest contribution was adding support for DynamoDB as a source database. Part of every heterogeneous migration should be to validate that the source and target data are matching - we have released the Data Validation Toolkit for that use-case. DVT can connect to a number of source and target databases and compare the data on each side - giving you the confidence that your migration did not miss or change any records.

Conclusion

Whether you are migrating existing databases or you are building your next application in the cloud - we want to make your journey as comfortable and seamless as possible. Open source projects play a big role in meeting you where you are and providing you with the connectivity options, language support, and tools you want for management and migrations.

By Bjoern Rost, Product Manager, Google Cloud Databases

Migrating App Engine push queues to Cloud Tasks

Posted by Wesley Chun (@wescpy), Developer Advocate, Google Cloud

Banner image that shows the Cloud Task logo

Introduction

The previous Module 7 episode of Serverless Migration Station gave developers an idea of how App Engine push tasks work and how to implement their use in an existing App Engine ndb Flask app. In this Module 8 episode, we migrate this app from the App Engine Datastore (ndb) and Task Queue (taskqueue) APIs to Cloud NDB and Cloud Tasks. This makes your app more portable and provides a smoother transition from Python 2 to 3. The same principle applies to upgrading other legacy App Engine apps from Java 8 to 11, PHP 5 to 7, and up to Go 1.12 or newer.

Over the years, many of the original App Engine services such as Datastore, Memcache, and Blobstore, have matured to become their own standalone products, for example, Cloud Datastore, Cloud Memorystore, and Cloud Storage, respectively. The same is true for App Engine Task Queues, whose functionality has been split out to Cloud Tasks (push queues) and Cloud Pub/Sub (pull queues), now accessible to developers and applications outside of App Engine.

Migrating App Engine push queues to Cloud Tasks video

Migrating to Cloud NDB and Cloud Tasks

The key updates being made to the application:

  1. Add support for Google Cloud client libraries in the app's configuration
  2. Switch from App Engine APIs to their standalone Cloud equivalents
  3. Make required library adjustments, e.g., add use of Cloud NDB context manager
  4. Complete additional setup for Cloud Tasks
  5. Make minor updates to the task handler itself

The bulk of the updates are in #3 and #4 above, and those are reflected in the following "diff"s for the main application file:

Screenshot shows primary differences in code when switching to Cloud NDB & Cloud Tasks

Primary differences switching to Cloud NDB & Cloud Tasks

With these changes implemented, the web app works identically to that of the Module 7 sample, but both the database and task queue functionality have been completely swapped to using the standalone/unbundled Cloud NDB and Cloud Tasks libraries… congratulations!
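For orientation, here is a hedged sketch of what enqueuing a push task with the standalone Cloud Tasks client can look like (the project, region, queue, handler path, and payload are placeholders, not the sample's actual values):

# Create an App Engine push task that POSTs to the app's task handler.
from google.cloud import tasks_v2

client = tasks_v2.CloudTasksClient()
parent = client.queue_path('my-project', 'us-central1', 'default')

task = {
    'app_engine_http_request': {
        'http_method': tasks_v2.HttpMethod.POST,
        'relative_uri': '/trim',          # route of the task handler in the app
        'body': b'oldest=1234567890',     # payload the handler will receive
    }
}
client.create_task(parent=parent, task=task)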

Next steps

To do this exercise yourself, check out our corresponding codelab which leads you step-by-step through the process. You can use this in addition to the video, which can provide guidance. You can also review the push tasks migration guide for more information. Arriving at a fully-functioning Module 8 app featuring Cloud Tasks sets the stage for a larger migration ahead in Module 9. We've accomplished the most important step here, that is, getting off of the original App Engine legacy bundled services/APIs. The Module 9 migration from Python 2 to 3 and Cloud NDB to Cloud Firestore, plus the upgrade to the latest version of the Cloud Tasks client library are all fairly optional, but they represent a good opportunity to perform a medium-sized migration.

All migration modules, their videos (when available), codelab tutorials, and source code, can be found in the migration repo. While the content focuses initially on Python users, we will cover other legacy runtimes soon so stay tuned.

How to use App Engine push queues in Flask apps

Posted by Wesley Chun (@wescpy), Developer Advocate, Google Cloud

Banner image that shows the Cloud Task logo

Introduction

Since its original launch in 2008, many of the core Google App Engine services such as Datastore, Memcache, and Blobstore, have matured to become their own standalone products: for example, Cloud Datastore, Cloud Memorystore, and Cloud Storage, respectively. The same is true for App Engine Task Queues with Cloud Tasks. Today's Module 7 episode of Serverless Migration Station reviews how App Engine push tasks work, by adding this feature to an existing App Engine ndb Flask app.

App Engine push queues in Flask apps video

That app is where we left off at the end of Module 1, migrating its web framework from App Engine webapp2 to Flask. The app registers web page visits, creating a Datastore Entity for each. After a new record is created, the ten most recent visits are displayed to the end-user. If the app only shows the latest visits, there is no reason to keep older visits, so the Module 7 exercise adds a push task that deletes all visits older than the oldest one shown. Tasks execute asynchronously outside the normal application flow.
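For reference, enqueuing such a push task with the legacy taskqueue API looks roughly like this (the handler URL and parameter name are placeholders, not the sample's exact code):

# Python 2 App Engine runtime: ask a push queue to call /trim, which will
# delete visits older than the oldest timestamp currently displayed.
from google.appengine.api import taskqueue

def enqueue_trim(oldest_timestamp):
    taskqueue.add(url='/trim', params={'oldest': oldest_timestamp})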

Key updates

The following are the changes being made to the application:

  1. Add use of App Engine Task Queues (taskqueue) API
  2. Determine oldest visit displayed, logging and saving that timestamp
  3. Create task to delete old visits
  4. Update web page template to display timestamp threshold
  5. Log how many and which visits (by Entity ID) are deleted

Except for #4 which occurs in the HTML template file, these updates are reflected in the "diff"s for the main application file:

Screenshot of App Engine push tasks application source code differences

Adding App Engine push tasks application source code differences

With these changes implemented, the web app now shows the end-user which visits will be deleted by the new push task:

Screenshot of VisitMe example showing last ten site visits. A red circle around older visits being deleted

Sample application output

Next steps

To do this exercise yourself, check out our corresponding codelab which leads you step-by-step through the process. You can use this in addition to the video, which can provide guidance. You can also review the push queue documentation for more information. Arriving at a fully-functioning Module 7 app featuring App Engine push tasks sets the stage for migrating it to Cloud Tasks (and Cloud NDB) ahead in Module 8.

All migration modules, their videos (when available), codelab tutorials, and source code, can be found in the migration repo. While the content focuses initially on Python users, we will cover other legacy runtimes soon so stay tuned.

Exploring serverless with a nebulous app: Deploy the same app to App Engine, Cloud Functions, or Cloud Run

Posted by Wesley Chun (@wescpy), Developer Advocate, Google Cloud

Banner image that shows the App Engine, Cloud Functions, and Cloud Run logos

Introduction

Google Cloud offers three distinct ways of running your code or application in a serverless way, each serving different use cases. Google App Engine, our first Cloud product, was created to give users the ability to deploy source-based web applications or mobile backends directly to the cloud without the need of thinking about servers or scaling. Cloud Functions came later for scenarios where you may not have an entire app, great for one-off utility functions or event-driven microservices. Cloud Run is our latest fully-managed serverless product that gives developers the flexibility of containers along with the convenience of serverless.

As all are serverless compute platforms, users recognize they share some similarities along with clear differences, and often, they ask:

  1. How different is deploying code to App Engine, Cloud Functions, or Cloud Run?
  2. Is it challenging to move from one to another if I feel the other may better fit my needs?

We're going to answer these questions today by sharing a unique application with you, one that can be deployed to all three platforms without changing any application code. All of the necessary changes are done in configuration.

More motivation

Another challenge for developers can be trying to learn how to use another Cloud product, such as this request, paraphrased from a user:

  1. I have a Google App Engine app
  2. I want to call the Cloud Translation API from that app

Sounds simple enough. This user went straight to the App Engine and Translation API documentation, where they were able to get started with the App Engine Quickstart to get their app up and going, then found the Translation API setup page and started looking into the permissions needed to access the API. However, they got stuck at the Identity and Access Management (IAM) page on roles, overwhelmed by all the options with no clear path forward. In light of this, let's add a third question to the pair outlined earlier:

  3. How do you access Cloud APIs from a Cloud serverless platform?

Without knowing what that user was going to build, let's just implement a barebones translator, an "MVP" (minimum viable product) version of a simple "My Google Translate" Python Flask app using the Translation API, one of Google Cloud's AI/ML "building block" APIs. These APIs are backed by pre-trained machine learning models, giving developers with little or no background in AI/ML the ability to leverage the benefits of machine learning with only API calls.

The application

The app consists of a simple web page prompting the user for a phrase to translate from English to Spanish. The translated results along with the original phrase are presented along with an empty form for a follow-up translation if desired. While the majority of this app's deployments are in Python 3, there are still many users working on upgrading from Python 2, so some of those deployments are available to help with migration planning. Taking this into account, this app can be deployed (at least) eight different ways:
  1. Local (or hosted) Flask server (Python 2)
  2. Local (or hosted) Flask server (Python 3)
  3. Google App Engine (Python 2)
  4. Google App Engine (Python 3)
  5. Google Cloud Functions (Python 3)
  6. Google Cloud Run (Python 2 via Docker)
  7. Google Cloud Run (Python 3 via Docker)
  8. Google Cloud Run (Python 3 via Cloud Buildpacks)
The following is a brief glance at the files and which configurations they're for:

Screenshot of Nebulous serverless sample app files

Nebulous serverless sample app files

Diving straight into the application, let's look at its primary function, translate():
@app.route('/', methods=['GET', 'POST'])
def translate(gcf_request=None):
    local_request = gcf_request if gcf_request else request
    text = translated = None
    if local_request.method == 'POST':
        text = local_request.form['text'].strip()
        if text:
            data = {
                'contents': [text],
                'parent': PARENT,
                'target_language_code': TARGET[0],
            }
            rsp = TRANSLATE.translate_text(request=data)
            translated = rsp.translations[0].translated_text
    context = {
        'orig': {'text': text, 'lc': SOURCE},
        'trans': {'text': translated, 'lc': TARGET},
    }
    return render_template('index.html', **context)

Core component (translate()) of sample application


Some key app components:
  • Upon an initial request (GET), an HTML template is rendered featuring a simple form with an empty text field for the text to translate.
  • The form POSTs back to the app, and in this case, grabs the text to translate, sends the request to the Translation API, receives and displays the results to the user along with an empty form for another translation.
  • There is a special "ifdef" for Cloud Functions near the top to receive a request object: unlike with App Engine or Cloud Run, no web framework is in use, so Cloud Functions provides the request object itself (see the sketch below).
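A hedged sketch of that kind of entry point (the exported function name is an assumption): when deployed to Cloud Functions there is no routing layer, so the framework-provided request object is simply forwarded to translate():

# Hypothetical Cloud Functions entry point wrapping the shared translate() above.
def translate_gcf(request):
    return translate(gcf_request=request)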
The app runs identically whether running locally or deployed to App Engine, Cloud Functions, or Cloud Run. The magic is all in the configuration. The requirements.txt file is used in all configurations, whether to install third-party packages locally or to direct the Cloud Build system to automatically install those libraries during deployment. Beyond requirements.txt, things start to differ:
  1. App Engine has an app.yaml file and possibly an appengine_config.py file.
  2. Cloud Run has either a Dockerfile (Docker) or Procfile (Cloud Buildpacks), and possibly a service.yaml file.
  3. Cloud Functions, the "simplest" of the three, has no configuration outside of a package requirements file (requirements.txt, package.json, etc.).
The following is what you should expect to see after completing one translation request:

Screenshot of My Google Translate (1990s Edition) in Incognito Window

"My Google Translate" MVP app (Cloud Run edition)

Next steps

The sample app can be run locally or on your own hosting server, but now you also know how to deploy it to each of Cloud's serverless platforms and what those subtle differences are. You also have a sense of what it takes to switch from one platform to another. For example, if your organization is moving to implement containerization into your software development workflow, you can migrate your existing App Engine apps to Cloud Run using Docker, or using Cloud Buildpacks if you don't want to think about containers or Dockerfiles. Lastly, you now know how to access Cloud APIs from these platforms.

The user described earlier was overwhelmed at all the IAM roles and options available because this type of detail is required to provide the most security options for accessing Cloud services, but when prototyping, the fastest on-ramp is to use the default service account that comes with Cloud serverless platforms. These help you get that prototype working while allowing you to learn more about IAM roles and required permissions. Once you've progressed far enough to consider deploying to production, you can then follow the best practice of "least privileges" and create your own (user-managed) service accounts with the minimal permissions required so your application functions properly.

To dive in, the code and codelabs (free, self-paced, hands-on tutorials) for each deployment are available in its open source repository. An active Google Cloud billing account is required to deploy this application to each of our serverless platforms, even though you can do all of them without incurring charges. More information can be found in the "Cost" section of the repo's README. We hope this sample app teaches you more about the similarities and differences between our platforms, shows you how you can "shift" applications comfortably between them, and provides a light introduction to another Cloud API. Also check out my colleague's post featuring similar content for Node.js.

An easier way to move your App Engine apps to Cloud Run

Posted by Wesley Chun (@wescpy), Developer Advocate, Google Cloud

Blue header

An easier yet still optional migration

In the previous episode of the Serverless Migration Station video series, developers learned how to containerize their App Engine code for Cloud Run using Docker. While Docker has gained popularity over the past decade, not everyone has containers integrated into their daily development workflow; some prefer "containerless" solutions yet know that containers can be beneficial. Well, today's video is just for you, showing how you can still get your apps onto Cloud Run even if you don't have much experience with Docker, containers, or Dockerfiles.

App Engine isn't going away, as Google has expressed long-term support for legacy runtimes on the platform, so those who prefer source-based deployments can stay where they are; this is an optional migration. Moving to Cloud Run is for those who want to explicitly adopt containerization.

Migrating to Cloud Run with Cloud Buildpacks video

So how can apps be containerized without Docker? The answer is buildpacks, an open-source technology that makes it fast and easy for you to create secure, production-ready container images from source code, without a Dockerfile. Google Cloud Buildpacks adheres to the buildpacks open specification and allows users to create images that run on all GCP container platforms: Cloud Run (fully-managed), Anthos, and Google Kubernetes Engine (GKE). If you want to containerize your apps while staying focused on building your solutions and not how to create or maintain Dockerfiles, Cloud Buildpacks is for you.

In the last video, we showed developers how to containerize a Python 2 Cloud NDB app as well as a Python 3 Cloud Datastore app. We targeted those specific implementations because Python 2 users are more likely to be using App Engine's ndb or Cloud NDB to connect with their app's Datastore while Python 3 developers are most likely using Cloud Datastore. Cloud Buildpacks do not support Python 2, so today we're targeting a slightly different audience: Python 2 developers who have migrated from App Engine ndb to Cloud NDB and who have ported their apps to modern Python 3 but now want to containerize them for Cloud Run.

Developers familiar with App Engine know that an HTTP server is provided by default and started automatically; however, if special launch instructions are needed, users can add an entrypoint directive in their app.yaml files, as illustrated below. When those App Engine apps are containerized for Cloud Run, developers must bundle their own server and provide startup instructions, the purpose of the ENTRYPOINT directive in the Dockerfile, also shown below.

Starting your web server with App Engine (app.yaml) and Cloud Run with Docker (Dockerfile) or Buildpacks (Procfile)

In this migration, there is no Dockerfile. While Cloud Buildpacks does the heavy-lifting, determining how to package your app into a container, it still needs to be told how to start your service. This is exactly what a Procfile is for, represented by the last file in the image above. As specified, your web server will be launched in the same way as in app.yaml and the Dockerfile above; these config files are deliberately juxtaposed to expose their similarities.

Other than this swapping of configuration files and the expected lack of a .dockerignore file, the Python 3 Cloud NDB app containerized for Cloud Run is nearly identical to the Python 3 Cloud NDB App Engine app we started with. Cloud Run's build-and-deploy command (gcloud run deploy) will use a Dockerfile if present but otherwise selects Cloud Buildpacks to build and deploy the container image. The user experience is the same, only without the time and challenges required to maintain and debug a Dockerfile.

Get started now

If you're considering containerizing your App Engine apps without having to know much about containers or Docker, we recommend you try this migration on a sample app like ours before considering it for yours. A corresponding codelab leading you step-by-step through this exercise is provided in addition to the video which you can use for guidance.

All migration modules, their videos (when available), codelab tutorials, and source code, can be found in the migration repo. While our content initially focuses on Python users, we hope to one day also cover other legacy runtimes so stay tuned. Containerization may seem foreboding, but the goal is for Cloud Buildpacks and migration resources like this to aid you in your quest to modernize your serverless apps!