
How to use App Engine Memcache in Flask apps (Module 12)

Posted by Wesley Chun

Background

In our ongoing Serverless Migration Station series aimed at helping developers modernize their serverless applications, one of the key objectives for Google App Engine developers is to upgrade to the latest language runtimes, such as from Python 2 to 3 or Java 8 to 17. Another objective is to help developers move away from App Engine legacy APIs (now called "bundled services") to standalone Cloud services that offer equivalent functionality. Once this has been accomplished, apps are much more portable and far more flexible in where and how they can run.

In today's Module 12 video, we begin this journey by implementing App Engine's Memcache bundled service, setting us up for our next move to a more complete in-cloud caching service, Cloud Memorystore. Most apps rely on some kind of database, and in many situations they can benefit from a caching layer to reduce the number of queries and improve response latency. In the video, we add use of Memcache to a Python 2 app that has already migrated web frameworks from webapp2 to Flask, providing greater portability and execution options. More importantly, it paves the way for an eventual 3.x upgrade because the Python 3 App Engine runtime does not support webapp2. We'll cover both the 3.x and Cloud Memorystore ports next in Module 13.

Got an older app needing an update? We can help with that.

Adding use of Memcache

The sample application registers individual web page "visits," storing visitor information such as the IP address and user agent. In the original app, these values are stored immediately, and then the most recent visits are queried to display in the browser. If the same user continuously refreshes their browser, each refresh constitutes a new visit. To discourage this type of abuse, we cache the same user's visit for an hour, returning the same cached list of most recent visits unless a new visitor arrives or an hour has elapsed since their initial visit.

Below is pseudocode representing the core part of the app that saves new visits and queries for the most recent visits. In the "before" version, you can see how each visit is registered. After the update, the app first attempts to fetch these visits from the cache. If cached results are available and "fresh" (within the hour), they're used immediately; if the cache is empty or a new visitor arrives, the current visit is stored as before, and this latest collection of visits is cached for an hour. The bolded lines represent the new code that manages the cached data.

Adding App Engine Memcache usage to sample app
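For readers who want to see the shape of that logic in code rather than a screenshot, here is a minimal sketch, not the exact Module 12 code: the Visit model, the memcache key name, and the one-hour lifetime are illustrative assumptions.

from google.appengine.api import memcache
from google.appengine.ext import ndb

HOUR = 3600                        # cache lifetime in seconds (assumed)
CACHE_KEY = 'most_recent_visits'   # hypothetical memcache key

class Visit(ndb.Model):
    'visit entity: visitor string (IP address + user agent) and timestamp'
    visitor = ndb.StringProperty()
    timestamp = ndb.DateTimeProperty(auto_now_add=True)

def get_visits(visitor, limit=10):
    'serve recent visits from cache; refresh only for new visitors or stale data'
    visits = memcache.get(CACHE_KEY)                 # try the cache first
    if not visits or visits[0].visitor != visitor:   # empty, expired, or new visitor
        Visit(visitor=visitor).put()                 # register the new visit
        visits = Visit.query().order(-Visit.timestamp).fetch(limit)
        memcache.set(CACHE_KEY, visits, HOUR)        # cache the latest list for an hour
    return visits

A repeat visitor refreshing within the hour simply gets the cached list back without touching Datastore, which is the behavior described above.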

Wrap-up

Today's "migration" began with the Module 1 sample app. We added a Memcache-based caching layer and arrived at the finish line with the Module 12 sample app. To practice this on your own, follow the codelab doing it by-hand while following the video. The Module 12 app will then be ready to upgrade to Cloud Memorystore should you choose to do so.

In Fall 2021, the App Engine team extended support of many of the bundled services to next-generation runtimes, meaning you are no longer required to migrate to Cloud Memorystore when porting your app to Python 3. You can continue using Memcache in your Python 3 app so long as you retrofit the code to access bundled services from next-generation runtimes.
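As a rough illustration of that retrofit (not covered in this video), it amounts to declaring app_engine_apis: true in app.yaml and wrapping your Flask app's WSGI object; the sketch below assumes the appengine-python-standard package is installed in the Python 3 runtime.

from flask import Flask
from google.appengine.api import memcache, wrap_wsgi_app

app = Flask(__name__)
app.wsgi_app = wrap_wsgi_app(app.wsgi_app)   # enables bundled-services access

@app.route('/')
def root():
    # Memcache calls now work in Python 3 much as they did in Python 2
    count = memcache.incr('hits', initial_value=0)
    return 'Visit #{}'.format(count)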

If you do want to move to Cloud Memorystore, stay tuned for the Module 13 video or try its codelab to get a sneak peek. All Serverless Migration Station content (codelabs, videos, source code [when available]) can be accessed at its open source repo. While our content initially focuses on Python users, we hope to one day cover other language runtimes, so stay tuned. For additional video content, check out our broader Serverless Expeditions series.

Cloud NDB to Cloud Datastore migration

Posted by Wesley Chun (@wescpy), Developer Advocate, Google Cloud

An optional migration

Serverless Migration Station is a mini-series from Serverless Expeditions focused on helping users on one of Google Cloud's serverless compute platforms modernize their applications. The video today demonstrates how to migrate a sample app from Cloud NDB (or App Engine ndb) to Cloud Datastore. While Cloud NDB suffices as a current solution for today's App Engine developers, this optional migration is for those who want to consolidate their app code to using a single client library to talk to Datastore.

Cloud Datastore started as Google App Engine's original database but matured into its own standalone product in 2013. At that time, native client libraries were created for the new product so that non-App Engine apps as well as second-generation App Engine apps could access the service. Long-time developers have been using the original App Engine service APIs to access Datastore; for Python, this is App Engine ndb. While the legacy ndb service is still available, its limitations and lack of availability in Python 3 are why we recommended in the preceding video in this series that users switch to standalone libraries like Cloud NDB.

While Cloud NDB lets users break free from proprietary App Engine services and upgrade their applications to Python 3, it also gives non-App Engine apps access to Datastore. However, Cloud NDB's primary role is as a transition tool for Python 2 App Engine developers. Non-App Engine developers and new Python 3 App Engine developers are directed to the native Cloud Datastore client library rather than Cloud NDB.

As a result, those with a collection of Python 2 or Python 3 App Engine apps as well as non-App Engine apps may be using completely different libraries (ndb, Cloud NDB, Cloud Datastore) to connect to the same Datastore product. Following the best practices of code reuse, developers should consider consolidating to a single client library to access Datastore. Shared libraries provide stability and robustness with code that's constantly tested, debugged, and battle-proven. Module 2 showed users how to migrate from App Engine ndb to Cloud NDB, and today's Module 3 content focuses on migrating from Cloud NDB to Cloud Datastore. Users can also go straight from ndb directly to Cloud Datastore, skipping Cloud NDB entirely.

Migration sample and next steps

Cloud NDB follows an object model identical to App Engine ndb and is deliberately meant to be familiar to long-time Python App Engine developers while use of the Cloud Datastore client library is more like accessing a JSON document store. Their querying styles are also similar. You can compare and contrast them in the "diffs" screenshot below and in the video.


The "diffs" between the Cloud NDB and Cloud Datastore versions of the sample app

All that said, this migration is optional and only useful if you wish to consolidate to a single client library. If your Python App Engine apps are stable with ndb or Cloud NDB, and you don't have any code using Cloud Datastore, there's no real reason to move unless Cloud Datastore has a compelling feature inaccessible from your current client library. If you are considering this migration and want to try it on a sample app before attempting it on your own, see the corresponding codelab and use the video for guidance.

It begins with the Module 2 code completed in the previous codelab/video; use your solution or ours as the "START". Both Python 2 (Module 2a folder) and Python 3 (Module 2b folder) versions are available. The goal is to arrive at the "FINISH" with an identical, working app but using a completely different Datastore client library. Our Python 2 FINISH can be found in the Module 3a folder, while Python 3's FINISH is in the Module 3b folder. If something goes wrong during your migration, you can always roll back to START, or compare your solution with our FINISH. We will continue our Datastore discussion in Module 6, as Cloud Firestore represents the next generation of the Datastore service.

All of these learning modules, corresponding videos (when published), codelab tutorials, START and FINISH code, etc., can be found in the migration repo. We hope to one day cover other legacy runtimes like Java 8 as well, so stay tuned. Up next in Module 4, we'll take a different turn and showcase a product crossover, showing App Engine developers how to containerize their apps and migrate them to Cloud Run, our scalable container-hosting service in the cloud. If you can't wait for Module 4 or 6, try out their respective codelabs or access the code samples in the table at the repo above. Migrations aren't always easy, and we hope content like this helps you modernize your apps.

Migrating from App Engine ndb to Cloud NDB

Posted by Wesley Chun (@wescpy), Developer Advocate, Google Cloud

Migrating to standalone services

Today we're introducing the first video showing long-time App Engine developers how to migrate away from the App Engine ndb client library used to access Datastore. While the legacy App Engine ndb service is still available for Datastore access, new features and continuing innovation are going into Cloud Datastore, so we recommend Python 2 users switch to standalone product client libraries like Cloud NDB.

This video and its corresponding codelab show developers how to migrate the sample app introduced in a previous video and give them hands-on experience performing the migration on a simple app before tackling their own applications. In the immediately preceding "migration module" video, we transitioned that app from App Engine's original webapp2 framework to Flask, a popular framework in the Python community. Today's Module 2 content picks up where Module 1 leaves off, migrating Datastore access from App Engine ndb to Cloud NDB.

Migrating to Cloud NDB opens the doors to other modernizations, such as moving to other standalone services that succeed the original App Engine legacy services, (finally) porting to Python 3, breaking up large apps into microservices for Cloud Functions, or containerizing App Engine apps for Cloud Run.

Moving to Cloud NDB

App Engine's Datastore matured into its own standalone product, Cloud Datastore, in 2013. Cloud NDB is the replacement client library designed to let App Engine ndb users preserve much of their existing code and user experience. Cloud NDB is available for both Python 2 and 3, meaning it can help expedite a Python 3 upgrade to the second-generation App Engine platform. Furthermore, Cloud NDB gives non-App Engine apps access to Cloud Datastore.

As you can see from the screenshot below, one key difference between the two libraries is that Cloud NDB provides a context manager, meaning you use the Python with statement for Datastore access in much the same way you would for opening files. However, aside from moving code inside with blocks, no other changes are required of the original App Engine ndb app code that accesses Datastore. Of course, your mileage may vary (YMMV) depending on the complexity of your code, but the team's goal is to provide as seamless a transition as possible and to preserve "ndb"-style access.


The "diffs" between the App Engine ndb and Cloud NDB versions of the sample app

Next steps

To try this migration yourself, hit up the corresponding codelab and use the video for guidance. This Module 2 migration sample "STARTs" with the Module 1 code completed in the previous codelab (and video). Users can use their own solution or grab ours in the Module 1 repo folder. The goal is to arrive at the end with an identical, working app that operates just like the Module 1 app but uses a completely different Datastore client library. You can find this "FINISH" code sample in the Module 2a folder. If something goes wrong during your migration, you can always roll back to START, or compare your solution with our FINISH. Bonus content on migrating to Python 3 App Engine can also be found in the video and codelab, resulting in a second FINISH, the Module 2b folder.

All of these learning modules, corresponding videos (when published), codelab tutorials, START and FINISH code, etc., can be found in the migration repo. We hope to one day cover other legacy runtimes like Java 8 as well, so stay tuned! Developers should also check out the official Cloud NDB migration guide, which provides more migration details, including key differences between the two client libraries.

Ahead in Module 3, we will continue the Cloud NDB discussion and present our first optional migration, helping users move from Cloud NDB to the native Cloud Datastore client library. If you can't wait, try out its codelab found in the table at the repo above. Migrations aren't always easy; we hope this content helps you modernize your apps and shows we're focused on helping existing users as much as new ones.

Google Cloud Datastore simplifies pricing, cuts cost dramatically for most use cases

Google Cloud Datastore is a highly scalable NoSQL database for web and mobile applications. Today we’re announcing much simpler pricing, and as a result, many users will see significant cost savings for this database service.

Along with the simpler pricing model, there’ll be a more transparent method of calculating stored data in Cloud Datastore. The new pricing and storage calculations will go into effect on July 1st, 2016. For the majority of our customers, this will effectively result in a price reduction.


New pricing structure

We’ve listened to your feedback and will be simplifying our pricing. The new pricing will go into effect on July 1st, 2016, regardless of how you access Datastore. Not only is it simpler, but the majority of our customers will also see significant cost savings. This change removes the disincentive that our current pricing imposes on using the powerful indexing features, freeing developers from over-optimizing index usage.

We’re simplifying pricing for entity writes, reads and deletes by moving from internal operation counting to a more direct entity counting model as follows:

Writes: In the current pricing, writing a single entity translated into one or more write operations, depending on the number and type of indexes. In the new pricing, writing a single entity costs just 1 write regardless of indexes, at $0.18 per 100,000. This makes writes more affordable for people using multiple indexes: you can use as many indexes as your application needs without increasing write costs. Since the vast majority of Entity writes previously translated into more than 4 write operations per entity on average, this represents significant cost savings for developers (a rough cost-estimate sketch follows this list).

Reads: In the current pricing, some queries charge a read operation per entity retrieved plus an extra read operation for the query itself. In the new pricing, you'll only be charged per entity retrieved. Small ops (projections and keys-only queries) stay the same, charging only a single read for the entire query. The cost per Entity read stays the same as the old per-operation cost of $0.06 per 100,000. This means most developers will see reduced costs when reading entities.

Deletes: In the current pricing model, deletes translated into 2 or more writes depending on the number and type of indexes. In the new pricing, you'll only be charged a delete operation per entity deleted. Deletes are charged at the rate of $0.02 per 100,000. This means deletes are now discounted by at least 66% and often by more.

Free Quota: The free quota limit for Writes is now 20,000 requests per day, since we no longer charge multiple write operations per entity written. Deletes now fall under their own free tier of 20,000 requests per day. Overall, this means more free requests per day for the majority of applications.

Network: Standard Network costs will apply.
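To make the per-entity model concrete, here is a rough, back-of-the-envelope estimate using the rates and free tiers quoted above; the traffic numbers are hypothetical, and any free tier for reads (not quoted in this post) is ignored.

# new per-entity rates, in dollars per 100,000 entities
WRITE_RATE, READ_RATE, DELETE_RATE = 0.18, 0.06, 0.02
FREE_WRITES = FREE_DELETES = 20000           # per-day free quotas quoted above

def daily_cost(writes, reads, deletes):
    'estimate one day of entity-operation charges (USD) under the new pricing'
    billable_writes = max(0, writes - FREE_WRITES)
    billable_deletes = max(0, deletes - FREE_DELETES)
    return (billable_writes * WRITE_RATE +
            reads * READ_RATE +              # read free tier ignored here
            billable_deletes * DELETE_RATE) / 100000.0

# e.g. 120,000 entity writes, 500,000 entity reads, 30,000 deletes in one day
print(round(daily_cost(120000, 500000, 30000), 2))   # -> 0.48

Under the old model, the same day's writes alone could have cost several times more if each entity write translated into 4 or more write operations.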


New storage usage calculations

To coincide with our pricing changes on July 1st, Cloud Datastore will also use a new method for calculating bytes stored. This method will be transparent to developers so you can accurately calculate storage costs directly from the property values and indexes of the Entity. This new method will also result in decreased storage costs for the majority of customers.

Our current method relies heavily on internal implementation details that can change, so we’re moving to a fixed system calculated directly from the user data submitted. As the new calculation method gets finalized, we’ll post the specific details so developers can use it to estimate storage costs.

Building what’s next

With simpler pricing for Cloud Datastore, you can spend less time micro-managing indexes and focus more on building what’s next.

Learn more about Google Cloud Datastore or check out our getting started guide.

- Posted by Dan McGrath, Product Manager, Google Cloud Platform