Tag Archives: google cloud

Cloud Covered: What was new in March on Google Cloud

Spring brings new growth and possibilities, and with COVID-19 vaccinations underway, this spring feels even more hopeful than usual. In the spirit of spring, the most popular Google Cloud blog posts from last month focused on the new: features, resources, innovations and awards. Here’s our recap.

Our popular cheat sheet helps you learn Google Cloud technologies in four words or less.
Back by popular demand, our developer’s cheat sheet summarizes Google Cloud products, each in four words or less, for a quick, handy reference. You can print the cheat sheet and post it by your desk, or make it your desktop wallpaper. If you see a product that piques your interest, learn more about it on our GitHub page. Or check out a couple of other resources mentioned in the same blog: GCP Sketchnotes describe each Google Cloud product in a format that combines art and technology. The video series Cloud Bytes summarizes individual Google Cloud products in less than two minutes. 

Google Workspace showed off new features.
A mantra that captures the last year of work might be “flexibility in the face of change.” Last month we announced new features in Google Workspace that will help in all the ways work gets done in an ever-changing world. Many features will contribute to what we call collaboration equity, or the ability to contribute equally, regardless of location, role, experience level, language or device preference. We also launched a new offering, Google Workspace Frontline, to open up safe and secure communication and collaboration channels between frontline workers and corporate teams. Finally, we shared that Google Assistant can now be used with Google Workspace for tasks like joining a meeting or sending a message. 

Users can now include carbon emissions in their app’s location choice.
We recently set a new sustainability goal: running our business on carbon-free energy 24/7, everywhere, by 2030. Decarbonizing our data center electricity supply is the critical next step in realizing that carbon-free future and providing Google Cloud customers with the cleanest cloud in the industry. Last month, we were excited to share news about our new Carbon Free Energy Percentage (CFE%), which will help our customers select Google Cloud regions based on the carbon-free energy supplying them. This way, our customers can incorporate carbon emissions into decisions on where to locate their services across our infrastructure. 

Undersea cables connect the world.
Speaking of infrastructure, Google works hard to build technologies that connect people, geographies and businesses. Last month, we announced our new investment in Echo, a subsea cable that will run from California to Singapore, with a stopover in Guam and plans to also land in Indonesia. Additional landings are possible in the future. Echo will be the first-ever cable to connect the U.S. to Singapore with direct fiber pairs over an express route. It will help users connect even faster to applications running in Google Cloud regions in the area, home to some of the world’s most vibrant financial and technology centers.

Google Cloud rode the Forrester Wave of recognition.
Google was named a Leader in The Forrester Wave™: Cloud Data Warehouse, Q1 2021 report. Using feedback from our customers as one of their inputs, Forrester measured and scored BigQuery, our cloud data warehouse for analyzing lots of data quickly, and gave it a 5 out of 5 across 19 different criteria. Forrester said, “Customers like Google’s frequency of data warehouse releases, business value, future proof architecture, high-end scale, geospatial capabilities, strong AI/ML capabilities, good security capabilities, and broad analytical use cases.”  

That’s a wrap for March! Stay tuned to the Google Cloud blog for all things cloud.

How we’re working with governments on climate goals

When it comes to sustainability, we have a responsibility to work together — across governments, businesses and communities — and take action now. As the former Federal Chief Sustainability Officer for the U.S. government, I know firsthand the positive impact of technology companies and governments working together to address climate change. 

I’m thrilled to see the 24/7 carbon-free energy commitment for Federal buildings in President Biden’s proposed infrastructure plan, and am heartened by localized efforts, like Des Moines City Council’s similar commitment to a 24/7 carbon-free goal. At Google, we know the hard work it takes to get there. We were the first major company to become carbon neutral in 2007, and in 2017 we became the first company of our size to match 100% of our annual electricity use with renewable energy, something we’ve achieved three years in a row. We also recently set our most ambitious goal yet: operating our data centers and campuses on carbon-free energy 24/7 by 2030. 

Meeting these ambitious goals can seem daunting — especially as the urgency to act intensifies. Still, I’m confident that together we can make progress. That optimism is informed by areas where we’ve already seen significant positive impact through technology. 


Creating the cleanest cloud 

We have the cleanest cloud in the industry, serving governments at the federal, state and local level — a feat I’m proud of because of the impact it can have, not only on our customers here in the U.S. but around the world. In fact, International Data Corporation estimates cloud computing could save a billion metric tonnes of CO2 emissions by 2024.

We spent years making our cloud regions and data centers more efficient to reduce our carbon footprint and our customers’ carbon footprint. Today, Google data centers are twice as energy efficient as typical enterprise data centers and deliver around seven times more computing power than five years ago using the same amount of electrical power. As part of this journey, we used machine learning to reduce energy consumption for data center cooling by 30%. Now, Google Cloud and DeepMind are developing an Industrial Adaptive Controls platform to deliver Machine Learning-enabled energy savings on a global scale by autonomously controlling Heating, Ventilation, and Air Conditioning (HVAC) systems in commercial buildings, data centers, and industrial facilities.

We recently became the first cloud service to share data that helps customers to fully decarbonize apps and infrastructure, through insights on how often each Google Cloud region was supplied by carbon-free energy on an hourly basis. And already, Google Cloud helps government agencies across the U.S. lower IT costs and reduce their carbon footprints — from the Navy and the Department of Energy, to states and cities like Rhode Island, West Virginia and Pittsburgh.


Working with local governments 

Half of Earth’s population lives in cities, which is also where 70% of the world’s emissions originate. Local governments need access to technology that will help them build and act on climate action plans.

To help, in 2018, we partnered with the Global Covenant of Mayors for Climate and Energy to launch the Environmental Insights Explorer (EIE). EIE is a free tool that helps cities estimate emissions from buildings and transportation, understand their rooftop solar potential, and measure air quality and tree canopy coverage.

In 2020 alone, we helped 285 cities leverage EIE in their climate action planning efforts. Houston set an ambitious rooftop solar target and the City of Los Angeles used insights to inform their strategy to plant 90,000 trees. We’ve made EIE data available to more than 3,000 cities, helping them measure, plan and track progress toward climate action plans. Our goal is to help over 500 cities eliminate 1 gigaton of carbon emissions annually by 2030 and beyond, the equivalent to the annual emissions of Japan. We plan to expand EIE to thousands more cities and we’ll continue to work with local governments and share our own learnings in support of our collective decarbonization goals.  


Advocating for a sustainable future

One of the areas where government agencies can lead by example is through sustainable federal procurement — something President Biden has emphasized as a critical step in tackling climate change. This will require government agencies to consider more efficient uses of energy and water in federal contracts for goods, works or services. We’re actively working with governments to help them understand how they can benefit from our clean cloud to achieve their sustainability goals and serve their citizens with the lowest environmental impact possible. 

There’s also an opportunity to incorporate sustainability criteria into Congress’ oversight of government agencies through the Federal Information Technology Acquisition Reform Act (FITARA) Scorecard. This would allow agencies to learn best practices from each other, while also promoting partnerships with companies that focus on innovation and sustainability.

We’re committed to partnering with governments around the world, providing our technology and insights to drive progress on their sustainability efforts. You can learn more about our sustainability efforts and join us on this mission.

Modernizing your Google App Engine applications

Posted by Wesley Chun, Developer Advocate, Google Cloud

Next generation service

Since its initial launch in 2008 as the first product from Google Cloud, Google App Engine, our fully-managed serverless app-hosting platform, has been used by many developers worldwide. Since then, the product team has continued to innovate on the platform: introducing new services, extending quotas, supporting new languages, and adding a Flexible environment to support more runtimes, including the ability to serve containerized applications.

With many original App Engine services maturing to become their own standalone Cloud products along with users' desire for a more open cloud, the next generation App Engine launched in 2018 without those bundled proprietary services, but coupled with desired language support such as Python 3 and PHP 7 as well as introducing Node.js 8. As a result, users have more options, and their apps are more portable.

With the sunset of Python 2, Java 8, PHP 5, and Go 1.11 by their respective communities, Google Cloud has assured users of continued long-term support for these legacy runtimes, including maintaining the Python 2 runtime. So while there is no requirement for users to migrate, developers themselves are expressing interest in updating their applications to the latest language releases.

Google Cloud has created a set of migration guides for users modernizing from Python 2 to 3, Java 8 to 11, PHP 5 to 7, and Go 1.11 to 1.12+ as well as a summary of what is available in both first and second generation runtimes. However, moving from bundled to unbundled services may not be intuitive to developers, so today we're introducing additional resources to help users in this endeavor: App Engine "migration modules" with hands-on "codelab" tutorials and code examples, starting with Python.

Migration modules

Each module represents a single modernization technique. Some are strongly recommended, others less so, and, at the other end of the spectrum, some are quite optional. We will guide you as to which ones are more important. Similarly, there’s no fixed order in which to work through the modules, since that depends on which bundled services your apps use. Yes, some modules must be completed before others, but again, you’ll be guided as to “what’s next.”

More specifically, modules focus on the code changes that need to be implemented, not changes in new programming language releases as those are not within the domain of Google products. The purpose of these modules is to help reduce the friction developers may encounter when adapting their apps for the next-generation platform.

Central to the migration modules are the codelabs: free, online, self-paced, hands-on tutorials. The purpose of Google codelabs is to teach developers one new skill while giving them hands-on experience, and there are codelabs just for Google Cloud users. The migration codelabs are no exception, teaching developers one specific migration technique.

Developers following the tutorials will make the appropriate updates on a sample app, giving them the "muscle memory" needed to do the same (or similar) with their applications. Each codelab begins with an initial baseline app ("START"), leads users through the necessary steps, then concludes with an ending code repo ("FINISH") they can compare against their completed effort. Here are some of the initial modules being announced today:

  • Web framework migration from webapp2 to Flask
  • Updating from App Engine ndb to Google Cloud NDB client libraries for Datastore access
  • Upgrading from the Google Cloud NDB to Cloud Datastore client libraries
  • Moving from App Engine taskqueue to Google Cloud Tasks (see the sketch after this list)
  • Containerizing App Engine applications to execute on Cloud Run
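
To give a flavor of what these changes look like in practice, here is a minimal sketch of the task queue module (the fourth item above): an enqueue helper after moving from App Engine taskqueue to Cloud Tasks. This is not the codelab's exact code; the project, region, queue name, handler path, and helper name are placeholders.

from google.cloud import tasks_v2

# Cloud Tasks needs an explicit client and a fully-qualified queue path.
ts_client = tasks_v2.CloudTasksClient()
QUEUE_PATH = ts_client.queue_path('my-project', 'us-central1', 'default')

def enqueue_trim(payload):
    'replacement for taskqueue.add(): push one task that calls back into an App Engine handler'
    task = {
        'app_engine_http_request': {
            'http_method': tasks_v2.HttpMethod.POST,
            'relative_uri': '/trim',
            'body': payload.encode(),
        }
    }
    return ts_client.create_task(parent=QUEUE_PATH, task=task)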

Examples

What should you expect from the migration codelabs? Let's preview a pair, starting with the web framework: below is the main driver for a simple webapp2-based "guestbook" app registering website visits as Datastore entities:

class MainHandler(webapp2.RequestHandler):
    'main application (GET) handler'
    def get(self):
        store_visit(self.request.remote_addr, self.request.user_agent)
        visits = fetch_visits(LIMIT)
        tmpl = os.path.join(os.path.dirname(__file__), 'index.html')
        self.response.out.write(template.render(tmpl, {'visits': visits}))

A “visit” consists of a request’s IP address and user agent. After visit registration, the app queries for the latest LIMIT visits to display to the end-user via the app’s HTML template. The tutorial leads developers through a migration to Flask, a web framework with broader support in the Python community. The equivalent Flask app uses decorated functions rather than webapp2’s object model:

@app.route('/')
def root():
    'main application (GET) handler'
    store_visit(request.remote_addr, request.user_agent)
    visits = fetch_visits(LIMIT)
    return render_template('index.html', visits=visits)

The framework codelab walks users through this and other required code changes in its sample app. Since Flask is more broadly used, this makes your apps more portable.
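
One thing the snippet above doesn't show is the small amount of Flask scaffolding around it. A minimal sketch of what that might look like, assuming the helper functions from this post and a LIMIT constant of our own choosing:

from flask import Flask, render_template, request

app = Flask(__name__)   # render_template() looks for index.html in the templates/ folder
LIMIT = 10              # how many recent visits to display (an arbitrary choice here)

Also note that Flask renders templates with Jinja2, while the webapp2 version above uses App Engine's bundled Django-style template module, so existing templates may need minor syntax adjustments.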

The second example pertains to Datastore access. Whether you're using App Engine's ndb or the Cloud NDB client libraries, the code to query the Datastore for the most recent limit visits may look like this:

def fetch_visits(limit):
    'get most recent visits'
    query = Visit.query()
    visits = query.order(-Visit.timestamp).fetch(limit)
    return (v.to_dict() for v in visits)

If you decide to switch to the Cloud Datastore client library, that code would be converted to:

def fetch_visits(limit):
    'get most recent visits'
    query = DS_CLIENT.query(kind='Visit')
    query.order = ['-timestamp']
    return query.fetch(limit=limit)

The query styles are similar but different. While the sample apps are just that, samples, giving you this kind of hands-on experience is useful when planning your own application upgrades. The goal of the migration modules is to help you separate moving to the next-generation service and making programming language updates so as to avoid doing both sets of changes simultaneously.
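
For completeness, here is a sketch of what the corresponding setup and write path could look like with the Cloud Datastore client library. DS_CLIENT and the Visit kind match the query above, while the property names and the store_visit() signature are our own assumptions:

from datetime import datetime
from google.cloud import datastore

DS_CLIENT = datastore.Client()   # the same client object used by fetch_visits() above

def store_visit(remote_addr, user_agent):
    'register one visit as a Datastore entity of kind Visit'
    entity = datastore.Entity(key=DS_CLIENT.key('Visit'))
    entity.update({
        'visitor': '{}: {}'.format(remote_addr, user_agent),
        'timestamp': datetime.now(),
    })
    DS_CLIENT.put(entity)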

As mentioned above, some migrations are more optional than others. For example, moving away from the App Engine bundled ndb library to Cloud NDB is strongly recommended, but because Cloud NDB is available for both Python 2 and 3, it's not necessary for users to migrate further to Cloud Datastore nor Cloud Firestore unless they have specific reasons to do so. Moving to unbundled services is the primary step to giving users more flexibility, choices, and ultimately, makes their apps more portable.
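
To make that recommendation concrete, the move from App Engine ndb to Cloud NDB is mostly a matter of changing the import and wrapping Datastore calls in a client context. Here is a rough sketch; the model's property names are assumptions, not the codelab's exact code:

from google.cloud import ndb    # was: from google.appengine.ext import ndb

class Visit(ndb.Model):
    'one website visit: requesting IP address and user agent plus a timestamp'
    visitor = ndb.StringProperty()
    timestamp = ndb.DateTimeProperty(auto_now_add=True)

ds_client = ndb.Client()

def store_visit(remote_addr, user_agent):
    'create a new Visit entity; Cloud NDB calls must run inside a client context'
    with ds_client.context():
        Visit(visitor='{}: {}'.format(remote_addr, user_agent)).put()

The fetch_visits() shown earlier would need the same context manager under Cloud NDB; otherwise its body is unchanged.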

Next steps

For those who are interested in modernizing their apps, a complete table describing each module and links to corresponding codelabs and expected START and FINISH code samples can be found in the migration module repository. We are also working on video content based on these migration modules as well as producing similar content for Java, so stay tuned.

In addition to the migration modules, our team has also set up a separate repo to support community-sourced migration samples. We hope you find all these resources helpful in your quest to modernize your App Engine apps!

Baking up the future with Mars Wrigley & Cloud AI

Baking is something that I like to do to relax; spending leisurely evenings creating something beautiful with as much care and precision as possible. Baking with AI—blending my day job as a machine learning engineer with one of my favorite hobbies—makes this experience even better. Plus, the serendipitous nature of building the model, arriving at a new and unique recipe and then testing it in the kitchen, is incredibly fun.

So, when legendary confectioner Mars Wrigley approached us for a Maltesers + AI kitchen collaboration, I jumped at the chance. Maltesers are a popular British candy made by Mars. They have an airy malted milk center with a delicious chocolate coating.

Like so many others, I jumped on the baking bandwagon during the pandemic and baked up a storm throughout 2020. According to Google Search Trends, people have searched for “baking” 44% more in 2021 than during the same period last year. You might even say the trend continues to rise.

But what were people actually searching for in the UK and could it inspire a new recipe for Mars Wrigley? I discovered that one of the top searched questions recently regarding “sweet and salty” was “Is Marmite sweet or savory?” Knowing how popular Marmite is in the UK, I decided it must be included in my newest creation. My goal was to use machine learning to create the base recipe for this dessert, and then find tasty ways to incorporate both Maltesers and Marmite.

I created a new machine learning model and fed it hundreds of existing recipes for cakes, cookies, scones and traybakes. After the model learned the combinations of ingredients that make up these baked goods, I had it generate recipes for each of them. Unlike most ML models, this one required testing in the kitchen to make sure the ingredient combinations generated by the model produced baked goods you’d actually want to eat. Picture me and my laptop in a cloud of flour, tasting batter and frosting, and eating way too many Maltesers along the way.

Delicious Maltesers® AI Cakes (4d6172730a)

Maltesers® AI Cakes

I’m skipping a lot of the ML magic! For a technical deep dive on how the AI works, see this blog post. For baking enthusiasts, these pictures share a little more of the process. Midway through baking the cake, I added three surprise Maltesers in the middle, and a cookie layer on top (which becomes the bottom when you flip these out). Yum!

Where’s the Marmite, I hear you say!? I whipped up a Marmite-infused buttercream topping. It’s delicious! Don’t believe me? See the recipe below, give it a whirl and let me know what you think. Share photos on Twitter or Instagram using the hashtag #BakeAgainstTheMachine, and head over to BakeAgainsttheMachine.co.uk to learn more about this project from Mars and find more special Maltesers recipes. I can’t wait to see your creations!


Recipe for Malteser AI cake
Recipe for Malteser AI cake 2


India’s Google Developer Groups meet up to ace their Google Cloud Certifications

Posted by Biswajeet Mallik, Program Manager, Google Developers India.

Image from Cloud Community Days India

Earlier this year, ten Google Developer Groups in India came together to host Google Cloud Community Days India, a two-day event helping developers study for their upcoming Cloud Certification exams. To address the rising demand for professional certifications, the virtual event hosted over 63,000 developers, covered four main exam areas, and welcomed nine speakers. This was the second edition of the event series, which started in India in 2019.

By providing expert learning materials and mentorship, the event uniquely prepared developers for the Associate Cloud Engineer, Professional Data Engineer, Professional Cloud Machine Learning Engineer, and Professional Cloud Architect exams. Learn more below.

Acing the four key certifications

The Cloud Community Days event focused on helping developers study for four milestone certifications, tailored to engineers at four different stages of their career. The goal: help Google Developer Group members obtain the right credentials to improve their job prospects.

The event broke participants into breakout sessions based on which exam they were preparing to take. Since the certifications targeted professionals of all skill levels, study groups ranged from early career associates to late career executives. The learning groups were organized around the following certifications:

  1. Associate Cloud Engineer:

    This learning session was created to help early career developers complete the first stepping stone exam. In particular, learning materials and speakers were curated to guide participants who had no prior experience, or very little, working on the Google Cloud Platform.

    Workshops were mainly dedicated to assisting programmers who were familiar with building different applications but wished to show employers that they could deploy them on Google Cloud Platform.

    Watch more from: Day 1, here. And day 2, here.

  2. Professional Data Engineers:

    The next group brought together data practitioners with special interests in data visualization and decision making. Workshops and learning activities helped these developers hone their large-scale data and data-driven decision-making abilities.

    Improving these skills is essential for passing the Professional Data Engineer certification and growing a programmer’s early career.

    Watch more from: Day 1, here. And day 2, here.

  3. Professional Cloud Machine Learning Engineer:

    For these sessions, the Google Developer Group Cloud community paired experienced programmers with a significant interest in ML to form their study groups. The main driver in these learning activities was to help seasoned developers gain a deeper understanding of how to utilize Google Cloud ML services.

    With significant emphasis being placed on machine learning in the ecosystem right now, Google Developer Group community leaders felt this certification could help developers make the leap into new leadership roles.

    Watch more from: Day 1, here. And day 2, here.

  4. Professional Cloud Architect:

    Lastly, this event brought together experienced Cloud executives and professionals working in leading capacities at their organizations. For these sessions, speakers and activities had a specific scope: helping high-level professionals stay at the forefront of Google Cloud Platform’s innovative capabilities.

    Specifically, the Professional Cloud Architect Certification was created to help senior software engineers better design, scale and develop highly secure and robust applications.

    Watch more from: Day 1, here. And day 2, here.

Reactions from the community

Overall, the community put together these resources to help developers feel more confident in their abilities, obtain tangible credentials, and in turn increase access to better job opportunities. As two participants recalled the event,

“The session on Qwiklabs was so helpful, and taught me how to anticipate problems and then solve them. Cloud Community Days inspired me to take the next step with DevOps and Google Cloud.”

“This was the first time I attended the Google Developer Group event! It is an awesome package for learning in one place. All the fun activities were engaging and the panelist discussion was also very insightful. I feel proud to be a part of this grand GDG event.”

Start learning with Google Developer Groups

With Google Developer Groups, find a space to learn alongside a group of curious developers, all coming together to advance their careers from within a caring community of peers.

Want to know more about what Cloud Community Days was like? Then watch the live recording below.

Ready to find a community event near you? Then get started at gdg.community.dev

How my startup uses AI to reimagine water utilities

History repeats itself, but it doesn’t have to. I was inspired to launch my startup, Varuna, when Austin Water released its first-ever boil water warning in 2018 — a moment eerily similar to the massive winter storm in Texas just a few weeks ago. Because the water utility companies didn’t have enough real-time data to measure water quality in individual neighborhoods, they took the blanket approach of asking all of the city’s 950,000 residents to boil any water ingested through drinking or cooking. After several days of substantially reducing water usage — and seeing more than 625,000 plastic bottles of water handed out across the city — I set out to find a solution. 


A systems engineer by trade and a problem-solver by nature, I repurposed our dishwasher’s sensor to create my first water-quality measurement device. Excited, I called up my Chicago-based friend and former employee Jamail Carter to talk about my idea. We agreed that water quality issues like the crisis in Flint are symptoms of a bigger problem: operational inefficiencies within water utilities. 


When technicians don’t have real-time visibility into what’s going on across the water distribution system, utilities companies either splurge on a single sensor bound to one location or rely on manual measurement, which can be costly and time-consuming. By simply getting access to the right information, each community water system in the U.S. could save thousands of dollars — and lives — annually for every sample collection point they have on-site. 


After months of prototyping and research, Jamail and I launched Varuna, named for the Vedic deity associated with water, truth and enlightenment. The platform provides cities and towns with Google AI-powered alerts, recommendations and predictions to reduce inefficiencies and violations in their water management operations. With a series of connected sensors in the distribution systems, Varuna reduces the number of times technicians need to collect water samples to lab test for quality issues. Google Maps Platform provides the “where” to the what and the why of water quality contamination issues, while Google Cloud gives users a way to access this information whenever they need it—all essential for adopting a proactive, preventive approach to water treatment.


Varuna is founded on the belief that when people know better, they do better. Research shows that water systems in communities of color have a disproportionate amount of EPA violations. By taking away excuses and providing key information, we can positively impact underserved communities. That’s why we first piloted programs in historically diverse locations across Louisiana, Texas, New Jersey and Alabama — and are tackling Chicago and New York City next. 


As a Black immigrant founder building a startup in Texas, I understand firsthand the frustration of being denied access to needed resources. Despite the inherent humanity of Varuna’s mission and our proven entrepreneurial track record, Jamail and I faced systemic obstacles as we attempted to raise capital and network in a predominately white industry. Less than 3% of U.S. venture capital funding went to Black-led companies in 2020, despite the fact that 10% of American companies are Black-owned, according to U.S. Census data.


Thankfully, doors that were previously closed to teams like ours are being opened — forced open, in some cases. Receiving a $100,000 cash award from the Google for Startups Black Founders Fund last October wasn’t just a financial investment; it was a vote of confidence. Only three months after being selected for the Black Founders Fund, we've raised an additional $1.6 million and added two team members and a design agency partner, all while redesigning and halving the cost of our hardware. When you fund Black founders, you not only create equal access to economic opportunity, but also empower us to create real change with our tech, one glass of clean water at a time.

Cloud Covered: What was new in Google Cloud in February

Last month, Google Cloud introduced new tools and resources to help vaccinate communities, avoid email scams, connect the world (under the sea!) and build with new technologies. Check out the most popular Cloud blog posts from February.  

Technology helps get more vaccines into more communities.

To help the global challenge of immunizing millions of people during the COVID-19 pandemic, we announced Google Cloud’s Intelligent Vaccine Impact solution. Powered by several Google Cloud technologies, this effort will help state and local governments create successful public health strategies. The Intelligent Vaccine Impact solution will also increase vaccine availability and equitable access to those who need it, and assist governments in building awareness, confidence and acceptance of vaccines. This builds on our foundation of projects supporting state and local health agencies and governments during the pandemic. 

Here’s how to understand and avoid email scams. 

Every day, we stop more than 100 million harmful emails from reaching Gmail users. Last month we shared news about a recent study we conducted with Stanford University about email scams, including common patterns and risk factors for abuse. We found that where you live, what devices you use and whether your information appeared in previous third-party data breaches can increase your odds of being a target. We also shared tips to prevent being a target, including completing a Security Checkup for personalized security advice. You can also enroll in Google's Advanced Protection program, which provides Google's strongest security to users at increased risk of targeted online attacks. Or use the Enhanced Safe Browsing Protection in Google Chrome to increase your defenses against dangerous websites and downloads on the web. 

5G will speed up our network future. 

Last month we announced a collaboration with Intel to develop ideas, products and services for communications service providers to help them benefit from new 5G connectivity. 5G is the next generation of mobile network that provides higher data speeds, real-time responses and better connectivity. Our efforts with Intel will help businesses build systems and processes that use 5G and edge computing, which brings applications and data storage closer to the location where it is needed, to save bandwidth and improve computer response times.

A global network lives under the sea.

We were excited to announce our new Dunant submarine cable system, which crosses the Atlantic Ocean between the U.S. and France, is now ready for service. The Dunant system — named after the founder of the Red Cross and first recipient of the Nobel Peace Prize — expands Google’s global network to add dedicated capacity while connecting to other network systems in the region. Made possible in partnership with SubCom, a global partner for undersea data transport, the Dunant system delivers data across the ocean at a record-breaking capacity of 250 terabits per second (Tbps) — enough to transmit the entire digitized Library of Congress three times every second. 

APIs help software communicate.

Last month we also announced the launch of Apigee X, a major release of our application programming interface (API) management platform. APIs let multiple pieces of software work together, no matter their systems and programs, making it easier to collaborate with other teams or publicly over the web. Apigee X seamlessly weaves together Google Cloud’s expertise in APIs, artificial intelligence (AI), security and networking to help businesses deliver secure and high-performance APIs at global scale.  

That’s a wrap for February! Stay tuned to the Google Cloud blog for all things cloud.

Mainframe modernization antipatterns

Posted by Travis Webb

This blog post describes common pitfalls and antipatterns to consider when migrating your mainframe workloads. It also helps you to understand and avoid them. Migrating or modernizing your mainframe workloads is complex and challenging, even under ideal conditions. If you avoid the antipatterns discussed in this document, you increase the odds of a successful transformation.

This blog post is useful whether you're planning to migrate your mainframe workloads to Google Cloud, to on-premises virtual machines, or to another cloud provider. It demonstrates how to remedy certain mainframe migration antipatterns using technology offerings from Google. In principle, however, you could apply these remedies to many kinds of transformations with different target platforms and architectures.

This blog post describes three common antipatterns:

  • Big bang rewrite antipatterns
  • Lift-and-shift migration antipatterns
  • In-place modernization antipatterns

These approaches can work in some narrow circumstances when migrating mainframe workloads. Avoid them, however, because they have a high probability of failure. For each antipattern discussed, you are given an overview of the antipattern, the typical rationale used to justify it, and the business and technical reasons that lead to failure.

Big bang rewrite antipatterns

In a big bang rewrite, you or your team manually rewrite and re-architect the legacy mainframe code into a modern language using modern design patterns. For example, you might form a development team to build a new Java application that replicates the business logic from a collection of legacy COBOL programs. Senior engineers who are familiar with the system often teach junior engineers the rationale behind the business logic to preserve institutional knowledge. The result is a new codebase using new programming languages and new documentation on a new platform.

Of the three antipatterns discussed in this document, the big bang rewrite requires the largest investment of capital and time to achieve success. It is capital-intensive and time-intensive because most organizations can’t resist the temptation to re-engineer and to improve business logic.

Rationale

Re-engineering your systems using modern technologies allows for future innovation. Your senior engineers are moving on—to management, competitors, or retirement—and you need to transfer institutional knowledge to incoming staff. You expect those incoming staffers to re-engineer the system using the latest programming best practices. These less experienced engineers can rewrite module by module, and take advantage of current development methodologies and tools. Because you have all the code, you have an exact specification for what the new software needs to do, and can test against it. Access to the original code lets you compress the decades of investment into your original mainframe software into a modern application. At the same time, you are transferring institutional knowledge from your senior engineers to your junior engineers. At the end of the process, you'll have a new system consisting of well-engineered software built against modern design patterns and best practices.

This case is compelling and can help to convince your IT decision-makers. Though the approach appears rational, there are hidden pitfalls and risks that your team doesn’t recognize at the outset. Risks like budget overruns, unanticipated complexity, and staff turnover can derail a significant rewrite before realizing the benefits. As a result, big bang rewrites rarely equal the best-case scenarios presented to stakeholders. Often, they fail.

Risks, pitfalls, and outcomes

Big bang rewrites often suffer from the second system effect. Early in the project, they fall behind in schedule and budget. While you quickly develop prototypes, getting them to function in the same way as the original code is a long-tail effort that most teams underestimate. This unanticipated setback leads to the first major decision point in your project: How do I overcome these challenges but still achieve the outcomes that I need to make the project successful?

The first option: Continue to diligently plod along the long path and adhere exactly to the original functionality. However, matching the new system precisely to the original functionality always takes longer than expected. This is true because the original code provides little or no improvement in productivity over a conventional specification. That means a significant engineering investment to understand the original code and reproduce it.

The second option: Implement the business logic differently. However, changes in business logic necessarily require changes to the business processes and downstream systems on which the original business logic depends. For example, you could have a web application that depends on the idiosyncratic behavior of your mainframe applications. Rather than incorporate these idiosyncrasies into the new, rewritten application, it is tempting to simplify and improve this behavior. However, that adds scope to the project. The chain reaction of further changes that are required in downstream systems introduces additional risk and prolongs the rewrite effort.

If your production mainframe system requires ongoing maintenance or updates during the rewrite, these problems compound. For example, you might have a rules engine that powers a billing system on your mainframe. To support a new product launch, you need to add a feature to the rules engine to accommodate a new customer billing type. You also need to implement this new type in the current system and replicate it in the new system, possibly after the billing component has already been rewritten and tested. This maintenance and update scenario can occur many times during a big bang rewrite, setting the project back at each step and increasing the odds of failure.

Even for companies that have the tenacity to see through a multi-year transformation effort, the raw cost of a rewrite is often prohibitive. When compared to all other approaches, a big bang rewrite is the costliest way to modernize your mainframe software. Often it has the least convincing return on investment (ROI) when factoring in the risks, unanticipated costs, and delays.

Lift-and-shift migration antipatterns

A lift-and-shift migration is an established method of moving an application from one system to another with minimal changes and downtime. It's commonly used to migrate virtual machines running on commodity hardware to virtual machines in a public cloud. You can take a similar approach with your mainframe migration.

Mainframe platforms are based on proprietary hardware rather than x86-based commodity hardware. Therefore, you must emulate your mainframe environment on x86-based machines. Doing so is required to move your applications directly from the mainframe into the cloud, as you would with virtual machines. To run your applications in the emulated environment, you recompile them using a compiler provided by your emulation vendor.

Rationale

Lift-and-shift migration is often seen as the quickest way to get from an on-premises environment to the cloud. You can apply this same thinking to mainframe workloads. Strategic IT decisions are often most palatable when facing a key transition, such as a hardware refresh. Mainframe hardware investments are capital-intensive. Financing the purchase often adds debt or lease liabilities to your company's balance sheet. By moving to the public cloud, mainframe workloads can scale both up and down to optimize resource use and operational cost. When compared to other migration or modernization options, you can make a strong business case that a lift-and-shift migration provides the quickest ROI and carries the lowest risk.

Risks, pitfalls, and outcomes

The business risks of a lift-and-shift migration appear small compared to other approaches, but the potential benefits are even smaller. The benefits of migrating off the mainframe platform to the cloud don’t materialize, because you remain locked into the same mainframe ecosystem, but now with an extra dependency on an emulation layer. That dependency can result in a new set of technical challenges, ones that are often unfamiliar to the teams maintaining the mainframe software. That unfamiliarity can lead to additional reliance on a new, single-vendor cloud ecosystem.

By not changing your mainframe software, you avoid solving many important problems: scarce and shrinking mainframe talent, a static ecosystem, a lack of agility, and an inability to innovate. You're now running your legacy workloads in the cloud, but remain locked out of cloud innovations due to your continued reliance on proprietary platforms.

In this antipattern, the cost benefits that you relied on to justify the investment don’t materialize. While you might spend less after combining your cloud infrastructure costs with your new, ongoing, emulation software license fees, your savings don’t justify the investment. The outcome is that you've taken all the risks inherent in any migration, but have realized few of the benefits, if any.

In-place modernization antipatterns

In an in-place modernization, you focus on improving the quality, maintainability, and testability of your software while keeping it on your mainframe computers. You might choose this antipattern because you see mainframes as part of your future and know that you must modernize your application software accordingly.

You can rewrite your application software to use modern languages that run on the mainframe, or you can re-architect it in place. For a partial cloud-like experience you can install orchestration technologies, like Kubernetes.

Rationale

Mainframe software presents challenges related to maintainability, innovation, agility, and extensibility. By re-architecting and re-engineering this software to align with modern standards and design patterns, you can avoid many of the pitfalls that disrupt large replatforming efforts. Moving off the mainframe is the single largest risk. By avoiding that move, you can improve the odds that your project succeeds. Of all the mainframe modernization approaches you might consider, an in-place modernization appears to be the lowest risk. There's no migration component, so there's no risk of downtime.

There is an ecosystem of vendors offering tools to help with mainframe development using modern methodologies. Therefore, the risk of being left to support the software on your own is low. An in-place modernization often takes longer than a lift-and-shift migration or a code conversion. By modernizing slowly, however, you afford your teams the time they need to learn new development processes. When you re-engineer and re-architect the codebase, you can perform a more rational analysis to better understand whether the mainframe is the appropriate long-term platform.

Risks, pitfalls, and outcomes

An in-place modernization suffers from many of the same challenges as the big bang rewrite. Any approach that involves manually updating your mainframe software can run into budget and time overruns. These efforts also often suffer from the second-system effect. Performance and correctness issues inevitably arise because rewriting business logic in a new language requires extensive testing before it aligns with the previous functionality. When management learns more about the modest benefits gained by running updated software on the same mainframe platform, expect their willingness to see through such a drawn-out and costly transformation to wane.

The biggest issue with an in-place modernization is that even the ideal outcome leaves you with many of the same problems that you started with. The mainframe is more than a piece of hardware. Using mainframes encompasses a talent pool, a software platform, and a vendor ecosystem. The trend for each of these variables is moving in the wrong direction. Every year the talent pool shrinks, the software platform becomes more isolated, and the vendor ecosystem consolidates.

Finding help

Google Cloud offers various options and resources for you to find the necessary help and support to best use Google Cloud services.

There are more resources to help you to migrate workloads to Google Cloud in the Google Cloud migration center.

For more information about these resources, see the finding help section of Migration to Google Cloud: Getting started.

New safety and engagement features in Google Meet

Over the past year, video conferencing became an essential tool for teaching, learning and staying connected. As part of our commitment to building products and programs to expand learning for everyone, we're bringing new features to Meet to help educators keep virtual classes secure and students engaged. 

Helping teachers keep virtual classes safe 

Our first priority with Google Meet is to make sure meetings are safe and secure. Last year we launched a number of tools to help with this, including security controls so only intended participants are let into meetings and advanced safety locks to block anonymous users and let teachers control who can chat and present within a meeting. In the coming months, we’ll be adding to that list.

Teachers will soon have the option to end meetings for everyone on the call, preventing students from staying on after the teacher has left — including in breakout rooms. 

End meeting for all in Google Meet

Getting everyone’s attention when class is deep in discussion can be tough, so we're also giving teachers an easy way to mute all participants at once. Rolling out over the next few weeks, “mute all” will help educators keep class on track. And since sometimes it's important to teach without interruption, launching in the coming months, meeting hosts will be able to control when students can unmute themselves.

Gif of muting all in Google Meet

In the coming months, educators using tablets or mobile phones to teach will also have access to key moderation controls, like who can join their meetings or use the chat or share their screen, directly from their iOS or Android devices. 

Moderator controls on mobile with Google Meet

For many teachers, Google Classroom is an essential tool for managing class. Later this year, Classroom and Meet will work together even better, so every meeting created from Classroom is even safer by default. When meetings are generated from Classroom, students won’t be able to join before the teacher. Meet will also know who’s on the Classroom roster, so only students and teachers in the class will be able to join. And every teacher in Classroom will be a meeting host by default, so if there are multiple teachers, they’ll be able to share the load of managing the class. And later this year, meetings that aren’t started from Classroom will also support multiple hosts, making it easier to partner with others helping facilitate the class.

Classroom integrations with Google Meet

Greater visibility and control for admins 

In the coming months, we’ll be launching new settings in the Admin console so school leaders can set policies for who can join their school’s video calls, and whether people from their school can join video calls from other schools. This will make it easier to facilitate things like student-to-student connections across districts, professional development opportunities for educators and external speakers visiting a class. 

Admin controls in Google Meet

The Google Meet audit log is also now available in the Admin console. In the coming months, we’ll be adding more information to these logs — like an external participant's email address — so admins can better understand how people are using Meet at their school. For educators with Education Standard or Education Plus licenses, we’re also making improvements to the investigation tool. Admins can now access Meet logs in the investigation tool, so they can identify, triage and take action on security and privacy issues. And later this year, admins will be able to end any meeting within their school from the investigation tool as well.

Engagement and inclusivity in Meet

Over the past six months, we've launched features like breakout rooms, hand raising, digital whiteboards and customized backgrounds. Later this year, students will be able to more easily engage and express themselves with emoji reactions in Meet. They’ll be able to pick emoji skin tones to best represent them, and react in class in a lightweight, non-disruptive way. Teachers and admins will have full control over when reactions can be used.

Emoji reactions in Google Meet

Because unreliable internet connections can make remote teaching and learning more challenging, we're also improving Meet to work better if you have low bandwidth. Rolling out in the coming months, this can help keep class on track when internet connections are weaker. 

We’ve also made significant improvements to the performance of Meet on Chromebooks. These include audio, video and reliability optimizations, better performance while multitasking and more. 

Gif of Google Meet on a Chromebook

We’re also making additional improvements for educators with Teaching and Learning Upgrade or Education Plus licenses. Rolling out over the next few months, educators will be able to set up breakout rooms ahead of time in Google Calendar. This will make it easier for teachers to prepare for differentiated learning, be thoughtful about group dynamics and avoid losing valuable time setting up breakout rooms during class. 

Breakout rooms in Google Meet

And to help students who weren’t able to attend class stay up to date, later this year educators will be able to receive meeting transcripts. They’ll be able to easily share transcripts with students, review what was discussed during class or maintain a record for future reference. 

Meeting transcripts in Google Meet

Whether by expanding professional development opportunities, livestreaming events or facilitating live-translated parent-teacher conferences, Meet can help your community stay connected. And while many recent improvements to Meet are focused on making distance learning possible, we're also dedicated to making it the best tool for school communities — now, and into the future.