Tag Archives: google cloud

Delivering on our $1B commitment in Africa

Last year our CEO, Sundar Pichai, announced that Google would invest $1 billion in Africa over the next five years to support a range of initiatives, from improved connectivity to investment in startups, to help boost Africa’s digital transformation.

Africa’s internet economy has the potential to grow to $180 billion by 2025 – 5.2% of the continent’s GDP. To support this growth, over the last year we’ve made progress on enabling affordable access and on building products for every African user – helping businesses build their online presence, supporting entrepreneurs in spurring next-generation technologies, and helping nonprofits improve lives across the continent.

We’d like to share how we’re delivering on our commitment and partnering with others – policymakers, non-profits, businesses and creators – to make the internet more useful to more people in Africa.

Introducing the first Google Cloud region in Africa

Today we’re announcing our intent to establish a Google Cloud region in South Africa – our first on the continent. South Africa will be joining Google Cloud’s global network of 35 cloud regions and 106 zones worldwide.

The future cloud region in South Africa will bring Google Cloud services closer to our local customers, enabling them to innovate and securely deliver faster, more reliable experiences to their own customers, helping to accelerate their growth. According to research by AlphaBeta Economics for Google Cloud, the South Africa cloud region will contribute more than a cumulative USD 2.1 billion to the country’s GDP, and will support the creation of more than 40,000 jobs by 2030.

Niral Patel, Director for Cloud in Africa, announces Google's intention to establish its first Cloud region in Africa

Along with the cloud region, we are expanding our network through the Equiano subsea cable and building Dedicated Cloud Interconnect sites in Johannesburg, Cape Town, Lagos and Nairobi. In doing so, we are building full-scale Cloud capability for Africa.

Supporting African entrepreneurs

We continue to support African entrepreneurs in growing their businesses and developing their talent. Our recently announced second cohort of the Black Founders Fund builds on the success of last year’s cohort, who raised $97 million in follow-on funding and have employed more than 500 additional staff since they were selected. We’re also continuing our support of African small businesses through the Hustle Academy and Google Business Profiles, and helping job seekers learn skills through Developer Scholarships and Career Certifications.

We’ve also continued to support nonprofits working to improve lives in Africa, with a $40 million cash and in-kind commitment so far. Over the last year this has included:

  • A $1.5 million investment in Career Certifications this year, bringing our total Google.org funding for them to more than $3 million since 2021;
  • A $3 million grant to support AirQo in expanding its air quality monitoring work from Kampala to ten cities in five countries on the continent;
  • A team of Googlers who joined the Tony Elumelu Foundation for six months, full-time and pro bono. The team helped build a new training web and app interface to support the next million African entrepreneurs in growing and funding their businesses.

Across all our initiatives, we continue to work closely with our partners – most recently with the UN to launch the Global Africa Business Initiative (GABI), aimed at accelerating Africa’s economic growth and sustainable development.

Building more helpful products for Africa

We recently announced plans to open the first African product development centre in Nairobi. The centre will develop and build better products for Africans and the world.

Today, we’re launching voice typing support for nine more African languages (isiNdebele, isiXhosa, Kinyarwanda, Northern Sotho, Swati, Sesotho, Tswana, Tshivenda and Xitsonga) in Gboard, the Google keyboard – while 24 new languages are now supported on Google Translate, including Lingala, which is spoken by more than 45 million people across Central Africa.

To make Maps more useful, Street View imagery in Kenya, South Africa, Senegal and Nigeria has had a refresh with nearly 300,000 more kilometres of imagery now helping people virtually explore and navigate neighbourhoods. We’re also extending the service to Rwanda, meaning that Street View is now available in 11 African countries.

We also expanded the AI Accra Research Centre earlier this year, and the Open Buildings Project – which mapped buildings across the African continent using machine learning and satellite imagery – is now expanding to South and Southeast Asia. It’s a great example of the AI centre creating solutions for Africa that are useful across the world.

Delivering on our promise

We remain committed to working with our partners in building for Africa together, and helping to unlock the benefits of the digital economy for more people by providing useful products, programmes and investments. We’re doing this by partnering with African organisations, businesses and entrepreneurs. It’s the talent and drive of the individuals in the countries, communities and businesses of Africa that will power Africa’s economic growth.

Meet three Asia-Pacific schools evolving digital education

Access to education is one of the most important enablers for a child’s future success. School resources can often be limited, especially with the sudden shift to remote learning during COVID. Our team works on creating digital solutions for schools to provide a great learning experience for students, whether it’s collaborating across countries or keeping track of academic records.

Increasingly, we’re seeing schools around the world adopt tools like Chromebooks and Google Workspace for Education, transforming the way teachers deliver lessons and students learn. And Asia-Pacific is home to some of the largest user bases for these tools. Let’s meet three teachers from Thailand, Japan, and Australia who have adopted Google for Education tools.

Tell us about your school, and its mission.

Pradchayakorn Hodmalee, Deputy Director, Princess Chulabhorn Science High School Loei: We’re a science-focused school in Thailand where students collaborate on projects with peers from affiliate schools across Thailand and Japan. We create our own standardized curriculum, for which our teachers regularly travel from different regions to meet and jointly design syllabuses and exams. We also take pride in organizing overseas immersion trips to Japan for our students.

Emil Zankov, Leader of Innovation and Enterprise, Pedare Christian College: We’re located in Golden Grove, South Australia. We focus on ‘bringing industry to the classroom’ – and work hard to build an environment where teamwork, friendly competition, and the messy play of learning are at the forefront. These are things that happen out in the real world, and are what the industry looks at when it comes to hiring.

Shinsuke Nakamura, English teacher, Kochi Prefectural Sakawa High School: We’re located in Kochi prefecture, in a small sunny town on Japan’s Shikoku island. Kochi recognizes that students have different motivations and ambitions. So our vision is to create a personalized journey of learning for every student throughout their lifelong education, even as they leave our school and start at a new one.

Why did you start using Google for Education tools?

Emil: What Google for Education tools allow teachers to do is really focus on what they want their students to learn, while the technology sits in the background. We chose Workspace and Chromebooks because of their simplicity and reliability. Knowing that they are going to work time and time again is critical as teachers have very little patience when it comes to technology. In addition, Google’s cloud solutions allow students to easily share their work, which helps them focus on creating content rather than dealing with tech difficulties.

Pradchayakorn: With the pandemic’s travel restrictions, in-person meetings among teachers and student trips to Japan had to be canceled. So we quickly rolled out Workspace to enable teachers to continue communicating and working closely with their peers through tools like Google Meet, Drive and Forms. It’s actually more efficient, as teachers no longer have to commute to another province for meetings. To replace the immersion trip to Japan, we used Workspace tools to organize a four-month-long collaborative project with an affiliate school in Japan. We wanted to ensure our students still have the opportunity to work with peers from overseas and learn from others who may not be similar to them.

Shinsuke: Our vision of seamless, personalized learning meant that we needed to keep digital records of our students’ daily learning through their elementary, junior high and high schools, so that teachers can continuously track their progress and help them work towards their personal goals. To help with that, the Kochi Prefectural Board of Education gives every student in its public schools a Chromebook – sturdy, secure and easy to use. We also deploy Google Workspace for Education Plus and Cloud solutions in our school to help teachers easily access their students’ records and tailor their teaching methods to each student.

Photo of student smiling and looking at a Chromebook screen

In one sentence, what does the future of education look like for you?

Shinsuke: If high schools could see what these students have learned (in elementary and junior high) and the achievements they’ve made each year, we can create a seamless journey of learning.

Pradchayakorn: Schools will be less and less about learning in a fixed physical setting, and classrooms will no longer have restrictions on how many students can attend.

Emil: It’s having the industry and students collaborate on real-world problems, breaking the notion that the real world is only outside of school.

Google Cloud & Kotlin GDE Kevin Davin helps others learn in the face of challenges

Posted by Kevin Hernandez, Developer Relations Community Manager

Kevin Davin speaking at the SnowCamp Conference in 2019

Kevin Davin has always had a passion for learning and helping others learn, no matter their background or the unique challenges they may face. He explains, “I want to learn something new every day, I want to help others learn, and I’m addicted to learning.” This mantra is evident in everything he does, from giving talks at numerous conferences to helping people from underrepresented groups overcome imposter syndrome and even become GDEs. In addition to learning, Kevin is also passionate about diversity and inclusion efforts, partly inspired by navigating the world with partial blindness.

Kevin has been a professional programmer for 10 years and has been in the field of Computer Science for about 20. Through the years, he has emphasized the importance of learning how and where to learn. For example, he learned a lot while studying at university, but he learned just as much through his colleagues – it was through them that he picked up lessons in teamwork and the ability to learn from people with different points of view and experience. Since he was able to learn so much from those around him, Kevin wanted to pay it forward and started volunteering at a school for people with disabilities. Guided by the Departmental Centers for People with Disabilities, the program aims to teach coding languages and reintegrate students into a technical profession. During his time at this center, Kevin helped students practice what they learned and ultimately transition successfully into new careers.

During these experiences, Kevin was always involved in the developer community through open-source projects. It was through these projects that he learned about the GDE program and was connected to Google Developer advocates. Kevin was drawn to the GDE program because he wanted to share his knowledge with others and have direct access to Google in order to become an advocate on behalf of developers. In 2016, he discovered Kubernetes and helped his company at the time move to Google Cloud. He always felt like this model was the right solution and invested a lot of time to learn it and practice it. “Google Cloud is made for developers. It’s like a Lego set because you can take the parts you want and put it together,” he remarked.

The GDE program has given him access to the things he values most: being a part of a developer community, being an advocate for developers, helping people from all backgrounds feel included, and above all, an opportunity to learn something new every day. Kevin’s parting advice for hopeful GDEs is: “Even if you can’t reach the goal of being a GDE now, you can always get accepted in the future. Don’t be afraid to fail because without failure, you won’t learn anything.” With his involvement in the program, Kevin hopes to continue connecting with the developer community and learning while supporting diversity efforts.

Learn more about Kevin on Twitter & LinkedIn.

The Google Developer Experts (GDE) program is a global network of highly experienced technology experts, influencers, and thought leaders who actively support developers, companies, and tech communities by speaking at events and publishing content.

Migrating from App Engine Blobstore to Cloud Storage (Module 16)

Posted by Wesley Chun (@wescpy), Developer Advocate, Google Cloud

Introduction and background

The most recent Serverless Migration Station video demonstrated how to add use of App Engine's Blobstore service to a sample Python 2 App Engine app, kicking off the first of a two-part series on migrating away from Blobstore. In today's Module 16 video, we complete this journey, arriving at Cloud Storage. Moving away from proprietary App Engine services like Blobstore makes apps more portable and gives them greater flexibility.


Showing App Engine users how to migrate to Cloud Storage

As described previously, the Python 2 Blobstore API's dependency on webapp meant the Module 15 content was more straightforward to implement while the app still used webapp2. To completely modernize the app here in Module 16, the following migrations should be carried out:

  • Migrate from webapp2 (and webapp) to Flask
  • Migrate from App Engine NDB to Cloud NDB
  • Migrate from App Engine Blobstore to Cloud Storage
  • Migrate from Python 2 to Python (2 and) 3

Performing the migrations

Prior to modifying the application code, a variety of configuration updates need to be made. Updates applying only to Python 2 feature a "Py2" designation while those migrating to Python 3 will see "Py3" annotations.

  1. Remove the built-in Jinja2 library from app.yaml – Jinja2 already comes with Flask, so remove the older built-in version, which may conflict with the contemporary Flask version you're using (Py2)
  2. The Cloud client libraries (such as those for Cloud NDB and Cloud Storage) require a pair of built-in libraries, grpcio and setuptools, so add those to app.yaml (Py2)
  3. Remove everything in app.yaml except for a valid runtime (Py3)
  4. Add Cloud NDB and Cloud Storage client libraries to requirements.txt (Py2 & Py3)
  5. Create an appengine_config.py supporting both the built-in libraries (those in app.yaml) and the non-built-in libraries (those in requirements.txt), as sketched below (Py2)
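
For step 5, a minimal sketch of that appengine_config.py, assuming the third-party packages from requirements.txt are copied into a local lib/ folder:

    # appengine_config.py – sketch for a Py2 app mixing built-in libraries
    # (declared in app.yaml, e.g. grpcio and setuptools) with third-party
    # packages installed into a local lib/ folder via requirements.txt.
    import pkg_resources
    from google.appengine.ext import vendor

    path = 'lib'
    # Make the copied packages importable by the app...
    vendor.add(path)
    # ...and register them with setuptools' working set so client libraries
    # that rely on pkg_resources can locate their metadata.
    pkg_resources.working_set.add_entry(path)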

The Module 15 app already migrated away from webapp2's (Django) templating system to Jinja2. This is useful when migrating to Flask because Jinja2 is Flask's default template system. Switching from App Engine NDB to Cloud NDB is fairly straightforward, as the latter was designed to be mostly compatible with the original. The only visible change in this sample app is that Datastore calls now take place inside Python 'with' blocks, the context managers Cloud NDB requires.
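
For illustration, a minimal sketch of that pattern – the model and query are representative stand-ins, not the sample app's exact code:

    # Sketch of the Cloud NDB context-manager pattern; the Visit model
    # and its fields are illustrative assumptions.
    from google.cloud import ndb

    class Visit(ndb.Model):
        visitor = ndb.StringProperty()
        timestamp = ndb.DateTimeProperty(auto_now_add=True)

    client = ndb.Client()

    def fetch_visits(limit=10):
        # Unlike App Engine NDB, every Datastore call must run inside a
        # client context – the 'with' block described above.
        with client.context():
            return Visit.query().order(-Visit.timestamp).fetch(limit)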

The most significant changes occur when moving the upload and download handlers from webapp to Cloud Storage. The video and corresponding codelab go more in-depth into the necessary changes, but in summary, these are the updates required in the main application (a condensed sketch follows the list):

  1. webapp2 is replaced by Flask. Instead of using the older built-in version of Jinja2, use the version that comes with Flask.
  2. App Engine Blobstore and NDB are replaced by Cloud NDB and Cloud Storage, respectively.
  3. The webapp Blobstore handler functionality is replaced by a combination of the io standard library module plus components from Flask and Werkzeug. Furthermore, the handler classes and methods are replaced by Flask functions.
  4. The main handler class and corresponding GET and POST methods are all replaced by a single Flask function.
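
Putting those pieces together, the replacement handlers can be sketched roughly as follows; the bucket name and routes are placeholders, and the codelab contains the real code:

    # Rough sketch of Flask handlers replacing the webapp Blobstore
    # handlers; BUCKET and the routes are illustrative placeholders.
    import io

    from flask import Flask, request, send_file
    from google.cloud import storage

    app = Flask(__name__)
    gcs = storage.Client()
    BUCKET = 'my-project.appspot.com'  # placeholder bucket name

    @app.route('/upload', methods=['POST'])
    def upload():
        # Werkzeug's FileStorage (via request.files) stands in for the
        # old BlobstoreUploadHandler machinery.
        f = request.files['file']
        blob = gcs.bucket(BUCKET).blob(f.filename)
        blob.upload_from_file(f, content_type=f.content_type)
        return 'Uploaded %s' % f.filename

    @app.route('/view/<name>')
    def view(name):
        # Downloaded bytes wrapped in io.BytesIO plus Flask's send_file
        # stand in for the old BlobstoreDownloadHandler.
        media = gcs.bucket(BUCKET).blob(name).download_as_bytes()
        return send_file(io.BytesIO(media), mimetype='application/octet-stream')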

The results

With all the changes implemented, the original Module 15 app still operates identically in Module 16, starting with a form requesting a visit artifact, followed by the most recent visits page:
The sample app's artifact prompt page

The sample app's most recent visits page.

The only difference is that four migrations have been completed, and all of the "infrastructure" is now taken care of by Cloud services rather than App Engine legacy services. Furthermore, the Module 16 app can be either a Python 2 or 3 app. As far as the end-user is concerned, "nothing happened."

Migrating sample app from App Engine Blobstore to Cloud Storage

Wrap-up

Module 16 featured four different migrations, modernizing the Module 15 app from App Engine legacy services like NDB and Blobstore to Cloud NDB and Cloud Storage, respectively. While we recommend users move to the latest offerings from Google Cloud, migrating from Blobstore to Cloud Storage isn't required, and should you opt to do so, you can migrate on your own timeline. In addition to today's video, be sure to check out the Module 16 codelab, which leads you step-by-step through the migrations discussed.

In Fall 2021, the App Engine team extended support of many of the bundled services to 2nd generation runtimes (that have a 1st generation runtime), meaning you are no longer required to migrate to Cloud Storage when porting your app to Python 3. You can continue using Blobstore in your Python 3 app so long as you retrofit the code to access bundled services from next-generation runtimes.
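
Concretely, opting a Python 3 app into the bundled services comes down to an app.yaml flag plus wrapping the WSGI app – a minimal sketch, assuming a Flask app and the appengine-python-standard package in requirements.txt:

    # main.py – sketch of enabling bundled services (such as Blobstore) in
    # a Python 3 App Engine app. Also requires "app_engine_apis: true" in
    # app.yaml.
    from flask import Flask
    from google.appengine.api import wrap_wsgi_app

    app = Flask(__name__)
    # The wrapper makes the google.appengine.* bundled APIs available to
    # request handlers at runtime.
    app.wsgi_app = wrap_wsgi_app(app.wsgi_app)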

If you're using other App Engine legacy services be sure to check out the other Migration Modules in this series. All Serverless Migration Station content (codelabs, videos, source code [when available]) can be accessed at its open source repo. While our content initially focuses on Python users, the Cloud team is working on covering other language runtimes, so stay tuned. For additional video content, check out our broader Serverless Expeditions series.

Google Dev Library Letters — 12th Issue

Posted by Garima Mehra, Program Manager

‘Google Dev Library Letters’ is curated to bring you some of the latest projects developed with Google tech that have been submitted to the Google Dev Library platform. We hope this brings you the inspiration you need for your next project!


Android

Shape your Image: Circle, Rounded Square, or Cuts at the corner in Android by Sriyank Siddhartha

Using the MDC library's ShapeableImageView, shape images in just a few lines of code.


Foso/Ktorfit by Jens Klingenberg

HTTP client / Kotlin Symbol Processor for Kotlin Multiplatform (Js, Jvm, Android, Native, iOS) using KSP and Ktor clients inspired by Retrofit.

These Lionesses have byte – could analytics help them lift the trophy?

On Sunday, England will face Germany in the final act of this summer’s tournament—one that has pitted the top teams from across Europe against one another, and inspired a generation.

It will take grit, determination and a stunning backheel here or there for England to win. But technology plays a part too. The Football Association’s partnership with Google Cloud has been a vital part of the picture in the lead-up to the competition, giving coaches and performance staff access to data and processing muscle that help them select the best squad available at any one time.

The FA’s Player Performance System (PPS) is a central component of Helix – an application and development suite built by The FA. Helix has been hosted on Google Cloud for the last five years and is used by the Technical Directorate staff associated with both the England women’s and men’s football teams. It provides them with secure access to databases, processes, functions, and compute resources that analyse large volumes of data. It also integrates with visualisation tools to give coaches and performance staff multiple views of data, providing unique insights customised to end users’ requirements.

This data can include anything from player profiles, to scouting reports, to medical information, to club and international fixtures and results. It also brings in metrics pulled from wearable devices, which track players’ training volume and intensity, allowing coaches to better manage their workloads. Coaches also have access to players’ sleep, nutrition, recovery, and mental health data.

“What it allows our users to do is pull together disparate information that they may not be used to seeing side-by-side. This helps us to generate new insights, and hopefully give us an edge when it comes to competitions,” said Craig Donald, CIO at The FA.

Helix provides multi-dimensional insight

Helix tracks more than 3,500 professional footballers and stores more than 22 million player data points collected from competitive games and training sessions. The platform relies on various Google Cloud tools, glued together by a complex microservice system, which is used to update the data being collected, analysed, and stored. Google Cloud Storage is also used to host The FA’s video archives of competitive games. As many as 400 games a day make their way into The FA database, each one generating a file of up to 5GB plus 600MB of video tracking data.
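
As a rough illustration of that kind of ingestion – not The FA's actual pipeline – uploading one large match file with the Cloud Storage Python client might look like this, with hypothetical bucket and object names:

    # Hypothetical sketch of pushing a large match video to Cloud Storage;
    # the bucket and object names are illustrative, not The FA's.
    from google.cloud import storage

    client = storage.Client()
    bucket = client.bucket('helix-match-archive')  # hypothetical bucket

    def upload_match_video(local_path, object_name):
        blob = bucket.blob(object_name)
        # Multi-gigabyte files benefit from a larger resumable-upload
        # chunk size than the default (must be a multiple of 256 KB).
        blob.chunk_size = 64 * 1024 * 1024  # 64 MB
        blob.upload_from_filename(local_path)

    upload_match_video('match_2022_07_31.mp4',
                       'videos/2022/match_2022_07_31.mp4')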

Image of three Lioness football players with the middle one holding a football. Data points are circled in yellow, red and blue, showcasing how Google Cloud technology is used to look at performance.

This means the FA has faster, more convenient access to data, plus greater insight into player and team performance, which can aid in both squad selection and the choice of tactics for any given fixture. The additional power and capacity of the GCP hosting infrastructure helps The FA quickly and cost-effectively scale up its analytics capabilities to handle additional data sets during forthcoming competitions.

It often seems in football that everybody has their own idea of the best players to pick and the tactics to adopt. But the combination of granular data metrics and cloud architecture deployed by The FA and Google Cloud might actually give a genuine expert the knowledge to back up those opinions.

But does it mean the Lionesses will win on Sunday? Tune in to find out.

Machine Learning Communities: Q2 ‘22 highlights and achievements

Posted by Nari Yoon, Hee Jung, DevRel Community Manager / Soonson Kwon, DevRel Program Manager

Let’s explore the highlights and accomplishments of the vast Google Machine Learning communities over the second quarter of the year! We are enthusiastic about and grateful for all the activities of the global network of ML communities. Here are the highlights!

TensorFlow/Keras

TFUG Agadir hosted the #MLReady phase as a part of #30DaysOfML. #MLReady aimed to equip attendees with the knowledge needed to understand the different types of problems deep learning can solve, and to prepare them for the TensorFlow Certificate.

TFUG Taipei hosted basic Python and TensorFlow courses named From Python to TensorFlow. The aim of these events is to help everyone learn the basics of Python and TensorFlow, including TensorFlow Hub and the TensorFlow API. The event videos are shared every week via a YouTube playlist.

TFUG New York hosted Introduction to Neural Radiance Fields for TensorFlow users. The talk included volume rendering, 3D view synthesis, and links to a minimal implementation of NeRF using Keras and TensorFlow. At the event, ML GDE Aritra Roy Gosthipaty (India) gave a talk focused on breaking the concepts of the academic paper NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis into simpler, more digestible snippets.

TFUG Turkey, GDG Edirne and GDG Mersin organized TensorFlow Bootcamp 22, where ML GDE M. Yusuf Sarıgöz (Turkey) spoke on TensorFlow Ecosystem: Get most out of auxiliary packages. Yusuf demonstrated the inner workings of TensorFlow – how variables, tensors and operations interact with each other, and how auxiliary packages are built upon this skeleton.

TFUG Mumbai hosted its June Meetup, where 110 folks gathered. ML GDE Sayak Paul (India) and TFUG mentor Darshan Despande shared knowledge through sessions, and in beginner-focused ML workshops, participants built machine learning models without writing a single line of code.

ML GDE Hugo Zanini (Brazil) wrote Realtime SKU detection in the browser using TensorFlow.js. He shared a solution for a well-known problem in the consumer packaged goods (CPG) industry: real-time and offline SKU detection using TensorFlow.js.

ML GDE Gad Benram (Portugal) wrote Can a couple TensorFlow lines reduce overfitting? He explained how just a few lines of code can generate data augmentations and boost a model’s performance on the validation set.
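
The general idea can be sketched with a few Keras preprocessing layers – an illustrative example of the technique, not necessarily the exact lines from the post:

    # Illustrative sketch: random augmentation layers at the front of a
    # model so each epoch sees perturbed variants of the training images,
    # which typically narrows the train/validation gap.
    import tensorflow as tf

    data_augmentation = tf.keras.Sequential([
        tf.keras.layers.RandomFlip('horizontal'),
        tf.keras.layers.RandomRotation(0.1),
        tf.keras.layers.RandomZoom(0.1),
    ])

    model = tf.keras.Sequential([
        tf.keras.Input(shape=(224, 224, 3)),
        data_augmentation,                      # active only during training
        tf.keras.layers.Rescaling(1.0 / 255),
        tf.keras.layers.Conv2D(32, 3, activation='relu'),
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(10, activation='softmax'),
    ])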

ML GDE Victor Dibia (USA) wrote How to Build An Android App and Integrate Tensorflow ML Models, sharing how to run machine learning models locally on Android mobile devices, and How to Implement Gradient Explanations for a HuggingFace Text Classification Model (Tensorflow 2.0), explaining in five steps how to verify that the model is focusing on the right tokens to classify text. He also wrote about how to fine-tune a HuggingFace model for text classification using Tensorflow 2.0.

ML GDE Karthic Rao (India) released a new series ML for JS developers with TFJS. This series is a combination of short portrait and long landscape videos. You can learn how to build a toxic word detector using TensorFlow.js.

ML GDE Sayak Paul (India) implemented the DeiT family of ViT models, ported the pre-trained params into the implementation, and provided code for off-the-shelf inference, fine-tuning, visualizing attention rollout plots, and distilling ViT models through attention. (code | pretrained model | tutorial)

ML GDE Sayak Paul (India) and ML GDE Aritra Roy Gosthipaty (India) inspected various phenomena of a Vision Transformer, shared insights from various relevant works done in the area, and provided concise implementations that are compatible with Keras models. They provide tools to probe into the representations learned by different families of Vision Transformers. (tutorial | code)

JAX/Flax

ML GDE Aakash Nain (India) gave a special talk, Introduction to JAX, for ML GDEs, TFUG organizers and ML community network organizers. He covered the fundamentals of JAX/Flax so that more people will try out JAX in the near future.

ML GDE Seunghyun Lee (Korea) started a project, Training and Lightweighting Cookbook in JAX/FLAX. This project attempts to build a neural network training and lightweighting cookbook including three kinds of lightweighting solutions, i.e., knowledge distillation, filter pruning, and quantization.

ML GDE Yucheng Wang (China) wrote History and features of JAX and explained the difference between JAX and Tensorflow.

ML GDE Martin Andrews (Singapore) shared a video, Practical JAX : Using Hugging Face BERT on TPUs. He reviewed the Hugging Face BERT code, written in JAX/Flax, being fine-tuned on Google’s Colab using Google TPUs. (Notebook for the video)

ML GDE Soumik Rakshit (India) wrote Implementing NeRF in JAX. He attempts to create a minimal implementation of 3D volumetric rendering of scenes represented by Neural Radiance Fields.

Kaggle

ML GDEs’ Kaggle notebooks were announced as the winner of Google OSS Expert Prize on Kaggle: Sayak Paul and Aritra Roy Gosthipaty’s Masked Image Modeling with Autoencoders in March; Sayak Paul’s Distilling Vision Transformers in April; Sayak Paul & Aritra Roy Gosthipaty’s Investigating Vision Transformer Representations; Soumik Rakshit’s Tensorflow Implementation of Zero-Reference Deep Curve Estimation in May and Aakash Nain’s The Definitive Guide to Augmentation in TensorFlow and JAX in June.

ML GDE Luca Massaron (Italy) published The Kaggle Book with Konrad Banachewicz. This book details competition analysis, sample code, end-to-end pipelines, best practices, and tips & tricks. And in the online event, Luca and the co-author talked about how to compete on Kaggle.

ML GDE Ertuğrul Demir (Turkey) wrote Kaggle Handbook: Fundamentals to Survive a Kaggle Shake-up covering bias-variance tradeoff, validation set, and cross validation approach. In the second post of the series, he showed more techniques using analogies and case studies.

TFUG Chennai hosted ML Study Jam with Kaggle and created study groups for the interested participants. More than 60% of members were active during the whole program and many of them shared their completion certificates.

TFUG Mysuru organizer Usha Rengaraju shared a Kaggle notebook which contains the implementation of the research paper: UNETR - Transformers for 3D Biomedical Image Segmentation. The model automatically segments the stomach and intestines on MRI scans.

TFX

ML GDE Sayak Paul (India) and ML GDE Chansung Park (Korea) shared how to deploy a deep learning model with Docker, Kubernetes, and Github actions, with two promising ways - FastAPI (for REST) and TF Serving (for gRPC).
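
A bare-bones sketch of the REST-serving approach (FastAPI) conveys the flavor; the model path, request schema, and endpoint name are our assumptions rather than the authors' exact code:

    # Minimal FastAPI REST-serving sketch; the model path and schema are
    # illustrative assumptions. Run with: uvicorn main:app
    import numpy as np
    import tensorflow as tf
    from fastapi import FastAPI
    from pydantic import BaseModel

    app = FastAPI()
    model = tf.keras.models.load_model('saved_model/')  # hypothetical path

    class PredictRequest(BaseModel):
        instances: list  # batch of preprocessed feature vectors

    @app.post('/predict')
    def predict(req: PredictRequest):
        batch = np.asarray(req.instances, dtype='float32')
        preds = model.predict(batch)
        return {'predictions': preds.tolist()}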

ML GDE Ukjae Jeong (Korea) and ML Engineers at Karrot Market, a mobile commerce unicorn with 23M users, wrote Why Karrot Uses TFX, and How to Improve Productivity on ML Pipeline Development.

ML GDE Jun Jiang (China) had a talk introducing the concept of MLOps, the production-level end-to-end solutions of Google & TensorFlow, and how to use TFX to build the search and recommendation system & scientific research platform for large-scale machine learning training.

ML GDE Piero Esposito (Brazil) wrote Building Deep Learning Pipelines with Tensorflow Extended. He showed how to get started with TFX locally and how to move a TFX pipeline from local environment to Vertex AI; and provided code samples to adapt and get started with TFX.

TFUG São Paulo (Brazil) had a series of online webinars on TensorFlow and TFX. In the TFX session, they focused on how to put the models into production. They talked about the data structures in TFX and implementation of the first pipeline in TFX: ingesting and validating data.

TFUG Stockholm hosted MLOps, TensorFlow in Production, and TFX covering why, what and how you can effectively leverage MLOps best practices to scale ML efforts and had a look at how TFX can be used for designing and deploying ML pipelines.

Cloud AI

ML GDE Chansung Park (Korea) wrote MLOps System with AutoML and Pipeline in Vertex AI on GCP official blog. He showed how Google Cloud Storage and Google Cloud Functions can help manage data and handle events in the MLOps system.

He also shared a GitHub repository, Continuous Adaptation with VertexAI's AutoML and Pipeline. It contains two notebooks demonstrating how to automatically produce a new AutoML model when a new dataset comes in.

TFUG Northwest (Portland) hosted The State and Future of AI + ML/MLOps/VertexAI lab walkthrough. In this event, ML GDE Al Kari (USA) outlined the technology landscape of AI, ML, MLOps and frameworks. Googler Andrew Ferlitsch had a talk about Google Cloud AI’s definition of the 8 stages of MLOps for enterprise scale production and how Vertex AI fits into each stage. And MLOps engineer Chris Thompson covered how easy it is to deploy a model using the Vertex AI tools.

Research

ML GDE Qinghua Duan (China) released a video introducing Google’s latest 540-billion-parameter model. He introduced the PaLM paper and described the basic training process and innovations.

ML GDE Rumei LI (China) wrote blog posts reviewing the papers behind DeepMind's Flamingo and Google's PaLM.

Introducing Earth Engine for governments and businesses

We’re at a unique inflection point in our relationship with the planet. We face existential climate threats — a growing crisis already manifesting in extreme weather events, coupled with the loss of nature resulting from human activities such as deforestation. But at the same time, the world is mobilizing around climate action. Citizens are demanding progress, and governments and companies are making unprecedented commitments to transform how we live on this planet — from policy decisions to business practices. Over the years, one of the top climate challenges I’ve heard from businesses, governments and organizations is that they’re drowning in data but thirsty for insights.

So starting today, we’re making Google Earth Engine available to businesses and governments worldwide as an enterprise-grade service through Google Cloud. With access to reliable, up-to-date insights on how our planet is changing, organizations will be better equipped to move their sustainability efforts forward.

Google Earth Engine, which originally launched to scientists and NGOs in 2010, is a leading technology for planetary-scale environmental monitoring. Google Earth Engine combines data from hundreds of satellites and earth observation datasets with powerful cloud computing to show timely, accurate, high-resolution insights about the state of the world’s habitats and ecosystems — and how they’re changing over time. With one of the largest publicly available data catalogs and a global data archive that goes back 50 years and updates every 15 minutes, it’s possible to detect trends and understand correlations between human activities and environmental impact. This technology is already beginning to bring greater transparency and traceability to commodity supply chains, supporting climate resilience and allowing for more sustainable management of natural resources such as forests and water.

Earth Engine will be available at no charge to government researchers, least-developed countries, tribal nations and news organizations. And it will remain available at no cost for nonprofit organizations, research scientists, and other impact users for their non-commercial and research projects.

Earth Engine will also be available to startups that are a part of the Google for Startups Cloud Program. Through this initiative we provide funded startups with access to dedicated mentors, industry experts, product and technical support, and Cloud cost coverage (up to $100,000) for each of the first two years and more.

How organizations are using Earth Engine

Since we announced the preview of Earth Engine in Google Cloud last October, we’ve been working with dozens of companies and organizations across industries — from consumer packaged goods and insurance companies to agriculture technology and the public sector — to use Earth Engine’s satellite imagery and geospatial data in incredible ways.

Land cover change over time from Dynamic World

Dynamic World, a global machine-learning-derived land classification over time available in Earth Engine's public data catalog, was developed in partnership with the World Resources Institute (WRI).
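
For developers, datasets like Dynamic World are reached through the Earth Engine client libraries. A minimal sketch in Python, assuming the earthengine-api package and authenticated credentials; the date range and location are arbitrary examples:

    # Sketch: loading Dynamic World land cover with the Earth Engine Python
    # client. Assumes prior ee.Authenticate() or configured credentials;
    # the dates and point below are arbitrary examples.
    import ee

    ee.Initialize()

    dw = (ee.ImageCollection('GOOGLE/DYNAMICWORLD/V1')
          .filterDate('2022-01-01', '2022-06-30')
          .filterBounds(ee.Geometry.Point(-122.08, 37.42)))

    # The per-pixel mode of the 'label' band gives the dominant land-cover
    # class over the period.
    classification = dw.select('label').mode()
    print(classification.getInfo())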

For example, Regrow, a company that helps large consumer packaged goods corporations decarbonize their agricultural practices, started using Earth Engine to report and verify regenerative and sustainable techniques. Through Earth Engine’s analysis of historical and satellite imagery, Regrow can generate granular field data at the state or country levels across millions of acres of farmland around the world.

As climate change causes shifts in biodiversity, Earth Engine is helping communities adapt to the effects of these changes, such as new mosquito outbreaks. SC Johnson partnered with Google Cloud to use Earth Engine to develop a publicly accessible, predictive model of when and where mosquito populations are emerging nationwide. The forecast accounts for billions of individual weather data points and over 60 years of mosquito knowledge in forecasting models.

Animated gif showing Off!Cast, SC Johnson’s mosquito forecasting tool. A zip code is entered into the tool to show a 7-day forecast that indicates medium, high and very high mosquito activity.

For organizations that may not have resources dedicated to working with Earth Engine, we’ve continued to grow our partner network to support them. For example, our partner NGIS worked with Rainforest Trust to get action-oriented and tailored insights that can help them conserve 39 million acres of tropical forests around the world.

It’s not too late to protect and restore a livable planet for ourselves and generations to come. Climate change experts have declared the next ten years the ‘Decade of Action’, a critical time to act in order to curb the effects of climate change. Making a global difference will require a transformational change from everyone, including businesses and governments. With Google Earth Engine, we hope to help organizations contribute to this change.


A bigger piece of the pi: Finding the 100-trillionth digit

The 100-trillionth decimal place of π (pi) is 0. A few months ago, on an average Tuesday morning in March, I sat down with my coffee to check on the program that had been running a calculation from my home office for 157 days. It was finally time — I was going to be the first and only person to ever see the number. The results were in and it was a new record: We’d calculated the most digits of π ever — 100 trillion to be exact.

Calculating π — or finding as many digits of it as possible — is a project that mathematicians, scientists and engineers around the world have worked on for thousands of years, myself included. The well-known approximation 3.14 is believed to have been found by Archimedes around the year 250 BCE. Computer scientist Donald Knuth wrote "human progress in calculation has traditionally been measured by the number of decimal digits of π" in his book “The Art of Computer Programming” (Dr. Knuth even wrote about me in the book). In the past, people would manually — meaning without calculators or computers — determine the digits of pi. Today, we use computers to do this calculation, which helps us learn how much faster they’ve become. It’s one of the few ways to measure how much progress we're making across centuries, including before the invention of electronic computers.

An illustration of pie crust stretching from the Earth to the moon. Above it reads: "100 trillion inches of pie crust stretches from Earth to the moon and back ~3,304 times."

As a developer advocate at Google Cloud, part of my job is to create demos and run experiments that show the cool things developers can do with our platform; one of those things, you guessed it, is using a program to calculate digits of pi. Breaking the record of π was my childhood dream, so a few years ago I decided to try using Google Cloud to take on this project. I also wanted to see how much data processing these computers could handle. In 2019, I became the third woman to break this world record, with a π calculation of 31.4 trillion digits.
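
For a taste of what such a program does – the record run used far more sophisticated software and distributed Google Cloud infrastructure – here is a toy calculator in Python based on the Chudnovsky series, which yields roughly 14 digits per term:

    # Toy pi calculator using the Chudnovsky series (~14 digits per term).
    # Purely illustrative – nothing like the record-setting setup.
    from decimal import Decimal, getcontext

    def chudnovsky_pi(digits):
        getcontext().prec = digits + 10          # extra guard digits
        K, M, X, L = 6, 1, 1, 13591409
        S = Decimal(L)
        for i in range(1, digits // 14 + 2):
            M = M * (K ** 3 - 16 * K) // i ** 3  # exact integer recurrence
            L += 545140134
            X *= -262537412640768000
            S += Decimal(M * L) / X
            K += 12
        return 426880 * Decimal(10005).sqrt() / S

    print(chudnovsky_pi(50))  # 3.14159265358979323846...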

But I couldn’t stop there, and I decided to try again. And now we have a new record of 100 trillion decimal places. This shows us, again, just how far computers have come: in three years, the computers calculated more than three times as many digits. What’s more, in 2019, it took the computers 121 days to reach 31.4 trillion digits. This time, it took 157 days to reach 100 trillion – more than twice as fast as the first project.

An illustrated chart showing how quickly we reached the new pi record compared to the last time in 2019.

But let’s look back farther than my 2019 record: the first world record for computing π with an electronic computer was set in 1949, with a calculation of 2,037 decimal places. It took humans thousands of years to reach the two-thousandth digit, and we've reached the 100-trillionth just 73 years later. Not only are we adding more digits than all the numbers in the past combined, but we're spending less and less time hitting new milestones.

An illustration of a person holding a phone and tapping on the screen. Above it reads: "The 82,000 terabytes of data processed during calculations is the equivalent of 160,156 Pixel 6 Pros with max storage (512 GB)."

I used the same tools and techniques as I did in 2019 (for more details, we have a technical explanation in the Google Cloud blog), but I was able to hit the new number more quickly thanks to Google Cloud’s infrastructure improvements in compute, storage and networking. One of the most remarkable phenomena in computer science is that every year we have made incremental progress, and in return we have reaped exponentially faster compute speeds. This is what’s made a lot of the recent computer-assisted research possible in areas like climate science and astronomy.

An illustration of a person with a megaphone. Above it reads: "If you read all 100 trillion digits out loud, one second at a time, it would take you 3,170,929 years to read the whole thing."

Back when I hit that record in 2019 – and again now – many people asked "what's next?" And I’m happy to say the scientific community just keeps counting. There's no end to π: it's a transcendental number, meaning it isn't the root of any polynomial with integer coefficients, so its digits go on forever without repeating. Plus, we don't see an end to the evolution of computing. Like the introduction of electronic computers in the 1940s and the discovery of faster algorithms in the 1960s-80s, we could still see another fundamental shift that keeps the momentum going.

So, like I said: I’ll just keep counting.