Tag Archives: google cloud

Helping fashion brands make more sustainable decisions

The fashion industry is one of the largest contributors to the global climate and ecological crisis — accounting for up to 8% of global greenhouse gas emissions. Much of this impact occurs at the raw materials stage of the supply chain, like when cotton is farmed or trees are cut down to create viscose. But when brands source these materials, they often have little to no visibility into their environmental impact.

In 2019, we set out to create a tool that would give companies the data they need to make more responsible sourcing decisions. Today we’re announcing the first version of the Global Fibre Impact Explorer (GFIE), and we’re inviting other brands to get involved. The tool, which is built on Google Earth Engine and uses Google Cloud computing, assesses the environmental risk of different fibers across regions as it relates to environmental factors such as air pollution, biodiversity, climate and greenhouse gasses, forestry and water use.

With this tool, brands will easily be able to identify environmental risks across more than 20 fiber types — including natural, cellulosic and synthetic materials. The tool will also provide brands with recommendations for targeted, regionally specific risk-reduction activities, including opportunities to work with farmers, producers and communities, such as investing in regenerative agriculture practices.

The GFIE dashboard where brands can upload their fiber portfolio data and get recommendations to reduce risk across key environmental categories.

Spooling it all together: Working with fashion brands and conservation experts

We worked with Stella McCartney, a luxury fashion brand and leader in sustainability, to understand the industry's needs and to test the platform. Using the tool alongside their existing sustainability efforts, Stella McCartney’s team was able to identify cotton sources in Turkey that were facing increased water and climate risks. This affirms the need for investing in local farming communities that focus on regenerative practices, such as water management and soil regeneration. Other brands and retailers — including Adidas, Allbirds, H&M Group and VF Corporation — have helped test and refine the tool to make sure it can be useful to everyone in the industry. And an external council of global experts has reviewed the GFIE methodology and data.

The GFIE was born out of a partnership between Google and the WWF, and is built to complement existing tools focused on industry impact and risk analysis. With the initial development phase complete, Google and WWF are now transitioning GFIE to Textile Exchange, a global non-profit focused on positively impacting climate through accelerating the use of preferred fibers across the global textile industry. As the official host of the GFIE, Textile Exchange will continue the development of the tool, onboard new brands and work towards an industry launch in 2022.

If you’re a part of a fashion brand or industry group and want access to this tool, please register your interest at globalfibreimpact.com.

Upload massive lists of products to Merchant Center using Centimani

Posted by Hector Parra, Jaime Martínez, Miguel Fernandes, Julia Hernández

Merchant Center lets merchants manage how their in-store and online product inventory appears on Google. It allows them to reach the hundreds of millions of people looking to buy products like theirs each day.


To upload their products, merchants can make use of feeds, that is, files with a list of products in a specific format. These can be shared with Merchant Center in different ways: using Google Sheets, SFTP or FTP shares, Google Cloud Storage, or manually through the user interface. These methods work well for the majority of cases. But if a merchant's product list grows over time, they might reach the usage limits of feeds. Depending on the case, quota extensions can be granted, but if the list continues to grow, it might reach a point where feeds no longer support that scale, and the Content API for Shopping becomes the recommended way forward.


The main issue is that by the time a merchant is advised to stop using feeds and start using the Content API due to scale problems, the number of products is already massive, and calling the Content API directly will produce usage and quota errors, as the QPS and products-per-call limits will be exceeded.


For this specific use case, Centimani becomes critical in helping merchants handle the upload process through the Content API in a controlled manner, avoiding any overload of the API.


Centimani is a configurable processor for massive files. It splits text files into chunks, processes each chunk according to a configurable strategy, and stores the results in BigQuery for reporting. It provides configurable options for chunk size and number of retries, and takes care of exponential backoff to ensure all requests have enough retries to overcome temporary issues or errors. Centimani comes with two operators, the Google Ads Offline Conversions Uploader and the Merchant Center Products Uploader, but it can easily be extended to other uses.
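To illustrate the retry behavior described above, here is a minimal sketch of exponential backoff with jitter (not Centimani's actual implementation; `TransientError` is a stand-in for whatever retryable errors the API raises):

```python
import random
import time

class TransientError(Exception):
    """Stand-in for the API's retryable errors (quota, 5xx, timeouts)."""

def call_with_backoff(call, max_retries=5, base_delay=1.0):
    """Run `call`, retrying transient failures with exponential backoff plus jitter."""
    for attempt in range(max_retries):
        try:
            return call()
        except TransientError:
            if attempt == max_retries - 1:
                raise  # out of retries: surface the error to the caller
            # Wait 1s, 2s, 4s, ... plus jitter so parallel workers don't retry in sync.
            time.sleep(base_delay * (2 ** attempt) + random.uniform(0, 1) * base_delay)
```

The jitter matters here because Centimani runs many Cloud Tasks in parallel; without it, failed calls would all retry at the same instants and hit the quota again.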


Centimani uses Google Cloud as its platform, and makes use of Cloud Storage for storing the data, Cloud Functions to do the data processing and the API calls, Cloud Tasks to coordinate the execution of each call, and BigQuery to store the audit information for reporting.

Centimani Architecture

To start using Centimani, a couple of configuration files need to be prepared with information about the Google Cloud Project to be used (including the element names), the credentials to access the Merchant Center accounts and how the load will be distributed (e.g., parallel executions, number of products per call). Then, the deployment is done automatically using a deployment script provided by the tool.
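As a purely hypothetical sketch of what those configuration files might hold (the field names below are illustrative assumptions, not Centimani's actual schema):

```python
# Hypothetical shape of a Centimani deployment config; consult the repository
# for the real schema. Values here are placeholders.
DEPLOY_CONFIG = {
    'project_id': 'my-gcp-project',     # Google Cloud project to deploy into
    'input_bucket': 'centimani-input',  # Cloud Storage bucket watched for new files
    'output_bucket': 'centimani-output',# where processed slices are written
}

# Per-account settings controlling how the load is distributed.
MC_CONFIG = {
    'credentials': 'service_account.json',  # access to the Merchant Center account
    'parallel_executions': 10,              # how many calls run at once
    'products_per_call': 1000,              # slice size for each Content API call
    'max_retries': 5,                       # retries with exponential backoff
}
```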


After the tool is deployed, a Cloud Function monitors the input bucket in Cloud Storage, and every time a file is uploaded there, it is processed. The tool uses the name of the file to select the operator to be used (“MC” indicates the Merchant Center Products Uploader) and the particular configuration to apply (multiple configurations can be used to connect to Merchant Center accounts with different access credentials).
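A minimal sketch of that filename-based dispatch (the prefix-and-underscore convention and the operator names here are assumptions for illustration, not Centimani's exact code):

```python
# Hypothetical operator registry; Centimani's real dispatch logic lives in its
# Cloud Function and may differ in detail.
OPERATORS = {
    'MC': 'merchant_center_products_uploader',
    'GADS': 'google_ads_offline_conversions_uploader',  # assumed prefix
}

def pick_operator(filename: str) -> str:
    """Select the operator from the file name prefix (e.g. 'MC_...')."""
    prefix = filename.split('_', 1)[0]
    if prefix not in OPERATORS:
        raise ValueError(f'No operator registered for prefix {prefix!r}')
    return OPERATORS[prefix]
```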


Whenever a file is uploaded, it is sliced into parts if it contains more products than are allowed per call. The slices are stored in the output bucket in Cloud Storage, and Cloud Tasks launches the API calls until all files are processed. Any file with errors is stored in a folder called “slices_failed” to help troubleshoot issues found in the process. All the information about the executions is stored temporarily in Datastore and then moved to BigQuery, where it can be used to monitor the whole process from a centralized place.
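The slicing step itself can be sketched in a few lines (a simplified illustration, where `max_per_call` mirrors the products-per-call limit; the real tool also writes each slice to the output bucket and enqueues a Cloud Task for it):

```python
def slice_products(lines, max_per_call=1000):
    """Split a product list into slices no larger than the API's per-call limit.

    Returns a list of slices; e.g. 2,500 products with max_per_call=1000
    yields slices of 1000, 1000 and 500.
    """
    return [lines[i:i + max_per_call] for i in range(0, len(lines), max_per_call)]
```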


Centimani Status Dashboard Architecture

Centimani provides an easy way for merchants to start using the Content API for Shopping to manage their products, without having to deal with the complexity of keeping the system under the limits.


For more information, visit the Centimani repository on GitHub.


Machine Learning Communities: Q3 ‘21 highlights and achievements

Posted by HyeJung Lee, DevRel Community Manager and Soonson Kwon, DevRel Program Manager

Let’s explore the highlights and achievements of Google's vast machine learning communities, by region, over the last quarter. Activities of experts (GDEs, professional individuals), communities (TFUGs, TensorFlow user groups), students (GDSCs, student clubs) and developer groups (GDGs) are presented here.

Key highlights

Image shows a banner for 30 days of ML with Kaggle

30 days of ML with Kaggle is designed to help beginners study ML using Kaggle Learn courses, along with a competition specifically for the participants of this program. We collaborated with the Kaggle team so that more than 30 ML GDEs and TFUG organizers could take part as volunteers, serving as online mentors and speakers for this initiative.

In total, 16 GDE/GDSC/TFUG groups ran community-organized programs using the shared community organizer guide. Houston TensorFlow & Applied AI/ML placed 6th out of 7,573 teams, the only Americans in the top 10 of the competition. TFUG Santiago (Chile) organizers participated as well, finishing 17th on the public leaderboard.

Asia Pacific

Image shows Google Cloud and Coca-Cola logos

GDE Minori MATSUDA (Japan)’s project with Coca-Cola Bottlers Japan was published on the Google Cloud Japan blog, covering how an ML pipeline was built and deployed into a real business within two months using Vertex AI. The post is also available in English on the GCP blog.

GDE Chansung Park (Korea) and Sayak Paul (India) published several articles on the GCP blog. First, “Image search with natural language queries” explains how to build a simple image search system from natural language inputs using OpenAI's CLIP model. Second, the two-part “Model training as a CI/CD system” (Part I, Part II) covers why a resilient CI/CD system for your ML application is crucial for success. Last, “Dual deployments on Vertex AI” walks through an end-to-end workflow using Vertex AI, TFX and Kubeflow.

In China, GDE Junpeng Ye used TensorFlow 2.x to significantly reduce the codebase (15k → 2k lines) of WeChat Finder, a TikTok alternative inside WeChat. GDE Dan Lee wrote the Understanding TensorFlow series: Part 1, Part 2, Part 3-1, Part 3-2 and Part 4.

GDE Ngoc Ba from Vietnam contributed the AI Papers Reading and Coding series, implementing ML/DL papers in TensorFlow and creating slides and videos every two weeks (videos: ViT Transformer, MLP-Mixer and Transformer).

GDSC Sookmyung (Korea) created beginner-friendly codelabs (Get started with audio classification, Go further with audio classification) that teach how to customize pre-trained audio classification models to your needs and deploy them to your apps, using the TFLite Model Maker.

Cover image for Mat Kelcey's talk on JAX at the PyConAU event

GDE Matthew Kelcey from Australia gave a talk on JAX at the PyConAU event. Mat gave an overview of the fundamentals of JAX and an intro to some of the libraries being developed on top of it.

Image shows overview for the released PerceiverIO code

In Singapore, TFUG Singapore dived back into some of the latest papers, techniques, and fields of research that are delivering state-of-the-art results in a number of fields. GDE Martin Andrews included a brief code walkthrough of the released PerceiverIO code, highlighting what JAX looks like, how Haiku relates to Sonnet, and how the data loading is done via tf.data.

Machine Learning Experimentation with TensorBoard book cover

GDE Imran us Salam Mian from Pakistan published a book "Machine Learning Experimentation with TensorBoard".

India

GDE Aakash Nain has published the TF-JAX tutorial series from Part 4 to Part 8. Part 4 gives a brief introduction to JAX (what and why) and DeviceArray. Part 5 covers why pure functions are good and why JAX prefers them. Part 6 focuses on pseudo-random number generation (PRNG) in NumPy and JAX. Part 7 focuses on just-in-time compilation (JIT) in JAX. And Part 8 covers vmap and pmap.

Image of Bhavesh's Google Cloud certificate

GDE Bhavesh Bhatt published a video about his experience on the Google Cloud Professional Data Engineer certification exam.

Image shows phase 1 and 2 of the Climate Change project using Vertex AI

ML GDE Sayak Paul and Siddha Ganju (NVIDIA) built a climate change project using Vertex AI. They published a paper (Flood Segmentation on Sentinel-1 SAR Imagery with Semi-Supervised Learning) and open-sourced the project from NASA Impact's ETCI competition. The project made it into four NeurIPS workshops (AI for Science: Mind the Gaps; Tackling Climate Change with Machine Learning; Women in ML; and Machine Learning and the Physical Sciences), and they finished as the first runners-up (see Test Phase 2).

Image shows example of handwriting recognition tutorial

A tutorial on handwriting recognition was contributed to the Keras examples by GDE Sayak Paul and Aakash Kumar Nain.

Graph regularization for image classification using synthesized graphs by GDE Sayak Paul was added to the official examples of Neural Structured Learning in TensorFlow.

GDE Sayak Paul and Soumik Rakshit shared a new NLP dataset for multi-label text classification. The dataset consists of paper titles, abstracts, and term categories scraped from arXiv.

North America

Banner image shows students participating in Google Summer of Code

During GSoC (Google Summer of Code), some GDEs mentored or co-mentored students. GDE Margaret Maynard-Reid (USA) mentored projects across TF-GAN, Model Garden, TF Hub and TFLite. You can get some of her experience and tips from the GDE blog. You can also find GDE Sayak Paul (India) and Googler Morgan Roff’s GSoC experience in (co-)mentoring TensorFlow and TF Hub.

A beginner friendly workshop on TensorFlow with ML GDE Henry Ruiz (USA) was hosted by GDSC Texas A&M University (USA) for the students.

Screenshot from Youtube video on how transformers work

The YouTube video Self-Attention Explained: How do Transformers work? by GDE Tanmay Bakshi from Canada explains how to build a Transformer encoder-based neural network to classify code into 8 different programming languages, using Keras on Colab with TPUs.

Europe

GDG / GDSC Turkey hosted an AI Summer Camp in cooperation with Global AI Hub. 7,100 participants learned about ML, TensorFlow, CV and NLP.

Screenshot from slide presentation titled Why Jax?

In the TechTalk Speech Processing with Deep Learning and JAX/Trax, GDE Sergii Khomenko (Germany) and M. Yusuf Sarıgöz (Turkey) reviewed technologies such as JAX, TensorFlow and Trax that can help boost research in speech processing.

South/Central America

Image shows Custom object detection in the browser using TensorFlow.js

On the other side of the world, in Brazil, GDE Hugo Zanini Gomes wrote the article “Custom object detection in the browser using TensorFlow.js”, based on the TensorFlow 2 Object Detection API and Colab, which was posted on the TensorFlow blog.

Screenshot from a talk about Real-time semantic segmentation in the browser - Made with TensorFlow.js

Hugo also gave a talk, Real-time semantic segmentation in the browser - Made with TensorFlow.js, which covered using SavedModels efficiently in JavaScript, enabling you to get the reach and scale of the web for your research.

In her talk Data Pipelines for ML, GDE Nathaly Alarcon Torrico from Bolivia explained all the phases involved in the creation of ML and data science products, from data collection and transformation to storage and the productionization of ML models.

Screenshot from the TechTalk “Machine Learning Competitivo: Top 1% en Kaggle” (video)

The TechTalk “Machine Learning Competitivo: Top 1% en Kaggle” (video) was hosted by TFUG Santiago (Chile). In this talk, the speaker walked through the steps to build a model capable of reaching the top 1% of the Kaggle leaderboard. The focus was on the libraries and “tricks” used to test many ideas quickly, both in implementation and in execution, and how to use them in production environments.

MENA

Screenshot from workshop about Recurrent Neural Networks

GDE Ruqiya Bin Safi (Saudi Arabia) gave a workshop on Recurrent Neural Networks: Part 1 (GitHub / slides) at GDG MENA. And Ruqiya gave a talk on Recurrent Neural Networks: Part 2 at GDG Cloud Saudi (Saudi Arabia).

GDSC Islamic University of Gaza from Palestine hosted AI Training with Kaggle, a two-month training covering data processing, image processing and NLP with Kaggle.

Sub-Saharan Africa

TFUG Ibadan held two TensorFlow events: “Basic Sentiment Analysis with TensorFlow” and “Introduction to Recommender Systems with TensorFlow”.

Image of Yannick Serge Obam Akou's TensorFlow Certificate

ML GDE Yannick Serge Obam Akou (Cameroon) wrote an article (in French) with tips to study for, prepare for and pass the TensorFlow Developer exam.

Extend Google Apps Script with your API library to empower users

Posted by Keith Einstein, Product Manager

Banner image that shows the Cloud Task logo

Google is proud to announce the availability of the DocuSign API library for Google Apps Script. This newly created library gives all Apps Script users access to the more than 400 endpoints DocuSign has to offer so they can build digital signatures into their custom solutions and workflows within Google Workspace.

The Google Workspace Ecosystem

Last week at Google Cloud Next ‘21, in the session “How Miro, DocuSign, Adobe and Atlassian are helping organizations centralize their work”, we showcased a few partner integrations called add-ons, found on the Google Workspace Marketplace. The Google Workspace Marketplace helps developers connect with the more than 3 billion people who use Google Workspace — with a stunning 4.8 billion apps installed to date. That incredible demand is fueling innovation in the ecosystem, and we now have more than 5,300 public apps available in the Google Workspace Marketplace, plus thousands more private apps that customers have built for themselves. As a developer, one of the benefits of an add-on is that it surfaces your application in a user-friendly way that helps people reclaim their time and work more efficiently, and it adds another touchpoint for them to engage with your product. And while building an add-on lets users engage with your product frictionlessly from within Google Workspace, innovative companies like DocuSign are going further: they are empowering users to build the unique solutions they need by providing a Google Apps Script library.

Apps Script enables Google Workspace customization

Many users are currently unlocking the power of Google Apps Script by creating the solutions and automations they need to help them reclaim precious time. Publishing a Google Apps Script Library is another great opportunity to bring a product into Google Workspace and gain access to those creators. It gives your users more choices in how they integrate your product into Google Workspace, which in turn empowers them with the flexibility to solve more business challenges with your product’s unique value.

Apps Script libraries can make the development and maintenance of a script more convenient by enabling users to take advantage of pre-built functionality and focus on the aspects that unlock unique value. This allows innovative companies to make available a variety of functionality that Apps Script users can use to create custom solutions and workflows with the features not found in an off-the-shelf app integration like a Google Workspace Add-on or Google Chat application.

The DocuSign API Library for Apps Script

One of the partners we showcased at Google Cloud Next ‘21 was DocuSign. The DocuSign eSignature for Google Workspace add-on has been installed almost two million times. The add-on allows you to collect signatures or sign agreements from inside Gmail, Google Drive or Google Docs. While collecting signatures and signing agreements are some of the most common ways to use DocuSign eSignature inside Google Workspace, there are many more features to DocuSign’s eSignature product. In fact, their eSignature API has over 400 endpoints. Being able to go beyond the top features normally found in an add-on and into the rest of the functionality of DocuSign eSignature is where an Apps Script Library can be leveraged.

And that’s exactly what we’re partnering to do. Recently, DocuSign’s Lead API Product Manager, Jeremy Glassenberg (a Google Developer Expert for Google Workspace) joined us on the Totally Unscripted podcast to talk about DocuSign’s path to creating an Apps Script Library. At the DocuSign Developer Conference, on October 27th, Jeremy will be teaming up with Christian Schalk from our Google Cloud Developer Relations team to launch the DocuSign Apps Script Library and showcase how it can be used.

With the DocuSign Apps Script Library, users around the world who lean on Apps Script to build their workplace automations can create customized DocuSign eSignature processes. Leveraging the Apps Script Library in addition to the DocuSign add-on empowers companies who use both DocuSign and Google Workspace to have a more seamless workflow, increasing efficiency and productivity. The add-on allows customers to integrate the solution instantly into their Google apps, and solve for the most common use cases. The Apps Script Library allows users to go deep and solve for the specialized use cases where a single team (or knowledge worker) may need to tap into a less commonly used feature to create a unique solution.

See us at the DocuSign Developer Conference

The DocuSign Apps Script Library is now available in beta; if you’d like to know more about it, drop a message to [email protected]. And be sure to register for the session on "Building a DocuSign Apps Script Library with Google Cloud", Oct 27th @ 10:00 AM. For updates and news like this about the Google Workspace platform, please subscribe to our developer newsletter.

Migrating App Engine push queues to Cloud Tasks

Posted by Wesley Chun (@wescpy), Developer Advocate, Google Cloud

Banner image that shows the Cloud Task logo

Introduction

The previous Module 7 episode of Serverless Migration Station gave developers an idea of how App Engine push tasks work and how to implement their use in an existing App Engine ndb Flask app. In this Module 8 episode, we migrate this app from the App Engine Datastore (ndb) and Task Queue (taskqueue) APIs to Cloud NDB and Cloud Tasks. This makes your app more portable and provides a smoother transition from Python 2 to 3. The same principle applies to upgrading other legacy App Engine apps from Java 8 to 11, PHP 5 to 7, and up to Go 1.12 or newer.

Over the years, many of the original App Engine services such as Datastore, Memcache, and Blobstore, have matured to become their own standalone products, for example, Cloud Datastore, Cloud Memorystore, and Cloud Storage, respectively. The same is true for App Engine Task Queues, whose functionality has been split out to Cloud Tasks (push queues) and Cloud Pub/Sub (pull queues), now accessible to developers and applications outside of App Engine.

Migrating App Engine push queues to Cloud Tasks video

Migrating to Cloud NDB and Cloud Tasks

The key updates being made to the application:

  1. Add support for Google Cloud client libraries in the app's configuration
  2. Switch from App Engine APIs to their standalone Cloud equivalents
  3. Make required library adjustments, e.g., add use of Cloud NDB context manager
  4. Complete additional setup for Cloud Tasks
  5. Make minor updates to the task handler itself

The bulk of the updates are in #3 and #4 above, and those are reflected in the following "diff"s for the main application file:

Screenshot shows primary differences in code when switching to Cloud NDB & Cloud Tasks

Primary differences switching to Cloud NDB & Cloud Tasks
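To sketch the flavor of that change (a hedged illustration with hypothetical names, not the sample app's exact code): the bundled `taskqueue.add(url=..., params=...)` call is replaced by building a task body and submitting it with the standalone google-cloud-tasks client.

```python
import json

# Hypothetical identifiers for illustration only.
PROJECT, LOCATION, QUEUE = 'my-project', 'us-central1', 'default'

def build_task(handler_url: str, payload: dict) -> dict:
    """Build the request body passed to CloudTasksClient.create_task()."""
    return {
        'app_engine_http_request': {      # the task targets an App Engine handler
            'relative_uri': handler_url,
            'http_method': 'POST',
            'body': json.dumps(payload).encode(),
        }
    }

# Submitting the task requires credentials, so it is shown here as a comment:
# from google.cloud import tasks_v2
# client = tasks_v2.CloudTasksClient()
# parent = client.queue_path(PROJECT, LOCATION, QUEUE)
# client.create_task(parent=parent, task=build_task('/trim', {'oldest': 1234}))
```

Note that with Cloud Tasks the queue lives outside App Engine, which is why the project, location and queue name must now be supplied explicitly.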

With these changes implemented, the web app works identically to that of the Module 7 sample, but both the database and task queue functionality have been completely swapped to using the standalone/unbundled Cloud NDB and Cloud Tasks libraries… congratulations!

Next steps

To do this exercise yourself, check out our corresponding codelab which leads you step-by-step through the process. You can use this in addition to the video, which can provide guidance. You can also review the push tasks migration guide for more information. Arriving at a fully-functioning Module 8 app featuring Cloud Tasks sets the stage for a larger migration ahead in Module 9. We've accomplished the most important step here, that is, getting off of the original App Engine legacy bundled services/APIs. The Module 9 migration from Python 2 to 3 and Cloud NDB to Cloud Firestore, plus the upgrade to the latest version of the Cloud Tasks client library are all fairly optional, but they represent a good opportunity to perform a medium-sized migration.

All migration modules, their videos (when available), codelab tutorials, and source code, can be found in the migration repo. While the content focuses initially on Python users, we will cover other legacy runtimes soon so stay tuned.

HLTH: Building on our commitments in health

Tonight, the HLTH event kicked off in Boston, bringing together leaders across health to discuss healthcare's most pressing problems and how we can tackle them to improve care delivery and outcomes.

Over the past two years, the pandemic shined a light on the importance of our collective health — and the role the private sector, payers, healthcare delivery organizations, governments and public health play in keeping communities healthy. For us at Google, we saw Search, Maps and YouTube become critical ways for people to learn about COVID-19. So we partnered with public health organizations to provide information that helped people stay safe, find testing and get vaccinated. In addition, we provided healthcare organizations, researchers and non-profits with tools, data and resources to support pandemic response and research efforts.

As I mentioned on the opening night of HLTH, Google Health is our company-wide effort to help billions of people be healthier by leaning on our strengths: organizing information and developing innovative technology. Beyond the pandemic, we have an opportunity to continue helping people to address health more holistically through the Google products they use every day and equipping healthcare teams with tools and solutions that help them improve care.

Throughout the conference, leaders from Google Health will share more about the work we’re doing and the partnerships needed across the health industry to improve health outcomes.

Meeting people in their everyday moments and empowering them to be healthier

People are increasingly turning to technology to manage their daily health and wellbeing — from using wearables and apps to track fitness goals, to researching conditions and building community around those with similar health experiences. At Google, we’re working to connect people with accurate, timely and actionable information and tools that can help them manage their health and achieve their goals.

On Monday, Dr. Garth Graham, who leads healthcare and public health partnerships for YouTube, will join the panel “Impactful Health Information Sharing” to discuss video as a powerful medium to connect people with engaging and high-quality health information. YouTube has been working closely with organizations, like the American College of Physicians, the National Alliance on Mental Illness and Mass General Brigham, to increase authoritative video content.

On Tuesday, Fitbit’s Dr. John Moore will join a panel on “The Next Generation of Health Consumers” focusing on how tools and technologies can help people take charge of their health and wellness between doctors’ visits — especially for younger generations. Regardless of age, there’s a huge opportunity for products like Fitbit to deliver daily, actionable insights into issues that can have a huge impact on overall health, like fitness, stress and sleep.

Helping health systems unlock the potential of healthcare data

Across Google Health, we’re building solutions and tools to help unlock the potential of healthcare data and transform care delivery. Care Studio, for example, helps clinicians at the point of care by bringing together patient information from different EHR systems into an integrated view. We’ve been piloting this tool at select hospital sites in the U.S. and soon clinicians in the pilot will have access to the Care Studio Mobile app so they can quickly access the critical patient information they need, wherever they are — whether that’s bedside, at clinic or in a hospital corridor.

In addition to Care Studio, we’re developing solutions that will bring greater interoperability to healthcare data, helping organizations deliver better care. Hear more from Aashima Gupta, Google Cloud’s global head of healthcare solutions, at HLTH in two sessions. On Monday, October 18, Aashima will discuss how digital strategies can reboot healthcare operations, and on Tuesday, October 19 she will join the panel “Turning of the Data Tides” to discuss different approaches to data interoperability and patient access to health records.

Building for everyone

Where people live, work and learn can greatly impact their experience with health. Behind many of our products and initiatives are industry experts and leaders who are making sure we build for everyone, and create an inclusive environment for that work to take place. During the Women at HLTH Luncheon on Tuesday, Dr. Ivor Horn, our Director of Health Equity, will share her career journey rooted in advocacy, entrepreneurship and activism.

From our early days as a company, Google has sought to improve the lives of as many people as possible. Helping people live healthier lives is one of the most impactful ways we can do that. It will take more than a single feature, product or initiative to improve health outcomes for everyone. If we work together across the healthcare industry and embed health into all our work, we can make the greatest impact.

For more information about speakers at HLTH, check out the full agenda.

Using cloud technology for the good of the planet

Editor’s Note: This article was originally published on our Google Cloud blog.

Climate change is a global issue that is getting more urgent by the year, with the past decade recorded as the hottest since records began 140 years ago. The global IT infrastructure contributes to the global carbon footprint, with an estimated 1% of the global electricity consumption attributed to data centers alone.

The good news is that companies are capable of changing course and taking action for the environment. To create the world’s cleanest cloud, here’s a look at what Google Cloud has been focusing on over the past two decades.


Renewable energy and climate neutrality

Data centers, offices, and infrastructure will continue to require a lot of electricity in the years to come. And sourcing clean energy will become all the more important for companies to pave the way for a renewable future. As the world’s largest corporate purchaser of renewable energy, Google’s mission isn’t just to use carbon-free energy internally, but to make it available to consumers everywhere.

Regular milestones reinforce this mission. In 2007, Google became the first major company to be climate neutral. In 2017, it became the first company of its size to match 100% of its energy consumption with renewable energy. Not to mention the years prior: Google has now invested in enough high-quality carbon offsets to compensate for all of its emissions since the company was founded in 1998.

Looking ahead to the future, Google recently announced its commitment to become the first major company to operate fully carbon-free by 2030. That means: 100% carbon-free energy, 24/7.


Smart and efficient data centers

Data centers play an important role in this sustainability strategy. The more efficiently they operate, the more sustainably customers can use Google Cloud solutions. Energy-saving servers, highly efficient computer chips, and innovative water supply solutions for cooling systems are just a few examples of efficiency-enhancing measures in Google’s data centers.

Google Cloud is committed to using these technologies as part of a comprehensive sustainability strategy. But it’s not enough to be efficient on paper; it must be measurable too. That’s why Google calculates a Power Usage Effectiveness (PUE) value for its facilities. The result: on average, a Google data center is twice as energy efficient as a typical enterprise data center.
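For reference, PUE is simply a facility's total energy draw divided by the energy that reaches the IT equipment, so lower is better and 1.0 would mean zero overhead. A tiny worked example, with made-up numbers (not Google's):

```python
def power_usage_effectiveness(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """PUE = total facility energy / IT equipment energy; 1.0 is the ideal."""
    return total_facility_kwh / it_equipment_kwh

# A facility drawing 110 kWh to deliver 100 kWh to its servers has a PUE of 1.1,
# i.e. 10% overhead for cooling, power distribution and so on.
```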


Waste prevention with a circular economy

In a circular economy, materials, components, and products are manufactured in such a way that they can be reused, repaired, or recycled. It’s based on three core principles, which Google follows: designing out waste and pollution, keeping products and materials in use, and promoting healthy materials and safe chemistry. In 2019, Google found a new purpose for 90% of the waste from its global data center operations, and 19% of the components used for server upgrades were refurbished inventory.


Using AI to reduce food waste

Advanced technologies can help companies reduce their ecological footprint, which is why Google Cloud seeks to make its tools as user-friendly as possible. Many of our solutions put a strong emphasis on sustainability.

Sustainability was top of mind for French retail group Carrefour, for example, when it established a partnership with Google Cloud in 2018. The problem? Every year, European supermarkets throw away more than four million tons of food. That’s ten kilograms per EU citizen. To reduce this waste, Carrefour and Google Cloud started a joint project: an AI solution that precisely forecasts demand for fresh products in each store. This minimizes waste as well as costs, because employees get the information they need to stock shelves according to actual demand.
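Carrefour’s actual forecasting system isn’t described here, but the core idea can be sketched with something as simple as a moving-average baseline: predict tomorrow’s demand per product and store from recent sales, rather than ordering a fixed amount. (Real systems layer on seasonality, promotions, weather, and more.)

```python
def moving_average_forecast(daily_sales: list[float], window: int = 7) -> float:
    """Forecast the next day's demand as the mean of the last
    `window` days of sales. A deliberately minimal baseline."""
    recent = daily_sales[-window:]
    return sum(recent) / len(recent)

# Hypothetical units of a fresh product sold over the last week,
# with a weekend spike on days 5 and 6:
sales = [12, 15, 11, 14, 30, 28, 13]
print(round(moving_average_forecast(sales), 1))  # → 17.6
```

Ordering 18 units instead of a fixed worst-case 30 is exactly the kind of adjustment that cuts food waste without emptying shelves.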


Working toward a sustainable future, together

Another technology-driven sustainability partnership brings together Google Cloud, WWF Sweden and the British fashion label Stella McCartney. The fashion industry is responsible for about 20% of global wastewater and 10% of greenhouse gas emissions. The result of this collaboration is a tool that gives fashion labels a better overview of their supply chains and delivers actionable insights on how to reduce emissions and waste when procuring raw materials.

Sustainable actions have a real impact on our environment, and they also require teamwork. That’s why Google Cloud develops tools and technologies that help other companies and organizations worldwide to become active and create a more sustainable future for our planet.

Find out more on our sustainability page.

We analyzed 80 million ransomware samples – here’s what we learned

Leaders at organizations across the globe are witnessing the alarming rise of ransomware threats, leaving them with the sobering thought that an attack on their business may be a matter not of if, but when.

The stakes are becoming higher. Hackers aren’t just demanding money; they’re threatening to reveal sensitive or valuable information if companies don’t pay up or if they contact law enforcement authorities. If you run a healthcare organization, for example, the impact can be even more dire, as evidenced by this new report finding that ransomware attacks against hospitals have resulted in delayed tests and procedures, longer patient stays, and even death.

One of the main challenges to stopping ransomware attacks is the lack of comprehensive visibility into how these attacks spread and evolve. Leaders are often left with bits and pieces of information that don’t add up.

VirusTotal’s first Ransomware Activity Report provides a holistic view of ransomware attacks by combining more than 80 million potential ransomware-related samples submitted over the last year and a half. This report is designed to help researchers, security practitioners and the general public understand the nature of ransomware attacks while enabling cyber professionals to better analyze suspicious files, URLs, domains and IP addresses. Sharing insights behind how attacks develop is essential to anticipating their evolution and detecting cybersecurity threats across the globe.

Of the 140 countries that submitted ransomware samples, Israel was far and away an outlier, with the highest number of submissions and nearly a 600 percent increase in the number of submissions compared to its baseline. Israel was followed by South Korea, Vietnam, China, Singapore, India, Kazakhstan, Philippines, Iran and the UK as the most affected territories based on the number of submissions to VirusTotal.

Geographical distribution of ransomware-related submissions

We saw peaks of ransomware activity in the first two quarters of 2020, primarily due to the ransomware-as-a-service group GandCrab (though its prevalence decreased dramatically in the second half of the year). Another sizable peak occurred in July 2021, driven by the Babuk ransomware family – a ransomware operation launched at the beginning of 2021 that was behind the attack on the Washington DC Metropolitan Police Department.

At least 130 different ransomware families were active in 2020 and the first half of 2021, grouped into 30,000 clusters of malware that looked and operated in a similar fashion. With 6,000 clusters, GandCrab was the most active family, followed by Babuk, Cerber, Matsnu, Congur, Locky, Teslacrypt, Rkor and Reveon.

Activity of 100 most active ransomware families

While these big campaigns come and go, there is a constant baseline of activity from approximately 100 ransomware families that never stops. Attackers are using a range of approaches, including well-known botnet malware and other Remote Access Trojans (RATs), as vehicles to deliver their ransomware. In most cases, they use fresh or new ransomware samples for their campaigns. This broad collection of activity provides vital insights into ransomware’s growth, evolution and impact on organizations of all sizes, and provides the breadcrumbs businesses and governments need to be far more proactive in building cybersecurity into their infrastructure.

How We Are Keeping Your Business Safe From This Threat

At Google, our platforms and products have to be secure by default, and have been designed to keep businesses protected from cybersecurity attacks, including the growing threat of ransomware.

Our Chrome OS cloud-first platform has had no reported ransomware attacks — ever — on any business, education or consumer Chrome OS device. Developed with built-in and proactive security, Chrome OS blocks the executables that ransomware often hides in, and system files are kept in a read-only partition, ensuring the OS can’t be modified by apps or extensions. Additionally, the cloud-first nature of Chrome OS means that your data and files are backed up in the cloud and recoverable if an attack were to happen.

We are committed to offering the industry’s most trusted cloud, and have developed solutions that help companies adhere to the five pillars of NIST’s Cybersecurity Framework, from identification to recovery. For example, our Cloud Asset Inventory helps businesses identify and monitor all their assets in one place. With email at the heart of many ransomware attacks, Google Workspace’s advanced phishing and malware protection provides controls to quarantine emails, defends against anomalous attachment types and protects from inbound spoofing emails. Chronicle, Google Cloud’s threat detection platform, allows businesses to find and analyze threats faster within their infrastructure and applications, whether that’s on Google Cloud or anywhere else. With engineered-in capabilities and additional solutions, we also make it simple and efficient to respond and recover in the event of an incident.

With better data from crowdsourced intelligence platforms like VirusTotal, C-level decision makers can proactively ensure a more robust range of security solutions are implemented, and that multi-layered approaches to security become standard across all organizations. It’s the only way to keep our businesses, schools, hospitals and governments safe against ransomware attacks.

To learn more about how Google can help your organization solve its cybersecurity challenges, check out our Google Cybersecurity Action Team.

Helping companies tackle climate change with Earth Engine

Recent wildfires, floods and other natural disasters remind us that everyone has to take action to move the needle on climate change — from scientists and researchers to governments at all levels and businesses of all sizes.

Google Earth Engine combines satellite imagery and geospatial data with powerful computing to help people and organizations understand how the planet is changing, how human activity contributes to those changes and what actions they can take. Over the past decade, academics, scientists and NGOs have used Earth Engine and its earth observation data to make meaningful progress on climate research, natural resources protection, carbon emissions reduction and other sustainability goals. It has made it possible for organizations to monitor global forest loss in near real-time and has helped over 160 countries map and protect freshwater ecosystems.

Today, we’re expanding Earth Engine with a commercial offering for select customers in preview as a part of its integration with Google Cloud Platform. Organizations in the public sector and businesses can now use insights from Earth Engine to solve sustainability-related problems, such as building sustainable supply chains, committing to deforestation-free lending, preparing for recovery from weather-related events and reducing operational water use. To learn more about how Earth Engine can help your organization meet its sustainability goals, fill out this form.

Timelapse of satellite imagery showing the Aral Sea’s surface water shrinking from 1984 to 2020.

Surface water change visualization enabled by Earth Engine (shown here: Aral Sea from 1984-2020).

This new offering puts over 50 petabytes of geospatial open data into the hands of business and government leaders. Google Cloud customers and partners can bring together earth observation data with their own data as well as other useful datasets, train models to analyze at scale, and derive meaningful insights about real-world impact. By combining Earth Engine’s powerful platform with Google Cloud’s distinctive data analytics tools and AI technology, we’re bringing the best of Google together.
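A concrete example of the kind of insight derived from earth observation data is a vegetation index computed from satellite reflectance bands. Earth Engine exposes this at planetary scale through its `ee` client libraries (which require an authenticated account); as a self-contained illustration, here is the NDVI formula in plain Python with hypothetical pixel values:

```python
def ndvi(nir: float, red: float) -> float:
    """Normalized Difference Vegetation Index from near-infrared and
    red surface reflectance. Ranges from -1 to 1; dense vegetation is
    typically above 0.6, bare soil near 0.2, water below 0."""
    if nir + red == 0:
        return 0.0
    return (nir - red) / (nir + red)

# Hypothetical reflectance values for two pixels:
forest = ndvi(nir=0.45, red=0.05)  # healthy vegetation
sparse = ndvi(nir=0.25, red=0.18)  # sparse cover
print(round(forest, 2), round(sparse, 2))  # → 0.8 0.16
```

Tracking how such an index changes over decades of imagery is how analyses like deforestation monitoring or the Aral Sea surface-water timelapse are built.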

Already, businesses and organizations across the public sector, agriculture, financial services and consumer goods industries are using insights from this data to improve their operations and better manage and mitigate their risks, while also preserving natural resources. For example, consumer goods company Unilever plans to achieve a deforestation-free supply chain for palm oil and other commodities by 2023. With insights from Google Earth Engine combined with its internal supply chain sourcing information, Unilever can model the sources of palm oil reaching its mills. The U.S. Department of Agriculture is also using Earth Engine to eliminate the overhead of managing vast amounts of geospatial data. This will enable its agency, the National Agricultural Statistics Service, to focus on the analysis of 315 million acres of croplands across the United States. We look forward to seeing more impactful use cases and quantifiable progress toward sustainability goals that Earth Engine will continue to power across organizations.

The time for businesses to act on climate is now, but the advanced analytics resources and sustainability knowledge needed to make change can be hard to access. To make sure businesses can make the most out of Google Earth Engine, we’re working with partners, like NGIS and Climate Engine, to help businesses identify and manage risks related to climate change.

It will take all of us working together to make a difference. Earth Engine will continue to be free for scientists, researchers and NGOs, just as it has always been. We hope that putting Google Earth Engine into the hands of more businesses, organizations and people will multiply the positive impact we can have together on our people and planet.

Next ‘21: Must-see Google Workspace sessions for developers and creators

Posted by Charles Maxson, Developer Advocate

Banner image that shows the Google Workspace logo

Google Workspace offers a broad set of tools and capabilities that empowers creators and developers of all experience levels to build a wide range of custom productivity solutions. For professional developers looking to integrate their own app experiences into Workspace, the platform enables deep integrations with frameworks like Google Workspace Add-ons and Chat apps, as well as deep access to the full suite of Google Workspace apps via numerous REST APIs. And for citizen developers on the business side or developers looking to build solutions quickly and easily, tools like Apps Script and AppSheet make it simple to customize, extend, and automate workflows directly within Google Workspace.

At Next ‘21, we have seven sessions you won’t want to miss that cover the breadth of the platform. From no-code and low-code solutions to content for developers looking to publish in the Google Workspace Marketplace and reach the more than 3 billion users in Workspace, Next ‘21 has something for everyone.

1. See what’s new in Google Workspace

Matthew Izatt, Product Manager, Google Cloud

Erika Trautman, Director Product Management, Google Cloud

Join us for an interactive demo and see the latest Google Workspace innovations in action. As the needs of our users shifted over the past year, we’ve delivered entirely new experiences to help people connect, create, and collaborate—across Gmail, Drive, Meet, Docs, and the rest of the apps. You’ll see how Google Workspace meets the needs of different types of users with thoughtfully designed experiences that are easy to use and easy to love. Then, we’ll go under the hood to show you the range of ways to build powerful integrations and apps for Google Workspace using tools that span from no-code to professional grade.

2. Developer Platform State of the Union: Google Workspace

Charles Maxson, Developer Advocate, Google Cloud

Steven Bazyl, Developer Relations Engineer, Google Cloud

Google Workspace offers a comprehensive developer platform to support every developer who’s on a journey to customize and enhance Google Workspace. In this session, take a deeper dive into the new tools, technologies, and advances across the Google Workspace developer platform that can help you create even better integrations, extensions, and workflows. We’ll focus on updates for Google Apps Script, Google Workspace Add-ons, Chat apps, APIs, AppSheet, and Google Workspace Marketplace.

3. How Miro, DocuSign, Adobe and Atlassian are helping organizations centralize their work

Matt Izatt, Group Product Manager, Google Cloud

David Grabner, Product Lead, Apps & Integrations, Miro

Integrations make Google Workspace the hub for your work and give users more value by bringing all their tools into one space. Our ecosystem allows users to connect industry-leading software and custom-built applications with Google Workspace to centralize important information from the tools you use every day. And integrations are not limited to Gmail, Docs, or your favorite Google apps; they’re also available for Chat. With Chat apps, users can seamlessly blend conversations with automation and timely information to accelerate teamwork directly from within a core communication tool.

In this session, we’ll briefly review the Google Workspace platform and how Miro and Atlassian are helping organizations centralize their work and keep important information a mouse click or a tap away.

4. Learn how customers are empowering their workforce to customize Google Workspace

Charles Maxson, Developer Advocate, Google Cloud

Aspi Havewala, Global Head of Digital Workplace, Verizon

Organizations small and large are seeing their needs grow increasingly diverse as they pursue digital transformation projects. Many of our customers are empowering their workforces by allowing them to build advanced workflows and customizations using Google Apps Script. It’s a powerful low-code development platform included with Google Workspace that makes it fast and easy to build custom business solutions for your favorite Google Workspace applications – from macro automations to custom functions and menus. In this session, we’ll do a quick overview of the Apps Script platform and hear from customers who are using it to enable their organizations.

5. Transform your business operations with no-code apps

Arthur Rallu, Product Manager, Google Cloud

Paula Bell, Business Process Analyst, Kentucky Power Company, American Electric Power

Building business apps has become something anyone can do. Don’t believe us? Join this session to learn how Paula Bell, who self-describes as a person with “zero coding experience,” built a series of mission-critical apps on AppSheet that revolutionized how Kentucky Power, a branch of American Electric Power, runs its field operations.

6. How AppSheet helps you work smarter with Google Workspace

Mike Procopio, Senior Staff Software Engineer, Google Cloud

Millions of Google Workspace users are looking for new ways to reclaim time and work smarter within Google Workspace. AppSheet, Google Workspace’s first-party extensibility platform, will be announcing several new features that allow people to automate and customize their work within their Google Workspace environment, all without writing a line of code.

Join this session to learn how you can use these new features to work smarter in Google Workspace.

7. How to govern an innovative workforce and reduce Shadow IT

Kamila Klimek, Product Manager, Google Cloud

Jacinto Pelayo, Chief Executive Officer, Evenbytes

For organizations focused on growth, finding new ways that employees can use technology to work smarter and innovate is key to their success. But enabling employees to create their own solutions comes at a cost that IT is keenly aware of. The threats of external hacks, data leaks, and shadow IT make it difficult for IT to find a solution that gives them the control and visibility they need, while still empowering their workforce. AppSheet was built with these challenges in mind.

Join our session to learn how you can use AppSheet to effectively govern your workforce and reduce security threats, all while giving employees the tools to make robust, enterprise-grade applications.

To learn more about these sessions and to register, visit the Next ‘21 website and also check out my playlist of Next ‘21 content.