Tag Archives: google cloud

Ask a Techspert: What’s a subsea cable?

Whenever I try to picture the internet at work, I see little pixels of information moving through the air and above our heads in space, getting where they need to go thanks to 5G towers and satellites in the sky. But it’s a lot deeper than that — literally. Google Cloud’s Vijay Vusirikala recently talked with me about why the coolest part of the internet is really underwater. So today, we’re diving into one of the best-kept secrets in submarine life: There wouldn’t be an internet without the ocean.

First question: How does the internet get underwater?

We use something called a subsea cable that runs along the ocean floor and transmits bits of information.

What’s a subsea cable made of?

These cables are about the same diameter as the average garden hose, but on the inside they contain thin optical fibers. Those fibers are surrounded by several layers of protection, including two layers of ultra-high strength steel wires, water-blocking structures and a copper sheath. Why so much protection? Imagine the pressure they are under. These cables are laid directly on the sea bed and have tons of ocean water on top of them! They need to be super durable.

Two photographs next to each other, the first showing a cable with outer protection surrounding it. The second photograph shows a stripped cable with copper wires and optical fibers inside.

A true inside look at subsea cables: On the left, a piece of the Curie subsea cable showing the additional steel armoring for protection close to the beach landing. On the right, a cross-sectional view of a typical deep water subsea cable showing the optical fibers, copper sheath, and steel wires for protection.

Why are subsea cables important?

Subsea cables are faster, can carry higher traffic loads and are more cost effective than satellite networks. They're like a highway with the right number of lanes to handle rush-hour traffic without getting bogged down in standstill jams. Subsea cables combine high bandwidth (upwards of 300 to 400 terabits of data per second) with low lag time. To put that into context, 300 to 400 terabits per second is roughly the same as 17.5 million people streaming high quality videos — at the same time!
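As a quick back-of-the-envelope check (assuming the capacity figure is in terabits per second, the unit typically quoted for subsea cables, and treating the streaming comparison as approximate), the per-viewer share works out like this:

```python
# Rough per-stream bandwidth if 17.5 million people share a 400 Tbps cable.
cable_capacity_bps = 400e12   # 400 terabits per second
concurrent_viewers = 17.5e6   # 17.5 million simultaneous streams

per_stream_mbps = cable_capacity_bps / concurrent_viewers / 1e6
print(f"{per_stream_mbps:.1f} Mbps per viewer")  # about 22.9 Mbps, plenty for HD video
```

Around 23 Mbps per viewer is comfortably in high-quality video-streaming territory, which is where the 17.5-million-streams comparison comes from.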

So when you send a customer an email, share a YouTube video with a family member or talk with a friend or coworker on Google Meet, these underwater cables are like the "tubes" that deliver those things to the recipient.

Plus, they help increase internet access in places that have had limited connectivity in the past, like countries in South America and Africa. This leads to job creation and economic growth in the places where they’re constructed.

How many subsea cables are there?

There are around 400 subsea cables criss-crossing the planet in total. Currently, Google invests in 19 of them — a mix of cables we build ourselves and projects we’re a part of, where we work together with telecommunications providers and other companies.

Video introducing Curie, a subsea cable.

Wow, 400! Does the world need more of them?

Yes! Telecommunications providers alongside technology companies are still building them around the world. At Google, we invest in subsea cables for a few reasons: One, our Google applications and Cloud services keep growing. This means more network demand from people and businesses in every country around the world. And more demand means building more cables and upgrading existing ones, which have less capacity than their modern counterparts.

Two, you cannot have a single point of failure when you're on a mission to connect the world’s information and make it universally accessible. Repairing a subsea cable that goes down can take weeks, so to guard against this we place multiple cables in each cross section. This gives us sufficient extra cable capacity so that services aren’t affected for people around the world.

What’s your favorite fact about subsea cables?

Three facts, if I may!

First, I love that we name many of our cables after pioneering women, like Curie for Marie Curie, which connects California to Chile, and Grace Hopper, which links the U.S., Spain and the U.K. Firmina, which links the U.S., Argentina, Brazil and Uruguay, is named after Brazil’s first novelist, Maria Firmina dos Reis.

Second, I’m proud that the cables are kind to their undersea homes. They’re made of chemically inactive materials that don't harm the flora and fauna of the ocean, and they generally don’t move around much. We’re very careful about where we place them: we study each beach’s marine life conditions and adjust our installation timeline so we don’t disrupt a natural lifecycle process, like sea turtle nesting season. For the most part the cables are stationary and don't disturb the ocean floor or marine life. Our goal is to integrate into the underwater landscape, not bother it.

And lastly, my favorite fact is actually a myth: Most people think sharks regularly attack our subsea cables, but I’m aware of exactly one shark attack on a subsea cable that took place more than 15 years ago. Truly, the most common problems for our cables are caused by people doing things like fishing, trawling (which is when a fishing net is pulled through the water behind a boat) and anchor drags (when a ship drifts without holding power even though it has been anchored).

Year in review: the Google Workspace Platform 2021

Posted by Charles Maxson, Developer Advocate

In 2021, we saw many changes and improvements to the Google Workspace Platform geared at helping developers build new solutions to keep up with the challenges of how we worked, like hybrid and fully remote office work. More than ever, we needed tools for virtual collaboration and digital processes to keep our work going. As paper processes in the office became less viable and digital transformation became necessary, many new custom solutions, like desk reservation systems and automated test logging, evolved.

2021 was also a year of Platform milestones: Google Workspace grew to more than 3 billion users globally, we reached more than 5,300 public apps in the Google Workspace Marketplace, and we crossed 4.8 billion apps installed (up from 1 billion in 2020)! We were also busy bringing Platform innovations and improving our developer experience to make building for Google Workspace easier and faster. Here’s a look at some of the key enhancements the Google Workspace Platform brought to the developer community.

Google Cloud Champion Innovators program

Community building is one of the most effective ways to support developers, which is why we created Google Cloud Innovators. This new community program was designed for developers and technical practitioners using Google Cloud, and we welcome everyone.

And when we say everyone, we mean it: not just professional developers, data scientists, student developers and hobbyists, but also non-technical end users. The growing Google community has something for everyone.

GWAO Alternate Runtimes goes GA

Google Workspace Add-ons are customized applications that tightly integrate with Google Workspace applications, and can be found in the Google Workspace Marketplace, or built specifically for your own domain. The development of these applications was limited to Apps Script, our native scripting language for the Google Workspace Platform. With the launch of Alternate Runtimes, you can now develop add-ons with your preferred hosting infrastructure, development tool chain, source control system, coding language, and code libraries. It was a highly requested update from the developer community, opening up the Platform to many new developer scenarios.

Card Builder UI Application

The GWAO Card Builder tool allows you to visually design the user interfaces for your Google Workspace Add-ons and Google Chat apps projects. It is a must-have for Google Workspace developers using either Apps Script or Alternate Runtimes, enabling you to prototype and design Card UIs quickly, without the hassle and errors of hand-coding JSON or Apps Script on your own.

Card Builder tool for building Google Workspace Add-ons and Chat Apps

Recommended for Google Workspace

This program showcases a selection of market-leading applications in the Google Workspace Marketplace, built by software vendors across a wide range of categories, including project management, customer support, and finance. These apps undergo rigorous usability and security testing to make sure they meet our requirements for high quality integrations. They must also have an exemplary track record of user satisfaction, reliability, and privacy.

Recommended for Google Workspace program showcases high quality applications

Chat Slash Commands and Dialogs

Slash commands simplify the way users interact with your Chat bot, offering them a visual, guided way to discover and execute your bot’s primary features. As a developer, slash commands are straightforward to implement and essential for offering a better bot experience. In addition to slash commands, Dialogs are a new capability in the Chat app framework that lets developers build user interfaces to capture inputs and parameters in a structured, reliable way. This was a tremendous step forward for bot usability because it simplified and streamlined how users interact with bot commands. With dialogs, users can be led visually to supply inputs via prompts, instead of having to wrap bot commands in natural language.
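To illustrate the structure involved (a sketch, not code from the Chat framework itself: the field names follow the Chat API's JSON card format, but treat the exact shape and the widget chosen here as illustrative), a bot might answer a slash command by returning a dialog payload built like this:

```python
def make_dialog_response(label: str, field_name: str) -> dict:
    """Build a Chat app dialog response for a slash command (illustrative shape)."""
    return {
        "actionResponse": {
            "type": "DIALOG",          # tells Chat to open a dialog for the user
            "dialogAction": {
                "dialog": {
                    "body": {
                        "sections": [{
                            "widgets": [{
                                # one structured text input instead of free-form text
                                "textInput": {"label": label, "name": field_name}
                            }]
                        }]
                    }
                }
            }
        }
    }

resp = make_dialog_response("Topic", "topic")
print(resp["actionResponse"]["type"])  # DIALOG
```

The point of the structured payload is exactly what the paragraph describes: the user fills in named fields in a prompt rather than encoding parameters in a natural-language message.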

Forms API beta

Google Forms enables easy creation and distribution of forms, surveys, and quizzes. Forms is used for a wide variety of use cases across business operations, customer management, event planning and logistics, education, and more. With the Google Forms API beta announcement, developers gained programmatic access for managing forms and acting on responses, empowering them to build powerful integrations on top of Forms.
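As a hedged sketch of what that programmatic access looks like (the payload shapes follow the Forms API v1 reference; the commented-out calls would additionally need google-api-python-client and OAuth credentials, and `creds` is assumed to exist):

```python
# Request body for creating a new form.
new_form = {"info": {"title": "Event feedback"}}

# batchUpdate request body that adds one required free-text question at the top.
add_question = {
    "requests": [{
        "createItem": {
            "item": {
                "title": "How was the event?",
                "questionItem": {
                    "question": {"required": True, "textQuestion": {}}
                },
            },
            "location": {"index": 0},
        }
    }]
}

# from googleapiclient.discovery import build
# service = build("forms", "v1", credentials=creds)
# form = service.forms().create(body=new_form).execute()
# service.forms().batchUpdate(formId=form["formId"], body=add_question).execute()
print(new_form["info"]["title"])  # Event feedback
```

Responses can then be read back programmatically (e.g. via the API's responses listing), which is the "acting on responses" half of the integration story.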

Google Workspace Marketplace updates

We made many updates to the Google Workspace Marketplace to improve both the user and developer experience. We added updates to the application detail page that included pricing and when the listing was last updated. The homepage also saw improvements with various curated categories by the Google team under Editor’s Choice. Finally, we launched the marketplace badges for developers to promote their published applications on websites and marketing channels. Oh, and we also had a logo update if you hadn’t noticed.

Google Workspace Marketplace Badges for application promotion

Farewell 2021 and here’s to welcoming in 2022

2021 brought us many innovations to the Google Workspace Platform to help developers address the needs of their users and it also brought more empowerment to knowledge workers to build the solutions they needed with our no-code and low-code platforms. These are just the highlights for the Google Workspace Platform and we look forward to more innovation in 2022. To keep up with all the news about the Platform, please subscribe to our newsletter.

How to get started in cloud computing

Posted by Google Cloud training & certifications team

Validated cloud skills are in demand. With Google Cloud certifications, employers know that certified individuals have proven knowledge of various professional roles within the cloud industry. Google Cloud certifications have also been recognized as some of the highest-paying IT certifications for the past several years. This year, the Google Cloud Certified Professional Data Engineer topped the list with an average salary of $171,749, while the Google Cloud Certified Professional Cloud Architect came in second place, with an average salary of $169,029.

You may be wondering what sort of background you need to take advantage of these opportunities: What sort of classes should you take? How exactly do you get started in the cloud without experience? Here are some tips to start learning about Google Cloud and build your cloud computing skills.

Get hands-on experience with cloud computing

Google Cloud training offers a wide range of learning paths featuring comprehensive courses and hands-on labs, so you get to practice with the real Google Cloud console. For instance, if you wanted to take classes to prepare for the Professional Data Engineer certification mentioned above, there is a complete learning path featuring four courses and 31 hands-on labs to help familiarize you with relevant topics like BigQuery, machine learning, IoT, TensorFlow, and more.

There are nine learning paths providing a launch pad to all the major pillars of cloud computing, from networking and cloud security to database management and hybrid cloud infrastructure. Each broader learning path contains specific learning paths to help you train for job roles like Machine Learning Engineer. Visit the Google Cloud training page to find the right path for you.

Learn live from cloud experts

Google Cloud regularly hosts a half-day live training event called Cloud OnBoard which features hands-on learning led by experts. All sessions are also available to watch on-demand after the event.

If you’re a developer new to cloud computing, we recommend you start with Google Cloud Fundamentals, an entry-level course to learn about the basics of Google Cloud. Experts guide you through hands-on labs where you can practice using the Google Console, Google Cloud Shell, and more.

You’ll be introduced to core components of Google Cloud and given an overview of how its tools impact the cloud computing landscape. The curriculum covers Compute Engine: how to create VM instances from scratch and from existing templates, how to connect them together, and how to end up with projects that can talk to each other safely and securely. You will also learn about the different storage and database options available on Google Cloud.

Other Cloud OnBoard event topics include cloud architecture, Kubernetes, data analytics, and cloud application development.

Explore Google Cloud infrastructure

Cloud infrastructure is the backbone of the internet. Understanding cloud infrastructure is a good starting point to start digging deeper into cloud concepts because it will give you a taste of the various aspects of cloud computing to figure out what you like best, whether it’s networking, security, or application development.

Build your foundational Google Cloud knowledge with our on-demand infrastructure training in the cloud infrastructure learning path. This learning path will provide you with practical experience through expert-guided labs which dive into Cloud Storage and other key application services like Google Cloud’s operations suite and Cloud Functions.

Show off your skills

Once you have a strong grasp on Google Cloud basics, you can start earning skill badges to demonstrate your experience.

Skill badges are digital credentials that recognize your ability to solve real-world problems with your cloud knowledge. You can share them on your resume or social profile so your professional network sees your technical skills. This can be useful for recruiters or employers as you transition to cloud computing work. Skill badges also enable you to get in-depth, hands-on experience with different Google Cloud offerings on the way to earning the credential.

You can also use them to start preparing for Google Cloud certifications which are more intensive and show employers that you are a cloud expert. Most Google Cloud certifications recommend having at least 6 months or up to several years of industry experience depending on the material.

Ready to get started in the cloud? Visit the Google Cloud training page to see all your options, from in-person classes and online courses to special events and more.

Helping fashion brands make more sustainable decisions

The fashion industry is one of the largest contributors to the global climate and ecological crisis — accounting for up to 8% of global greenhouse gas emissions. Much of this impact occurs at the raw materials stage of the supply chain, like when cotton is farmed or trees are cut down to create viscose. But when brands source these materials, they often have little to no visibility on the environmental impact of them.

In 2019, we set out to create a tool that would give companies the data they need to make more responsible sourcing decisions. Today we’re announcing the first version of the Global Fibre Impact Explorer (GFIE), and we’re inviting other brands to get involved. The tool, which is built on Google Earth Engine and uses Google Cloud computing, assesses the environmental risk of different fibers across regions as it relates to environmental factors such as air pollution, biodiversity, climate and greenhouse gasses, forestry and water use.

With this tool, brands will easily be able to identify environmental risks across more than 20 fiber types, including natural, cellulosic and synthetic materials. The tool will also provide brands with recommendations for targeted, regionally specific risk-reduction activities, including opportunities to work with farmers, producers and communities, such as investing in regenerative agriculture practices.

The GFIE dashboard where brands can upload their fiber portfolio data and get recommendations to reduce risk across key environmental categories.


Spooling it all together: Working with fashion brands and conservation experts

We worked with Stella McCartney, a luxury fashion brand and leader in sustainability, to understand the industry's needs and to test the platform. Using the tool alongside their existing sustainability efforts, Stella McCartney’s team was able to identify cotton sources in Turkey that were facing increased water and climate risks. This affirms the need for investing in local farming communities that focus on regenerative practices, such as water management and soil regeneration. Other brands and retailers — including Adidas, Allbirds, H&M Group and VF Corporation — have helped test and refine the tool to make sure it can be useful to everyone in the industry. And an external council of global experts has reviewed the GFIE methodology and data.

The GFIE was born out of a partnership between Google and the WWF, and is built to complement existing tools focused on industry impact and risk analysis. With the initial development phase complete, Google and WWF are now transitioning GFIE to Textile Exchange, a global non-profit focused on positively impacting climate through accelerating the use of preferred fibers across the global textile industry. As the official host of the GFIE, Textile Exchange will continue the development of the tool, onboard new brands and work towards an industry launch in 2022.

If you’re a part of a fashion brand or industry group and want access to this tool, please register your interest at globalfibreimpact.com.

Upload massive lists of products to Merchant Center using Centimani

Posted by Hector Parra, Jaime Martínez, Miguel Fernandes, Julia Hernández

Merchant Center lets merchants manage how their in-store and online product inventory appears on Google. It allows them to reach the hundreds of millions of people looking to buy products like theirs each day.


To upload their products, merchants can make use of feeds: files listing products in a specific format. These can be shared with Merchant Center in different ways: using Google Sheets, SFTP or FTP shares, Google Cloud Storage, or manually through the user interface. These methods work great for the majority of cases. But if a merchant's product list grows over time, they might reach the usage limits of feeds. Depending on the case, quota extensions can be granted, but if the list continues to grow, it might reach a point where feeds no longer support that scale, and the Content API for Shopping becomes the recommended way forward.


The main issue is that by the time a merchant is advised to move from feeds to the Content API because of scale, the number of products is already massive, and calling the Content API directly will produce usage and quota errors, as the QPS and products-per-call limits will be exceeded.
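For a sense of what a direct upload involves (a sketch, not Centimani's code: the batch shape follows the Content API for Shopping v2.1 products.custombatch reference, the product fields are abbreviated, and the commented-out call would need google-api-python-client plus real merchant credentials):

```python
def build_insert_batch(merchant_id, products):
    """Build a products.custombatch request body that inserts each product."""
    return {
        "entries": [
            {"batchId": i, "merchantId": merchant_id,
             "method": "insert", "product": product}
            for i, product in enumerate(products)
        ]
    }

batch = build_insert_batch("123456", [{"offerId": "sku-1", "title": "T-shirt"}])

# from googleapiclient.discovery import build
# service = build("content", "v2.1", credentials=creds)
# service.products().custombatch(body=batch).execute()
print(len(batch["entries"]))  # 1
```

Each batch call can only carry a limited number of entries and calls are rate limited, which is why naively looping over millions of products hits the quota errors described above.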


For this specific use case, Centimani becomes critical in helping merchants handle the upload process through the Content API in a controlled manner, avoiding any overload of the API.


Centimani is a configurable massive-file processor able to split text files into chunks, process them following a strategic pattern, and store the results in BigQuery for reporting. It provides configurable options for chunk size and number of retries, and takes care of exponential backoff to ensure all requests have enough retries to overcome temporary issues or errors. Centimani comes with two operators, the Google Ads Offline Conversions Uploader and the Merchant Center Products Uploader, but it can easily be extended to other uses.
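The chunking-plus-backoff idea can be sketched as follows (illustrative only, not Centimani's actual implementation; the chunk size and retry parameters here are made-up values):

```python
import random
import time

def split_into_chunks(items, chunk_size):
    """Split a product list into fixed-size chunks, one chunk per API call."""
    return [items[i:i + chunk_size] for i in range(0, len(items), chunk_size)]

def call_with_backoff(fn, max_retries=5, base_delay=1.0):
    """Retry fn with exponential backoff plus jitter to ride out transient errors."""
    for attempt in range(max_retries):
        try:
            return fn()
        except Exception:
            if attempt == max_retries - 1:
                raise  # out of retries: surface the error to the caller
            # wait 1s, 2s, 4s, ... plus random jitter before retrying
            time.sleep(base_delay * (2 ** attempt) + random.random())

chunks = split_into_chunks(list(range(10)), 4)
print([len(c) for c in chunks])  # [4, 4, 2]
```

Keeping each chunk under the products-per-call limit and spacing retries exponentially is what lets a massive upload proceed without overwhelming the API.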


Centimani uses Google Cloud as its platform, and makes use of Cloud Storage for storing the data, Cloud Functions to do the data processing and the API calls, Cloud Tasks to coordinate the execution of each call, and BigQuery to store the audit information for reporting.

Centimani Architecture

To start using Centimani, a couple of configuration files need to be prepared with information about the Google Cloud Project to be used (including the element names), the credentials to access the Merchant Center accounts and how the load will be distributed (e.g., parallel executions, number of products per call). Then, the deployment is done automatically using a deployment script provided by the tool.


After the tool is deployed, a cloud function will be monitoring the input bucket in Cloud Storage, and every time a file is uploaded there, it will be processed. The tool uses the name of the file to select the operator that is going to be used (“MC” indicates Merchant Center Products Uploader), and the particular configuration to use (multiple configurations can be used to connect to Merchant Center accounts with different access credentials).


Whenever a file is uploaded, it is sliced into parts if it contains more products than are allowed per call. The slices are stored in the output bucket in Cloud Storage, and Cloud Tasks launches the API calls until all files are processed. Any file with errors is stored in a folder called “slices_failed” to help troubleshoot issues found in the process. All the information about the executions is stored temporarily in Datastore and then moved to BigQuery, where it can be used to monitor the whole process from a centralized place.


Centimani Status Dashboard Architecture

Centimani provides an easy way for merchants to start using the Content API for Shopping to manage their products, without having to deal with the complexity of keeping the system under the limits.


For more information you can visit the Centimani repository on Github.


Machine Learning Communities: Q3 ‘21 highlights and achievements

Posted by HyeJung Lee, DevRel Community Manager and Soonson Kwon, DevRel Program Manager

Let’s explore the highlights and achievements of Google's vast Machine Learning communities by region for the last quarter. Activities of experts (GDEs, professional individuals), communities (TFUGs, TensorFlow User Groups), students (GDSCs, student clubs), and developer groups (GDGs) are presented here.

Key highlights

Image shows a banner for 30 days of ML with Kaggle

30 Days of ML with Kaggle is designed to help beginners study ML using Kaggle Learn courses, along with a competition specifically for participants of the program. We collaborated with the Kaggle team so that more than 30 ML GDEs and TFUG organizers participated as volunteer online mentors and speakers for this initiative.

In total, 16 GDE/GDSC/TFUG groups ran community-organized programs following the shared community organizer guide. Houston TensorFlow & Applied AI/ML placed 6th out of 7,573 teams in the competition, the only Americans in the top 10. TFUG Santiago (Chile) organizers participated as well, placing 17th on the public leaderboard.

Asia Pacific

Image shows Google Cloud and Coca-Cola logos

GDE Minori Matsuda's (Japan) project with Coca-Cola Bottlers Japan was published on the Google Cloud Japan blog, covering the creation of an ML pipeline deployed into a real business within two months using Vertex AI. It was also published in English on the GCP blog.

GDE Chansung Park (Korea) and Sayak Paul (India) published many articles on the GCP blog. First, “Image search with natural language queries” explains how to build a simple image parser from natural language inputs using OpenAI's CLIP model. Second, “Model training as a CI/CD system” (Part I, Part II) explains why having a resilient CI/CD system for your ML application is crucial for success. Last, “Dual deployments on Vertex AI” covers an end-to-end workflow using Vertex AI, TFX and Kubeflow.

In China, GDE Junpeng Ye used TensorFlow 2.x to significantly reduce the codebase (15k → 2k lines) of WeChat Finder, a TikTok alternative within WeChat. GDE Dan Lee wrote a series of articles, Understanding TensorFlow: Part 1, Part 2, Part 3-1, Part 3-2 and Part 4.

GDE Ngoc Ba from Vietnam contributed an AI Papers Reading and Coding series implementing ML/DL papers in TensorFlow, creating slides and videos every two weeks (videos: ViT Transformer, MLP-Mixer and Transformer).

Beginner-friendly codelabs (Get started with audio classification, Go further with audio classification) by GDSC Sookmyung (Korea) teach you to customize pre-trained audio classification models to your needs and deploy them to your apps using TFLite Model Maker.

Cover image for Mat Kelcey's talk on JAX at the PyConAU event

GDE Matthew Kelcey from Australia gave a talk on JAX at the PyConAU event, giving an overview of the fundamentals of JAX and an introduction to some of the libraries being developed on top of it.

Image shows overview for the released PerceiverIO code

In Singapore, TFUG Singapore dived back into some of the latest papers, techniques, and fields of research that are delivering state-of-the-art results. GDE Martin Andrews included a brief code walkthrough of the released PerceiverIO code, highlighting what JAX looks like, how Haiku relates to Sonnet, and the data loading, which is done via tf.data.

Machine Learning Experimentation with TensorBoard book cover

GDE Imran us Salam Mian from Pakistan published a book "Machine Learning Experimentation with TensorBoard".

India

GDE Aakash Nain has published the TF-JAX tutorial series from Part 4 to Part 8. Part 4 gives a brief introduction about JAX (What/Why), and DeviceArray. Part 5 covers why pure functions are good and why JAX prefers them. Part 6 focuses on Pseudo Random Number Generation (PRNG) in Numpy and JAX. Part 7 focuses on Just In Time Compilation (JIT) in JAX. And Part 8 covers vmap and pmap.

Image of Bhavesh's Google Cloud certificate

GDE Bhavesh Bhatt published a video about his experience on the Google Cloud Professional Data Engineer certification exam.

Image shows phase 1 and 2 of the Climate Change project using Vertex AI

ML GDE Sayak Paul and Siddha Ganju (NVIDIA) built a climate change project using Vertex AI. They published a paper (Flood Segmentation on Sentinel-1 SAR Imagery with Semi-Supervised Learning) and open-sourced the project as part of NASA Impact's ETCI competition. The project made it into four NeurIPS workshops (AI for Science: Mind the Gaps; Tackling Climate Change with Machine Learning; Women in ML; and Machine Learning and the Physical Sciences), and they finished as first runners-up (see Test Phase 2).

Image shows example of handwriting recognition tutorial

A tutorial on handwriting recognition was contributed to the Keras examples by GDE Sayak Paul and Aakash Kumar Nain.

Graph regularization for image classification using synthesized graphs by GDE Sayak Paul was added to the official examples of Neural Structured Learning in TensorFlow.

GDE Sayak Paul and Soumik Rakshit shared a new NLP dataset for multi-label text classification. The dataset consists of paper titles, abstracts, and term categories scraped from arXiv.

North America

Banner image shows students participating in Google Summer of Code

During GSoC (Google Summer of Code), some GDEs mentored or co-mentored students. GDE Margaret Maynard-Reid (USA) mentored projects across TF-GAN, Model Garden, TF Hub and TFLite. You can get some of her experience and tips from the GDE blog. You can also find GDE Sayak Paul (India) and Googler Morgan Roff’s GSoC experience (co-)mentoring TensorFlow and TF Hub.

A beginner friendly workshop on TensorFlow with ML GDE Henry Ruiz (USA) was hosted by GDSC Texas A&M University (USA) for the students.

Screenshot from Youtube video on how transformers work

The YouTube video Self-Attention Explained: How do Transformers work? by GDE Tanmay Bakshi from Canada explains how to build a Transformer encoder-based neural network that classifies code into eight different programming languages, using TPUs and Colab with Keras.

Europe

GDG / GDSC Turkey hosted an AI Summer Camp in cooperation with Global AI Hub, where 7,100 participants learned about ML, TensorFlow, CV and NLP.

Screenshot from slide presentation titled Why Jax?

In the TechTalk Speech Processing with Deep Learning and JAX/Trax, GDE Sergii Khomenko (Germany) and M. Yusuf Sarıgöz (Turkey) reviewed technologies such as JAX, TensorFlow and Trax that can help boost research in speech processing.

South/Central America

Image shows Custom object detection in the browser using TensorFlow.js

On the other side of the world, in Brazil, GDE Hugo Zanini Gomes wrote an article, “Custom object detection in the browser using TensorFlow.js”, built with the TensorFlow 2 Object Detection API and Colab, which was posted on the TensorFlow blog.

Screenshot from a talk about Real-time semantic segmentation in the browser - Made with TensorFlow.js

Hugo also gave a talk, Real-time semantic segmentation in the browser - Made with TensorFlow.js, covering how to use SavedModels efficiently in JavaScript, enabling you to get the reach and scale of the web for your new research.

In her talk Data Pipelines for ML, GDE Nathaly Alarcon Torrico from Bolivia explained all the phases involved in the creation of ML and data science products, from data collection through transformation and storage to productizing ML models.

Screenshot from the TechTalk “Machine Learning Competitivo: Top 1% en Kaggle” (video)

TFUG Santiago (Chile) hosted the TechTalk “Machine Learning Competitivo: Top 1% en Kaggle” (video). In this talk the speaker walked through the steps to follow to build a model capable of reaching the top 1% of the Kaggle leaderboard, focusing on the libraries and “tricks” used to test many ideas quickly, both in implementation and in execution, and how to use them in production environments.

MENA

Screenshot from workshop about Recurrent Neural Networks

GDE Ruqiya Bin Safi (Saudi Arabia) gave a workshop on Recurrent Neural Networks: Part 1 (GitHub / slides) at GDG MENA, and a talk on Recurrent Neural Networks: Part 2 at GDG Cloud Saudi (Saudi Arabia).

GDSC Islamic University of Gaza (Palestine) ran AI Training with Kaggle, a two-month training covering data processing, image processing and NLP with Kaggle.

Sub-Saharan Africa

TFUG Ibadan held two TensorFlow events: “Basic Sentiment Analysis with TensorFlow” and “Introduction to Recommender Systems with TensorFlow”.

Image of Yannick Serge Obam Akou's TensorFlow Certificate

ML GDE Yannick Serge Obam Akou (Cameroon) wrote an article (in French) covering tips to study for, prepare for and pass the TensorFlow Developer Certificate exam.

Extend Google Apps Script with your API library to empower users

Posted by Keith Einstein, Product Manager

Banner image that shows the Cloud Task logo

Google is proud to announce the availability of the DocuSign API library for Google Apps Script. This newly created library gives all Apps Script users access to the more than 400 endpoints DocuSign has to offer so they can build digital signatures into their custom solutions and workflows within Google Workspace.

The Google Workspace Ecosystem

Last week at Google Cloud Next ‘21, in the session “How Miro, DocuSign, Adobe and Atlassian are helping organizations centralize their work”, we showcased a few partner integrations called add-ons, found on the Google Workspace Marketplace. The Google Workspace Marketplace helps developers connect with the more than 3 billion people who use Google Workspace, with a stunning 4.8 billion apps installed to date. That incredible demand is fueling innovation in the ecosystem, and we now have more than 5,300 public apps available in the Google Workspace Marketplace, plus thousands more private apps that customers have built for themselves.

As a developer, one of the benefits of an add-on is that it surfaces your application in a user-friendly way that helps people reclaim their time and work more efficiently, and it adds another touchpoint for them to engage with your product. While building an add-on lets users frictionlessly engage with your product from within Google Workspace, to truly unlock its potential, innovative companies like DocuSign are beginning to empower users to build the unique solutions they need by providing them with a Google Apps Script library.

Apps Script enables Google Workspace customization

Many users are currently unlocking the power of Google Apps Script by creating the solutions and automations they need to help them reclaim precious time. Publishing a Google Apps Script Library is another great opportunity to bring a product into Google Workspace and gain access to those creators. It gives your users more choices in how they integrate your product into Google Workspace, which in turn empowers them with the flexibility to solve more business challenges with your product’s unique value.

Apps Script libraries can make the development and maintenance of a script more convenient by enabling users to take advantage of pre-built functionality and focus on the aspects that unlock unique value. This allows innovative companies to make available a variety of functionality that Apps Script users can use to create custom solutions and workflows with the features not found in an off-the-shelf app integration like a Google Workspace Add-on or Google Chat application.
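As a rough illustration of how a library works (the identifier, function and fields below are hypothetical, not DocuSign's actual API), any top-level function in a published script project becomes callable from a consuming script under the library's identifier:

```javascript
// Hypothetical library code, published under an identifier such as "SignLib".
// Consuming scripts add the library in the Apps Script editor, then call its
// top-level functions as SignLib.functionName(...).

function buildSigningRequest(recipientEmail, documentName) {
  // Validate input before the consuming script hands it to an external API.
  if (recipientEmail.indexOf('@') === -1) {
    throw new Error('Invalid recipient email: ' + recipientEmail);
  }
  return {
    recipient: recipientEmail,
    document: documentName,
    status: 'draft',
  };
}

// In a consuming script, the call would look like:
//   var req = SignLib.buildSigningRequest('ada@example.com', 'NDA.pdf');
```

The consuming script keeps only the glue code that is unique to its workflow, while the library owns the reusable, maintained functionality.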

The DocuSign API Library for Apps Script

One of the partners we showcased at Google Cloud Next ‘21 was DocuSign. The DocuSign eSignature for Google Workspace add-on has been installed almost two million times. The add-on lets you collect signatures or sign agreements from inside Gmail, Google Drive or Google Docs. While collecting signatures and signing agreements are some of the most common ways users work with DocuSign eSignature inside Google Workspace, DocuSign's eSignature product offers many more features; in fact, its eSignature API has over 400 endpoints. Going beyond the top features normally found in an add-on and into the rest of DocuSign eSignature's functionality is where an Apps Script library can be leveraged.

And that’s exactly what we’re partnering to do. Recently, DocuSign’s Lead API Product Manager, Jeremy Glassenberg (a Google Developer Expert for Google Workspace) joined us on the Totally Unscripted podcast to talk about DocuSign’s path to creating an Apps Script Library. At the DocuSign Developer Conference, on October 27th, Jeremy will be teaming up with Christian Schalk from our Google Cloud Developer Relations team to launch the DocuSign Apps Script Library and showcase how it can be used.

With the DocuSign Apps Script Library, users around the world who lean on Apps Script to build their workplace automations can create customized DocuSign eSignature processes. Leveraging the Apps Script Library in addition to the DocuSign add-on empowers companies who use both DocuSign and Google Workspace to have a more seamless workflow, increasing efficiency and productivity. The add-on allows customers to integrate the solution instantly into their Google apps, and solve for the most common use cases. The Apps Script Library allows users to go deep and solve for the specialized use cases where a single team (or knowledge worker) may need to tap into a less commonly used feature to create a unique solution.

See us at the DocuSign Developer Conference

The DocuSign Apps Script Library is now available in beta; if you’d like to know more about it, drop a message to [email protected]. And be sure to register for the session on "Building a DocuSign Apps Script Library with Google Cloud", Oct 27th @ 10:00 AM. For updates and news like this about the Google Workspace platform, please subscribe to our developer newsletter.

Migrating App Engine push queues to Cloud Tasks

Posted by Wesley Chun (@wescpy), Developer Advocate, Google Cloud

Banner image that shows the Cloud Tasks logo

Introduction

The previous Module 7 episode of Serverless Migration Station gave developers an idea of how App Engine push tasks work and how to implement their use in an existing App Engine ndb Flask app. In this Module 8 episode, we migrate this app from the App Engine Datastore (ndb) and Task Queue (taskqueue) APIs to Cloud NDB and Cloud Tasks. This makes your app more portable and provides a smoother transition from Python 2 to 3. The same principle applies to upgrading other legacy App Engine apps from Java 8 to 11, PHP 5 to 7, and Go to 1.12 or newer.

Over the years, many of the original App Engine services such as Datastore, Memcache, and Blobstore, have matured to become their own standalone products, for example, Cloud Datastore, Cloud Memorystore, and Cloud Storage, respectively. The same is true for App Engine Task Queues, whose functionality has been split out to Cloud Tasks (push queues) and Cloud Pub/Sub (pull queues), now accessible to developers and applications outside of App Engine.

Migrating App Engine push queues to Cloud Tasks video

Migrating to Cloud NDB and Cloud Tasks

The key updates being made to the application:

  1. Add support for Google Cloud client libraries in the app's configuration
  2. Switch from App Engine APIs to their standalone Cloud equivalents
  3. Make required library adjustments, e.g., add use of Cloud NDB context manager
  4. Complete additional setup for Cloud Tasks
  5. Make minor updates to the task handler itself

The bulk of the updates are in #3 and #4 above, and those are reflected in the following "diff"s for the main application file:

Screenshot shows primary differences in code when switching to Cloud NDB & Cloud Tasks

Primary differences switching to Cloud NDB & Cloud Tasks

With these changes implemented, the web app works identically to that of the Module 7 sample, but both the database and task queue functionality have been completely swapped to using the standalone/unbundled Cloud NDB and Cloud Tasks libraries… congratulations!
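As a simplified sketch of those changes (identifiers like `Visit` and `/trim` are illustrative, and exact client signatures vary by library version; the codelab has the precise code), the imports and calls move from the bundled App Engine APIs to the standalone client libraries:

```diff
 # Datastore: bundled ndb -> standalone Cloud NDB
-from google.appengine.ext import ndb
+from google.cloud import ndb
+ds_client = ndb.Client()

-visits = Visit.query().fetch(limit)
+# Cloud NDB requires wrapping Datastore access in a client context (step 3)
+with ds_client.context():
+    visits = Visit.query().fetch(limit)

 # Task queues: bundled taskqueue -> standalone Cloud Tasks
-from google.appengine.api import taskqueue
+from google.cloud import tasks

-taskqueue.add(url='/trim', params={'oldest': oldest})
+# Cloud Tasks needs explicit project/region/queue configuration (step 4),
+# and the payload becomes an explicit AppEngineHttpRequest structure
+ts_client = tasks.CloudTasksClient()
+parent = ts_client.queue_path(PROJECT_ID, REGION, QUEUE_NAME)
+ts_client.create_task(parent=parent, task={
+    'app_engine_http_request': {
+        'relative_uri': '/trim',
+        'body': json.dumps({'oldest': oldest}).encode(),
+    },
+})
```

The pattern is consistent: the bundled APIs implicitly knew your project and queue, while the standalone clients make that configuration explicit so the same code can run outside App Engine.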

Next steps

To do this exercise yourself, check out our corresponding codelab, which leads you step-by-step through the process; you can use it alongside the video for guidance. You can also review the push tasks migration guide for more information. Arriving at a fully functioning Module 8 app featuring Cloud Tasks sets the stage for a larger migration ahead in Module 9. We've accomplished the most important step here: getting off the original App Engine legacy bundled services/APIs. The Module 9 migrations from Python 2 to 3 and from Cloud NDB to Cloud Firestore, plus the upgrade to the latest version of the Cloud Tasks client library, are optional, but they represent a good opportunity to perform a medium-sized migration.

All migration modules, their videos (when available), codelab tutorials, and source code, can be found in the migration repo. While the content focuses initially on Python users, we will cover other legacy runtimes soon so stay tuned.

HLTH: Building on our commitments in health

Tonight kicked off the HLTH event in Boston, which brings together leaders from across health to discuss healthcare's most pressing problems and how we can tackle them to improve care delivery and outcomes.

Over the past two years, the pandemic shined a light on the importance of our collective health — and the role the private sector, payers, healthcare delivery organizations, governments and public health play in keeping communities healthy. For us at Google, we saw Search, Maps and YouTube become critical ways for people to learn about COVID-19. So we partnered with public health organizations to provide information that helped people stay safe, find testing and get vaccinated. In addition, we provided healthcare organizations, researchers and non-profits with tools, data and resources to support pandemic response and research efforts.

As I mentioned on the opening night of HLTH, Google Health is our company-wide effort to help billions of people be healthier by leaning on our strengths: organizing information and developing innovative technology. Beyond the pandemic, we have an opportunity to continue helping people to address health more holistically through the Google products they use every day and equipping healthcare teams with tools and solutions that help them improve care.

Throughout the conference, leaders from Google Health will share more about the work we’re doing and the partnerships needed across the health industry to improve health outcomes.

Meeting people in their everyday moments and empowering them to be healthier

People are increasingly turning to technology to manage their daily health and wellbeing — from using wearables and apps to track fitness goals, to researching conditions and building community around those with similar health experiences. At Google, we’re working to connect people with accurate, timely and actionable information and tools that can help them manage their health and achieve their goals.

On Monday, Dr. Garth Graham, who leads healthcare and public health partnerships for YouTube, will join the panel “Impactful Health Information Sharing” to discuss video as a powerful medium to connect people with engaging and high-quality health information. YouTube has been working closely with organizations, like the American College of Physicians, the National Alliance on Mental Illness and Mass General Brigham, to increase authoritative video content.

On Tuesday, Fitbit’s Dr. John Moore will join a panel on “The Next Generation of Health Consumers” focusing on how tools and technologies can help people take charge of their health and wellness between doctors’ visits — especially for younger generations. Regardless of age, there’s a huge opportunity for products like Fitbit to deliver daily, actionable insights into issues that can have a huge impact on overall health, like fitness, stress and sleep.

Helping health systems unlock the potential of healthcare data

Across Google Health, we’re building solutions and tools to help unlock the potential of healthcare data and transform care delivery. Care Studio, for example, helps clinicians at the point of care by bringing together patient information from different EHR systems into an integrated view. We’ve been piloting this tool at select hospital sites in the U.S., and soon clinicians in the pilot will have access to the Care Studio Mobile app so they can quickly access the critical patient information they need wherever they are, whether that’s at the bedside, in a clinic or in a hospital corridor.

In addition to Care Studio, we’re developing solutions that will bring greater interoperability to healthcare data, helping organizations deliver better care. Hear more from Aashima Gupta, Google Cloud’s global head of healthcare solutions, at HLTH in two sessions. On Monday, October 18, Aashima will discuss how digital strategies can reboot healthcare operations, and on Tuesday, October 19 she will join the panel “Turning of the Data Tides” to discuss different approaches to data interoperability and patient access to health records.

Building for everyone

Where people live, work and learn can greatly impact their experience with health. Behind many of our products and initiatives are industry experts and leaders who are making sure we build for everyone, and create an inclusive environment for that work to take place. During the Women at HLTH Luncheon on Tuesday, Dr. Ivor Horn, our Director of Health Equity, will share her career journey rooted in advocacy, entrepreneurship and activism.

From our early days as a company, Google has sought to improve the lives of as many people as possible. Helping people live healthier lives is one of the most impactful ways we can do that. It will take more than a single feature, product or initiative to improve health outcomes for everyone. If we work together across the healthcare industry and embed health into all our work, we can make the greatest impact.

For more information about speakers at HLTH, check out the full agenda.

Using cloud technology for the good of the planet

Editor’s Note: This article was originally published on our Google Cloud blog.

Climate change is a global issue that is getting more urgent by the year, with the past decade recorded as the hottest since records began 140 years ago. Global IT infrastructure contributes to the worldwide carbon footprint, with an estimated 1% of global electricity consumption attributed to data centers alone.

The good news is that companies are capable of changing course and taking action for the environment. Here’s a look at what Google Cloud has been focusing on over the past two decades to create the world’s cleanest cloud.


Renewable energy and climate neutrality

Data centers, offices, and infrastructure will continue to require a lot of electricity in the years to come, and sourcing clean energy will become all the more important for companies to pave the way for a renewable future. As the world’s largest corporate purchaser of renewable energy, Google’s mission isn’t just to use carbon-free energy internally, but to make it available to consumers everywhere.

Regular milestones reinforce this mission. In 2007, Google became the first major company to be carbon neutral. In 2017, it became the first company of its size to match 100% of its energy consumption with renewable energy. Not to mention the years prior: Google has by now invested in enough high-quality carbon offsets to compensate for all the emissions it has generated since the company was founded in 1998.

Looking ahead to the future, Google recently announced its commitment to become the first major company to operate fully carbon-free by 2030. That means: 100% carbon-free energy, 24/7.


Smart and efficient data centers

Data centers play an important role in this sustainability strategy. The more efficiently they operate, the more sustainably customers can use Google Cloud solutions. Energy-saving servers, highly efficient computer chips, and innovative water supply solutions for cooling systems are just a few examples of efficiency-enhancing measures in Google’s data centers.

Google Cloud is committed to using these technologies as part of a comprehensive sustainability strategy. But it’s not enough to be efficient on paper; efficiency must be measurable too. That’s why Google calculates a Power Usage Effectiveness (PUE) value: the ratio of a facility’s total energy use to the energy consumed by its computing equipment. The result: on average, a Google data center is twice as energy efficient as a typical enterprise data center.
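As a rough sketch of the arithmetic behind PUE (the numbers below are illustrative, not Google's actual figures):

```javascript
// PUE (Power Usage Effectiveness) = total facility energy / IT equipment energy.
// A value of 1.0 would mean every watt goes to computing; typical enterprise
// data centers sit well above that, and a lower value means less overhead
// spent on cooling, power distribution and other non-computing loads.
function powerUsageEffectiveness(totalFacilityKwh, itEquipmentKwh) {
  if (itEquipmentKwh <= 0) {
    throw new Error('IT equipment energy must be positive');
  }
  return totalFacilityKwh / itEquipmentKwh;
}

// Example: a facility drawing 1,100 MWh in total, 1,000 MWh of it for IT gear.
console.log(powerUsageEffectiveness(1100, 1000)); // → 1.1
```

The same ratio works at any scale, which is what makes it a useful benchmark across very different facilities.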


Waste prevention with a circular economy

In a circular economy, materials, components, and products are manufactured in such a way that they can be reused, repaired, or recycled. It’s based on three core principles, which Google follows: designing out waste and pollution, keeping products and materials in use, and promoting healthy materials and safe chemistry. In 2019, Google found a new purpose for 90% of the waste from its global data center operations, and 19% of the components used for server upgrades were refurbished inventory.


Using AI to reduce food waste

Advanced technologies can help companies reduce their ecological footprint, which is why Google Cloud seeks to make its tools as user-friendly as possible. Many of our solutions put a strong emphasis on sustainability.

Sustainability was top of mind for French retail group Carrefour, for example, when it established a partnership with Google Cloud in 2018. The problem? Every year, European supermarkets throw away more than four million tons of food: ten kilograms per EU citizen. To reduce food waste, Carrefour and Google Cloud started a joint project on an AI solution that enables precise forecasts of demand for fresh products in each store. This minimizes waste as well as costs, because employees get the information they need to stock shelves according to actual demand.
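As a toy illustration of the underlying idea (a naive moving-average baseline, not Carrefour's actual model, which uses far richer ML forecasting):

```javascript
// Forecast tomorrow's demand as the mean of the last `windowDays` of sales.
// The point of any such forecast is the same: order closer to actual demand
// so less fresh food expires on the shelf.
function forecastDemand(dailySales, windowDays = 7) {
  const recent = dailySales.slice(-windowDays);
  const total = recent.reduce((sum, units) => sum + units, 0);
  return total / recent.length;
}

// A store selling around 40 loaves of bread a day over the past week:
console.log(forecastDemand([38, 42, 40, 39, 44, 41, 43])); // → 41
```

A real system would also account for seasonality, promotions, weather and store-level differences, but even this baseline shows how a forecast turns raw sales history into an order quantity.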


Working toward a sustainable future, together

Another partnership that uses technology to drive sustainability exists between Google Cloud, WWF Sweden and the British fashion label Stella McCartney. The fashion industry is responsible for about 20% of global wastewater and 10% of greenhouse gas emissions. The result of this collaboration: a tool that gives fashion labels a better overview of their supply chains and delivers actionable insights on how to reduce emissions and waste when procuring raw materials.

Sustainable actions have a real impact on our environment, and they also require teamwork. That’s why Google Cloud develops tools and technologies that help other companies and organizations worldwide to become active and create a more sustainable future for our planet.

Find out more on our sustainability page.