Monthly Archives: February 2020

Exploring Transfer Learning with T5: the Text-To-Text Transfer Transformer



Over the past few years, transfer learning has led to a new wave of state-of-the-art results in natural language processing (NLP). Transfer learning's effectiveness comes from pre-training a model on abundantly available unlabeled text data with a self-supervised task, such as language modeling or filling in missing words. After that, the model can be fine-tuned on smaller labeled datasets, often resulting in (far) better performance than training on the labeled data alone. The recent success of transfer learning was ignited in 2018 by GPT, ULMFiT, ELMo, and BERT, and 2019 saw the development of a huge diversity of new methods like XLNet, RoBERTa, ALBERT, Reformer, and MT-DNN. The rate of progress in the field has made it difficult to evaluate which improvements are most meaningful and how effective they are when combined.

In “Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer”, we present a large-scale empirical survey to determine which transfer learning techniques work best and apply these insights at scale to create a new model that we call the Text-To-Text Transfer Transformer (T5). We also introduce a new open-source pre-training dataset, called the Colossal Clean Crawled Corpus (C4). The T5 model, pre-trained on C4, achieves state-of-the-art results on many NLP benchmarks while being flexible enough to be fine-tuned to a variety of important downstream tasks. In order for our results to be extended and reproduced, we provide the code and pre-trained models, along with an easy-to-use Colab Notebook to help get started.
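For a sense of what getting started looks like in code, here is a minimal sketch that loads a pre-trained T5 checkpoint and runs a single text-to-text prediction. As an assumption for illustration, it uses the community Hugging Face transformers port rather than the official released codebase, and "t5-small" is the smallest published checkpoint size:

# Minimal sketch; assumes the Hugging Face `transformers` port of T5,
# not the official codebase released with the paper.
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

# Every task is expressed as plain text with a task prefix (see the framework below).
input_ids = tokenizer.encode("translate English to German: That is good.",
                             return_tensors="pt")
output_ids = model.generate(input_ids)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))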

A Shared Text-To-Text Framework
With T5, we propose reframing all NLP tasks into a unified text-to-text format where the input and output are always text strings, in contrast to BERT-style models that can only output either a class label or a span of the input. Our text-to-text framework allows us to use the same model, loss function, and hyperparameters on any NLP task, including machine translation, document summarization, question answering, and classification tasks (e.g., sentiment analysis). We can even apply T5 to regression tasks by training it to predict the string representation of a number instead of the number itself.
Diagram of our text-to-text framework. Every task we consider uses text as input to the model, which is trained to generate some target text. This allows us to use the same model, loss function, and hyperparameters across our diverse set of tasks including translation (green), linguistic acceptability (red), sentence similarity (yellow), and document summarization (blue). It also provides a standard testbed for the methods included in our empirical survey.
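To make the text-to-text framing concrete, here are a few illustrative (input, target) pairs in the spirit of the diagram above. The task prefixes and examples follow those used in the paper, but treat the exact strings as illustrative rather than canonical:

# Illustrative (input, target) pairs under the text-to-text framing.
text_to_text_examples = [
    # Translation: task prefix + source sentence -> target-language sentence.
    ("translate English to German: That is good.",
     "Das ist gut."),
    # Linguistic acceptability (CoLA): the class label is emitted as a string.
    ("cola sentence: The course is jumping well.",
     "not acceptable"),
    # Sentence similarity (STS-B): even the regression target is a string.
    ("stsb sentence1: The rhino grazed on the grass. sentence2: A rhino is grazing in a field.",
     "3.8"),
    # Summarization: prefix + document -> summary.
    ("summarize: state authorities dispatched emergency crews tuesday to survey the damage ...",
     "six people hospitalized after a storm in attala county."),
]

for source, target in text_to_text_examples:
    print("INPUT :", source)
    print("TARGET:", target)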
A Large Pre-training Dataset (C4)
An important ingredient for transfer learning is the unlabeled dataset used for pre-training. To accurately measure the effect of scaling up the amount of pre-training, one needs a dataset that is not only high quality and diverse, but also massive. Existing pre-training datasets don’t meet all three of these criteria — for example, text from Wikipedia is high quality, but uniform in style and relatively small for our purposes, while the Common Crawl web scrapes are enormous and highly diverse, but fairly low quality.

To satisfy these requirements, we developed the Colossal Clean Crawled Corpus (C4), a cleaned version of Common Crawl that is two orders of magnitude larger than Wikipedia. Our cleaning process involved deduplication, discarding incomplete sentences, and removing offensive or noisy content. This filtering led to better results on downstream tasks, while the additional size allowed the model size to increase without overfitting during pre-training. C4 is available through TensorFlow Datasets.
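C4 can be loaded like any other TensorFlow Datasets corpus. Below is a minimal sketch, assuming the English configuration name in the TFDS catalog ("c4/en") and that the large Common Crawl preparation step has already been run:

import tensorflow_datasets as tfds

# Stream the cleaned English split of C4. Building the dataset locally requires
# downloading and cleaning Common Crawl with Apache Beam, so this assumes the
# prepared data is already available.
ds = tfds.load("c4/en", split="train", shuffle_files=True)

for example in ds.take(2):
    # Each example contains (at least) a "text" field with one cleaned web page.
    print(example["text"].numpy()[:200])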

A Systematic Study of Transfer Learning Methodology
With the T5 text-to-text framework and the new pre-training dataset (C4), we surveyed the vast landscape of ideas and methods introduced for NLP transfer learning over the past few years. The full details of the investigation can be found in our paper, including experiments on:
  • model architectures, where we found that encoder-decoder models generally outperformed "decoder-only" language models;
  • pre-training objectives, where we confirmed that fill-in-the-blank-style denoising objectives (where the model is trained to recover missing words in the input; a simplified sketch follows this list) worked best and that the most important factor was the computational cost;
  • unlabeled datasets, where we showed that training on in-domain data can be beneficial but that pre-training on smaller datasets can lead to detrimental overfitting;
  • training strategies, where we found that multitask learning could be close to competitive with a pre-train-then-fine-tune approach but requires carefully choosing how often the model is trained on each task;
  • and scale, where we compared scaling up the model size, the training time, and the number of ensembled models to determine how to make the best use of fixed compute power.
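To make the fill-in-the-blank-style denoising objective mentioned above concrete, here is a deliberately simplified sketch of span corruption: random spans of the input are replaced with sentinel markers (denoted <M> later in this post), and the target reconstructs the dropped spans. The sentinel names, span selection, and rates below are illustrative simplifications, not the exact pre-training pipeline:

import random

def span_corrupt(tokens, corruption_rate=0.15, mean_span_len=3, seed=0):
    """Simplified span corruption: drop random spans of `tokens`, replace each
    span with a sentinel marker in the input, and put the dropped words
    (each preceded by its sentinel) in the target."""
    rng = random.Random(seed)
    n_to_drop = max(1, round(len(tokens) * corruption_rate))
    drop = set()
    while len(drop) < n_to_drop:
        start = rng.randrange(len(tokens))
        for i in range(start, min(len(tokens), start + mean_span_len)):
            drop.add(i)
    inputs, targets, sentinel = [], [], 0
    i = 0
    while i < len(tokens):
        if i in drop:
            marker = "<extra_id_%d>" % sentinel
            inputs.append(marker)
            targets.append(marker)
            while i < len(tokens) and i in drop:
                targets.append(tokens[i])
                i += 1
            sentinel += 1
        else:
            inputs.append(tokens[i])
            i += 1
    return " ".join(inputs), " ".join(targets)

corrupted_input, denoise_target = span_corrupt(
    "Thank you for inviting me to your party last week".split())
print(corrupted_input)   # the sentence with spans replaced by sentinel markers
print(denoise_target)    # the dropped spans, each preceded by its sentinel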
Insights + Scale = State-of-the-Art
To explore the current limits of transfer learning for NLP, we ran a final set of experiments where we combined all of the best methods from our systematic study and scaled up our approach with Google Cloud TPU accelerators. Our largest model had 11 billion parameters and achieved state-of-the-art on the GLUE, SuperGLUE, SQuAD, and CNN/Daily Mail benchmarks. One particularly exciting result was that we achieved a near-human score on the SuperGLUE natural language understanding benchmark, which was specifically designed to be difficult for machine learning models but easy for humans.

Extensions
T5 is flexible enough to be easily modified for application to many tasks beyond those considered in our paper, often with great success. Below, we apply T5 to two novel tasks: closed-book question answering and fill-in-the-blank text generation with variable-sized blanks.

Closed-Book Question Answering
One way to use the text-to-text framework is on reading comprehension problems, where the model is fed some context along with a question and is trained to find the question's answer from the context. For example, one might feed the model the text from the Wikipedia article about Hurricane Connie along with the question "On what date did Hurricane Connie occur?" The model would then be trained to find the date "August 3rd, 1955" in the article. In fact, we achieved state-of-the-art results on the Stanford Question Answering Dataset (SQuAD) with this approach.

In our Colab demo and follow-up paper, we trained T5 to answer trivia questions in a more difficult "closed-book" setting, without access to any external knowledge. In other words, in order to answer a question T5 can only use knowledge stored in its parameters that it picked up during unsupervised pre-training. This can be considered a constrained form of open-domain question answering.
During pre-training, T5 learns to fill in dropped-out spans of text (denoted by <M>) from documents in C4. To apply T5 to closed-book question answering, we fine-tuned it to answer questions without inputting any additional information or context. This forces T5 to answer questions based on “knowledge” that it internalized during pre-training.
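To make the distinction concrete, here is a small illustrative contrast between the two input formats. The prefix strings are assumptions for illustration, not the exact ones used in the paper or Colab:

# Reading comprehension ("open-book"): the answer must be found in the provided context.
open_book_input = (
    "question: On what date did Hurricane Connie occur? "
    "context: Hurricane Connie ... made landfall ... on August 3rd, 1955 ..."
)

# Closed-book: only the question is given, so the answer has to come from
# knowledge stored in the model's parameters during pre-training.
closed_book_input = "trivia question: On what date did Hurricane Connie occur?"

# In both cases the training target is just the answer string.
target = "August 3rd, 1955"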
T5 is surprisingly good at this task. The full 11-billion-parameter model produces the exact text of the answer 50.1%, 37.4%, and 34.5% of the time on TriviaQA, WebQuestions, and Natural Questions, respectively. To put these results in perspective, the T5 team went head-to-head with the model in a pub trivia challenge and lost!
Fill-in-the-Blank Text Generation
Large language models like GPT-2 excel at generating very realistic-looking text since they are trained to predict what words come next after an input prompt. This has led to numerous creative applications like Talk To Transformer and the text-based game AI Dungeon. The pre-training objective used by T5 aligns more closely with a fill-in-the-blank task where the model predicts missing words within a corrupted piece of text. This objective is a generalization of the continuation task, since the “blanks” can appear at the end of the text as well.

To make use of this objective, we created a new downstream task called sized fill-in-the-blank, where the model is asked to replace a blank with a specified number of words. For example, if we give the model the input “I like to eat peanut butter and _4_ sandwiches,” we would train it to fill in the blank with approximately 4 words.
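Here is a minimal sketch of how such training examples could be constructed from plain text. The _N_ marker format follows the example above, while the function name, word-level splitting, and the choice of the removed words as the target are illustrative assumptions:

import random

def make_sized_blank_example(sentence, seed=None):
    """Replace a random span of words with a _N_ marker, where N is the
    number of words removed, and return the (input, target) pair."""
    words = sentence.split()
    rng = random.Random(seed)
    span_len = rng.randint(1, max(1, len(words) // 3))
    start = rng.randrange(0, len(words) - span_len + 1)
    removed = words[start:start + span_len]
    blanked = words[:start] + ["_%d_" % span_len] + words[start + span_len:]
    return " ".join(blanked), " ".join(removed)

blanked_input, blank_target = make_sized_blank_example(
    "I like to eat peanut butter and jelly sandwiches", seed=0)
print(blanked_input)  # the sentence with one span replaced by a marker such as _2_
print(blank_target)   # the words that were removed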

We fine-tuned T5 on this task using C4 and found the resulting outputs to be quite realistic. It’s especially fun to see how the model adjusts its predictions based on the requested size for the missing text. For example, with the input, “I love peanut butter and _N_ sandwiches,” the outputs looked like:

  • I love peanut butter and jelly sandwiches.
  • I love peanut butter and jelly on my sandwiches.
  • I love peanut butter and jelly, which is what makes good sandwiches.
  • I love peanut butter and jelly, Yum! You can’t beat peanut butter and jelly sandwiches.
  • I love peanut butter and bread. Thanks!! This looks delicious. I love all types of peanut butter, but especially peanut butter/jam sandwiches.
  • I love peanut butter and banana sandwiches and the Peanut Butter Chocolate Chip Cookie Bites are now very easy to prepare. The best part is that the PB&J Cookie Bites are the perfect size for kid lunch boxes and everyone will enjoy them. Kids love these sandwiches.
  • I love peanut butter and jelly! It's like an incredible childhood love now that I'm in my 30s. I'm obsessed with peanut butter and jelly sandwiches. I have made this recipe many times with great success. I'm guessing you could probably find that it's from a 1950s cookbook. I haven't seen it listed in a current one but it is in a 1930 version (they seem to have retired it). Hope you enjoy it ;) I hope you'll post what you do with your sandwiches.
  • I love peanut butter and banana sandwiches. Peanut butter and bananas were the original sandwich spread (also known as PB&J or Peanut Butter and Jelly), so they are probably my favorite. When I was a kid, I knew what peanut butter and bananas tasted like, but I didn't really think of them as one flavor. I did recognize PB & J's as just a sandwich spread, and not really two separate flavours. When I was in graduate school researching food allergies and intolerances, PB&B is one of the main themes that came up. PB&B was and still is the favorite sandwich spread on my school cafeteria tray, and it's still on my personal board in the kitchen. I've seen PB&B on more than one occasion over the years. I can always find one of the other types of peanut butter spread at the school cafeteria and I find it's the perfect spread for PB&B. You have several options for making your own peanut butter (I love this recipe from Cookie and Kate) or buying it at stores like Trader Joes, Publix, and Safeway. The flavor of PB&B varies, but I've found that white chocolate is my favorite to use. This simple sandwich is perfect for lunch or an afternoon snack. When you're on the go, you can use a store bought product, but it just wouldn't taste as good. These peanut butter n' banana sandwich spreads (as well as other peanut butter spreads) are great for on the go eating, but they can also be refrigerated for a few hours and then served at your next party. Preheat oven to 350 degrees. Using a kitchen timer, or using a microwave, heat butter in a saucepan and melt over low heat. Add vanilla extract and milk and heat until nearly boiling. Add peanut butter, bananas, sugar, salt, and pepper, mixing well to combine all ingredients. Spread peanut butter mixture on both sides of your bread. Top with banana slices and a slice of cheese. Toast the bread until both sides are golden brown. Remove from toast and sprinkle a little more sugar and peanut butter on the top. Enjoy while the bread is warm. Store in an airtight container up to one day. Assemble peanut butter and banana sandwich spread by spreading the peanut butter mixture on each slice of bread. Add a banana slice on top and then a PB & J sandwich. Enjoy while the bread is still warm. P.S. You might also like these peanut butter and jelly sandwiches.
Conclusion
We are excited to see how people use our findings, code, and pre-trained models to help jump-start their projects. Check out the Colab Notebook to get started, and share how you use it with us on Twitter!

Acknowledgements
This work has been a collaborative effort involving Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu, Karishma Malkan, Noah Fiedel, and Monica Dinculescu.

Source: Google AI Blog


Announcing the 2019 Google Faculty Research Award Recipients



In Fall 2019, we opened our annual call for the Google Faculty Research Awards, a program focused on supporting world-class technical research in Computer Science, Engineering and related fields performed at academic institutions around the world. These awards give Google researchers the opportunity to partner with faculty who are doing impactful research, and they also cover tuition for a student.

This year we received 917 proposals from ~50 countries and over 330 universities, and had the opportunity to increase our investment in several research areas related to Health, Accessibility, AI for Social Good, and ML Fairness. All proposals went through an extensive review process involving 1100 expert reviewers across Google who assessed the proposals on merit, innovation, connection to Google’s products/services and alignment with our overall research philosophy.

As a result of these reviews, Google is funding 150 promising proposals across a wide range of research areas, including Machine Learning, Systems, Human-Computer Interaction and many more, with 26% of the funding awarded to universities outside the United States. Additionally, 27% of our recipients this year identified as members of a historically underrepresented group within technology. This is just the beginning of a larger investment in underrepresented communities and we are looking forward to sharing our 2020 initiatives soon.

Congratulations to the well-deserving recipients of this round's awards. More information on our faculty funding programs can be found on our website.

Source: Google AI Blog


Stable Channel Update for Desktop

The stable channel has been updated to 80.0.3987.122 for Windows, Mac, and Linux, which will roll out over the coming days/weeks.




A list of all changes is available in the log. Interested in switching release channels? Find out how. If you find a new issue, please let us know by filing a bug. The community help forum is also a great place to reach out for help or learn about common issues.


Security Fixes and Rewards


Note: Access to bug details and links may be kept restricted until a majority of users are updated with a fix. We will also retain restrictions if the bug exists in a third party library that other projects similarly depend on, but haven’t yet fixed.


This update includes 3 security fixes. Below, we highlight fixes that were contributed by external researchers. Please see the Chrome Security Page for more information.


[$5000][1044570] High: Integer overflow in ICU. Reported by André Bargull on 2020-01-22
[N/A][1045931] High CVE-2020-6407: Out of bounds memory access in streams. Reported by Sergei Glazunov of Google Project Zero on 2020-01-27


This release also contains:
[N/A][1053604] High CVE-2020-6418: Type confusion in V8. Reported by Clement Lecigne of Google's Threat Analysis Group on 2020-02-18


Google is aware of reports that an exploit for CVE-2020-6418 exists in the wild.


We would also like to thank all security researchers that worked with us during the development cycle to prevent security bugs from ever reaching the stable channel.






Krishna Govind
Google Chrome

Android Studio 3.6

Posted by Scott Swarthout, Product Manager


We are excited to announce the stable release of Android Studio 3.6, with a targeted set of features primarily addressing quality in code editing and debugging use cases. This is our first release after the end of Project Marble, which was focused on making the fundamental features and flows of the Integrated Development Environment (IDE) rock-solid. We learned a lot from Project Marble, and in Android Studio 3.6 we introduced a small set of new features, polished existing ones, and spent notable effort addressing bugs and improving underlying performance to ensure we meet the high quality bar we set in the past year.

Some highlights of Android Studio 3.6 include a new way to quickly design, develop, and preview app layouts using XML, with a new Split View in the design editors. Additionally, you no longer have to manually type in GPS coordinates to test location with your app, because we’ve now embedded Google Maps right into the Android Emulator extended controls panel. Finally, we’ve made it easier to optimize your app and find bugs with automatic memory leak detection for Fragments and Activities. We hope all of these features help you be happier and more productive while developing on Android.

Thank you to everyone who gave early feedback on the preview releases. Your feedback helped us iterate on and improve features in Android Studio 3.6. If you are ready for the next stable release and want to use a new set of productivity features, Android Studio 3.6 is ready for you to download and get started.

Below is a full list of new features in Android Studio 3.6, organized by key developer flows.

Design

Split view in design editors

Design editors, such as the Layout Editor and Navigation Editor, now provide a Split view that enables you to see both the Design and Code views of your UI at the same time. Split view replaces and improves upon the earlier Preview window, and can be configured on a file-by-file basis to preserve context information like zoom factor and design view options, so you can choose the view that works best for each use case. To enable split view, click the Split icon in the top-right corner of the editor window. Learn more.

Split view for design editors

Color picker resource tab

In this release we wanted to make it easier to apply colors you have defined as color resources. In Android Studio 3.6, the color picker populates the color resources in your app for you to quickly choose and replace color resource values. The color picker is accessible in the design tools as well as in the XML editor.

Color picker resource tab

Develop

View binding

View binding is a feature that allows you to more easily write code that interacts with views by providing compile-time safety when referencing views in your code. When enabled, view binding generates a binding class for each XML layout file present in that module. In most cases, view binding replaces findViewById. You can reference all views that have an ID with no risk of null pointer or class cast exceptions. These differences mean that incompatibilities between your layout and your code will result in your build failing at compile time rather than at runtime. To enable view binding in your project, include the following in each module’s build.gradle file:

android {
    viewBinding.enabled = true
}

For more information, check out this blog post by one of our developer experts.

Android NDK updates

The following Android NDK features in Android Studio, previously supported in Java, are now also supported in Kotlin:

  • Navigate from a JNI declaration to the corresponding implementation function in C/C++. View this mapping by hovering over the C or C++ item marker near the line number in the managed source code file.
  • Automatically create a stub implementation function for a JNI declaration. Define the JNI declaration first and then type “jni” or the method name in the C/C++ file to activate.

Learn more

IntelliJ Platform Update

Android Studio 3.6 includes the IntelliJ 2019.2 platform release. This IntelliJ release includes many improvements from a new services tool window to much improved startup times. Learn more

Add classes with Apply Changes

You can now add a class and then deploy that code change to your running app by clicking either Apply Code Changes or Apply Changes and Restart Activity.

To learn more about the difference between these two actions, see Apply Changes.

Build

Android Gradle Plugin (AGP) updates

Android Gradle plugin 3.6 and higher includes support for the Maven Publish Gradle plugin, which allows you to publish build artifacts to an Apache Maven repository. The Android Gradle plugin creates a component for each build variant artifact in your app or library module that you can use to customize a publication to a Maven repository. This change will make it easier to manage the release lifecycle for your various targets. Learn more

Additionally, the Android Gradle plugin has made significant performance improvements in annotation processing/KAPT for large projects. This is because AGP now generates R class bytecode directly, instead of .java files.

New packaging tool

The Android build team is continuously working on changes to improve build performance, and in this release we changed the default packaging tool to zipflinger for debug builds. Users should see an improvement in build speed, but you can also revert to using the old packaging tool by setting android.useNewApkCreator=false in your gradle.properties file.

Edit your gradle.properties file to disable the new packaging tool

Test

Android Emulator - Google Maps UI

Android Emulator 29.2.12 includes a new way for app developers to interface with the emulated device location. We embedded the Google Maps user interface in the extended controls menu to make it easier to specify locations and also to construct routes from pairs of locations. Individual points can be saved and re-sent to the device as the virtual location, while routes can be generated through typing in addresses or clicking two points. These routes can be replayed in real time as locations along the route are sent to the guest OS.

Android Emulator location UI with real-time location streaming

Multi-display support

Emulator 29.1.10 includes preliminary support for multiple virtual displays. As more devices are available that have multiple displays, it is important to test your app on a variety of multi-display configurations. Users can configure multiple displays through the settings menu (Extended Controls > Settings).

Multi-display support in Android Emulator

Configure secondary displays in the Android Emulator Extended Controls Panel

Resumable SDK downloads

When downloading Android SDK components and tools using the Android Studio SDK Manager, Android Studio now allows you to resume downloads that were interrupted (for example, due to a network issue) instead of restarting the download from the beginning. This enhancement is especially helpful for large downloads, such as the Android Emulator or system images, when internet connectivity is unreliable.

Pause and resume SDK downloads

In-place updates for imported APKs

Android Studio allows you to import externally-built APKs to debug and profile them. Previously, when changes to those APKs were made, you would have to manually import them again and reattach symbols and sources. Android Studio 3.6 now automatically detects changes made to the imported APK file and gives you an option to re-import it in-place.

Attach Kotlin sources to imported APKs

We added support for attaching Kotlin source files to imported APKs. To learn more, see Attach Kotlin/Java sources.

Attach Kotlin/Java sources to imported APKs

Optimize

Leak detection in Memory Profiler

Based on your feedback, we’ve added to the Memory Profiler the ability to detect Activity and Fragment instances that may have leaked. To get started, capture or import a heap dump file in the Memory Profiler, and check the Activity/Fragment Leaks checkbox to generate the results. For more information on how Android Studio detects leaks, please see our documentation.

Detect leaked Activities and Fragments in the Memory Profiler

Deobfuscate class and method bytecode in APK Analyzer

When using the APK Analyzer to inspect DEX files, you can now deobfuscate class and method bytecode. While in the DEX file viewer, load the ProGuard mappings file for the APK you’re analyzing. Once the mappings are loaded, you can right-click on the class or method you want to inspect and select Show Bytecode. Learn more

Deobfuscate class and method bytecode by selecting Show Bytecode in the APK Analyzer

To recap, Android Studio 3.6 includes these new enhancements & features:

Design

  • Split View in Design Editors
  • Color Picker Resource Tab

Develop

  • View binding
  • Android NDK support updates
  • IntelliJ Platform Update
  • Add classes with Apply Changes

Build

  • Android Gradle Plugin (AGP) Updates
  • New packaging tool

Test

  • Android Emulator Google Maps UI
  • Multi-display support
  • Resumable SDK downloads
  • In-place updates for imported APKs

Optimize

  • Leak detection in Memory Profiler
  • Deobfuscate class and method bytecode in APK Analyzer
  • Attach Kotlin sources to imported APKs

Getting Started

Download

Download Android Studio 3.6 from the download page. If you are using a previous release of Android Studio, you can simply update to the latest version of Android Studio. To use the mentioned Android Emulator features make sure you are running at least Android Emulator v29.2.12 downloaded via the Android Studio SDK Manager.

As mentioned above, we appreciate any feedback on things you like, and issues or features you would like to see. If you find a bug or issue, feel free to file an issue. Follow us, the Android Studio development team, on Twitter and on Medium.

Upholding the legacy of Black entrepreneurship in Atlanta

February is Black History Month across the U.S., but here in Atlanta, Black history is everywhere, year-round. Atlanta is the number one city for Black prosperity, and the country’s fourth-largest tech hub. As more than a quarter of Atlanta's tech workers are Black, it’s clear that our city’s startup scene is just the latest iteration of a long legacy of Black entrepreneurship. There's a spirit in the city that inspired the entrepreneurs of the past, and continues to attract tech talent today.

I was one of those entrepreneurs. When I founded my own startup, Partpic, I decided to do it not in Silicon Valley, where I had started my career, but in Atlanta. Partpic was acquired in 2016, but I opted to stay in Atlanta and continue to grow my roots in the tech and business community. It’s home now. In my new role as U.S. Head of Google for Startups, I’ll lead our continued support of Atlanta’s Black founders, beginning with a few exciting efforts:

Russell Center for Innovation

Along with our friends at Grow with Google, we’re partnering with the Russell Center for Innovation and Entrepreneurship (RCIE), an organization that helps Black entrepreneurs and local business owners build, grow and create jobs. Our support will include mentorship, scholarships and funding for three RCIE fellowships designed to help students learn and practice business firsthand.

Collab Studio

Collab Studio—a resource center providing Black founders a safe space to learn and forge community in Atlanta—has joined the Google for Startups partner network. Our funding will help Collab Studio facilitate connections and technical resources so that 20 Black founders can prepare their businesses for the next stage of growth.

Atlanta Founders Academy

The Atlanta Founders Academy, modeled on last year's pop-up at our Atlanta offices, is coming this spring. Throughout the year, we’ll host a series of hands-on programs led by Googlers, experts, and investors to support underrepresented Atlanta startup founders on topics such as sales, strategy, hiring and fundraising. Spearheading these efforts will be Googler and newly-minted Atlanta Advisor-in-Residence Michelle Green, who has been helping Fortune 500 companies grow their business for more than a decade. Learn more about how to get involved in the Atlanta Founders Academy in this form.

As a Black woman, entrepreneur and Googler, I'm proud to be a part of the living, breathing history of Atlanta. Google’s focus on providing equitable access to information, networks, and capital for underrepresented startups speaks to a larger theme in tech and innovation today: Great ideas and startups can come from anywhere and anyone, and you don’t have to be based in Silicon Valley to be successful. We have an opportunity to highlight the work of startups here in Atlanta and in other regions that have been under-resourced for too long—and the great privilege of supporting Black founders and future history-makers.

Responsive Display Ads in Google Ads Scripts

Today we are adding support for the new responsive display ads in Google Ads scripts. As of last year, you can no longer create legacy responsive display ads, but you can still fetch your existing ads.

These new responsive display ads have added support for multiple text, image, and video assets in the same ad. The ResponsiveDisplayAd object gained new methods to support the new associated fields, and you can begin adding new responsive display ads in your scripts.

The new and legacy responsive display ads are both represented via the same ResponsiveDisplayAd object. Make sure to read the full object documentation to check which methods are relevant for legacy ads or new ads. We also have a short guide demonstrating some of the key changes.

If you have any questions or concerns, please don't hesitate to contact us via the forum.

Working from home? Use these 6 tips for better video calls

In the life of a working mom, flexibility is key. And in the life of a sometimes-work-from-home working mom, technology is the reason I can be flexible. Sometimes my kid gets sick, or I need a plumber to come fix the toilet. I’m lucky to have a job that lets me work remotely, in an age where videoconferencing is an acceptable way of staying on track with the day’s meetings. 

But videoconferencing isn’t always easy. The kids climb on you, the dog barks, there’s background noise … you get the idea. I’ve had some embarrassing moments and made plenty of mistakes, but I’ve learned a few things along the way. Here are my tips for successful videoconferencing from home. (Got more tips? Mention @gsuite on Twitter.) 

Tip #1: Choose the right environment
When I want to talk through a complex issue or brainstorm ideas, video calls are more efficient than chat or email. They also help me get to know teammates in different time zones. But when you're on a call, give some thought to what’s around you, such as the backdrop (choose a plain wall, and avoid windows that will provide too much backlight), and if you have a laptop, put it somewhere steady. I once did an entire video call with my laptop on my … well, lap—and at the end the other participant told me that the subtle wobbling of the screen was extremely distracting.

Tip #2: Invite anyone, anytime
Videoconferencing doesn’t have to be scheduled; if you’re in the middle of a too-long email conversation, you can instantly set up a meeting and invite people within or outside of your organization to join. Hangouts Meet automatically creates international dial-in codes so people can call on the phone from anywhere, and you can invite people via a Calendar event, by email, or by phone. Check out our help center to get started.

Tip #3: Can’t hear? Turn on captions
If you’re in a loud place and don’t have super-fancy headphones, you can use Meet’s live caption feature to display captions in real time (just like closed captions on TV). Start here.

Tip #4: Presenting? Only share what you mean to share
Don’t you love that moment when you’re sharing your screen and then, suddenly, everyone on the call is reading your email? To make sure you only share what you mean to share, present one window (rather than your entire screen). Check it out.

Tip #5: Want to read the room? Change the screen layout
One of my favorite features in Meet is changing the layout of the video call. If someone’s showing slides, but there’s a lively discussion happening in the office, you can switch your layout to focus on the people in the office, rather than the presentation. Learn how.


Tip #6: Be real
Everyone has a life outside of work. Depending on the culture of your workplace, it can be OK (even good) to show a little bit of the “real” life around you—like letting your kid wave to the camera or eating your lunch if you’ve been on nonstop calls all day. Showing a little bit of your life can foster deeper connections with coworkers and even create empathy for whatever you’re dealing with outside of work.

Got video tips of your own? We’d love to hear them—tweet us @gsuite.

Inviting applications for Class 4 of Google for Startups Accelerator India

In July 2018, we announced the launch of Google Developers Launchpad Accelerator India. Since then, we have worked with 30 technology startups across three classes, all solving some of India’s most pressing problems in areas such as sanitation, healthcare, agritech, fintech and sustainability. Going forward, the Launchpad Accelerator India program will be known as Google for Startups Accelerator India. This will unify and strengthen our numerous efforts to nurture and grow the startup ecosystem under the Google for Startups brand.


In January 2020, we concluded Class 3 of the program with a graduation ceremony for the 10 startups of the batch. During these 3 months, the startups underwent intense mentorship bootcamps, tech workshops, design sprints and marketing growth labs along with forging crucial connections to tech teams within Google and experts in the industry.  The startups were also offered opportunities to attend conferences to showcase their work and interact with the media.


In addition to technical mentoring, the startups also underwent a Google-created Leaders Lab that is designed to build empathy in leaders, provide tools for creating sustainable team culture and reveal blind spots in their leadership styles. 


As these 10 startups continue on their journey to build scalable solutions to India’s core problems, we are excited to now invite applications for Class 4 of Google for Startups Accelerator India.


If you are a startup that uses technology like AI/ML to solve systemic problems in India, submit your application now, at this link, under 'Google for Startups Accelerator India', by 15th March 2020. The batch will kick off with a 1-week mentorship bootcamp in April, in Bangalore.


Before you apply, do check that your startup meets the following criteria to be eligible for the program:


1) You should be a technology startup
2) Your base should be in India
3) You should preferably have raised at least seed funding
4) You should be addressing a challenge that is specific to India
5) The product should use advanced technology like AI/ML to power the solution


Each class will receive mentorship and support from the best of Google in AI/ML, Cloud, UX, Android, Web, Product Strategy and Marketing.


Successful applicants will be notified in early April. 


Posted by Paul Ravindranath G, Program Manager, Developer Relations, Google India 

How one Googler creates more than music at Carnival

While many Brazilians grow up celebrating Carnival, this wasn’t true for Christiane Silva Pinto. It wasn’t until college when she joined her first bateria that it became an incredibly important tradition to her. “When I was playing in college, I loved the music and practicing with the band, but I also loved that I got to know more about that culture I hadn’t been in touch with when I was a kid,” says Christiane, who played the drums in her college bateria, which is a Brazilian percussion band. 

“Some of the people who played with us had experience playing in the Carnival parades, and those stories were contagious.” Today, in addition to working as an Associate Product Marketing Manager for Google helping small and medium-sized businesses in Brazil, Christiane is part of a band that plays every year during the iconic Carnival in Sao Paulo, Brazil, where a sea of spectators gather every year. 

Carnival lasts for four days, and much of the celebration happens in the streets. While there are different traditions in different cities in Brazil, people in Sao Paulo enjoy parades, food and most importantly, music. Bands called blocos or bloquinhos (which include the traditional baterias along with other instruments as well as singing and dancing) set up temporary stages or hire trucks and offer free, wandering concerts.

In 2013, Christiane and her friends founded their first Carnival bloquinho, and she was excited to see 30 people had turned up for their show. She would’ve never imagined that her band would become so popular that around 10,000 people would gather to watch them play, as they did for last year’s Carnival. In her bloco, Christiane plays the snare drum and a kind of tambourine called a tamborim; the group plays traditional Carnival songs, original pieces they’ve written, and even contemporary songs from bands like Pink Floyd or Rage Against The Machine reinterpreted with Carnival rhythms.

Aside from making music, Christiane sees Carnival as an opportunity to unite Brazilians and raise awareness about inequality, as well as to connect with her African heritage. “We have a lot of inequality in Brazil. Most people are poor, and most of the poor people are Black. Race is very related to economy, and unfortunately you will probably see that during Carnival the white people are having fun and the Black people are working,” she says.

In fact, in her bloquinho there are only two Black women, including Christiane. While the majority of Brazilians are Black, they’re hugely underrepresented, and she’s proud to bring her perspective to the celebration and give visibility to her culture and ancestors. 

Christiane also wants to empower women through Carnival. She recently joined a second bloquinho dedicated to empowering women through music and body positiveness. This bloco is exclusively for women, which is unusual; it was formed in 2015 by one of her friends after she was harassed during Carnival. “We founded a feminist bloco where women could come together to celebrate freedom, to be safe and to be able to express their bodies.” She’s also helping campaign for local government initiatives that protect women against harassment.

Christiane’s dedication to Carnival began with her love of music, but through it she’s found a way to make underrepresented voices heard. “Many people say that things are so bad that they don’t understand how some people can still enjoy Carnival and forget about the country’s problems. But that’s the way people who don’t live Carnival think, because they don’t understand its culture. For me, it’s a way of cultural resistance,” she says.

“Music is a powerful way to express your ideas and your values. Being able to create music is very beautiful and powerful. And for me, it’s priceless to keep my culture and my ancestors alive through Carnival.”