Monthly Archives: October 2019

Customize text size and position for captions in Google Slides

Quick launch summary

You can now personalize caption text size and position while presenting in Google Slides. These features can help make captions easier to read, like ensuring all audience members can view captions in a large room. Or, you could make your text smaller to maximize the number of words on screen at once.


While presenting, select the dropdown menu next to the Captions button on the toolbar. From there, you can set the text size and position.


See our Help Center to learn more about presenting Slides with captions.


Availability

Rollout details

G Suite editions
  • Available to all G Suite editions

On/off by default?
  • This feature will be ON by default.

Stay up to date with G Suite launches

USB-C Titan Security Keys – available tomorrow in the US




Securing access to online accounts is critical for safeguarding private, financial, and other sensitive data online. Phishing, where an attacker tries to trick you into giving them your username and password, is one of the most common causes of data breaches. To protect user accounts, we’ve long made it a priority to offer users many convenient forms of 2-Step Verification (2SV), also known as two-factor authentication (2FA), in addition to Google’s automatic protections. These measures help ensure that users are not relying solely on passwords for account security.

For users at higher risk (e.g., IT administrators, executives, politicians, activists) who need more effective protection against targeted attacks, security keys provide the strongest form of 2FA. To make this phishing-resistant security accessible to more people and businesses, we recently built this capability into Android phones, expanded the availability of Titan Security Keys to more regions (Canada, France, Japan, the UK), and extended Google’s Advanced Protection Program to the enterprise.

Starting tomorrow, you will have an additional option: Google’s new USB-C Titan Security Key, compatible with your Android, Chrome OS, macOS, and Windows devices.



USB-C Titan Security Key


We partnered with Yubico to manufacture the USB-C Titan Security Key. We have had a long-standing working and customer relationship with Yubico that began in 2012 with the collaborative effort to create the FIDO Universal 2nd Factor (U2F) standard, the first open standard to enable phishing-resistant authentication. This is the same security technology that we use at Google to protect access to internal applications and systems.

USB-C Titan Security Keys are built with a hardware secure element chip that includes firmware engineered by Google to verify the key’s integrity. This is the same secure element chip and firmware that we use in our existing USB-A/NFC and Bluetooth/NFC/USB Titan Security Key models manufactured in partnership with Feitian Technologies.

USB-C Titan Security Keys will be available tomorrow individually for $40 on the Google Store in the United States. USB-A/NFC and Bluetooth/NFC/USB Titan Security Keys will also become available individually in addition to the existing bundle. Bulk orders are available for enterprise organizations in select countries.


We highly recommend that all users at a higher risk of targeted attacks get Titan Security Keys and enroll in the Advanced Protection Program (APP), which provides Google’s industry-leading security protections to defend against evolving methods that attackers use to gain access to your accounts and data. You can also use Titan Security Keys for any site where FIDO security keys are supported for 2FA, including your personal or work Google Account, 1Password, Coinbase, Dropbox, Facebook, GitHub, Salesforce, Stripe, Twitter, and more.

Dev Channel Update for Desktop

The Dev Channel has been updated to 79.0.3938.0 for Windows, Mac, and Linux.



A partial list of changes is available in the log. Interested in switching release channels? Find out how. If you find a new issue, please let us know by filing a bug. The community help forum is also a great place to reach out for help or learn about common issues.
Posted by Lakshmana Pamarthy, Google Chrome

Exploring Massively Multilingual, Massive Neural Machine Translation



“... perhaps the way [of translation] is to descend, from each language, down to the common base of human communication — the real but as yet undiscovered universal language — and then re-emerge by whatever particular route is convenient.”
Warren Weaver, 1949

Over the last few years there has been enormous progress in the quality of machine translation (MT) systems, breaking language barriers around the world thanks to developments in neural machine translation (NMT). The success of NMT, however, owes largely to the great amounts of supervised training data. But what about languages where data is scarce, or even absent? Multilingual NMT, with the inductive bias that “the learning signal from one language should benefit the quality of translation to other languages”, is a potential remedy.

Multilingual machine translation processes multiple languages using a single translation model. The success of multilingual training for data-scarce languages has been demonstrated for automatic speech recognition and text-to-speech systems, and by prior research on multilingual translation [1,2,3]. We previously studied the effect of scaling up the number of languages that can be learned in a single neural network, while controlling the amount of training data per language. But what happens once all constraints are removed? Can we train a single model using all of the available data, despite the huge differences across languages in data size, scripts, complexity and domains?

In “Massively Multilingual Neural Machine Translation in the Wild: Findings and Challenges” and follow-up papers [4,5,6,7], we push the limits of research on multilingual NMT by training a single NMT model on 25+ billion sentence pairs, from 100+ languages to and from English, with 50+ billion parameters. The result is an approach for massively multilingual, massive neural machine translation (M4) that demonstrates large quality improvements on both low- and high-resource languages and can be easily adapted to individual domains/languages, while showing great efficacy on cross-lingual downstream transfer tasks.

Massively Multilingual Machine Translation
Though data skew across language pairs is a great challenge in NMT, it also creates an ideal scenario in which to study transfer, where insights gained through training on one language can be applied to the translation of other languages. On one end of the distribution, there are high-resource languages like French, German and Spanish with billions of parallel examples, while on the other end, supervised data for low-resource languages such as Yoruba, Sindhi and Hawaiian is limited to a few tens of thousands of examples.
The data distribution over all language pairs (in log scale) and the relative translation quality (BLEU score) of the bilingual baselines trained on each one of these specific language pairs.
Once trained using all of the available data (25+ billion examples from 103 languages), we observe strong positive transfer towards low-resource languages, dramatically improving the translation quality of 30+ languages at the tail of the distribution by an average of 5 BLEU points. This effect is already known, but surprisingly encouraging, considering the comparison is between bilingual baselines (i.e., models trained only on specific language pairs) and a single multilingual model with representational capacity similar to a single bilingual model. This finding hints that massively multilingual models are effective at generalization, and capable of capturing the representational similarity across a large body of languages.
Translation quality comparison of a single massively multilingual model against bilingual baselines that are trained for each one of the 103 language pairs.
In our EMNLP’19 paper [5], we compare the representations of multilingual models across different languages. We find that multilingual models learn shared representations for linguistically similar languages without the need for external constraints, validating long-standing intuitions and empirical results that exploit these similarities. In [6], we further demonstrate the effectiveness of these learned representations on cross-lingual transfer on downstream tasks.
Visualization of the clustering of the encoded representations of all 103 languages, based on representational similarity. Languages are color-coded by their linguistic family.
Building Massive Neural Networks
As we increase the number of low-resource languages in the model, the quality of high-resource language translations starts to decline. This regression is recognized in multi-task setups, arising from inter-task competition and the unidirectional nature of transfer (i.e., from high- to low-resource). While working on better learning and capacity control algorithms to mitigate this negative transfer, we also extend the representational capacity of our neural networks by increasing the number of model parameters, to improve the quality of translation for high-resource languages.

Numerous design choices can be made to scale neural network capacity, including adding more layers or making the hidden representations wider. Continuing our study on training deeper networks for translation, we utilized GPipe [4] to train 128-layer Transformers with over 6 billion parameters. Increasing the model capacity resulted in significantly improved performance across all languages by an average of 5 BLEU points. We also studied other properties of very deep networks, including the depth-width trade-off, trainability challenges and design choices for scaling Transformers to over 1500 layers with 84 billion parameters.

While scaling depth is one approach to increasing model capacity, exploring architectures that can exploit the multi-task nature of the problem is a very plausible complementary way forward. By modifying the Transformer architecture to substitute the vanilla feed-forward layers with a sparsely-gated mixture of experts, we drastically scale up the model capacity, allowing us to successfully train models beyond 50 billion parameters, which further improved translation quality across the board.
Translation quality improvement of a single massively multilingual model as we increase the capacity (number of parameters) compared to 103 individual bilingual baselines.
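The sparsely-gated mixture-of-experts idea above can be illustrated with a toy sketch: a gating network scores every expert, only the top-k experts are actually evaluated, and their outputs are combined with softmax weights over the selected scores. This is a minimal NumPy illustration of the general technique, not the production architecture; all shapes and the ReLU experts are simplifying assumptions.

```python
import numpy as np

def moe_layer(x, expert_weights, gate_weights, k=2):
    """Toy sparsely-gated mixture-of-experts layer for a single token.

    x: (d_model,) input vector.
    expert_weights: list of (d_model, d_model) matrices, one per expert.
    gate_weights: (d_model, num_experts) gating matrix.
    Only the top-k experts by gate score are evaluated.
    """
    logits = x @ gate_weights                 # (num_experts,) gate scores
    top_k = np.argsort(logits)[-k:]           # indices of the k best experts
    gates = np.exp(logits[top_k] - logits[top_k].max())
    gates /= gates.sum()                      # softmax over selected experts only
    # Weighted sum of the chosen experts' outputs; the other experts are
    # skipped entirely, so compute scales with k, not with the expert count.
    return sum(g * np.maximum(x @ expert_weights[i], 0.0)
               for g, i in zip(gates, top_k))

rng = np.random.default_rng(0)
d, n_experts = 8, 4
experts = [rng.normal(size=(d, d)) for _ in range(n_experts)]
gate = rng.normal(size=(d, n_experts))
out = moe_layer(rng.normal(size=d), experts, gate, k=2)
print(out.shape)  # (8,)
```

Because total parameters grow with the number of experts while per-token compute grows only with k, this style of conditional computation is one way such models reach tens of billions of parameters.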
Making M4 Practical
It is inefficient to train large models with extremely high computational costs for every individual language, domain or transfer task. Instead, we present methods [7] to make these models more practical by using capacity tunable layers to adapt a new model to specific languages or domains, without altering the original.

Next Steps
At least half of the 7,000 languages currently spoken will no longer exist by the end of this century*. Can multilingual machine translation come to the rescue? We see the M4 approach as a stepping stone towards serving the next 1,000 languages; starting from such multilingual models will allow us to easily extend to new languages, domains and downstream tasks, even when parallel data is unavailable. Indeed, the path is rocky, and on the road to universal MT many promising solutions appear to be interdisciplinary. This makes multilingual NMT a plausible test bed for machine learning practitioners and theoreticians interested in multi-task learning, meta-learning, the training dynamics of deep nets and much more. We still have a long way to go.

Acknowledgements
This effort is built on contributions from Naveen Arivazhagan, Dmitry Lepikhin, Melvin Johnson, Maxim Krikun, Mia Chen, Yuan Cao, Yanping Huang, Sneha Kudugunta, Isaac Caswell, Aditya Siddhant, Wei Wang, Roee Aharoni, Sébastien Jean, George Foster, Colin Cherry, Wolfgang Macherey, Zhifeng Chen and Yonghui Wu. We would also like to acknowledge support from the Google Translate, Brain, and Lingvo development teams, Jakob Uszkoreit, Noam Shazeer, Hyouk Joong Lee, Dehao Chen, Youlong Cheng, David Grangier, Colin Raffel, Katherine Lee, Thang Luong, Geoffrey Hinton, Manisha Jain, Pendar Yousefi and Macduff Hughes.


* The Cambridge Handbook of Endangered Languages (Austin and Sallabank, 2011).

Source: Google AI Blog


TerraTalk is changing how Japan’s students learn English

With increasing class sizes, more paperwork than ever and new mandates from the Ministry of Education, Japanese teachers face an uphill battle in their mission to teach their students.

Yoshiyuki Kakihara wanted to use technology to figure out a solution, with an emphasis on English language education. He created TerraTalk, an AI-powered app that allows students to have audio conversations. TerraTalk’s artificial intelligence can hear and process what the students say and give feedback, removing this burden from teachers, and reinvigorating the classroom by creating an atmosphere filled with conversation and English learning games. TerraTalk was recently part of Google Developers Launchpad Accelerator, a program that provides mentorship and support to early-stage startups.

With nine acceleration programs and 341 startup alumni, we at Launchpad have seen firsthand how entrepreneurs around the world are using technology and startup innovation to solve the world’s biggest problems. In the third installment of our series, “Ideas to Reality,” we talked to Yoshiyuki about why he started TerraTalk, and where he hopes it will be in the next few years.

TerraTalk app

A look at the TerraTalk English learning app.

When did you realize you wanted to make an impact on the education field? 

I grew up on the outskirts of Tokyo as a science-savvy kid and became super interested in foreign culture. I ended up leaving my high school to study in the United Kingdom. I did well academically back home, so it was quite a shock how my English fell short of being comprehensible at all abroad. It turns out that I wasn’t alone; in Japan, very few people reach conversational level at the end of secondary or university curriculum.

I feel that this is the result of an outdated methodology where too much emphasis is placed on explaining the grammar and little to no attention on putting the language into use. To make matters worse, 80 percent of teachers in Japan are putting in 100 hours of overtime per month. They don’t have time to investigate, experiment with and transform the way they teach. When I learned this, I realized that I could help by creating a new technology to ease the burden on teachers, and make learning English more engaging for students.

Who are your customers? How is your company positively affecting them?  

We do business directly with education institutions and local education councils. With our TerraTalk app, students can engage in role-playing style conversation lessons with their mobile devices. This enables teachers to ensure their students get enough speaking time, which is difficult to achieve with conventional classroom methodologies.

We are seeing students teach each other on how to tackle the exercises, sometimes creating their own competition out of it. In some ways, the technology we are bringing is humanizing classrooms, as it frees teachers from the standard lecture format.

How did you use Google products to make TerraTalk? 

BigQuery has helped us crunch massive user data to discover how people are using our app. Google Analytics is our go-to tool for marketing and search engine analysis. We use the TensorFlow family of machine learning tools and other numerous open source projects maintained by Google. We also use G Suite as a primary business tool, because of its reliability, security and ease of use.

Why did you choose to participate in Google Launchpad?

Google is a leading company in machine learning and cloud technology applications, which we heavily rely on. The prospect of receiving support in these areas was extremely appealing, especially when you are running a startup and saving time is everything.

What was the most memorable moment from Launchpad? 

We attended Launchpad Tokyo, which had seven startups in total. In a session called Founders Circle, founders from the startups got together and shared their biggest failures to date in a fireside-chat style. It was the moment where we became a true community, and many of us are still in touch after the program.

What advice do you have for future entrepreneurs? 

Don’t quit. Find a business or market where you have a natural advantage over other people. Whether your competition is other startups or established companies, it is the people you work with who make the difference.

How Google made me proud to be out at work

Until I started working at Google in 2014, I had never been out at work.  

Now, less than five years later, everything is different: I’m an active volunteer leader in Google’s LGBTQ+ Employee Resource Group—a Googler-run, company-supported organization that works to provide an inclusive workplace for LGBTQ+ employees, and partners closely with our Trans Employee Resource Group, which represents our transgender, gender non-conforming, and non-binary colleagues. As part of my role, I’ve had the chance to engage LGBTQ+ Googlers across our global offices, speak publicly about being LGBTQ+ in the workplace and have even been able to share my perspectives and experiences directly with Google leadership. 

At this point, I can barely remember what it felt like to not be a visible, openly LGBTQ+ person at work. So it’s hard to imagine that before joining Google, I felt I couldn’t come out at the office at all. 

As we celebrate National Coming Out Day and reflect on all of the progress we’ve made as a community, I am determined to remember this simple but crucial reality: Openness matters. Community matters. Being able to be out at work matters. 

LGBTQ+ Pride sign at Google

Googlers create signs supporting the LGBTQ+ community for the 2017 New York City Pride March.

Prior to joining Google, I’d spent time in a variety of industries, always under the careful, polite policy of evasion when it came to questions about my personal life. Perhaps I didn’t need to be so secretive. I worked with wonderful, kind people, and though there were no explicit shows of support for LGBTQ+ issues from my workplace, I’m sure most of my colleagues and managers wouldn’t have taken issue with my identity. 

Still, for many LGBTQ+ folks, the fear of prejudice can nag at you, and cause you to hesitate even around the most well-meaning of coworkers. Some assume that with the ushering in of marriage equality here in the U.S., other kinds of inequality have disappeared and the movement is complete. But as many LGBTQ+-identifying people will tell you, critical challenges still remain, and it takes a conscious and dedicated effort to counteract their effects. 

Growing up in New Mexico, I got an early introduction to some of the challenges that LGBTQ+ people still so often face: harassment, discrimination, violence. The understanding that being LGBTQ+ was unsafe was imprinted on me almost immediately, and that fear left a lasting mark.  

In each new city, from college to a job to graduate school to another job, I was reminded (often in not-so-subtle ways) that no matter what might change in the law or in popular culture, I should always be wary, always be careful.  

So I never took the chance.  

In so many important ways, refraining from bringing my full self to work hurt my ability to be a good employee. Constantly worrying about slipping up and revealing that I had a girlfriend rather than a boyfriend prevented me from feeling fully integrated. It became an obstacle to forming the kinds of professional relationships that help company culture feel cohesive and supportive.

Now, I realize how much I was missing.  Today, I’m part of a workplace with visible LGBTQ+ leaders, explicit shows of support for LGBTQ+ cultural moments and celebrations and broad encouragement to use what makes me different to create an environment of inclusion for my fellow Googlers. This journey has made me realize how much all workplaces can benefit from supporting their employees’ differences, just as much as they celebrate their collective unity.  

I’m proud. I hope you are, too. 

Using AI to give people who are blind the “full picture”

Everything that makes up the web—text, images, video and audio—can be easily discovered. Many people who are blind or have low vision rely on screen readers to make the content of web pages accessible through spoken feedback or braille.

For images and graphics, screen readers rely on descriptions created by developers and web authors, which are usually referred to as “alt text” or “alt attributes” in the code. However, there are millions of online images without any description, leading screen readers to say “image,” “unlabeled graphic,” or a lengthy, unhelpful reading of the image’s file name. When a page contains images without descriptions, people who are blind may not get all of the information conveyed, or even worse, it may make the site totally unusable for them. To improve that experience, we’ve built an automatic image description feature called Get Image Descriptions from Google. When a screen reader encounters an image or graphic without a description, Chrome will create one. 
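The “alt text” gap described above is easy for web authors to audit themselves. The sketch below is a hypothetical helper (not part of Chrome or any Google tool) that uses Python's standard-library HTML parser to list images whose alt attribute is missing or empty:

```python
from html.parser import HTMLParser

class AltTextAudit(HTMLParser):
    """Collects <img> tags that lack a useful alt attribute."""
    def __init__(self):
        super().__init__()
        self.missing = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            attrs = dict(attrs)
            alt = (attrs.get("alt") or "").strip()
            if not alt:  # attribute absent, or present but empty
                self.missing.append(attrs.get("src", "<no src>"))

page = """
<img src="chart.png" alt="Sales by quarter, 2019">
<img src="IMG_4821.jpg">
<img src="logo.svg" alt="">
"""
audit = AltTextAudit()
audit.feed(page)
print(audit.missing)  # ['IMG_4821.jpg', 'logo.svg']
```

Note that an intentionally empty alt="" is the accepted convention for purely decorative images, so a real audit would treat those separately rather than flag them all.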

Image descriptions automatically generated by a computer aren't as good as those written by a human who can include additional context, but they can be accurate and helpful. An image description might help a blind person read a restaurant menu, or better understand what their friends are posting on social media.

If someone using a screen reader chooses to opt in through Settings, an unlabeled image on Chrome is sent securely to a Google server running machine learning software. The technology aggregates data from multiple machine-learning models. Some models look for text in the image, including signs, labels, and handwritten words. Other models look for objects they've been trained to recognize—like a pencil, a tree, a person wearing a business suit, or a helicopter. The most sophisticated model can describe the main idea of an image using a complete sentence.

The description is evaluated for accuracy and valuable information: Does the annotation describe the image well? Is the description useful? Based on whether the annotation meets those criteria, the machine learning model determines what should be shown to the person, if anything. We’ll only provide a description if we have reasonable confidence it’s correct. If any of our models indicate the results may be inaccurate or misleading, we err on the side of giving a simpler answer, or nothing at all.
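The fall-back behavior described above, preferring a simpler answer or silence over a possibly wrong caption, can be sketched as a confidence gate. Everything here (the threshold, the candidate ordering, the example strings) is a hypothetical illustration of the idea, not Chrome's actual logic:

```python
def choose_description(candidates, min_confidence=0.8):
    """Pick what, if anything, to announce for an unlabeled image.

    candidates: list of (description, confidence) pairs, ordered from the
    most specific model (full sentence) to the simplest (object labels).
    Returns the most specific description that clears the confidence bar,
    or None, in which case the screen reader announces nothing extra.
    """
    for description, confidence in candidates:
        if confidence >= min_confidence:
            return description
    return None  # err on the side of silence over a misleading caption

results = [
    ("Person playing guitar on the sofa", 0.65),  # full-sentence model, unsure
    ("guitar, sofa, person", 0.92),               # object labels, confident
]
print(choose_description(results))  # guitar, sofa, person
```

Here the full-sentence candidate is rejected for low confidence, so the gate falls back to the plainer but more reliable object labels.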

Here are a couple of examples of the actual descriptions generated by Chrome when used with a screen reader.

Pineapples, bananas and coconuts

Machine-generated description for this image: "Appears to be: Fruits and vegetables at the market."

Man playing guitar on gray sofa

Machine-generated description for this image: "Appears to be: Person playing guitar on the sofa." 

Over the past few months of testing, we’ve created more than 10 million descriptions with hundreds of thousands being added every day. The feature is available in English, but we plan to add more languages soon. Image descriptions in Chrome are not meant to replace diligent and responsible web authoring; we always encourage developers and web authors to follow best practices and provide image descriptions on their sites. But we hope that this feature is a step toward making the web more accessible to everyone. 

How Raleigh Digital Connectors (RDC) Enabled Me To Be A Leader in Raleigh


We’re closing out our Digital Inclusion Week series with a post from Raleigh, NC. Habib Khadri, a sophomore studying Computer Science and Business Administration at UNC-Chapel Hill, is an alum of the City of Raleigh’s Digital Connectors program, sponsored by Google Fiber, which provides 14-18 year olds with technology and leadership training.

Being a leader isn’t reserved for individuals who are already in positions of authority; rather, it invites anyone with determination, initiative, and a proactive mindset to step up and take charge. I have held numerous positions in clubs, organizations, and at work, but I cannot say I was a leader solely because of the title I was given. There are many initiatives where I was simply a member, yet I can recall exact moments where I felt I exemplified what it means to be a leader.

Moving to Raleigh allowed me to participate in Raleigh Digital Connectors, a program for teenagers from backgrounds that are underrepresented in the technology industry. With support from Google Fiber, Digital Connectors offered leadership development alongside community-based service projects. The program was close to my high school, but the drastic difference in neighborhoods showed me the wealth gap that exists in Raleigh. The program’s classes were held in a low socioeconomic area where parents feared to drop off their children. My background, coupled with my analytical nature, allowed me to construct and present ideas to “bridge the divide” between socioeconomic classes within Raleigh. I was in the program for two years, then was invited back as an instructor and, later, as the speaker at the annual commencement ceremony two years in a row.

Raleigh Digital Connectors has an annual program called “The Oak City Techathon,” which enabled me to become an instructor within my community. Whether it was creating a Facebook account for senior citizens so they could connect with long-lost friends or teaching young kids how to assemble basic robots, I was able to spark a newfound interest in a multitude of groups scattered across the city of Raleigh. I knew these people would learn and then be inspired to teach what they learned to their peers.

I want to work toward eliminating the wealth gap and bring communities together so everyone has access to resources that are only available in the affluent areas. I think a step toward equity between these communities is to promote programs such as Raleigh Digital Connectors. I feel it is my experiences that enable me to be a global leader. Being a leader doesn’t mean just sitting back and delegating tasks, but involves hands-on experience and the ambition to want to better your community, whether it be local or global. I learned that even when you feel insignificant, everybody has to start somewhere.

Posted by Habib Khadri, UNC-Chapel Hill student and Raleigh Digital Connectors alum



Grow with Google brings digital skills training and tips to Burnie businesses

Grow with Google has continued on the road, this time heading to Burnie in Tasmania for the first time to host digital skills workshops for businesses and locals. We were joined there by Tasmanian Senator Jacqui Lambie who officially opened the event.
  Caption: Tasmanian Senator Jacqui Lambie with Google Australia Public Policy Manager Hannah Frank 

Today’s event was held at Weller’s Inn and attended by representatives of more than twenty local businesses, who picked up tips on how to grow their presence online and be found by more customers, where to gain customer insights, and general digital tips and tricks.

Grow with Google aims to provide all Australians with access to digital skills training online and in person, to help them make the most of the Internet, including within our fast-growing digital economy.

We know that digital tools can open up new opportunities for regional communities and businesses, and help level the playing field. But many people and businesses are unsure where to begin, which is why we created Grow with Google to help close this gap.

A report released by AlphaBeta in September found that Google’s advertising and productivity platforms were helping more than a million Aussie businesses and had helped deliver business benefits of $35 billion so far this year - including supporting 2100 Tasmanian jobs.

The report also highlighted Tasmanian business success story Bridestowe Lavender, which has used digital marketing to help turn an old lavender farm into a popular tourist destination.

Since 2014, Google has trained more than half a million people across Australia through online and in-person digital skills training, as well as curriculum integrated through school and partner programs.

Grow with Google aims to create opportunity for all Australians to grow their skills, careers, and businesses with free tools, training, and events. It includes an online learning hub accessible from anywhere, on any device, with hundreds of handy training modules. The next Grow with Google event will be held in Melbourne on 24-25 October. Find out more at: http://g.co/GrowMelbourne.