Monthly Archives: September 2019

Join us at a Webmaster Conference in Mountain View, California

Earlier this year we announced a series of Webmaster Conferences being held around the world to help website creators understand how to optimize their sites for Search. We’ve already held 22 of these events, with more planned through the end of the year. Building on the success of these events so far, we’re hosting a product summit version of this event at the Google headquarters in Mountain View on Monday, November 4th.


Photos from the Webmaster Conference in Kuala Lumpur, earlier this year.

This event is designed to facilitate an open dialog between the webmaster and SEO community and Search product teams. This one-day event will include talks from Search product managers, Q&A sessions, and a product fair giving attendees the opportunity to have direct conversations with product managers. Attendees will learn from the people building Search about how they think about the evolution of the platform, and have the opportunity to share feedback about the needs of the community.

We also realize that not everyone will be able to make this event in person, so we plan to share much of the content and feedback after the event.

If you’re interested and able to make it, we encourage you to apply today as space is limited. Complete details about the event and the application process can be found on the event registration site. And as always, you can check out our other upcoming events on the general Webmaster Conference site, the Google Webmasters event calendar, or follow our blogs and @googlewmc on Twitter!

Posted by John Mueller, Google Switzerland

Beta Channel Update for Chrome OS

The Beta channel has been updated to 78.0.3904.35 (Platform version: 12499.14.0) for most Chrome OS devices. This build contains a number of bug fixes, security updates and feature enhancements. Changes can be viewed here.


If you find new issues, please let us know by visiting our forum or filing a bug. Interested in switching channels? Find out how. You can submit feedback using ‘Report an issue...’ in the Chrome menu (3 vertical dots in the upper right corner of the browser).

Geo Hsu
Google Chrome OS

Celebrating World Teachers’ Day on Manitoulin Island

This World Teachers’ Day, we’re shining a spotlight on a special Canadian teacher who is using CS First, a Grow with Google curriculum for elementary and middle school students, in the classroom. Our guest author is April Aelick, who teaches grade 8 at Little Current Public School, which is part of the Rainbow District School Board on Manitoulin Island.

Having taught for almost seventeen years on Manitoulin Island -- at the same school I attended from kindergarten to grade 8, no less -- I know how challenging it is to keep students engaged and excited in class.

That’s why I was so happy to come across CS First, Google’s free computer science curriculum that makes coding easy for teachers to share and fun for students to learn. Earlier this year, I signed up for an evening workshop to learn CS First, with the hopes of being able to introduce it to my grade eight students.
At the workshop, I learned about an interesting concept called ‘computational thinking’. It’s a systematic approach to solving problems through data that is at the foundation of computer science and can be applied to many other subject areas -- and careers -- that intersect with technology.

As a teacher in a rural community, I can see how CS First will give my students the opportunity to explore ways in which computer science can fit into their interests and possibly lead them down a career path they didn’t consider before.

Ask any student or teacher: grade 8 can be a difficult age to engage students in something new. Many students are self-conscious and reluctant to take risks. They can also get frustrated when things don’t go right. Often, they think the easy way out is to just quit.

CS First uses computational thinking to teach students not just hard skills, like coding, but the soft skills they need to be successful in life.
Recently, one of my students worked very hard on a CS First project and, well, had a “tech fail”. His entire project was lost, and he was very disappointed to say the least. While some students would easily give up, this student went right back to work, rewatched the tutorials online and created something even better than before. CS First helped teach the class a great lesson that day, beyond just learning how to code: there will inevitably be “tech fails”, and it is how you overcome these problems that will help you succeed in life.

The beauty of CS First is that it is so accessible to all students. There is no requirement for peripheral materials. I am lucky that my students have 1:1 access to Chromebooks, but even if a class didn’t have this option, CS First can still be used effectively with offline lessons.

If you’re a teacher interested in bringing computer science into your classroom, give CS First a try; you’ve got nothing to lose! The amount of problem-solving and willingness to take risks I have witnessed so far from my students has been worth it. Even teachers who are not comfortable with coding can find success in their classrooms.

Education opens doors for people that might otherwise stay shut. It is my goal to expose my students to as many opportunities as I can so they don’t feel limited by their circumstances or geographic location. I teach amazing students who will have a big impact in our world, and I want them to recognize that.

Editor’s note: Want to see CS First in action? Watch this video featuring an elementary school from Waterloo! If you’re interested in CS First, check out our website for how to get started.

Google Summer of Code 2019 (Statistics Part 2)

2019 has been an epic year for Google Summer of Code as we celebrated 15 years of connecting university students from around the globe with 201 open source organizations, big and small.

We want to congratulate our 1,134 students who completed GSoC 2019. Great work, everyone!

Now that GSoC 2019 is over we would like to wrap up the program with some more statistics to round out the year.

Student Registrations

We had 30,922 students from 148 countries register for GSoC 2019 (that’s a 19.5% increase in registrations over last year, the previous record). Interest in GSoC clearly continues to grow and we’re excited to see it growing in all parts of the world.

For the first time ever we had students register from Bhutan, Fiji, Grenada, Papua New Guinea, South Sudan, and Swaziland.

Universities

The 1,276 students accepted into the GSoC 2019 program hailed from 648 universities, of which 164 had students participating in GSoC for the first time.

Schools with the most accepted students for GSoC 2019:

University | # of Accepted Students
Indian Institute of Technology, Roorkee | 48
International Institute of Information Technology - Hyderabad | 29
Birla Institute of Technology and Science, Pilani (BITS Pilani) | 27
Guru Gobind Singh Indraprastha University (GGSIPU Dwarka) | 20
Indian Institute of Technology, Kanpur | 19
Indian Institute of Technology, Kharagpur | 19
Amrita University / Amrita Vishwa Vidyapeetham | 14
Delhi Technological University | 11
Indian Institute of Technology, Bombay | 11
Indraprastha Institute of Information and Technology, New Delhi | 11

Mentors

Each year we pore over gobs of data to extract some interesting statistics about the GSoC mentors. Here’s a quick synopsis of our 2019 crew:
  • Registered mentors: 2,815
  • Mentors with assigned student projects: 2,066
  • Mentors who have participated in GSoC for 10 or more years: 70
  • Mentors who have been a part of GSoC for 5 years or more: 307
  • Mentors who are former GSoC students: 691
  • Mentors who have also been involved in the Google Code-in program: 498
  • Percentage of new mentors: 35.84%
GSoC 2019 mentors are from all parts of the world, representing 81 countries!

Every year thousands of GSoC mentors help introduce the next generation to the world of open source software development, and for that we are forever grateful. We cannot stress enough that without our invaluable mentors the GSoC program would not exist. Mentorship is why GSoC has remained strong for 15 years; the relationships built between students and mentors have helped sustain the program and many of these communities. Sharing their passion for open source, our mentors have paved the road for generations of contributors to enter open source development.

Thank you to all of our mentors, organization administrators, and all of the “unofficial” mentors who help in our open source organizations’ communities. Google Summer of Code is a community effort and we appreciate each and every one of you.

By Stephanie Taylor, Google Open Source

Large-Scale Multilingual Speech Recognition with a Streaming End-to-End Model



Google's mission is not just to organize the world's information but to make it universally accessible, which means ensuring that our products work in as many of the world's languages as possible. When it comes to understanding human speech, which is a core capability of the Google Assistant, extending to more languages poses a challenge: high-quality automatic speech recognition (ASR) systems require large amounts of audio and text data — even more so as data-hungry neural models continue to revolutionize the field. Yet many languages have little data available.

We wondered how we could keep the quality of speech recognition high for speakers of data-scarce languages. A key insight from the research community was that much of the "knowledge" a neural network learns from audio data of a data-rich language is re-usable by data-scarce languages; we don't need to learn everything from scratch. This led us to study multilingual speech recognition, in which a single model learns to transcribe multiple languages.

In “Large-Scale Multilingual Speech Recognition with a Streaming End-to-End Model”, published at Interspeech 2019, we present an end-to-end (E2E) system trained as a single model, which allows for real-time multilingual speech recognition. Using nine Indian languages, we demonstrated a dramatic improvement in the ASR quality on several data-scarce languages, while still improving performance for the data-rich languages.

India: A Land of Languages
For this study, we focused on India, an inherently multilingual society where there are more than thirty languages with at least a million native speakers. Many of these languages overlap in acoustic and lexical content due to the geographic proximity of the native speakers and shared cultural history. Additionally, many Indians are bilingual or trilingual, making the use of multiple languages within a conversation a common phenomenon, and a natural case for training a single multilingual model. In this work, we combined nine primary Indian languages, namely Hindi, Marathi, Urdu, Bengali, Tamil, Telugu, Kannada, Malayalam and Gujarati.

A Low-latency All-neural Multilingual Model
Traditional ASR systems contain separate components for acoustic, pronunciation, and language models. While there have been attempts to make some or all of the traditional ASR components multilingual [1,2,3,4], this approach can be complex and difficult to scale. E2E ASR models combine all three components into a single neural network and promise scalability and ease of parameter sharing. Recent works have extended E2E models to be multilingual [1,2], but they did not address the need for real-time speech recognition, a key requirement for applications such as the Assistant, Voice Search and Gboard dictation. For this, we turned to recent research at Google that used a Recurrent Neural Network Transducer (RNN-T) model to achieve streaming E2E ASR. The RNN-T system outputs words one character at a time, just as if someone were typing in real time; however, that system was not multilingual. We built upon this architecture to develop a low-latency model for multilingual speech recognition.
[Left] A traditional monolingual speech recognizer comprising Acoustic, Pronunciation and Language models for each language. [Middle] A traditional multilingual speech recognizer where the Acoustic and Pronunciation models are multilingual, while the Language model is language-specific. [Right] An E2E multilingual speech recognizer where the Acoustic, Pronunciation and Language models are combined into a single multilingual model.
Large-Scale Data Challenges
Using large-scale, real-world data for training a multilingual model is complicated by data imbalance. Given the steep skew in the distribution of speakers across the languages and speech product maturity, it is not surprising to have varying amounts of transcribed data available per language. As a result, a multilingual model can tend to be more influenced by languages that are over-represented in the training set. This bias is more prominent in an E2E model, which, unlike a traditional ASR system, does not have access to additional in-language text data and learns lexical characteristics of the languages solely from the audio training data.
Histogram of training data for the nine languages showing the steep skew in the data available.
We addressed this issue with a few architectural modifications. First, we provided an extra language identifier input, an external signal derived from the language locale of the training data, i.e., the language preference set on an individual’s phone. This signal is combined with the audio input as a one-hot feature vector. We hypothesize that the model is able to use the language vector not only to disambiguate the language but also to learn separate features for separate languages, as needed, which helped with data imbalance.
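As a concrete sketch of that input construction, appending a one-hot language vector to every acoustic frame might look like the following. The feature dimensions and locale codes here are illustrative assumptions for the sketch, not the paper's actual configuration:

```python
import numpy as np

# Illustrative locale codes for the nine Indian languages in the study.
LANGUAGES = ["hi", "mr", "ur", "bn", "ta", "te", "kn", "ml", "gu"]

def one_hot_language(locale: str) -> np.ndarray:
    """Encode a language locale as a one-hot vector."""
    vec = np.zeros(len(LANGUAGES), dtype=np.float32)
    vec[LANGUAGES.index(locale)] = 1.0
    return vec

def add_language_signal(features: np.ndarray, locale: str) -> np.ndarray:
    """Append the one-hot language vector to every acoustic frame.

    features: (num_frames, feature_dim) array, e.g. log-mel features.
    Returns an array of shape (num_frames, feature_dim + num_languages).
    """
    lang = one_hot_language(locale)
    tiled = np.tile(lang, (features.shape[0], 1))  # repeat per frame
    return np.concatenate([features, tiled], axis=1)

# Example: 100 frames of 80-dim features for a Tamil ("ta") utterance.
frames = np.random.randn(100, 80).astype(np.float32)
augmented = add_language_signal(frames, "ta")
print(augmented.shape)  # (100, 89)
```

The model then sees the same language signal at every time step, so it can condition its per-frame processing on the locale throughout the utterance.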

Building on the idea of language-specific representations within the global model, we further augmented the network architecture by allocating extra parameters per language in the form of residual adapter modules. Adapters helped fine-tune a global model on each language while maintaining parameter efficiency of a single global model, and in turn, improved performance.
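A residual adapter, in its common form (a normalization step, a down-projection to a small bottleneck, a nonlinearity, an up-projection, and a skip connection back to the input), can be sketched roughly as below. The layer sizes, initialization, and use of layer normalization here are assumptions for illustration, not the exact configuration in the paper:

```python
import numpy as np

def layer_norm(x: np.ndarray, eps: float = 1e-6) -> np.ndarray:
    """Normalize each frame to zero mean, unit variance."""
    mean = x.mean(axis=-1, keepdims=True)
    var = x.var(axis=-1, keepdims=True)
    return (x - mean) / np.sqrt(var + eps)

class ResidualAdapter:
    """Small per-language module added on top of a shared encoder layer.

    Projects activations down to a bottleneck, applies a nonlinearity,
    projects back up, and adds the result to the original activations,
    so the shared encoder output passes through unchanged plus a small
    language-specific correction.
    """

    def __init__(self, model_dim: int, bottleneck_dim: int, seed: int = 0):
        rng = np.random.default_rng(seed)
        scale = 1.0 / np.sqrt(model_dim)
        self.w_down = rng.normal(0, scale, (model_dim, bottleneck_dim))
        self.w_up = rng.normal(0, scale, (bottleneck_dim, model_dim))

    def __call__(self, activations: np.ndarray) -> np.ndarray:
        h = layer_norm(activations) @ self.w_down
        h = np.maximum(h, 0.0)               # ReLU
        return activations + h @ self.w_up   # residual connection

# One adapter per language; only the matching one is applied at runtime.
adapters = {lang: ResidualAdapter(256, 64, seed=i)
            for i, lang in enumerate(["ta", "hi"])}
encoder_out = np.random.randn(100, 256)   # (frames, model_dim)
adapted = adapters["ta"](encoder_out)     # Tamil utterance
print(adapted.shape)  # (100, 256)
```

Because only the small bottleneck matrices are language-specific, the total parameter cost grows slowly with the number of languages while the bulk of the model stays shared.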
[Left] Multilingual RNN-T architecture with a language identifier. [Middle] Residual adapters inside the encoder. For a Tamil utterance, only the Tamil adapters are applied to each activation. [Right] Architecture details of the Residual Adapter modules. For more details please see our paper.
Putting all of these elements together, our multilingual model outperforms all the single-language recognizers, with especially large improvements in data-scarce languages like Kannada and Urdu. Moreover, since it is a streaming E2E model, it simplifies training and serving, and is also usable in low-latency applications like the Assistant. Building on this result, we hope to continue our research on multilingual ASRs for other language groups, to better assist our growing body of diverse users.

Acknowledgements
We would like to thank the following for their contribution to this research: Tara N. Sainath, Eugene Weinstein, Bo Li, Shubham Toshniwal, Ron Weiss, Bhuvana Ramabhadran, Yonghui Wu, Ankur Bapna, Zhifeng Chen, Seungji Lee, Meysam Bastani, Mikaela Grace, Pedro Moreno, Yanzhang (Ryan) He, Khe Chai Sim.

Source: Google AI Blog


How a psychiatry pioneer helped me understand my mother

Editor’s note: To help families dealing with addiction, Google has given over $1 million worth of contributions to Partnership for Drug-Free Kids (PDFK) this year and worked with PDFK to show up for people who are seeking support. When someone searches for relevant queries such as "teen drug addiction" on Google or YouTube, they get the number to call an experienced parent coach who works with caregivers to develop individualized plans for helping loved ones with substance use. You can also find local and national helplines on our Recover Together site.

I can still vividly remember confronting my mother when I was in my late teens. After a lifetime of dealing with her unreliability, I had just had it. In a blowup conversation, I told her that she had chosen drugs over me. 

At the time, I looked at her years of being incarcerated or holed up in halfway houses as abandonment. But I now know there was nowhere she would rather have been than home with me—clean and sober. While my mother ultimately died of an opioid overdose, truthfully the drug had been slowly taking her over the years. I’ve come to understand that she only chose drugs once. She started using at 13 years old, and that was the one and only time it was her decision. After that, the drugs had her and never let go.

Today’s global Google Doodle honors the late Dr. Herbert D. Kleber, who followed a calling in his life to study patients with addictions, like my mother. It was a direction he didn’t plan for in his professional career as a psychiatrist. However, his pioneering work on understanding and treating addiction brought the scientific community to the understanding that drug addictions are physiological shortcomings, and not moral ones. I’m grateful for Dr. Kleber’s work, because it has certainly helped me better understand my mother’s plight. 

Hey, Kiddo excerpt

An excerpt from Jarrett’s memoir, “Hey, Kiddo.”

Like Dr. Kleber, I also followed a calling. I didn’t expect to write a memoir about my relationship with my mother and her drug use. But after meeting so many young readers who also walked a similar path in life to the one I had, I truly felt the need to tell my story. It’s why I was also moved to work on this Google Doodle honoring Dr. Kleber. I hope that both offer people, especially young readers, the opportunity to see their experiences reflected in media that is visible to all.

I hear from my readers often as they recount their own complicated relationships with a parent suffering from opioid addiction disorder. When they ask what it was like to write and draw a book that recounts so many traumatic moments, I let them know that it helped me come to an important conclusion: My mother wasn’t the antagonist in the story of my life. The drugs were.

Make the Palace of Versailles yours on Google Arts & Culture

One of the first things I saw when I arrived at the Palace of Versailles in 2011 was a construction site. In partnership with Google, we were building the History Gallery, an exhibit that brought together our art collections and digital reconstructions of the palace in 3D. The History Gallery gave people a better understanding of Versailles, and eight years later, the partnership between the Palace of Versailles and Google Arts & Culture continues to give everyone access to this cultural treasure through technology. Today, we’re launching a new online exhibition for everyone who can’t make it to Paris or who wants to explore this majestic place in a new way: Versailles: The Palace is Yours.

Our new app VersaillesVR—a technological first in the cultural world—takes visitors on a virtual reality tour of the Royal Grand Apartments, the Chapel and the Opera. To capture the imagery, we used photogrammetry—a technology that reconstructs three-dimensional models of objects and landmarks from two-dimensional photographs. It’s an invitation to discover the secrets of Versailles, and a magnificent sneak peek for those who might plan to visit in person. Though nothing will ever replace the emotion of actually stepping into the Palace, we hope this visual immersion might inspire you to do just that.

There are also 18 new online exhibitions featuring 340 artworks—including portraits of the royal family digitized in ultra high resolution and archival photos of Versailles dating from the 19th century—as well as 18 never-before-seen 3D models of iconic rooms and objects. Explore the 73-meter-long Hall of Mirrors, the King’s Bed or Marie-Antoinette’s jewelry cabinet.

Versailles has always been an incredible place to visit. Today, opening the doors of Versailles to the world means opening them virtually, too.

Google Street View Dives into the Largest Fringing Coral Reef in the World

On the west coast of Australia is an ocean paradise home to 300 species of coral, 500 fish species and megafauna such as whale sharks. It’s the Ningaloo Coast, and this is where the Google Street View team has spent the last 10 days capturing imagery -- adding to the more than 170 billion images from 87 countries already collected.

The Ningaloo Coast is World Heritage listed and was named a Hope Spot by Mission Blue this year, meaning it’s critical to the health of the ocean. The good news is much of the reef is currently still healthy. This Street View capture is a chance to document its current condition and keep track of how it's evolving. And by raising awareness and making sure that as many people as possible see this natural wonder, and get to understand its significance, we hope to do our bit to help protect this incredible place.
Kerstin Stender from Parks and Wildlife Service WA captures Turquoise Bay with Google Street View Trekker. 
Street View imagery is gathered in a number of ways: some places are captured by the Street View car, others by the Trekker, or we can dive beneath the waves with 360-degree cameras. Partnering with Parks and Wildlife Service WA and not-for-profit Underwater Earth, we captured Ningaloo from every angle, collecting imagery above, below and along the coast.
Kerstin Stender from Parks and Wildlife Service WA and the Google Street View Trekker keeping an eye out for whale sharks as they cruise the Ningaloo Coast. 
We trekked hundreds of kilometres of National Park lands and beaches. On the water, we watched for whale sharks, humpback whales, turtles, and more. Then, we swam in the pristine waters of the Indian Ocean to capture images and learn about this unique part of the world.
Christophe Bailhache from Underwater Earth says G'Day to a leopard shark as he captures the Ningaloo Reef for Google Street View. 
Whether you’re in Newcastle, Naples, Napa or Nairobi, in the coming months you’ll be able to experience and explore the magic of the Ningaloo Coast on Google Street View and Google Earth, without getting your feet wet.



Photo credit: Sam Venn Photography

Unleashing Open Source Silicon

Open Source Silicon

We all know that open source software has changed the fundamental nature of the software industry and that Google generously adds fuel to this culture of openness and community through Google Summer of Code. What few people realize is that there is another major industry that is ripe for an open source overhaul—the silicon industry. And, this summer, a Google Summer of Code student helped open the floodgates.

If you search social media for “open source silicon,” you’ll find a few dozen names that pop up with some frequency. These folks are fanatically driving forward with open source circuit models and software for creating them. You’ll also find people clamoring to jump aboard the RISC-V bandwagon. RISC-V, like x86, MIPS, and others before it, is a CPU “instruction set architecture,” and the mere fact that it is free of proprietary licenses has inspired countless open source implementations and an industry shake-up that has ARM quaking in its boots.

While this open source silicon community is a hotbed of enthusiasm, it is several decades behind the world of open source software. In this post, I’ll reveal the three reasons this movement has, thus far, not been able to take off like open source software, and I’ll explain why these three obstacles are all coming to a very sudden and dramatic end that will unleash a tidal wave, catching the silicon industry by surprise. And you’ll see that Google Summer of Code, this year, played a pivotal role.

What’s Standing in the Way

So, why are coding and sharing circuit models any different from sharing software? Three reasons:
  1. Implementation Details: There’s more to worry about with hardware than software. Correct functionality is far from the only concern. Particular care must be given to physical implementation. And this detail must be modified for specific silicon technology and design constraints. As a result, leveraging open source logic can involve a substantial amount of rework.
  2. Access to software: While compilers for software tend to be open source, electronic design automation (EDA) tools for compiling hardware are traditionally proprietary and prohibitively expensive.
  3. Access to hardware: Unlike software, circuit models must be turned into silicon to be useful. Fabricating custom silicon is out of the question for a hobbyist, but field-programmable gate arrays (FPGAs) provide a more realistic option. These are chips that can be quickly reconfigured, or “programmed,” to implement any logic function. While FPGAs are within reach, they still cost money, and they are delivered by postal service, not a web browser. And, worst of all, it could take weeks to get an FPGA platform up and running and communicating with the open source logic.

Breaking Down the Barriers

Let’s look at what the open source community is doing to help:
  1. Implementation Details: There is a trend toward designing more abstractly, and leaving the details to tools. Open source tools can now compile C++ into silicon (with caveats). And several open source hardware description languages leverage modern software language innovations that make it easier to rework implementation details. The open source community has shown a greater willingness than industry to explore and adopt these languages. Though hardware remains fundamentally different from software, their differences are becoming less prominent.
  2. Access to software: Open source EDA software has marked some significant achievements in the past several years. Circuit designs have been implemented on FPGAs using 100% free and open source EDA tools. (Google Summer of Code has helped to fund a few open source EDA capabilities in projects under the Free and Open Source Silicon Foundation.) The US government has recognized the opportunity and is providing significant fuel to the fire through the Posh Open Source Hardware initiative. Being restricted to open source software can still be a bit limiting, but it is no longer prohibitive.
  3. Access to hardware: Hmmm. This is still a problem.
My personal contributions to this open source silicon movement stem from my startup, Redwood EDA. We directly target problem #1 by providing tools that support advanced (yet simpler) circuit modeling techniques. And, to address #2, we make all of our software freely available online for open source development. But neither open source EDA nor the efforts of my startup had been able to noticeably impact problem #3, access to hardware.

This is where bigger forces have stepped in. In the past few years, cloud providers have begun incorporating FPGAs into their datacenters. These are available to anyone with an internet connection and a credit card, bundled with industry-class EDA software, on a pay-per-use basis. Wow! This is the solution to hardware access! An open source developer can provide not only their hardware model but also the platform for which their model was designed. A user can download and go, just like they can with software! …in theory.

So here’s the rub. The learning curve for cloud FPGA platforms has been way too high for the open source community to latch on.

Our Project

With a bit of help from Politecnico di Milano’s NECST Lab and ThroughPuter Inc., I was able to get a project off the ground, and it attracted some attention for this year’s Google Summer of Code. I was happy to see an application from Ákos Hadnagy, who had done some other ground-breaking work with me in the last Summer of Code, and he was accepted into the program. Together, this summer, we built infrastructure, automated flows, and wrote documentation (or, more to the point, eliminated documentation), and now, instead of a month of ramp-up, it is possible to develop for this platform in a matter of minutes!
We dubbed our framework “1st CLaaS,” where we have coined the term “CLaaS” for custom logic as a service. Very simply, 1st CLaaS wraps a developer’s custom FPGA logic as a microservice. Standard web protocols can be used to stream bits to and from this logic, and platform details are hidden by the framework.
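To illustrate the framing idea on the client side, a helper might pack an arbitrary byte stream into the fixed-width words an FPGA kernel consumes, and unpack the response. The 64-byte (512-bit) chunk width and zero-padding scheme below are illustrative assumptions, not necessarily 1st CLaaS's actual wire format:

```python
def pack_chunks(data: bytes, chunk_bytes: int = 64) -> list[bytes]:
    """Split a byte stream into fixed-width chunks, zero-padding the last.

    FPGA kernels typically consume data in fixed-width words (64 bytes,
    i.e. 512 bits, is a common AXI stream width); the web framework's
    job is to frame arbitrary client data into such words and back.
    """
    chunks = []
    for i in range(0, len(data), chunk_bytes):
        chunk = data[i:i + chunk_bytes]
        chunks.append(chunk.ljust(chunk_bytes, b"\x00"))
    return chunks

def unpack_chunks(chunks: list[bytes], length: int) -> bytes:
    """Reassemble the original byte stream, dropping the padding."""
    return b"".join(chunks)[:length]

# Round-trip a small payload through the framing.
payload = b"hello, fpga microservice"
chunks = pack_chunks(payload)
assert unpack_chunks(chunks, len(payload)) == payload
print(len(chunks), len(chunks[0]))  # 1 64
```

With framing like this handled by the framework, the developer's custom logic only ever sees a stream of fixed-width words, regardless of what the web client sent.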

Implications and Wrap-up

So there is no longer anything standing in the way! Hobbyists can build and share hardware, and open source silicon can thrive. Just imagine the disruption this will cause in an industry currently driven by corporate giants. And with easy web integration, the opportunity and demand for hardware acceleration should rise, and we could start to see some interesting new capabilities on the web that were not imaginable until now.

Google certainly didn’t have this specific industry transformation in mind when starting Google Summer of Code, but I suspect the whole point of the program was to inspire and enable the unexpected. And it did!

If you’d like to contribute to 1st CLaaS or collaborate on some of the world’s first FPGA-accelerated web applications, we’d be more than happy to have you involved. I look forward to next year's applications.

By Steve Hoover, Redwood EDA, Google Summer of Code mentor

Stable Channel Update for Chrome OS

The Stable channel is being updated to 77.0.3865.105 (Platform version: 12371.75.0) for most Chrome OS devices. This build contains a number of bug fixes and security updates. Systems will be receiving updates over the next several days.

You can review new features here.

If you find new issues, please let us know by visiting our forum or filing a bug. Interested in switching channels? Find out how. You can submit feedback using 'Report an issue...' in the Chrome menu (3 vertical dots in the upper right corner of the browser).

Daniel Gagnon
Google Chrome OS