Bigger Rewards for Security Bugs

Chrome has always been built with security at its core, by a passionate worldwide community as part of the Chromium open source project. We're proud that this community includes world-class security researchers who help defend Chrome and other Chromium-based browsers.

Back in 2010, we created the Chrome Vulnerability Rewards Program, which provides cash rewards to researchers for finding and reporting security bugs that help keep our users safe. Since its inception, the program has received over 8,500 reports and paid out over five million dollars! A big thank you to every one of the researchers - it's an honor working with you.

Over the years we've expanded the program, adding rewards for full chain exploits on Chrome OS and launching the Chrome Fuzzer Program, where we run researchers' fuzzers on thousands of Google cores and automatically submit the bugs they find for reward.

Today, we're delighted to announce an across-the-board increase in our reward amounts! Full details can be found on our program rules page, but highlights include tripling the maximum baseline reward amount from $5,000 to $15,000 and doubling the maximum reward amount for high-quality reports from $15,000 to $30,000. The additional bonus given to bugs found by fuzzers running under the Chrome Fuzzer Program is also doubling to $1,000.

We've also clarified what we consider a high quality report, to help reporters get the highest possible reward, and we've updated the bug categories to better reflect the types of bugs that are reported and that we are most interested in.

But that's not all! On Chrome OS we're increasing our standing reward to $150,000 for exploit chains that can compromise a Chromebook or Chromebox with persistence in guest mode. Security bugs in firmware and lock screen bypasses also get their own reward categories.

These new reward amounts will apply to bugs submitted after today on the Chromium bug tracker using the Security template. As always, see the Chrome Vulnerability Reward Program Rules for full details about the program.

In other news, our friends over at the Google Play Security Reward Program have increased their rewards for remote code execution bugs from $5,000 to $20,000, theft of insecure private data from $1,000 to $3,000, and access to protected app components from $1,000 to $3,000. The Google Play Security Reward Program also pays bonus rewards for responsibly disclosing vulnerabilities to participating app developers. Check out the program to learn more and see which apps are in scope.

Happy bug hunting!

YouTube Music now lets listeners switch seamlessly between audio and music videos

Imagine listening to a new track by your favorite artist in the YouTube Music app and having the ability to seamlessly switch over to watch the music video ⁠— no pauses, no interruptions, just a simple transition that keeps the music flowing. That’s exactly what we’re introducing! Today, YouTube Premium and YouTube Music Premium subscribers can now make a seamless transition between a song and its music video for uninterrupted listening and watching.

Switching between songs and music videos is as simple as the tap of a button. Users will notice a video button at the top of the screen as they start listening to a song, and with a simple tap, they can instantly start watching the music video or flip back to the audio at the same point in the track.



This new feature simplifies listening to songs and watching videos, and it brings a few other improvements:

  • Discovering new music videos is easier than ever before. From recent mega-hits to deep cuts, if a song has a video, YouTube Music will surface a video button so switching between audio and visuals is just one click away.
  • YouTube Music has perfectly time-matched over five million official music videos to their respective audio tracks, so no matter when or how often you flip back and forth between the two, you won’t miss a beat.
  • When you flip from video to song, say goodbye to the other sounds that go with the music video — like those long introductions — and enjoy the song as it was intended.
  • Not into music videos? We’ve got you covered. To stick to songs 100% of the time, visit your settings and turn off the music video option by toggling “Don’t play music videos” to the “on” position.


Whether you’re listening to your favorites or checking out new releases, your music experience just got way more interactive. To check out flipping between song and video, along with all the other great features, download the YouTube Music app for Android or iOS, and start your trial of YouTube Music Premium.

Brandon Bilinski, Product Manager, YouTube Music. He's recently been listening to "Happier" by Marshmello & Bastille.

Source: YouTube Blog


A moonlit tribute to a moon landing icon

“There was no choice but to be pioneers.” 

That’s how Margaret Hamilton describes working on the software that put us on the moon. Margaret led the team that developed the onboard flight software for all of NASA’s manned Apollo missions, including Apollo 11’s historic moon landing.

With the anniversary of that moon landing approaching, Google set out to shine a light on Margaret’s influence on Apollo, and on the field of software engineering itself. The tribute was created by positioning over 107,000 mirrors at the Ivanpah Solar Facility in the Mojave Desert to reflect the light of the moon instead of the sun, which the mirrors normally track. The result is a 1.4-square-mile portrait of Margaret, bigger than New York’s Central Park.

At the MIT Instrumentation Lab in the 1960s, Margaret was working on code for the Apollo Guidance Computer. A working mom, she sometimes did what a lot of us do: she took her daughter, Lauren, to the office. Margaret would often test programs in the simulator, and Lauren liked to play astronaut like her mom. One day, Lauren crashed the simulator after she pressed a button that set off a prelaunch program while the mission was in mid-flight. 

Margaret didn’t scold Lauren. Instead, she was struck with a thought: “What if an astronaut did the same thing during a real mission?” Margaret lobbied to add code that would prevent a system crash if an astronaut ever did.

This way of thinking came to define Margaret’s work. She’d always ask, “What if something you never thought would happen, happens?” Then, she’d develop and test a system that would be prepared for that scenario.

Her “what if” mindset was crucial throughout the Apollo missions, where the software had to work perfectly, and had to work the first time, in space. Keep in mind, this was at a time when software engineering literally wasn’t even a thing yet—Margaret herself coined the phrase “software engineering” while working on Apollo.


Margaret, in 1969, standing beside the listings of the actual Apollo Guidance Computer source code. Photo courtesy of the MIT Museum.

Margaret’s mindset most famously paid off moments before Apollo 11 was set to land. The guidance computer was overwhelmed with tasks and underwent a series of restarts, triggering alarms that could have forced an abort. But the team’s software was reliable, and the priority display (that Margaret created, and fought to include) let the astronauts and Mission Control know what they were dealing with. The Eagle was able to land safely, and Neil Armstrong was able to take that one small step.

As the anniversary of that historic moment approaches, we can all thank Margaret for her part in it. But I find myself thanking her for so much more:

For pioneering the field of software engineering. A field that has changed our world. 

For reminding us to think always of the user, and to keep pushing to make software more reliable, and more helpful, for them. 

For inspiring us all to take moonshots, showing us what’s possible when you work tirelessly toward them, and demonstrating what magic can come when you allow a child’s perspective to change the way you think about the world. 

Want to learn more about Margaret and the moon landing? Check out Google Arts & Culture for all kinds of Apollo 11 anniversary stories, including an article about Margaret. Want to teach Margaret’s story in your class? Download a Common Core-aligned lesson plan.

The Compass Experiment is navigating local news in Ohio

I fell in love with journalism while growing up in Ohio, and later while in college at Kent State University. As a student, I tried—and failed—to get an internship at a nearby newspaper I admired, the Youngstown Vindicator. 

But now, 150 years after it started, The Vindicator is closing on August 31. That will leave Youngstown, Ohio, and a larger region of about 500,000 people, without a daily newspaper. The timing of such a loss couldn’t be worse for Youngstown, which has suffered through a tremendous economic downturn over the last 40 years.  

While the area may be struggling financially, Youngstown has a distinct identity and a strong sense of community, which is why we want to help build a path forward for local news. Today, McClatchy announced Youngstown will be the location of The Compass Experiment’s first local news operation, due to launch this fall. 

Compass is a local news lab founded in partnership between McClatchy and Google, and part of the Google News Initiative’s Local Experiments Project. Over the next three years, we will launch and operate three digital-only news operations in small to mid-sized U.S. communities that have limited sources of local, independent journalism. The goal is to not only support the dissemination of news in these communities, but also make the local operations financially self-sustaining, through experimentation with a variety of revenue models. We will also document and share what we've learned with the broader news community, with the intention of creating successful models that can be replicated elsewhere. 

Over the past few weeks, the Compass team has been talking to journalists, community leaders and businesses in the Youngstown area about the area’s news needs. We have found many allies eager to help bring this to life.  

The locations of the remaining Compass sites have not been decided yet. Each site will be independently built and may launch with different platforms and revenue models. All three sites will be 100 percent owned and operated by McClatchy, which has sole editorial control over content. 

In the search for ideal Compass sites, McClatchy has put considerable effort into identifying local markets ripe for innovation in local news. Compass consulted with Penelope Muse Abernathy, the Knight Chair in Journalism and Digital Media Economics at the University of North Carolina and author of a 2018 study on the loss of local journalism in the United States, in analyzing potential communities for the first local digital news sites.  

We at McClatchy are looking forward to continuing our close collaboration with Google as we embark on this next important step. Over the course of the next three years, we will be sharing our successes, failures and lessons learned with the media industry at large.

Compass is currently hiring editorial and business staff from the area to begin work on the Youngstown operation, as well as positions on its central team. In the meantime, please follow along on our Medium page as we develop our Youngstown news operation.

Chrome Beta for Android Update

Hi everyone! We've just released Chrome Beta 76 (76.0.3809.70) for Android: it's now available on Google Play.

You can see a partial list of the changes in the Git log. For details on new features, check out the Chromium blog, and for details on web platform updates, check here.

If you find a new issue, please let us know by filing a bug.

Krishna Govind
Google Chrome

Dev Channel Update for Desktop

The Dev channel has been updated to 77.0.3854.3 for Windows, Mac, and Linux.



A partial list of changes is available in the log. Interested in switching release channels? Find out how. If you find a new issue, please let us know by filing a bug. The community help forum is also a great place to reach out for help or learn about common issues.

Lakshmana Pamarthy
Google Chrome

Parrotron: New Research into Improving Verbal Communication for People with Speech Impairments



Most people take for granted that when they speak, they will be heard and understood. But for the millions who live with speech impairments caused by physical or neurological conditions, trying to communicate with others can be difficult and lead to frustration. While there have been a great number of recent advances in automatic speech recognition (ASR; a.k.a. speech-to-text) technologies, these interfaces can be inaccessible for those with speech impairments. Further, applications that rely on speech recognition as input for text-to-speech synthesis (TTS) can exhibit word substitution, deletion, and insertion errors. Critically, in today’s technological environment, limited access to speech interfaces, such as digital assistants that depend on directly understanding one's speech, means being excluded from state-of-the-art tools and experiences, widening the gap between what those with and without speech impairments can access.

Project Euphonia has demonstrated that speech recognition models can be significantly improved to better transcribe a variety of atypical and dysarthric speech. Today, we are presenting Parrotron, an ongoing research project that continues and extends our effort to build speech technologies to help those with impaired or atypical speech to be understood by both people and devices. Parrotron consists of a single end-to-end deep neural network trained to convert speech from a speaker with atypical speech patterns directly into fluent synthesized speech, without an intermediate step of generating text—skipping speech recognition altogether. Parrotron’s approach is speech-centric, looking at the problem only from the point of view of speech signals—e.g., without visual cues such as lip movements. Through this work, we show that Parrotron can help people with a variety of atypical speech patterns—including those with ALS, deafness, and muscular dystrophy—to be better understood in both human-to-human interactions and by ASR engines.
The Parrotron Speech Conversion Model
Parrotron is an attention-based sequence-to-sequence model trained in two phases using parallel corpora of input/output speech pairs. First, we build a general speech-to-speech conversion model for standard fluent speech, followed by a personalization phase that adjusts the model parameters to the atypical speech patterns from the target speaker. The primary challenge in such a configuration lies in the collection of the parallel training data needed for supervised training, which consists of utterances spoken by many speakers and mapped to the same output speech content spoken by a single speaker. Since it is impractical to have a single speaker record the many hours of training data needed to build a high quality model, Parrotron uses parallel data automatically derived with a TTS system. This allows us to make use of a pre-existing anonymized, transcribed speech recognition corpus to obtain training targets.
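
To make that data-derivation step concrete, here is a minimal Python sketch of pairing each natural utterance with a TTS rendering of its transcript in a single canonical voice. The Utterance type, the synthesize callback, and the toy data are illustrative stand-ins, not the actual Parrotron pipeline.

```python
# Hypothetical sketch: build a parallel speech-to-speech corpus by running a TTS
# system over the transcripts of an existing (anonymized) ASR corpus.
# The corpus format and `synthesize` callback are illustrative stand-ins.
from dataclasses import dataclass
from typing import Callable, List, Tuple

@dataclass
class Utterance:
    audio: List[float]   # waveform samples from the source speaker
    transcript: str      # human transcription of the audio

def build_parallel_corpus(
    asr_corpus: List[Utterance],
    synthesize: Callable[[str], List[float]],
) -> List[Tuple[List[float], List[float]]]:
    """Pair each natural utterance with TTS audio of the same transcript.

    The TTS output always uses one canonical voice, so the pairs map many
    input speakers onto a single, consistent target speaker.
    """
    pairs = []
    for utt in asr_corpus:
        target_audio = synthesize(utt.transcript)  # canonical synthesized voice
        pairs.append((utt.audio, target_audio))
    return pairs

# Toy usage with a dummy TTS stand-in (a real system would call a WaveNet-style TTS).
if __name__ == "__main__":
    dummy_tts = lambda text: [0.0] * (100 * len(text))  # placeholder waveform
    corpus = [Utterance(audio=[0.1, 0.2, 0.3], transcript="hello world")]
    pairs = build_parallel_corpus(corpus, dummy_tts)
    print(len(pairs), len(pairs[0][1]))
```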

The first training phase uses a corpus of ~30,000 hours that consists of millions of anonymized utterance pairs. Each pair includes a natural utterance paired with an automatically synthesized speech utterance that results from running our state-of-the-art Parallel WaveNet TTS system on the transcript of the first. This dataset includes utterances from thousands of speakers spanning hundreds of dialects/accents and acoustic conditions, allowing us to model a large variety of voices, linguistic and non-linguistic contents, accents, and noise conditions with “typical” speech all in the same language. The resulting conversion model projects away all non-linguistic information, including speaker characteristics, and retains only what is being said, not who, where, or how it is said. This base model is used to seed the second personalization phase of training.

The second training phase utilizes a corpus of utterance pairs generated in the same manner as the first dataset. In this case, however, the corpus is used to adapt the network to the acoustic/phonetic, phonotactic and language patterns specific to the input speaker, which might include, for example, learning how the target speaker alters, substitutes, and reduces or removes certain vowels or consonants. To model ALS speech characteristics in general, we use utterances taken from an ALS speech corpus derived from Project Euphonia. If instead we want to personalize the model for a particular speaker, then the utterances are contributed by that person. The larger this corpus is, the better the model is likely to be at correctly converting to fluent speech. Using this second smaller and personalized parallel corpus, we run the neural-training algorithm, updating the parameters of the pre-trained base model to generate the final personalized model.
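
As a rough illustration of this personalization phase, the sketch below continues training a pretrained conversion model on a small, speaker-specific set of spectrogram pairs. The model class, the L1 reconstruction loss, and the hyperparameters are assumptions made for illustration, not the published Parrotron training setup.

```python
# Hypothetical sketch of the personalization phase: fine-tune the pretrained base
# conversion model on a small parallel corpus from one speaker (or speaker group).
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

class SpeechConversionModel(nn.Module):
    """Stand-in for the spectrogram-to-spectrogram conversion network."""
    def __init__(self, n_mels: int = 80):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(n_mels, 256), nn.ReLU(), nn.Linear(256, n_mels))

    def forward(self, x):  # x: (batch, frames, n_mels)
        return self.net(x)

def personalize(base_model: nn.Module, loader: DataLoader, epochs: int = 5) -> nn.Module:
    """Adapt the base model's parameters to one speaker's input/target pairs."""
    optimizer = torch.optim.Adam(base_model.parameters(), lr=1e-4)  # small LR: stay close to base
    loss_fn = nn.L1Loss()  # spectrogram reconstruction loss (assumed)
    base_model.train()
    for _ in range(epochs):
        for src_spec, tgt_spec in loader:
            optimizer.zero_grad()
            loss = loss_fn(base_model(src_spec), tgt_spec)
            loss.backward()
            optimizer.step()
    return base_model

# Toy usage with random spectrograms standing in for the personalized corpus.
if __name__ == "__main__":
    model = SpeechConversionModel()   # in practice: load the pretrained base weights
    src = torch.randn(32, 120, 80)    # (utterances, frames, mel bins)
    tgt = torch.randn(32, 120, 80)
    personalize(model, DataLoader(TensorDataset(src, tgt), batch_size=8))
```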

We found that training the model with a multitask objective to predict the target phonemes while simultaneously generating spectrograms of the target speech led to significant quality improvements. Such a multitask trained encoder can be thought of as learning a latent representation of the input that maintains information about the underlying linguistic content.
Overview of the Parrotron model architecture. An input speech spectrogram is passed through encoder and decoder neural networks to generate an output spectrogram in a new voice.
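
To make the multitask objective concrete, here is a hedged sketch that adds an auxiliary phoneme-prediction head next to the spectrogram output and sums the two losses. For simplicity it maps input frames to output frames directly instead of using the attention-based decoder described above, and the shapes, module names, and 0.3 weight are illustrative assumptions.

```python
# Hypothetical sketch of a multitask objective: predict the output spectrogram
# while an auxiliary head predicts per-frame phoneme targets from encoder states.
import torch
from torch import nn

class MultitaskConverter(nn.Module):
    def __init__(self, n_mels: int = 80, hidden: int = 256, n_phonemes: int = 50):
        super().__init__()
        self.encoder = nn.LSTM(n_mels, hidden, batch_first=True)
        self.spec_decoder = nn.Linear(hidden, n_mels)      # spectrogram frames
        self.phoneme_head = nn.Linear(hidden, n_phonemes)  # auxiliary phoneme logits

    def forward(self, src_spec):                           # (batch, frames, n_mels)
        enc_out, _ = self.encoder(src_spec)                # (batch, frames, hidden)
        return self.spec_decoder(enc_out), self.phoneme_head(enc_out)

def multitask_loss(model, src_spec, tgt_spec, tgt_phonemes, aux_weight=0.3):
    pred_spec, phoneme_logits = model(src_spec)
    spec_loss = nn.functional.l1_loss(pred_spec, tgt_spec)
    phoneme_loss = nn.functional.cross_entropy(
        phoneme_logits.transpose(1, 2), tgt_phonemes)      # (batch, classes, frames) vs (batch, frames)
    return spec_loss + aux_weight * phoneme_loss

# Toy usage with random tensors in place of real aligned data.
if __name__ == "__main__":
    model = MultitaskConverter()
    src, tgt = torch.randn(4, 120, 80), torch.randn(4, 120, 80)
    phonemes = torch.randint(0, 50, (4, 120))
    print(multitask_loss(model, src, tgt, phonemes).item())
```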
Case Studies
To demonstrate a proof of concept, we worked with our fellow Google research scientist and mathematician Dimitri Kanevsky, who was born in Russia to Russian speaking, normal-hearing parents but has been profoundly deaf from a very young age. He learned to speak English as a teenager, by using Russian phonetic representations of English words, learning to pronounce English using transliteration into Russian (e.g., The quick brown fox jumps over the lazy dog => ЗИ КВИК БРАУН ДОГ ЖАМПС ОУВЕР ЛАЙЗИ ДОГ). As a result, Dimitri’s speech is substantially distinct from native English speakers, and can be challenging to comprehend for systems or listeners who are not accustomed to it.

Dimitri recorded a corpus of 15 hours of speech, which was used to adapt the base model to the nuances specific to his speech. The resulting Parrotron system helped him be better understood by both people and Google’s ASR system alike. Running Google’s ASR engine on the output of Parrotron significantly reduced the word error rate from 89% to 32% on a held-out test set from Dimitri. Below is an example of Parrotron’s successful conversion of input speech from Dimitri:

[Audio: input from Dimitri]
[Audio: output from Parrotron]
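
For context on the word error rate figures quoted above, here is a generic textbook implementation of WER as the word-level edit distance (substitutions, insertions, and deletions) divided by the reference length; it is not Google's internal evaluation code.

```python
# Minimal word error rate (WER) computation via word-level Levenshtein distance.
def wer(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between the first i reference words and first j hypothesis words
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution or match
    return dp[len(ref)][len(hyp)] / max(len(ref), 1)

print(wer("the quick brown fox", "the quick brown dog"))  # 0.25
```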

We also worked with Aubrie Lee, a Googler and advocate for disability inclusion, who has muscular dystrophy, a condition that causes progressive muscle weakness and sometimes impacts speech production. Aubrie contributed 1.5 hours of speech, which has been instrumental in demonstrating the promise of this speech-to-speech technology. Below is an example of Parrotron’s successful conversion of input speech from Aubrie:

[Audio: input from Aubrie]
[Audio: output from Parrotron]
[Audio: input from Aubrie]
[Audio: output from Parrotron]

We also tested Parrotron’s performance on speech from speakers with ALS by adapting the pretrained model on multiple speakers who share similar speech characteristics grouped together, rather than on a single speaker. We conducted a preliminary listening study and, for the majority of our test speakers, observed an increase in intelligibility when comparing natural ALS speech to the corresponding speech produced by the Parrotron model.

Cascaded Approach
Project Euphonia has built a personalized speech-to-text model that has reduced the word error rate for a deaf speaker from 89% to 25%, and ongoing research is also likely to improve upon these results. One could use such a speech-to-text model to achieve a similar goal as Parrotron by simply passing its output into a TTS system to synthesize speech from the result. In such a cascaded approach, however, the recognizer may choose an incorrect word (roughly 1 out of 4 times, in this case)—i.e., it may yield words/sentences with unintended meaning and, as a result, the synthesized audio of these words would be far from the speaker’s intention. Given the end-to-end speech-to-speech training objective of Parrotron, even when errors are made, the generated output speech is likely to sound acoustically similar to the input speech; thus the speaker’s original intention is less likely to be significantly altered, and it is often still possible to understand what was intended:

[Audio: input from Dimitri]
[Audio: output from Parrotron]
[Audio: input from Dimitri]
[Audio: output from Parrotron / input to Assistant]
[Audio: output from Assistant]
[Audio: input from Aubrie]
[Audio: output from Parrotron]
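
To contrast the two designs, here is a schematic sketch of a cascaded ASR-plus-TTS pipeline versus a direct speech-to-speech conversion call; the asr, tts, and speech_to_speech callbacks are hypothetical stand-ins, not real APIs.

```python
# Hypothetical sketch contrasting a cascaded ASR -> TTS pipeline with direct
# speech-to-speech conversion. All three systems are dummy stand-ins.
from typing import Callable, List

Waveform = List[float]

def cascaded_convert(audio: Waveform,
                     asr: Callable[[Waveform], str],
                     tts: Callable[[str], Waveform]) -> Waveform:
    # Any word the recognizer gets wrong is synthesized as the wrong word,
    # so recognition errors change the meaning of the output speech.
    transcript = asr(audio)
    return tts(transcript)

def direct_convert(audio: Waveform,
                   speech_to_speech: Callable[[Waveform], Waveform]) -> Waveform:
    # No intermediate text: errors tend to stay acoustically close to the input.
    return speech_to_speech(audio)

# Toy usage with dummy stand-ins for the three systems.
if __name__ == "__main__":
    dummy_asr = lambda audio: "hello world"
    dummy_tts = lambda text: [0.0] * (100 * len(text))
    dummy_s2s = lambda audio: [0.5 * a for a in audio]
    sample = [0.1, 0.2, 0.3]
    print(len(cascaded_convert(sample, dummy_asr, dummy_tts)))
    print(direct_convert(sample, dummy_s2s))
```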

Furthermore, since Parrotron is not strongly biased to producing words from a predefined vocabulary set, input to the model may contain completely new invented words, foreign words/names, and even nonsense words. We observe that feeding Arabic and Spanish utterances into the US-English Parrotron model often results in output which echoes the original speech content with an American accent, in the target voice. Such behavior is qualitatively different from what one would obtain by simply running an ASR followed by a TTS. Finally, we believe that moving from a combination of independently tuned neural networks to a single end-to-end model also offers potentially substantial improvements and simplifications.

Conclusion
Parrotron makes it easier for users with atypical speech to talk to and be understood by other people and by speech interfaces, with its end-to-end speech conversion approach more likely to reproduce the user’s intended speech. More exciting applications of Parrotron are discussed in our paper, and additional audio samples can be found in our GitHub repository. If you would like to participate in this ongoing research, please fill out this short form and volunteer to record a set of phrases. We look forward to working with you!
Acknowledgements
This project was joint work between the Speech and Google Brain teams. Contributors include Fadi Biadsy, Ron Weiss, Pedro Moreno, Dimitri Kanevsky, Ye Jia, Suzan Schwartz, Landis Baker, Zelin Wu, Johan Schalkwyk, Yonghui Wu, Zhifeng Chen, Patrick Nguyen, Aubrie Lee, Andrew Rosenberg, Bhuvana Ramabhadran, Jason Pelecanos, Julie Cattiau, Michael Brenner, Dotan Emanuel and Joel Shor. Our data collection efforts have been vastly accelerated by our collaborations with ALS-TDI.

Source: Google AI Blog


Step up your interviewing game with Byteboard

I’ve worked as a software engineer on Google products like Photos and Maps for four years. But if you asked me to interview for a new role today, I doubt most technical interviews would accurately measure my skills. I would need to find time to comb through my college computer science books, practice coding theory problems like implementing linked lists or traversing a graph, and be prepared to showcase this knowledge on a whiteboard. 

According to a survey we conducted of over 2,500 working software engineers, nearly half of the respondents spent more than 15 hours studying for their technical interviews. Unfortunately, many companies still interview engineers in a way that's entirely disjointed from day-to-day engineering work—valuing access to the time and resources required to prepare over actual job-related knowledge and skills.

As a result, the tech interview process is often inefficient for companies, which sink considerable engineering resources into a process that yields very little insight, and frustrating for candidates, who aren't able to express their full skill-set. 

At Byteboard, a project built inside of Area 120 (Google’s workshop for experimental projects), we’ve redesigned the technical interview experience to be more effective, efficient and equitable for all. Our project-based interview assesses for engineering skills that are actually used on the job. The structured, identity-blind evaluation process enables hiring managers to reliably trust our recommendations, so they have to conduct fewer interviews before reaching a confident hiring decision. For candidates, this means they get to work through the design and implementation of a real-world problem in a real-world coding environment on their own time, without the stress of going through high-pressured theoretical tests. 

An effective interview to assess for on-the-job skills


We built the Byteboard interview by pairing our software engineering skills analysis with extensive academic research on assessment theory and inclusion best practices. Our interview assesses for skills like problem solving, role-related computer science knowledge, code fluency, growth mindset and interpersonal interaction. Byteboard evaluators—software engineers, some with 15+ years of experience—are trained to objectively review each anonymized interview for the presence of 20+ essential software engineering skills, which are converted into a skills profile for each candidate using clear and well-defined rubrics.
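
As a loose illustration of this kind of identity-blind, rubric-based scoring, the sketch below averages per-skill scores from independent anonymized reviews into a skills profile. The skill names, scale, and aggregation are invented for illustration and are not Byteboard's actual rubric.

```python
# Hypothetical sketch: aggregate anonymized rubric scores into a per-candidate
# skills profile. Skill names and the 1-4 scale are illustrative assumptions.
from collections import defaultdict
from statistics import mean
from typing import Dict, List

def skills_profile(reviews: List[Dict[str, int]]) -> Dict[str, float]:
    """Average each skill's rubric scores across independent anonymized reviews."""
    scores = defaultdict(list)
    for review in reviews:
        for skill, score in review.items():
            scores[skill].append(score)
    return {skill: mean(vals) for skill, vals in scores.items()}

# Toy usage: two reviewers score the same anonymized interview.
reviews = [
    {"problem_solving": 4, "code_fluency": 3, "communication": 3},
    {"problem_solving": 3, "code_fluency": 3, "communication": 4},
]
print(skills_profile(reviews))
```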

By providing a more complete understanding of a candidate’s strengths and weaknesses across a range of skills, Byteboard enables hiring managers and recruiters to make data-backed hiring decisions. Early tester Betterment saw their onsite-to-offer rates significantly increase by using Byteboard, indicating its effectiveness at identifying strong candidates for the job.

A more efficient interview to save engineers time


Byteboard offers an end-to-end service that includes developing, administering and evaluating the interviews, letting companies focus on meeting more potential candidates face-to-face and increasing the number of candidates they can interview. Our clients have replaced up to 100 percent of their pre-onsite interviews with the Byteboard interview, allowing them to redirect time toward recruiting candidates directly at places like conferences and college campuses.

An equitable interview format to reduce bias


The Byteboard interview is designed to grant everyone, regardless of gender, race, ethnicity, name, background or education, the same opportunity to demonstrate their skills. Traditional technical interviews tend to test for understanding of theoretical concepts, which often require a big investment of time or resources to study up on. This can create anxiety for candidates who may not have either of those to spare as they are looking for a new job. By focusing on engineering skills that are actually used on the job, Byteboard allows candidates to confidently show off their role-related skills in an environment that is less performative and more similar to how they typically work as engineers. 

“I felt less anxious while doing the interview, and it gave me a more complete view of my strengths and weaknesses than any other interview I've done,” said a recent candidate from Howard University.

The Byteboard Assessment Development team of educators and software engineers develops challenging questions that are tested and calibrated among engineers across a wide range of demographics. Through Byteboard's anonymization and structured evaluation of the interviews, hiring managers can make decisions with confidence without relying on unconscious biases.


With Byteboard, our ultimate goal is to make interviewing better for companies and candidates alike. Companies looking to improve their hiring process can get in touch at byteboard.dev.

When you can’t find the words, 65 new emoji are here for you

Are you a 🥳 person or a 💃 person? Or maybe you're more of a 💝💖💓💞💕💖❣ person than a simple 🥰 person. Either way, it's time to celebrate what is arguably the most important day of the year: World Emoji Day. Never heard of it? That's OK: you can look forward to 65 new emoji that we’re releasing with Android Q later this year. For those who can’t wait, here’s a sneak peek at what’s coming:

A sloth for when you’re having a slow morning and running late but looking cute.


An otter for when you need to tell your significant otter that they are otterly amazing.


Garlic for when you need to fend off some vampires.


Waffle emoji and kneeling emoji. For when you’re proposing your undying commitment and love for … breakfast.


Service Dog emoji and Guide Dog emoji. Just two good boys.


There are a lot of different kinds of couples out there, and our emoji should reflect that. So we designed 71 couples with different skin tones.


The Diya lamp emoji is also new. We’ve had Christmas and Thanksgiving covered for a while—now it’s time for Diwali celebrations.


We’re supporting 53 emoji with gender-inclusive designs. For example, the emoji for “police officer” is commonly displayed as male and “person getting haircut” is female. These kinds of design decisions can reinforce gender stereotypes, so with this update, emoji that don’t specify gender will default to a gender-ambiguous design. You can still choose between male and female presentations if you want to opt into a gender on your keyboard.


These new emoji will officially become available with the launch of Android Q. If you have a phone that is eligible for the Q Beta program, you can access them today by enrolling.

♓🅰️🅿🅿️✌ 〰🅾®️🕒D  📧♏️🔘🌶🕯️ D🅰️✌❕

Source: Android