This release includes stability and performance improvements. You can see a full list of the changes in the Git log. If you find a new issue, please let us know by filing a bug.
TL;DR We increased the Chrome Fuzzer Program bonus from $500 to $1,000 as part of our recent update of reward amounts.
The Chrome Fuzzer Program is part of the Google Chrome Vulnerability Reward Program that lets security researchers run their fuzzers at scale on the ClusterFuzz infrastructure. It makes bug reporting fully automated, and fuzzer authors receive the same rewards as if they had reported the bugs manually, plus an extra bonus ($1,000 as of now) for every new vulnerability.
We run fuzzers indefinitely, and some of the fuzzers contributed years ago are still finding security issues in ever changing Chrome code. This is a win-win for both sides, as security researchers do not have to spend time analyzing the crashes, and Chrome developers receive high quality bug reports automatically.
To learn more about the Chrome Fuzzer Program, let’s talk to Ned Williamson, who’s been a participant since 2017 and now works on the Google Security team.
Q: Hey Ned! It looks like you’ve received over $50,000 by participating in the Google Chrome Vulnerability Reward Program with your quic_stream_factory_fuzzer.
A: Yes, it’s true. I wrote a fuzzer for QUIC which helped me find and report two critical vulnerabilities, each worth $10,000. Because I knew my fuzzer worked well, I submitted it to the Chrome Fuzzer Program. Then, in the next few months, I received that reward three more times (plus a bonus), as the fuzzer caught several security regressions on ClusterFuzz soon after they happened.
Q: Have you intentionally focused on the areas that yield higher severity issues and bigger rewards?
A: Yes. While vulnerabilities in code that is more critical to user security yield larger reward amounts, I actually started by looking at lower severity bugs and incrementally began looking for more severe bugs until I could find critical ones. You can see this progression by looking at the bugs I reported manually as an external researcher.
Q: Would you suggest starting by looking for non-critical bugs?
A: I would say so. Security-critical code is generally better designed and more thoroughly audited, so it might be discouraging to start from there. Finding less critical security bugs and winning bounties is a good way to build confidence and stay motivated.
Q: Can you share an algorithm on how to find security bugs in Chrome?
A: Looking at previous and existing bug reports, even for non-security crashes, is a great way to tell which code is security-critical and potentially buggy. From there, if some code looks like it’s exposed to user inputs, I’d set up a fuzzing campaign against that component. After you gain experience you will not need to rely on existing reports to find new attack surface, which in turn helps you find places that have not been considered by previous researchers. This was the case for my QUIC fuzzer.
Q: How did you learn to write fuzzers?
A: I didn’t have any special knowledge about fuzzing before I started looking for vulnerabilities in Chrome. I followed the documentation in the repository and I still follow the same process today.
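The process described in Chrome's in-repository fuzzing documentation centers on writing a small libFuzzer target. As a rough sketch of that shape (where `ParseRecord` is a hypothetical stand-in for whatever Chrome component you would actually call into):

```cpp
#include <cstddef>
#include <cstdint>
#include <string>

// Hypothetical function under test: parses a length-prefixed record.
// A real Chrome fuzz target would call into the component being fuzzed.
static bool ParseRecord(const uint8_t* data, size_t size) {
  if (size < 1) return false;
  size_t len = data[0];
  if (len > size - 1) return false;  // reject truncated payloads
  std::string payload(reinterpret_cast<const char*>(data + 1), len);
  return !payload.empty();
}

// libFuzzer entry point: called repeatedly with mutated inputs.
// Crashes and sanitizer reports surface as bugs on ClusterFuzz.
extern "C" int LLVMFuzzerTestOneInput(const uint8_t* data, size_t size) {
  ParseRecord(data, size);
  return 0;  // non-zero return values are reserved by libFuzzer
}
```

Built with `-fsanitize=fuzzer,address`, this produces a binary that generates and mutates inputs on its own; the author's job is mostly picking the right entry point to feed.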
The key insight in the QUIC fuzzer was realizing that the parts of the code that handle plaintext messages after decryption were prone to memory corruption. Typically, fuzzing does not perform well with encrypted inputs (it’s pretty hard to “randomly” generate a packet that can be successfully decrypted), so I extended the QUIC testing code to allow for testing with encryption disabled.
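The point of disabling encryption is to let every mutated input reach the frame-parsing layer instead of dying at an authentication check. A minimal sketch of that structure, where `ParsePlaintextFrames` is a hypothetical stand-in for the post-decryption handling code (the real fuzzer reuses QUIC's own test hooks rather than anything shown here):

```cpp
#include <cstddef>
#include <cstdint>

// Hypothetical stand-in for a post-decryption frame parser. With
// encryption disabled in the test harness, fuzz inputs land here
// directly instead of failing decryption first.
static void ParsePlaintextFrames(const uint8_t* data, size_t size) {
  size_t pos = 0;
  while (pos < size) {
    uint8_t frame_type = data[pos++];
    if (frame_type == 0x01 && pos + 2 <= size) {
      pos += 2;  // consume a two-byte frame header field
    }
  }
}

extern "C" int LLVMFuzzerTestOneInput(const uint8_t* data, size_t size) {
  // Every input exercises frame-parsing logic, which is where the
  // memory-corruption bugs were suspected to live.
  ParsePlaintextFrames(data, size);
  return 0;
}
```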
Q: Are there any other good examples of fuzz targets employing a similar logic?
A: Another example is pdf_formcalc_context_fuzzer, which wraps the fuzzing input in a valid hardcoded PDF file, thereby focusing fuzzing on just the XFA script part of it. As a researcher, you just need to choose what exactly you want to fuzz, and then understand how to execute that code properly. Looking at the unit tests is usually the easiest way to gain that understanding.
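The wrapping trick can be sketched like this. The template strings and `LoadDocumentForTest` below are placeholders, not the real fuzzer's code: the idea is only that the mutated bytes get spliced into a fixed, already-valid document so fuzzing effort goes into the script engine rather than PDF syntax.

```cpp
#include <cstddef>
#include <cstdint>
#include <string>

// Hypothetical document shell with a slot where the script goes.
static const char kDocHeader[] = "%PDF-1.7 <xfa:script>";
static const char kDocFooter[] = "</xfa:script> %%EOF";

// Stand-in for handing the assembled document to the library under test.
static void LoadDocumentForTest(const std::string& doc) {
  (void)doc;
}

extern "C" int LLVMFuzzerTestOneInput(const uint8_t* data, size_t size) {
  // Splice the mutated bytes into the fixed template, so every input is a
  // structurally valid document that reaches the script-handling code.
  std::string doc(kDocHeader);
  if (size != 0) {
    doc.append(reinterpret_cast<const char*>(data), size);
  }
  doc.append(kDocFooter);
  LoadDocumentForTest(doc);
  return 0;
}
```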
Happy fuzzing and bug hunting!
Growing up, I always looked forward to summer and the road trips I’d take with family and friends. It didn’t matter if we were trekking from Chicago to Florida or taking a scenic journey to camp at Boulder Lake in Wisconsin. We’d always make a summer jams soundtrack (on cassette), pack the car full of snacks, and stick our heads out the window to feel the cool breeze.
These days, road trips feature my wife and son, as we explore all that California has to offer, but those old habits have remained the same.
For many people like myself, road trips will always be a quintessential part of summer. If you’re planning to hit the road for an adventure of your own, here are eight ways the Google Assistant can help you safely get things done when you’re behind the wheel (or in the back seat):
Check the weather at your destination by saying “Hey Google, what’s the weather like in Yellowstone this weekend?”
"Hey Google, how's traffic to downtown Charlotte?" will give you the quickest route to your destination.
Give your friends an update on your arrival time by saying, “Hey Google, share my ETA with Ari.”
Stay in touch while you’re on the road by asking, “Hey Google, call Dad.”
“Hey Google, find the nearest gas station” will help you when you need to make a pit stop. Or ask your Assistant, “Hey Google, where’s the nearest coffee shop?” when you need to get your caffeine fix.
Avoid boredom with a podcast or audiobook while you're driving through remote locations. Just say, “Hey Google, play Planet Money.”
Play, pause or skip through your favorite songs from services like YouTube Music, Pandora, and Spotify.
Send text messages with your voice so you can keep your eyes on the road. Just ask the Assistant, “Hey Google, send a text to Jake” or “Hey Google, read my messages.”
And it’s really easy to get started. You can access the Assistant in a variety of places, whether you’re using Google Maps for Android and iOS, Waze for Android, Android Auto, or through the new car accessory, Anker Roav Bolt. Later this year, we’re introducing the Assistant’s new driving mode, a voice-forward dashboard for Android that brings your most relevant activities—like navigation, messaging, calling, and media—front and center.
Bonus tip: When you get home from your trip, you can always pull up specific pictures from your journey from Google Photos by asking the Assistant on your Smart Display. Give it a go by saying, “Hey Google, show me my pictures from Yosemite.”
Buckle up and remember to take plenty of pictures of your trip!
As you hit the road this summer, Android Auto is sporting a new look with features that make driving more simple, personal and helpful. So grab your sunglasses and fill up your tank—here’s what you can expect.
With the new app launcher, you can find all your favorite apps with fewer taps. The bottom left button will open the app launcher, where you'll find the familiar app icons laid out with your most commonly used apps automatically featured in the top row. Just a couple of taps and you can dive into your favorite podcast, rock out to a new song or send a message to Mom.
You'll notice several of the icons have the Google Assistant badge. Tap one of them, and the Assistant will tell you about your calendar, give you the weather report, read you the news or set a reminder for you.
Whether you’re jamming to the greatest hits or deep into an interesting podcast, Android Auto will automatically start playing where you left off. Make sure you check out the many auto-enabled media apps available in Google Play.
Never get lost again with your favorite navigation app easily accessible on your display right when you connect Android Auto. Tap on a suggested location or use the Assistant to start navigating. And if you already have a route queued up on your phone, Android Auto will automatically populate the directions and begin routing you to your destination on your display.
The new navigation bar sits at the bottom of your display and allows you to manage multiple apps more easily. So if you’re listening to music, you won’t miss your next turn; or if you’re following directions, you can still easily pause or skip a song. You can also jump straight to your app running in the background with one tap.
On the bottom right corner, a new notification button houses all of your recent calls, messages and alerts. You can also keep in touch with friends and family, while keeping your eyes on the road. Just long press the mic button on the steering wheel, tap on the mic button on your display or say “Hey Google” to have the Google Assistant help make calls, send messages and read your notifications.
Android Auto is flexible and can morph itself to fit widescreen displays in cars that support it—giving you extra space for step-by-step navigation, media playback and ongoing call controls (dependent on vehicle support). Plus, the new Android Auto improves visibility with easier to read fonts as well as a new dark theme and colorful accents that match your car’s interior.
If your car has Android Auto support, you’ll start to see the new design over the next few weeks. These updates will not be reflected in Android Auto for your phone screen. We will be evolving the phone screen experience from Android Auto to the Assistant’s new driving mode in the future.
Stay tuned for this new update!
When Google Science Fair launched last fall, we challenged students to channel their curiosity and ingenuity to invent, code or build a solution to a problem they’re passionate about. Thousands of students participated, and this weekend we welcomed our 24 finalists—from 14 countries around the world—to Google’s headquarters, where we revealed the winners.
These changemakers tackled issues across sustainability, healthcare, and accessibility. We saw impressive entries that used a variety of STEM disciplines—from using AI to help detect disease in plants to finding new ways to diagnose heart disease.
Ready to find out who the winners are?
We were joined by a panel of judges, including our partners Lego Education, Scientific American, Virgin Galactic and National Geographic. Mariette DiChristina, Editor in Chief of Scientific American and the chief judge for this year’s competition, said grand prize winner Fionn Ferreira’s “tenacity and dedication to solving an important environmental problem embodies the spirit of exploration.” A big thanks to Mariette and the other judges for lending their expertise across science and engineering to help us find the next generation of problem solvers.
Behind every ambitious student are parents and teachers (hats off to you!) who cheer them on, and push them to keep learning. And to the students, you rock. We can’t wait to see what you do next.
I’m a proud and lifelong New Yorker. I’ve seen and done a lot in New York City through all my years of living here, but one of the beauties of living here is that you’re always able to see and do something new. The possibilities are endless.
With the help and recommendations of Google Local Guides, I had the opportunity to explore my city in a new way. Local Guides are the people who share reviews, photos and more on Google Maps to help you uncover the best parts of your city. Through the recommendations they’ve shared on what to eat, see and do, I discovered everything from the best bagels to the best free activity in town.
Hands-down, my favorite part of this experience was checking out the best view of Manhattan at Cantor Roof Garden Bar on the roof of the Metropolitan Museum of Art. Local Guides recommended this for an awesome view that isn’t swarming with tourists. There’s nothing like taking in the New York City skyline with an ice-cold drink in hand, surrounded by a beautiful garden and sculptures.
I also got to check out some unique, quirky and decidedly non-touristy souvenirs at Fishs Eddy, where local artists are behind many of the designs. And of course, I ate some of the most delicious food New York City has to offer, including the classic matzo ball soup from Russ & Daughters Cafe and an unlimited table-side service of pasta at Becco in Times Square.
It was refreshing to see my city with a new perspective. I assumed I'd seen it all, but I learned I needed to open my mind to new experiences. Being a tourist in my own city gave me a new appreciation for the things I walk past every day. If you want to see all the spots I visited while taking in New York City, you can watch the video and follow this Google Maps list full of Local Guides’ recommendations. That way you can experience New York like a local, even if you’re not one.
Figure caption: First, the Transformer model is applied to an input sentence (lower left) and, in conjunction with the target output sentence (above right) and target input sentence (middle right; beginning with the placeholder “<sos>”), the translation loss is calculated. The AdvGen function then takes the source sentence, word selection distribution, word candidates, and the translation loss as inputs to construct an adversarial source example.
Figure caption: In the defense stage, the adversarial source example serves as input to the Transformer model, and the translation loss is calculated. AdvGen then uses the same method as above to generate an adversarial target example from the target input.
Table caption: Comparison of Transformer model (Vaswani et al., 2017) on standard benchmarks.
Table caption: Comparison of Transformer, Miyao et al. and Cheng et al. on artificial noisy inputs.