Manage legal matters and holds programmatically with the Google Vault API

Google Vault can help your organization meet its legal needs by offering an easy way to preserve and retrieve user data. To harness the full potential of Vault, however, you may need to integrate its functionality with other tools and processes that your business employs. Today, we’re making that possible with the Google Vault API.

The Vault API will allow you to programmatically manage legal matters and holds. This means that you will be able to create, list, update, and delete matters and holds related to data supported by Vault.
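If you use the Google API Python client, a basic matter-and-hold workflow might look roughly like the sketch below. This is a minimal illustration rather than official sample code: the matter name, hold name, and account ID are placeholders, and it assumes you have already obtained credentials authorized for the Vault (ediscovery) scope.

```python
# Minimal sketch: create a matter, then place a Mail hold on one account.
# Assumes `credentials` were obtained elsewhere with the
# https://www.googleapis.com/auth/ediscovery scope.
from googleapiclient.discovery import build

def create_matter_with_hold(credentials):
    vault = build('vault', 'v1', credentials=credentials)

    # Create a new matter.
    matter = vault.matters().create(body={
        'name': 'Example litigation matter',        # placeholder name
        'description': 'Preserve mail for this case',
    }).execute()

    # Place a hold on Mail data for one account within that matter.
    hold = vault.matters().holds().create(
        matterId=matter['matterId'],
        body={
            'name': 'Example custodian hold',       # placeholder name
            'corpus': 'MAIL',
            'accounts': [{'accountId': '1234567890'}],  # placeholder user ID
        },
    ).execute()

    # Matters and holds can likewise be listed, updated, and deleted,
    # for example with vault.matters().list().execute().
    return matter, hold
```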

For more information on the Vault API, check out the Developer’s Guide. We’ll add more features to the API in the future, so stay tuned.

Launch Details
Release track:
Launching to both Rapid Release and Scheduled Release

Editions:
Available to all G Suite Business, Education, and Enterprise editions, as well as any G Suite users with the Vault add-on license

Rollout pace:
Full rollout (1–3 days for feature visibility)

Impact:
Admins, developers, and users with the Vault add-on license

Action:
Admin action suggested/FYI

More Information
Google Vault API Developers Site



G4NP Around the Globe – Zooming in on Action Against Hunger

Every dollar and every minute counts when you’re working to further your cause and focus on your mission. We’re pleased to highlight nonprofits that were able to make a greater impact with fewer resources by using Google tools—from G Suite to Google Ad Grants—made available through Google for Nonprofits (G4NP) at no charge.

Varying in size, scope, and time zone, these nonprofits from around the world share one thing in common: they use the G4NP suite of tools to meet their specific needs. G4NP offers nonprofit organizations across 50 countries access to Google tools like Gmail, Google Calendar, Google Ad Grants, and more at no cost. This week, we’ll take a look at how the nonprofit Action Against Hunger uses these tools to increase productivity, visibility, and donations in order to improve lives in the communities they serve.

Action Against Hunger

In 2016 alone, Action Against Hunger provided nourishment to over 1.5 million starving children(1). In order to save lives with nutritional programs, Action Against Hunger looked to Google for aid—not for food, but for technology. Action Against Hunger now utilizes five Google technologies that have drastically improved their ability to save lives around the globe.

Raising Awareness with Google Ad Grants & Analytics

For major international emergencies, like the Ebola outbreak or the South Sudan famine, Action Against Hunger needs a way to inform people and recommend ways to get involved. With Ad Grants, the nonprofit activates targeted keywords relating to the crises to drive people to their page and empower them to take action. Google Analytics then allows them to track their effectiveness and adjust accordingly to increase engagement and improve their fundraising techniques. With this data-driven strategy and the tools’ ability to optimize campaigns, Action Against Hunger has nearly doubled funding year over year. In fact, Ad Grants brought 158,000 people to their website in the past year alone, raising $66,000, which is the equivalent of treating 1,466 hungry children.


Increasing Productivity with G Suite

When working with a global network and managing hundreds of programs abroad, collaboration and communication are key. After experiencing unnecessary delays in their operations, Action Against Hunger adopted G Suite, which streamlined their workflow. The nonprofit is especially fond of Gmail, Hangouts, and Drive, where employees can message each other quickly, share files securely, and collaborate on Docs in real time—avoiding duplication of effort and saving time.

Fundraising with One Today & YouTube

To drive donations and expand awareness to broad audiences, Action Against Hunger uses One Today, a Google app that allows users to easily donate $1 or more to causes they care about. Campaigning on One Today on World Food Day in 2016, Action Against Hunger raised more than $1,200 in support of their cause, with each dollar going directly to helping those in need—the equivalent of feeding 1,000 hungry children. Additionally, Action Against Hunger creates and shares content on YouTube to reach their global audience, and is beginning to use YouTube donation cards to further increase donations. The large exposure and website referrals from both YouTube and Google+ helped Action Against Hunger raise over $20,000.

Using Google products, Action Against Hunger gained extra time and energy to focus on what really matters: feeding the hungry.

To read more about Action Against Hunger’s story and learn how they used Google tools so effectively, visit our Google for Nonprofits Community Stories page. Stay tuned in the coming weeks for more inspirational stories about nonprofits using technology to help their cause.


To see if your nonprofit is eligible to participate, review the Google for Nonprofits eligibility guidelines. Google for Nonprofits offers organizations like yours free access to Google tools like Gmail, Google Calendar, Google Drive, Google Ad Grants, YouTube for Nonprofits and more. These tools can help you reach new donors and volunteers, work more efficiently, and tell your nonprofit’s story. Learn more and enroll here.

Footnote: Statements are provided by nonprofits that received products as part of the Google for Nonprofits program, which offers products at no charge to qualified nonprofits.


Source: Google Cloud


Developer Preview 4 now available, official Android O coming soon!

Posted by Dave Burke, VP of Engineering

As we put the finishing touches on the Android O platform, today we're rolling out Developer Preview 4 to help you make sure your apps are ready.

This is the final preview before we launch the official Android O platform to consumers later this summer. Take this opportunity to wrap up your testing and publish your updates soon, to give users a smooth transition to Android O.

If you have a device that's enrolled in the Android Beta Program, you'll receive an update to Developer Preview 4 in the next few days. If you haven't enrolled your device yet, just visit the Android Beta site to enroll and get the update.

Watch for more information on the official Android O release soon!

What's in this update?

Developer Preview 4 is a release candidate build of Android O that you can use to complete your development and testing in time for the upcoming official release. It includes the final system behaviors, the latest bug fixes and optimizations, and the final APIs (API level 26), which have been available since Developer Preview 3.

We're releasing the Developer Preview 4 device system images today, together with the stable version of the Android Support Library 26.0.0. Incremental updates to the SDK, tools, and Android Emulator system images are on the way over the next few days.

We're also introducing a new version of the Android Testing Support Library, which includes new features like Android Test Orchestrator, Multiprocess Espresso, and more. Watch for details coming soon.

Test your apps on Android O

Today's Developer Preview 4 system images give you an excellent way to test your current apps on the near-final version of Android O. By testing now, you can make sure your app offers the experience you want as users start to upgrade to the official Android O platform.

Just enroll a supported device in the Android Beta Program to get today's update over the air, install your current app from Google Play, and test the user flows. The app should run and look great, and it should handle the Android O behavior changes properly. In particular, pay attention to background location limits, notification channels, and changes in networking, security, and identifiers.

Once you've resolved any issues, publish your app updates with the current targeting level, so that they're available as users start to receive Android O.

Enhance your apps with Android O features and APIs

Users running the latest versions of Android are typically among the most active in terms of downloading apps, consuming content, and making purchases. They're also more vocal about support for the latest Android features in their favorite apps. With Android O, users are anticipating features like notification channels and dots, shortcut pinning, picture-in-picture, autofill, and others. These features could also help increase engagement with your app as more users upgrade to Android O over time.

With Android O your app can directly pin a specific app shortcut in the launcher to drive engagement.
Notification dots keep users active in your app and let them jump directly to the app's core functions.

Enhancing your apps with Android O features can help you drive engagement with users, offer new interactions, give them more control and security, and improve performance. Features like adaptive icons, downloadable fonts, and autosizing TextView can simplify your development and minimize your APK size. Battery is also a top concern for users, so they'll appreciate your app being optimized for background execution limits and other important changes in vital system behavior for O apps.

Visit the O Developer Preview site to learn about all of the new features and APIs and how to build them into your apps.

Speed your development with Android Studio

When you're ready to build for Android O, we recommend updating to the latest version of Android Studio 3.0, available for download from the canary channel. Aside from improved app performance profiling tools, support for the Kotlin programming language, and Gradle build optimizations, Android Studio 3.0 makes it easier to develop with Instant Apps, XML Fonts, Downloadable Fonts, and Adaptive Icons.

We also recommend updating to the stable version of the Android Support Library 26.0.0, available now from Google's Maven repository, and to the latest SDK, tools, and emulator system images, available over the next few days.

You can update your project's compileSdkVersion to API 26 to compile against the official Android O APIs. We also recommend updating your app's targetSdkVersion to API 26 to opt in to and test your app with Android O-specific behavior changes. See the migration guide for details on how to set up your environment to build with Android O.

Publish your updates to Google Play

Google Play is open for apps compiled against or targeting API 26. When you're ready, you can publish your APK updates in your alpha, beta, or production channels.

Make sure that your updated app runs well on Android O as well as older versions. We recommend using Google Play's beta testing feature to get early feedback from a small group of users. Then do a staged rollout. We're looking forward to seeing your app updates!

How to get Developer Preview 4

It's simple to get Developer Preview 4 if you haven't already! Just visit android.com/beta and opt-in your eligible phone or tablet. As always, you can also download and flash this update manually. The O Developer Preview is available for Pixel, Pixel XL, Pixel C, Nexus 5X, Nexus 6P, Nexus Player, and the Android Emulator. Enrolled devices will automatically update when we release the official version of Android O.

Thanks for all of your input throughout the preview. Continue to share your feedback and requests; we love it!

Experience Tunisia’s rich culture with Street View Imagery

My Street View journey took me to Tunisia, home to beautiful sun-soaked beaches, ancient Roman ruins, and Islamic monuments. And now you can explore Tunisia on Street View too.

The first stop is the Amphitheatre of El Djem, the largest Roman amphitheatre in North Africa, located in the heart of Tunisia. This beautiful monument stands in the midst of a lively and vibrant town—El Djem—previously known as “Thysdrus,” a prosperous town during the reign of the Roman Empire.

As you walk through the arena, imagine 35,000 cheering spectators gathered in the auditorium to watch gladiators and lions raised and lowered from cells to meet their fate. As the cheering crowd fades, you are brought back to the present, and the crowd’s roars are replaced with the sound of birds chirping and leaves rustling in the cornerstone of El Djem.

Then I went on to explore the massive city of Carthage, founded in the 9th century B.C. and home to an iconic civilization. It is also the hometown of the famed warrior and military leader Hannibal, who grew to lead victorious battles. Today, Tunisians regard Carthage and the memory of Hannibal with a strong sense of pride. Use Street View to take a stroll through the Theatre of Carthage, the Cisterns of La Malga, the Basilica of Damus al-Karita, and the Baths of Antoninus, which face the stunning view of the Mediterranean.

Next we visited Dougga, an ancient Roman town that was built on a hill and flourished during Roman and Byzantine times. Take a walk through its beautiful ruins, which have been around for more than six centuries, and envision the daily life of people in a typical Roman town. Let the monuments left behind give you a glimpse into the Numidian, Punic, Hellenistic, and Roman cultures. Stroll around the site with Street View and stop to gaze up at the Capitol, a Roman temple dedicated to Rome’s protective triad: Jupiter, Juno, and Minerva.

To delve into some of Tunisia’s beautiful early Islamic architecture, we stopped by Sousse. This gorgeous city lies on the Tunisian Sahel, with monuments to admire such as the Ribat of Sousse as well as the city’s Great Mosque. Take a walk through the vast courtyard of the mosque; the stairs will lead you to the watchtowers, where you can enjoy a beautiful view of the mosque and its surroundings.

Finally, my favorite part of the journey was visiting the different museums spread across Tunisia, including the National Bardo Museum, the Sbeïtla Archaeological Museum, the Utique Museum, and the National Museum of Carthage. The rich collections of artifacts on display tell their own stories, especially the beautiful collection of Roman mosaics in the Bardo. Make sure to take a tour of your own.

We hope we’ve inspired you to take a moment to step into the wonder that is Tunisia. For more highlights from the Tunisia Street View collection, visit Tunisia Highlights.

Source: Google LatLong


CIO’s guide to data analytics and machine learning



Editor's Note: Download the new CIO's guide to data analytics and machine learning here.

Breakthroughs in artificial intelligence (AI) have captured the imaginations of business and technical leaders alike: computers besting human world-champions in board games with more positions than there are atoms in the universe, mastering popular video games, and helping diagnose skin cancer. The AI techniques underlying these breakthroughs are finding diverse application across every industry. Early adopters are seeing results; particularly encouraging is that AI is starting to transform processes in established industries, from retail to financial services to manufacturing.

However, an organization’s effectiveness in applying these breakthroughs is anchored in the basics: a disciplined foundation in capturing, preparing and analyzing data. Data scientists spend up to 80% of their time on the “data wrangling,” “data munging” and “data janitor” work required well before the predictive capabilities promised by AI can be realized.

Capturing, preparing and analyzing data creates the foundation for successful AI initiatives. To help business and IT leaders create this virtuous cycle, Google Cloud has prepared a CIO’s guide to data analytics and machine learning that outlines key enabling technologies at each step. Crucially, the guide illustrates how managed cloud services greatly simplify the journey — regardless of an organization’s maturity in handling big data.

This is important because, for many companies, the more fundamental levels of data management present a larger challenge than new capabilities like AI. “Management teams often assume they can leapfrog best practices for basic data analytics by going directly to adopting artificial intelligence and other advanced technologies,” noted Oliver Wyman consultants Nick Harrison and Deborah O’Neill in a recent Harvard Business Review article (aptly titled If Your Company Isn’t Good at Analytics, It’s Not Ready for AI). “Like it or not, you can’t afford to skip the basics.”

Building on new research and Google’s own long history of contributions to big data, this guide walks readers through each step of the data management cycle, illustrating what’s possible with concrete examples along the way.

Specifically, the CIO’s guide to data analytics and machine learning is designed to help business and IT leaders address some of the essential questions companies face in modernizing data strategy:

  • For my most important business processes, how can I capture raw data to ensure a proper foundation for future business questions? How can I do this cost-effectively?
  • What about unstructured data outside of my operational/transactional databases: raw files, documents, images, system logs, chat and support transcripts, social media?
  • How can I tap the same base of raw data I’ve collected to quickly get answers as new business questions arise?
  • Rather than processing historical data in batch, what about processes where I need a real-time view of the business? How can I easily handle data streaming in real time?
  • How can I unify the scattered silos of data across my organization to provide a current, end-to-end view? What about data stored off-premises in the multiple cloud and SaaS providers I work with?
  • How can I disseminate this capability across my organization — especially to business users, not just developers and data scientists?

Because managed cloud services deal with an organization's sensitive data, security is a top consideration at each step of the data management cycle. From data ingestion into the cloud, followed by storage, preparation and ongoing analysis as additional data flows in, techniques like data encryption and the ability to connect your network directly to Google’s reflect data security best practices that keep data assets safe as they yield insights.

Wherever your company is on its path to data maturity, Google Cloud is here to help. We welcome the opportunity to learn more about your challenges and how we can help you unlock the transformational potential of data.

Teaching Robots to Understand Semantic Concepts



Machine learning can allow robots to acquire complex skills, such as grasping and opening doors. However, learning these skills requires us to manually program reward functions that the robots then attempt to optimize. In contrast, people can understand the goal of a task just from watching someone else do it, or simply by being told what the goal is. We can do this because we draw on our own prior knowledge about the world: when we see someone cut an apple, we understand that the goal is to produce two slices, regardless of what type of apple it is, or what kind of tool is used to cut it. Similarly, if we are told to pick up the apple, we understand which object we are to grab because we can ground the word “apple” in the environment: we know what it means.

These are semantic concepts: salient events like producing two slices, and object categories denoted by words such as “apple.” Can we teach robots to understand semantic concepts, to get them to follow simple commands specified through categorical labels or user-provided examples? In this post, we discuss some of our recent work on robotic learning that combines experience that is autonomously gathered by the robot, which is plentiful but lacks human-provided labels, with human-labeled data that allows a robot to understand semantics. We will describe how robots can use their experience to understand the salient events in a human-provided demonstration, mimic human movements despite the differences between human and robot bodies, and understand semantic categories, like “toy” and “pen,” to pick up objects based on user commands.

Understanding human demonstrations with deep visual features
In the first set of experiments, which appear in our paper Unsupervised Perceptual Rewards for Imitation Learning, our aim is to enable a robot to understand a task, such as opening a door, from seeing only a small number of unlabeled human demonstrations. By analyzing these demonstrations, the robot must understand what semantically salient event constitutes task success, and then use reinforcement learning to perform it.
Examples of human demonstrations (left) and the corresponding robotic imitation (right).
Unsupervised learning on very small datasets is one of the most challenging scenarios in machine learning. To make this feasible, we use deep visual features from a large network trained for image recognition on ImageNet. Such features are known to be sensitive to semantic concepts, while maintaining invariance to nuisance variables such as appearance and lighting. We use these features to interpret user-provided demonstrations, and show that it is indeed possible to learn reward functions in an unsupervised fashion from a few demonstrations and without retraining.
Example of reward functions learned solely from observation for the door opening tasks. Rewards progressively increase from zero to the maximum reward as a task is completed.
After learning a reward function from observation only, we use it to guide a robot to learn a door opening task, using only the images to evaluate the reward function. With the help of an initial kinesthetic demonstration that succeeds about 10% of the time, the robot learns to improve to 100% accuracy using the learned reward function.
Learning progression.
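As a rough illustration of this idea, here is our own minimal sketch (not the paper's code): treat the reward as a similarity score between pretrained image features of the robot's current camera frame and the features of the demonstrations' goal frames. The real method also discovers intermediate steps of the task, which this simplified version omits; the choice of network and frame size below are assumptions.

```python
# Minimal sketch of a perceptual reward from ImageNet-pretrained features.
import numpy as np
import tensorflow as tf

# Pretrained feature extractor; 'avg' pooling yields one vector per image.
feature_net = tf.keras.applications.InceptionV3(
    include_top=False, weights='imagenet', pooling='avg')

def features(images):
    """images: float array of shape [N, 299, 299, 3] with values in [0, 255]."""
    x = tf.keras.applications.inception_v3.preprocess_input(images.copy())
    return feature_net.predict(x, verbose=0)

def make_reward_fn(demo_goal_frames):
    """demo_goal_frames: the final frames of a few demonstrations (task done)."""
    goal_feats = features(demo_goal_frames).mean(axis=0)

    def reward(current_frame):
        f = features(current_frame[None])[0]
        # Reward rises as the current image's features approach the goal features.
        return -float(np.linalg.norm(f - goal_feats))

    return reward
```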
Emulating human movements with self-supervision and imitation
In Time-Contrastive Networks: Self-Supervised Learning from Multi-View Observation, we propose a novel approach to learn about the world from observation and demonstrate it through self-supervised pose imitation. Our approach relies primarily on co-occurrence in time and space for supervision: by training to distinguish frames from different times of a video, it learns to disentangle and organize reality into useful abstract representations.
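Concretely, this supervision can be written as a triplet loss over image embeddings: frames captured at the same moment from different viewpoints should embed close together, while frames from the same video at a different moment should embed far apart. The sketch below is our own simplified rendering of that objective; the embedding network `embed`, the batch construction, and the margin value are assumptions rather than the paper's exact setup.

```python
# Minimal sketch of a time-contrastive (triplet) objective.
import tensorflow as tf

def time_contrastive_loss(embed, anchors, positives, negatives, margin=0.2):
    """anchors/positives: frames from the same moment, different cameras.
    negatives: frames from the same camera at a different moment."""
    a = tf.math.l2_normalize(embed(anchors), axis=-1)
    p = tf.math.l2_normalize(embed(positives), axis=-1)
    n = tf.math.l2_normalize(embed(negatives), axis=-1)

    d_ap = tf.reduce_sum(tf.square(a - p), axis=-1)  # pull co-occurring views together
    d_an = tf.reduce_sum(tf.square(a - n), axis=-1)  # push temporally distant frames apart

    return tf.reduce_mean(tf.maximum(d_ap - d_an + margin, 0.0))
```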

In a pose imitation task for example, different dimensions of the representation may encode for different joints of a human or robotic body. Rather than defining by hand a mapping between human and robot joints (which is ambiguous in the first place because of physiological differences), we let the robot learn to imitate in an end-to-end fashion. When our model is simultaneously trained on human and robot observations, it naturally discovers the correspondence between the two, even though no correspondence is provided. We thus obtain a robot that can imitate human poses without having ever been given a correspondence between humans and robots.
Self-supervised human pose imitation by a robot.
Striking evidence of the benefit of learning end-to-end is the many-to-one, highly non-linear joint mapping shown above. In this example, the up-down motion involves many joints for the human, while only one joint is needed for the robot. We show that the robot has discovered this highly complex mapping on its own, without any explicit human pose information.

Grasping with semantic object categories
The experiments above illustrate how a person can specify a goal for a robot through an example demonstration, in which case the robot must interpret the semantics of the task: salient events and relevant features of the pose. What if, instead of showing the task, the human simply wants to tell the robot what to do? This also requires the robot to understand semantics, in order to identify which objects in the world correspond to the semantic category specified by the user. In End-to-End Learning of Semantic Grasping, we study how a combination of manually labeled and autonomously collected data can be used to perform the task of semantic grasping, where the robot must pick up an object from a cluttered bin that matches a user-specified class label, such as “eraser” or “toy.”
In our semantic grasping setup, the robotic arm is tasked with picking up an object corresponding to a user-provided semantic category (e.g. Legos).
To learn how to perform semantic grasping, our robots first gather a large dataset of grasping data by autonomously attempting to pick up a large variety of objects, as detailed in our previous post and prior work. This data by itself can allow a robot to pick up objects, but doesn’t allow it to understand how to associate them with semantic labels. To enable an understanding of semantics, we again enlist a modest amount of human supervision. Each time a robot successfully grasps an object, it presents it to the camera in a canonical pose, as illustrated below.
The robot presents objects to the camera after grasping. These images can be used to label which object category was picked up.
A subset of these images is then labeled by human labelers. Since the presentation images show the object in a canonical pose, it is easy to then propagate these labels to the remaining presentation images by training a classifier on the labeled examples. The labeled presentation images then tell the robot which object was actually picked up, and it can associate this label, in hindsight, with the images that it observed while picking up that object from the bin.
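One simple way to realize that propagation step (our sketch, not necessarily the exact procedure used in this work) is to fit a lightweight classifier on image features of the human-labeled presentation images and let it label the remaining ones, keeping only confident predictions. The feature representation and the confidence threshold here are assumptions.

```python
# Minimal sketch: propagate a small set of human labels to unlabeled
# presentation images via a simple classifier over image features.
import numpy as np
from sklearn.linear_model import LogisticRegression

def propagate_labels(labeled_feats, labels, unlabeled_feats, min_confidence=0.9):
    """labeled_feats/unlabeled_feats: [N, D] image features (e.g. from a
    pretrained network); labels: [N] object-category labels."""
    clf = LogisticRegression(max_iter=1000).fit(labeled_feats, labels)
    probs = clf.predict_proba(unlabeled_feats)
    pseudo_labels = clf.classes_[probs.argmax(axis=1)]
    confident = probs.max(axis=1) >= min_confidence
    # Keep only confident predictions; the rest stay unlabeled.
    return pseudo_labels, confident
```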

Using this labeled dataset, we can then train a two-stream model that predicts which object will be grasped, conditioned on the current image and the actions that the robot might take. The two-stream model that we employ is inspired by the dorsal-ventral decomposition observed in the human visual cortex, where the ventral stream reasons about the semantic class of objects, while the dorsal stream reasons about the geometry of the grasp. Crucially, the ventral stream can incorporate auxiliary data consisting of labeled images of objects (not necessarily from the robot), while the dorsal stream can incorporate auxiliary data of grasping that does not have semantic labels, allowing the entire system to be trained more effectively using larger amounts of heterogeneously labeled data. In this way, we can combine a limited amount of human labels with a large amount of autonomously collected robotic data to grasp objects based on desired semantic category, as illustrated in the video below:
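To make the division of labor concrete, here is a rough sketch of what a two-stream model of this kind could look like. The architecture, layer sizes, and action encoding are our own illustrative assumptions, not the model described in the paper: a ventral head classifies the object that would be grasped, while a dorsal head scores whether a candidate grasp action will succeed.

```python
# Rough sketch of a two-stream semantic grasping model (illustrative only).
import tensorflow as tf
from tensorflow.keras import layers

def build_two_stream_model(num_classes, image_shape=(224, 224, 3), action_dim=5):
    image = layers.Input(shape=image_shape, name='image')
    action = layers.Input(shape=(action_dim,), name='candidate_grasp')

    # Shared convolutional trunk over the current camera image.
    x = layers.Conv2D(32, 5, strides=2, activation='relu')(image)
    x = layers.Conv2D(64, 3, strides=2, activation='relu')(x)
    x = layers.GlobalAveragePooling2D()(x)

    # Ventral stream: which object category is (or would be) grasped?
    ventral = layers.Dense(128, activation='relu')(x)
    class_logits = layers.Dense(num_classes, name='object_class')(ventral)

    # Dorsal stream: will this candidate grasp succeed, regardless of semantics?
    dorsal = layers.Concatenate()([x, action])
    dorsal = layers.Dense(128, activation='relu')(dorsal)
    grasp_success = layers.Dense(1, activation='sigmoid', name='grasp_success')(dorsal)

    return tf.keras.Model(inputs=[image, action], outputs=[class_logits, grasp_success])
```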
Future Work
Our experiments show how limited semantically labeled data can be combined with data that is collected and labeled automatically by the robots, in order to enable robots to understand events, object categories, and user demonstrations. In the future, we might imagine that robotic systems could be trained with a combination of user-annotated data and ever-increasing autonomously collected datasets, improving robotic capability and easing the engineering burden of designing autonomous robots. Furthermore, as robotic systems collect more and more automatically annotated data in the real world, this data can be used to improve not just robotic systems, but also systems for computer vision, speech recognition, and natural language processing that can all benefit from such large auxiliary data sources.

Of course, we are not the first to consider the intersection of robotics and semantics. Extensive prior work in natural language understanding, robotic perception, grasping, and imitation learning has considered how semantics and action can be combined in a robotic system. However, the experiments we discussed above might point the way to future work into combining self-supervised and human-labeled data in the context of autonomous robotic systems.

Acknowledgements
The research described in this post was performed by Pierre Sermanet, Kelvin Xu, Corey Lynch, Jasmine Hsu, Eric Jang, Sudheendra Vijayanarasimhan, Peter Pastor, Julian Ibarz, and Sergey Levine. We also thank Mrinal Kalakrishnan, Ali Yahya, and Yevgen Chebotar for developing the policy learning framework used for the door task, and John-Michael Burke for conducting experiments for semantic grasping.

Unsupervised Perceptual Rewards for Imitation Learning was presented at RSS 2017 by Kelvin Xu, and Time-Contrastive Networks: Self-Supervised Learning from Multi-View Observation will be presented this week at the CVPR Workshop on Deep Learning for Robotic Vision.

Chatting with the National Spelling Bee champ on her success and what’s next

Last month, Ananya Vinay clinched the National Spelling Bee with the word “marocain.” (I’m guessing she has never needed to use the "Did you mean" feature in Google Search.) When we ascertained that Ananya endeavored to visit the Googleplex, we invited her for lunch and a peregrination around campus. I had the chance to confabulate with her about her alacrity for spelling, her multifarious approach to practicing a preponderance of words, how Google Hangouts helped her maintain equanimity at the Bee, and which venture she plans to vanquish next.

Ananya at Google

Keyword: What was your favorite part of the tour at Google?

Ananya: I really liked seeing the first server (known as the “corkboard server”) at the Visitors Center. Then I got to use Google Earth, and zoomed in on my grandmother’s house in Kerala, India.

If you could work at Google one day, what kind of job would you want to do?

I’d like to work in the division where they do research on AI and medicine. I’d want to diagnose diseases. This summer I went to a camp called “mini medical school” where I got to do a bunch of dissections—I really like that stuff.

We heard you used Google Hangouts to practice for the spelling bee, can you tell us more about that?

There’s a spellers chat on Hangouts, and when you make it to the National Spelling Bee, another speller will add you to the chat. People use the chat to share resources on how to study and quiz each other, which helped expand my knowledge of words. When we used Hangouts Chat (instead of video), autocorrect got in the way of spelling, which is really hilarious. The words are so strange that autocorrect doesn’t recognize them. I’ve beaten autocorrect a lot.

Is there a word that always trips you up? Or does that only happen to me?

When I was younger I always messed up “mozzarella.” Now it’s easier for me to guess words because I go off of language patterns and word rules, so I can figure out a word based on language of origin. There’s a lower chance I’ll miss a word because I have a larger word base.

What’s next? Are you going to keep doing spelling bees?

I can’t compete again because I already won the national competition, but next year I get to open up the Bee. Now I’m going deep into math and science. I’m going into seventh grade, and my new hobby is going to be debate.

If you could have a dress made of marocain, what color would it be?

I’m going to use a spelling bee word: cerulean* (which means sky blue).

*Editor’s Note: While I was taking notes during the interview, Ananya immediately called me out on my misspelling of cerulean (not cirulian, as I thought). She’s good.


The High Five: Live every week like you’ll discover a dinosaur fossil

This week a human races a shark, and a dinosaur was discovered a million years after it walked the Earth. It’s a whole new world out there. Here’s what people are searching for this week:

shark_grey.gif

Phelps has the gold, now he’s going for the White

Shark Week returns Sunday night on the Discovery Channel, and this year it’s going to the next level with a “race” between Olympian Michael Phelps and a great white shark. So far Phelps is beating “great white shark” in search traffic, but all bets are off in the water. Delaware, Rhode Island and Pennsylvania are the regions with the most searches for “Shark Week,” but people are also interested in Amity Island’s resident killer “Jaws,” which was the top searched shark movie of the week.

Stumbling on history

This week’s excavation of a million-year-old Stegomastodon is making news after a boy tripped over its fossilized skull while hiking with his family in New Mexico. Search interest in Stegomastodon went up more than 700 percent, with queries like “What does a stegomastodon look like?” and “How long ago did dinosaurs live?” Even with its moment in the limelight this week, Stegomastodon was searched less than Tyrannosaurus Rex and Velociraptor.

Get those people a croissant

After 23 days, 21 stages, and more than 2,000 miles, cyclists will cross the Tour de France finish line in Paris this weekend. Curious about how that is physically possible, people are searching: “How many rest days are there in the Tour de France?” and “How long is a stage in the Tour de France?” Search interest in “yellow jersey” (worn by the leader of the race and ultimately presented to the winner) spiked 200 percent this week.

O.J. stirs things up

After serving eight years in prison for armed robbery, O.J. Simpson was granted parole this week. Leading up to the hearing, people searched: “What did O.J. Simpson do?” “What time is OJ’s parole hearing?” and “What is a parole hearing?” Search interest in O.J. spiked 350 percent this week, and interest in his now-deceased attorney Robert Kardashian—yup, that Kardashian, father of Kim, Khloe and Kourtney—went up 200 percent.

Harry goes in a new direction

“Dunkirk,” Christopher Nolan’s highly anticipated movie about the World War II battle in which 300,000 troops were evacuated from a French beach, opened in theaters this week. This month, search interest in “Dunkirk evacuation” reached its highest point since 2004, and it spiked more than 200 percent this week alone. People are also looking for info on one cast member in particular: One Direction frontman Harry Styles, who makes his acting debut in the movie. “Harry Styles Dunkirk” was searched 900 percent more than “Harry Styles songs.”


Beta Channel Update for Chrome OS

The Beta channel has been updated to 60.0.3112.72 (Platform version: 9592.66.0) for most Chrome OS devices. This build contains a number of bug fixes, security updates and feature enhancements. A list of changes can be found here.


If you find new issues, please let us know by visiting our forum or filing a bug. Interested in switching channels? Find out how. You can submit feedback using ‘Report an issue...’ in the Chrome menu (3 vertical dots in the upper right corner of the browser). 

Josafat Garcia
Google Chrome