Monthly Archives: January 2020

Beta Channel Update for Chrome OS

The Beta channel has been updated to 80.0.3987.67 (Platform version: 12739.44.0) for most Chrome OS devices. This build contains a number of bug fixes, security updates and feature enhancements. Changes can be viewed here.


If you find new issues, please let us know by visiting our forum or filing a bug. Interested in switching channels? Find out how. You can submit feedback using ‘Report an issue...’ in the Chrome menu (3 vertical dots in the upper right corner of the browser).

Daniel Gagnon
Google Chrome

Discovering millions of datasets on the web

Across the web, there are millions of datasets about nearly any subject that interests you. If you’re looking to buy a puppy, you could find datasets compiling complaints of puppy buyers or studies on puppy cognition. Or if you like skiing, you could find data on revenue of ski resorts or injury rates and participation numbers. Dataset Search has indexed almost 25 million of these datasets, giving you a single place to search for datasets and find links to where the data is. Over the past year, people have tried it out and provided feedback, and now Dataset Search is officially out of beta.


Some of the search results for the query "skiing," which include datasets ranging from speeds of the fastest skiers to revenues of ski resorts.

What's new in Dataset Search?

Based on what we’ve learned from the early adopters of Dataset Search, we’ve added new features. You can now filter the results based on the types of dataset that you want (e.g., tables, images, text), or whether the dataset is available for free from the provider. If a dataset is about a geographic area, you can view it on a map. Plus, the product is now available on mobile and we’ve significantly improved the quality of dataset descriptions. One thing hasn't changed, however: anybody who publishes data can make their datasets discoverable in Dataset Search by using an open standard (schema.org) to describe the properties of their dataset on their own web page.
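As a concrete illustration of what a schema.org description might look like, here is a minimal sketch that builds one as JSON-LD. The dataset name, license, and URLs below are made-up placeholders, not examples from Dataset Search; publishers would embed output like this in a `<script type="application/ld+json">` tag on the page describing their dataset.

```python
import json

# Illustrative schema.org Dataset description (all values are
# hypothetical placeholders). The "@type": "Dataset" annotation is
# what marks the page's data as a dataset for crawlers.
dataset_markup = {
    "@context": "https://schema.org",
    "@type": "Dataset",
    "name": "Ski Resort Injury Rates 2010-2019",
    "description": "Yearly injury counts and participation numbers "
                   "for example ski resorts.",
    "license": "https://creativecommons.org/licenses/by/4.0/",
    "isAccessibleForFree": True,
    "distribution": [{
        "@type": "DataDownload",
        "encodingFormat": "text/csv",
        "contentUrl": "https://example.org/data/ski-injuries.csv",
    }],
}

json_ld = json.dumps(dataset_markup, indent=2)
print(json_ld)
```

Properties such as `isAccessibleForFree` and the `encodingFormat` of each `DataDownload` are what allow the free-vs-paid and format filters described above to work.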

We have also learned how many different types of people look for data. There are academic researchers, finding data to develop their hypotheses (e.g., try oxytocin), students looking for free data in a tabular format, covering the topic of their senior thesis (e.g., try incarceration rates with the corresponding filters), business analysts and data scientists looking for information on mobile apps or fast food establishments, and so on. There is data on all of that! And what do our users ask? The most common queries include "education," "weather," "cancer," "crime," "soccer," and, yes, "dogs".


Some of the search results for the query "fast food establishment."

What datasets can you find in Dataset Search?

Dataset Search also gives us a snapshot of the data out there on the Web. Here are a few highlights. The largest topics that the datasets cover are geosciences, biology, and agriculture. The majority of governments in the world publish their data and describe it with schema.org. The United States leads in the number of open government datasets available, with more than 2 million. And the most popular data formats? Tables: you can find more than 6 million of them on Dataset Search.

The number of datasets that you can find in Dataset Search continues to grow. If you have a dataset on your site and you describe it using schema.org, an open standard, others can find it in Dataset Search. If you know that a dataset exists, but you can't find it in Dataset Search, ask the provider to add the schema.org descriptions and others will be able to learn about their dataset as well.

What's next?

Dataset Search is out of beta, but we will continue to improve the product, whether or not it has the "beta" next to it. If you haven't already, take Dataset Search for a spin, and tell us what you think.

Source: Search


Chrome Beta for Android Update

Hi everyone! We've just released Chrome Beta 80 (80.0.3987.68) for Android: it's now available on Google Play.

You can see a partial list of the changes in the Git log. For details on new features, check out the Chromium blog, and for details on web platform updates, check here.

If you find a new issue, please let us know by filing a bug.

Krishna Govind
Google Chrome

Stable Channel Update for Chrome OS

The Stable channel is being updated to 79.0.3945.123 (Platform version: 12607.82.0) for most Chrome OS devices. This build contains a number of bug fixes and security updates. Systems will be receiving updates over the next several days.

If you find new issues, please let us know by visiting our forum or filing a bug. Interested in switching channels? Find out how. You can submit feedback using ‘Report an issue...’ in the Chrome menu (3 vertical dots in the upper right corner of the browser).


Cindy Bayless
Google Chrome OS

View data for only selected call participants in the Meet Quality Tool

Quick launch summary

You can now select specific participants when viewing meetings in the Meet Quality Tool. This allows you to display data and statistics for just a subset of the participants. When viewing calls with many participants, this helps limit the amount of information displayed on the screen at one time. By showing just the most relevant information in the view, the tool can often avoid pagination even for very large meetings.

Getting started

Admins: This feature will be available by default when using the Meet Quality Tool. To select participants, use the participant list on the left-hand side of the Meeting Details page. As selections are made, the information displayed to the right will update accordingly.

End users: This feature has no impact on end users.

Rollout pace

Availability

  • Available to all G Suite customers

Resources


Beta Channel Update for Desktop

The beta channel has been updated to 80.0.3987.66 for Windows, Mac, and Linux.

A full list of changes in this build is available in the log. Interested in switching release channels? Find out how here. If you find a new issue, please let us know by filing a bug. The community help forum is also a great place to reach out for help or learn about common issues.



Srinivas Sista
Google Chrome

A fresh way to revisit your online finds in Google Search

Remember that chicken parmesan recipe you found online last week? Or that rain jacket you discovered when you were researching camping gear? Sometimes when you find something on Search, you’re not quite ready to take the next step, like cooking a meal or making a purchase. And if you’re like me, you might not save every page you want to revisit later. 


Today, we’re launching some changes to Collections in Search to make it easier to jump back into your task without digging through your search history. Last year, we created activity cards in Search to make your search history more useful, and to help you pick up where you left off. Using AI, Collections in the Google app and mobile web now groups similar pages you've visited from Search related to activities like cooking, shopping and hobbies. You can choose to save these suggested collections so you can come back to them later. 


These suggestions can be accessed any time from the Collections tab in the Google app (new on Android!), or through the Google.com side menu on the mobile web. And if you don't want Google to suggest collections for you, you can control this in your settings, which you can visit right from Collections in the Google app.


Once you have a collection, Google can help you make better, faster decisions by showing you what you might want to check out next. Based on what you’ve saved, you’ll see related websites, images, products, and even related searches so you can explore new aspects of a topic. You’ll find these by clicking on the “Find More” button within a collection.


There’s also a new collaboration feature that lets you share and work on a collection with others. For example, if you’re planning a party with friends, you might want to share the recipes you’re considering, or the decorations you want to use so you can make a decision together. When sharing a collection, you'll have the option to let others view it or to let others make changes. And you can always make it private again if you don't want to share it anymore. 

The ability to see related content and to share or collaborate on a collection is now available globally. Suggested collections will start to appear for U.S. English users beginning today. We’ll look to bring these features to more languages and regions over time.


Whether you're planning an event, plotting a garden renovation or tracking down tips for tidying up, jump start your next project with Collections. 

Source: Search


Releasing the Drosophila Hemibrain Connectome — The Largest Synapse-Resolution Map of Brain Connectivity



A fundamental way to describe a complex system is to measure its “network” — the way individual parts connect and communicate with each other. For example, biologists study gene networks, social scientists study social networks and even search engines rely, in part, on analyzing the way web pages form a network by linking to one another.

In neuroscience, a long-standing hypothesis is that the connectivity between brain cells plays a major role in the function of the brain. While technical difficulties have historically been a barrier for neuroscientists trying to study brain networks in detail, this is beginning to change. Last year, we announced the first nanometer-resolution automated reconstruction of an entire fruit fly brain, which focused on the individual shape of the cells. However, this accomplishment didn't reveal information about their connectivity.

Today, in collaboration with the FlyEM team at HHMI’s Janelia Research Campus and several other research partners, we are releasing the “hemibrain” connectome, a highly detailed map of neuronal connectivity in the fly brain, along with a suite of tools for visualization and analysis. The hemibrain is derived from a 3D image of roughly half the fly brain, and contains verified connectivity between ~25,000 neurons that form more than twenty million connections. To date, this is the largest synapse-resolution map of brain connectivity that has ever been produced, in any species. The goal of this project has been to produce a public resource that any scientist can use to advance their own work, similar to the fly genome, which was released twenty years ago and has become a fundamental tool in biology.
Fly brain regions contained within the hemibrain connectome. Also available: interactive version (example region: mushroom body).
Imaging, Reconstructing, and Proofreading the Hemibrain Connectome
Over a decade of research and development from numerous research partners was required to overcome the challenges in producing the hemibrain connectome. At Janelia, new methods were developed to stain the fly brain and then divide the tissue into separate 20-micron-thick slabs. Each slab was then imaged at 8×8×8 nm³ voxel resolution using focused ion beam scanning electron microscopes customized for months-long continuous operation. Computational methods were developed to stitch and align the raw data into a coherent 26-trillion pixel 3D volume.
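A quick back-of-envelope check shows how an 8 nm voxel size produces a volume on this scale. The ~240-micron cube used below is our own illustrative assumption, chosen only to show the arithmetic; it is not a measurement from the project.

```python
# Back-of-envelope: how many 8x8x8 nm voxels fit in a roughly
# 240-micron cube (an assumed, illustrative size for half a fly brain)?
voxel_nm = 8
side_um = 240

voxels_per_side = side_um * 1_000 // voxel_nm  # 1 um = 1,000 nm -> 30,000
total_voxels = voxels_per_side ** 3

print(f"{total_voxels:.1e} voxels")  # 2.7e13, i.e. tens of trillions
```

The result, ~2.7 × 10¹³ voxels, lands on the same order of magnitude as the 26-trillion-pixel volume quoted above.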

However, without an accurate 3D reconstruction of the neurons in a fly brain, producing a connectome from this type of imaging data is impossible. After forming a collaboration with Janelia in 2014, Google began working on the fly brain data, focusing on automating 3D reconstruction to work jointly toward producing a connectome. After several iterations of technological development, we devised a method called flood-filling networks (FFNs) and applied it to reconstructing the entire hemibrain dataset. In the current project, we worked closely with our collaborators to optimize the reconstruction results to be more useful for generating a connectome (i.e., embedded within a proofreading and synaptic detection pipeline), rather than just showing the shape of the neurons.
A flood-filling network segmenting (tracing) part of a neuron in the fly hemibrain data.
FFNs were the first automated segmentation technology to yield reconstructions that were sufficiently accurate to enable the overall hemibrain project to proceed. This is because errors in automated reconstruction require correction by expert human “proofreaders,” and previous approaches were estimated to require tens of millions of hours of human effort. With FFNs, the hemibrain was proofread using hundreds of thousands of human hours: a two order-of-magnitude improvement. This (still substantial) proofreading effort was performed over two years by a team of highly skilled and dedicated annotators, using tools and workflows pioneered at Janelia for this purpose. For example, annotators used VR headsets and custom 3D object-editing tools to examine neuron shapes and fix errors in the automated reconstruction. These revisions were then used to retrain the FFN models, leading to revised and more accurate machine output.

Finally, after proofreading, the reconstruction was combined with automated synaptic detection in order to produce the hemibrain connectome. Janelia scientists manually labeled individual synapses and then trained neural network classifiers to automate the task. Generalization was improved through multiple rounds of labeling, and the results from two different network architectures were merged to produce robust classifications across the hemibrain.

Further details about producing the hemibrain can be found in HHMI’s press release.

What Is Being Released?
The focus of today’s announcement is a set of inter-related datasets and tools that enable any interested person to visually and programmatically study the fly connectome. Specifically, the following resources are available:
  • Terabytes of raw data, proofread 3D reconstruction, and synaptic annotations can be interactively visualized or downloaded in bulk.
  • neuPrint, a web-based tool that can be used to query the connectivity, partners, connection strengths, and morphologies of any specified neurons.
  • A downloadable, compact representation of the connectome that is roughly a million-times smaller in bytes than the raw data from which it was derived.
  • Documentation and video tutorials explaining the use of these resources.
  • A pre-print with further details related to the production and analysis of the hemibrain connectome.
Next Steps
Researchers have begun using the hemibrain connectome to develop a more robust understanding of the Drosophila nervous system. For example, a major brain circuit of interest is the “central complex,” which integrates sensory information and is involved in navigation, motor control, and sleep:
A detailed view of “ring neurons” in the central complex of the fly brain, one of many neural circuits that can be studied using the hemibrain reconstruction and connectome. Interactive version: ring neurons and ellipsoid body.
Another circuit that is being intensely studied is the “mushroom body,” a primary site of learning and memory in the Drosophila brain whose detailed structure is contained within the hemibrain connectome (interactive visualization).

Acknowledgements
We would like to acknowledge core contributions from Tim Blakely, Laramie Leavitt, Peter Li, Larry Lindsey, Jeremy Maitin-Shepard (Google), Stuart Berg, Gary Huang, Bill Katz, Chris Ordish, Stephen Plaza, Pat Rivlin, Shin-ya Takemura (Janelia collaborators who worked closely with Google’s team), and other amazing collaborators at Janelia and elsewhere who were involved in the hemibrain project.

Source: Google AI Blog


Solve a Google engineering challenge in Hash Code 2020

Want to solve a Google-inspired engineering problem? Want to meet other developers? Want a chance to visit a Google office? Well good news! Hash Code, Google’s team programming competition, is back for 2020!

This is the 7th edition of Hash Code, and while the competition has grown over the years, one thing has stayed the same: its focus on real-world problems that can be solved with technology. In the past, developers have put their heads together to tackle challenges focused on YouTube, self-driving cars, compiling code at Google scale, and more! We asked one of the founding Hash Code engineers, Przemek Pietrzkiewicz, to share his favorite past challenges:

Przemek on stage at a Hash Code world finals.

1. Routing Street View cars, Hash Code 2014
"One of my favorites has to be the first ever Hash Code problem. In this challenge, teams were given a description of a city (the actual data set was an approximate representation of Paris) and asked to schedule itineraries for a fleet of Street View’s image-capturing cars.The objective was to photograph every street in the city as quickly as possible. Since this was the first Hash Code problem, it set the example for those to follow: it was open-ended, challenging, and inspired by Google software engineering — some of my colleagues at Google France worked on this very problem around the same time!"

2. Directing Loon balloons, Final Round, Hash Code 2015
"In this problem, teams had to route Google’s connectivity-providing balloons in order to provide internet coverage to users around the world. This is tricky because these balloons can’t move on their own. While they can control their altitude, they are actually moved by winds in different layers of the atmosphere. I loved this problem because it was fun to generate the data sets — we had to learn about wind and weather, as well as find software libraries that would let us generate pseudo-random, realistic-looking wind maps."

3. Creating a photo slideshow, Online Qualification Round, Hash Code 2019
"This problem tasked teams with arranging a set of photos into an engaging photo slideshow. The Google Home Hub is a “smart display” — among its many features, it serves as a photo frame, displaying photos from your personal collection in a never-ending slideshow. In addition to showing landscape (horizontal) photos, the device can also find interesting pairs of portrait (vertical) photos and combine them together on a single slide. I’m an avid user of this product and thought it was a neat idea for a Hash Code challenge. I’m really happy we used it!"
Hash Code participants during the finals.

Interested in tackling a challenge like these? Then head over to g.co/hashcode now to register for the Online Qualification Round on February 20. For this round, your team can participate from wherever you’d like, including from a Hash Code hub near you (remember our hub post from last month?). Top teams from the Online Qualification Round will be invited to the World Finals at Google Ireland in April. And if you don’t have a team yet, don’t worry! You can register today and find a team later using our Facebook group. We hope this year’s challenge will be one of your favorites! 