Chrome for Android Update

Hi, everyone! We've just released Chrome 75 (75.0.3770.143) for Android: it'll become available on Google Play over the next few weeks.

This release includes stability and performance improvements. You can see a full list of the changes in the Git log. If you find a new issue, please let us know by filing a bug.

Ben Mason
Google Chrome

How we keep Search relevant and useful

When you come to Google Search, our goal is to connect you with useful information as quickly as possible. That information can take many forms, and over the years the search results page has evolved to include not only a list of blue links to pages across the web, but also useful features to help you find what you’re looking for even faster. Some examples include featured snippets, which highlight results that are likely to contain what you’re looking for; Knowledge Panels, which can help you find key facts about an individual or other topic in the world; and predictive features like Autocomplete that help you navigate Search more quickly.


Left to right: examples of a featured snippet, a Knowledge Panel and an Autocomplete prediction.

Because these features are highlighted in a unique way on the page, or may show up when you haven’t explicitly asked for them, we have policies around what should and should not appear in those spaces. This means that in some cases, we may correct information or remove those features from a page.

This is quite different from how we approach our web and image search listings and how those results rank in Search, and we thought it would be helpful to explain why, using a few examples.

Featured Snippets
One helpful information format is featured snippets, which highlight web pages that our systems determine are especially likely to contain what you’re looking for. Because their unique formatting and positioning can be interpreted as a signal of quality or credibility, we’ve published standards for what can appear as a featured snippet.

We don’t allow the display of any snippets that violate our policies by being sexually explicit, hateful, violent, harmful or lacking expert consensus on public interest topics. 

Our automated systems are designed to avoid showing snippets that violate these policies. However, if our systems don’t work as intended and a violating snippet appears, we’ll remove it. In such cases, the page is not removed as a web search listing; it’s simply not highlighted as a featured snippet.


The Knowledge Graph
The Knowledge Graph in Google Search reflects our algorithmic understanding of facts about people, places and things in the world. The Knowledge Graph automatically maps the attributes and relationships of these real-world entities from information gathered from the web, structured databases, licensed data and other sources. This collection of facts allows us to respond to queries like "Bessie Coleman" with a Knowledge Panel of facts about the famous aviator.

Information from the Knowledge Graph is meant to be factual and is presented as such. However, while we aim to be as accurate as possible, our systems aren’t perfect, and neither are all of the data sources available to us. So we collect user feedback and may manually verify and update information if we learn something is incorrect and our systems have not self-corrected. We have developed tools and processes to provide these corrections back to sources like Wikipedia, with the goal of improving the information ecosystem more broadly.

Furthermore, we give people and organizations the ability to claim their Knowledge Panels and provide us with authoritative feedback on facts about themselves, and if we otherwise are made aware of incorrect information, we work to fix those errors. If an image or a Google Images results preview that’s shown in a Knowledge Panel does not accurately represent the person, place or thing, we’ll also fix the error. 

Predictive features
There are other features that are “predictive,” like Autocomplete and Related Searches, which are tools to help you navigate Search more quickly. As you type each character into the search bar, Autocomplete will match what you’re typing to common searches to help you save time. On the search results page, you may see a section of searches labeled “People also search for,” which is designed to help you navigate to related topics if you didn’t find what you were looking for, or to explore a different dimension of a topic.

Because you haven’t asked to see these searches, we’re careful about not showing predictions that might be shocking or offensive or could have a negative impact on groups or individuals. Read more about our policies.

You can still issue any search you’d like, but we won’t necessarily show all possible predictions for common searches. If no predictions appear or if you’re expecting to see a related search and it’s not there, it might be that our algorithms have detected that it contains potentially policy-violating content, the prediction has been reported and found to violate our policies, or the search may not be particularly popular.

While we do our best to prevent inappropriate predictions, we don’t always get it right. If you think a prediction violates one of our policies, you can report a prediction.

Across all of these features, we do not want to shock or offend anyone with content that they did not explicitly seek out, so we work to prevent things like violence or profanity from appearing in these special formats.

Organic search results
While we’ve talked mostly about helpful features that appear on the search results page, the results that probably come to mind most are our organic listings—the familiar “blue links” of web page results, thumbnails displayed in a grid in Google Images or videos from the web in video mode.

In these cases, the ranking of the results is determined algorithmically. We do not use human curation to collect or arrange the results on a page. Rather, we have automated systems that can quickly find content in our index--drawn from the hundreds of billions of pages we have indexed by crawling the web--that is relevant to the words in your search.

To rank these, our systems take into account a number of factors to determine what pages are likely to be the most helpful for what you’re looking for. You can learn more about this on our How Search Works site.

While we intend to provide relevant results and prioritize the most reliable sources on a given topic, as with any automated system, our search algorithms aren’t perfect. You might see sites that aren’t particularly relevant to your search term rising to the top, or a page that does not contain trustworthy information ranking above a more official website.

When these problems arise, people often take notice and ask us whether we intend to “fix” the issue. Often what they might have in mind is that we’d manually re-order or remove a particular result from a page. As we’ve said many times in the past, we do not take the approach of manually intervening on a particular search result to address ranking challenges.

This is for a variety of reasons. We receive trillions of searches each year, so “fixing” one query doesn’t improve the issue for the many other variations of the same query or help improve search overall. 

So what do we do instead? We approach all changes to our Search systems in the same way: we learn from these examples to identify areas of improvement. We come up with solutions that we believe could help not just those queries, but a broad range of similar searches. We then rigorously test the change using insights from live experiments and data from human search rater evaluations. If we determine that the change provides overall positive benefits--making a large number of search results more helpful, while preventing significant losses elsewhere--we launch that change.

Our search algorithms are complex math equations that rely on hundreds of variables, and last year alone, we made more than 3,200 changes to our search systems. Some of these were visible launches of new features, while many others were regular updates meant to keep our results relevant as content on the web changes. And some are improvements based on issues we identified, either via public reports or our own ongoing quality evaluations. Unlike with Search features, where we are able to quickly correct issues that violate our policies, identifying the root cause of a ranking issue can take time, and improvements may not happen immediately. But as we have been for more than 20 years, we remain committed to identifying these challenges and working to make Search better.

Spam Protections
This is not to say that there aren’t policies and guidelines that apply to our organic listings. Content there has to meet our long-standing webmaster guidelines, which protect users against things like spam, malware and deceptive sites. Our spam protection systems automatically work to prevent our ranking systems from rewarding such content. 

In cases where our spam systems don’t work, we have long taken manual actions against pages or sites. We report these actions through the Manual Actions report in our Search Console tool, in hopes that site owners will curb such behavior. These actions are not linked to any particular search results or query. They’re taken against affected content generally.

Legal and Policy-based Removals
As our mission is to provide broad access to information, we remove pages from our results only in limited circumstances: when required by law--such as for child abuse imagery and copyright infringement claims--and in narrow cases where we have developed policies to protect people, such as content that contains sensitive personal information.

Some of these legal and policy actions do remove results for particular searches, such as someone’s name. However, none of these removals happen because Google has chosen to “fix” poor results. Rather, these are acts of legal compliance and applications of publicly documented policies to help keep people safe.

Overall, we’re constantly striving to make our search results and features on the results page as useful and reliable as possible, and we value your feedback to help us understand where we can do better.


Stable Channel Update for Desktop

The stable channel has been updated to 75.0.3770.142 for Windows, Mac, and Linux, which will roll out over the coming days/weeks.

Security Fixes and Rewards
Note: Access to bug details and links may be kept restricted until a majority of users are updated with a fix. We will also retain restrictions if the bug exists in a third party library that other projects similarly depend on, but haven’t yet fixed.

This update includes 2 security fixes. Below, we highlight fixes that were contributed by external researchers. Please see the Chrome Security Page for more information.

[$TBD][972921] High CVE-2019-5847: V8 sealed/frozen elements cause crash. Reported by m3plex on 2019-06-11
[$TBD][951487] Medium CVE-2019-5848: Font sizes may expose sensitive information. Reported by Mark Amery on 2019-04-10

We would also like to thank all security researchers that worked with us during the development cycle to prevent security bugs from ever reaching the stable channel.



A list of all changes is available in the log. Interested in switching release channels? Find out how. If you find a new issue, please let us know by filing a bug. The community help forum is also a great place to reach out for help or learn about common issues.


Srinivas Sista
Google Chrome

Celebrate 50 years of space exploration in Google Earth

This week marks the 50th anniversary of the historic Apollo 11 mission that first put a man on the moon. To honor that achievement and the other countless strides in space exploration, we’re bringing you new tours, another way to explore the moon and an out-of-this-world quiz--all in Google Earth. And for those who are still dreaming about the stars, we’re sharing even more stories about the lunar mission on Search.

First up, we join NASA to learn how the Apollo 11 mission came to be. From President John F. Kennedy’s challenge to put a man on the moon to the astronaut training facilities to mission control, the countdown to launch started long before July 16, 1969.


Explore the history of the Apollo 11 mission to the moon.

Next, go inside a flame trench with the popular radio broadcast Science Friday. We’ll explore how NASA is upgrading existing launch sites for future missions and how they’re dealing with the threat of sea level rise for these coastal facilities. If you’re a teacher, we’re also sharing ideas for how to explore these tours with students.


See how NASA is preserving rocket launchpads like the site of the Apollo 1 launch.

We’re also launching a new way to explore the Moon in Google Earth Studio, an animation tool for Google Earth’s satellite and 3D imagery. Starting today, you’ll be able to create animations of the Moon and Mars using the tool, opening up a whole new world for video creators. Simply use the World menu from the new project page or go to your project settings page to get started.


Learn how to animate the moon in Google Earth Studio

Finally, we’re honoring 10 iconic space explorers—the men, women and robots who have advanced our understanding of the world beyond our planet through research and space travel. Once you think you’re ready to command your own mission, test your knowledge in our space quiz. We’ll even give you a hint: The French were the first to send a feline named Félicette into space. 


Clockwise from top: Yuri Gagarin, the first man in space; Mae Jemison, the first African-American woman in space; Sally Ride, the first LGBTQ astronaut to travel to space; Carl Sagan, the astrophysicist who helped popularize science through his television series "Cosmos: A Personal Voyage.”

Visit Google Earth all week long to explore the wonders of space. 

Upcoming changes to the AdSense mobile experience

The web is mobile.

Nearly 70% of AdSense audiences experience the web on mobile devices. With new mobile web technologies such as responsive mobile sites, Accelerated Mobile Pages (AMP) and Progressive Web Apps (PWA), the mobile web works better and faster than ever.

We understand that using AdSense on the go is important to you. More than a third of our users access AdSense from mobile devices and this is an area where we continue to invest.

Our vision is an AdSense that does more to keep your account healthy, letting you focus on creating great content, and comes to you when issues or opportunities need your attention.

With this in mind, we have reviewed our mobile strategy. As a result, we will be focusing our investment on the AdSense mobile web interface and sunsetting the current iOS and Android apps. By investing in a common web application that supports all platforms, we will be able to deliver AdSense features optimized for mobile much faster than we can today.

Later this year we will announce improvements to the AdSense mobile web interface. The AdSense Android and iOS apps will be deprecated in the coming months, and will be discontinued and removed from the app stores by the end of 2019.

Like our publishers who have built their businesses around the mobile web, we look forward to leveraging great new web technologies to deliver an even better, more automated, and more useful mobile experience. Stay tuned for further announcements throughout the rest of the year.


Posted by: Andrew Gildfind
AdSense Product Manager

Source: Inside AdSense


Google employees take action to encourage women in computer science

When she was a teenager, Andrea Francke attended Schnupperstudium, or “Taster Week”—an event aimed at high-school girls to give them a taste of what it’s like to study computer science and work in the industry. That moment changed the course of her life. “As a teenager, Schnupperstudium was a game changer for me. That’s when I decided to study computer science,” says Andrea, who is now a senior software engineer at Google in Zürich.

This year, Andrea went back to Schnupperstudium, this time as a volunteer, to share her experience as part of a collaboration between employees at Google Zürich and the computer science department at ETH Zürich (Swiss Federal Institute of Technology in Zürich). “Offering other girls a glimpse into life as a software engineer is a cause that’s very dear to my heart,” Andrea says.

Andrea Francke and Tahmineh Sanamrad, Google software engineers, delivering a career panel for high school girls at Google Zürich.

After this year’s Schnupperstudium event, surveys showed that seven in nine girls agreed they could learn computer science if they wanted to, said they had an interest in the subject and believed computer science could help them find a job they would enjoy. “While stereotypes about computer science abound, events like Schnupperstudium can often counter them by showing what it’s really like to work in this field,” Andrea adds.

Something as simple as having a good role model can help to encourage girls to pursue their aspirations. A study Google conducted showed that encouragement and exposure directly influence whether young women decide to go for a computer science degree.

As we look into the skills needed for the current and future workplace, we see that there will be an increased demand for workers in STEM jobs, which will greatly affect the next generation. Yet women make up only around 30 percent of students entering STEM programs in college, so the field does not yet represent all young people. Somewhere along the way to choosing a career path, women are losing interest in technology.

That means there’s more to be done, especially at the stage when women are making decisions about their futures. That’s why here at Google, our employees are getting involved with events that encourage young people, and particularly women, to follow through on a computer science degree. 

In 2018 alone, more than 300 Google employees across Europe directly worked with 29,000 students and 1,000 teachers through a range of volunteering activities. These initiatives are part of Grow with Google, which gives people training, products and tools to help them find jobs, grow their businesses or careers. In Europe alone, 48 percent of the people we trained in digital skills were women, thanks to programs like WomenWill and #IamRemarkable.

As we celebrate World Youth Skills Day and the achievements of 1.8 billion young people from age 10 to 24, we will continue working to help them prepare for their futures.

Multilingual Universal Sentence Encoder for Semantic Retrieval



Since it was introduced last year, the Universal Sentence Encoder (USE) for English has become one of the most downloaded pre-trained text modules in TensorFlow Hub, providing versatile sentence embedding models that convert sentences into vector representations. These vectors capture rich semantic information that can be used to train classifiers for a broad range of downstream tasks. For example, a strong sentiment classifier can be trained from as few as one hundred labeled examples, and the same embeddings can also be used to measure semantic similarity and for meaning-based clustering.
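
To make that concrete, here is a minimal sketch of the transfer-learning recipe: frozen USE embeddings feeding a small classifier. Treat the TF Hub module URL, the scikit-learn classifier choice and the four-sentence toy dataset as illustrative assumptions, not the exact setup used by the team.

```python
# Minimal sketch: train a tiny sentiment classifier on top of frozen USE
# embeddings. Module URL, classifier and data are illustrative assumptions.
import numpy as np
import tensorflow_hub as hub
import tensorflow_text  # noqa: F401  (registers ops needed by the module)
from sklearn.linear_model import LogisticRegression

embed = hub.load(
    "https://tfhub.dev/google/universal-sentence-encoder-multilingual/3")

train_sentences = [
    "I loved this movie, it was wonderful.",
    "Fantastic acting and a great story.",
    "This was a complete waste of time.",
    "Terrible plot and wooden dialogue.",
]
train_labels = [1, 1, 0, 0]  # 1 = positive, 0 = negative

# Each sentence becomes a fixed-length vector; a simple linear model on
# top is often enough to get a usable classifier from few labeled examples.
X_train = np.asarray(embed(train_sentences))
clf = LogisticRegression().fit(X_train, train_labels)

X_test = np.asarray(embed(["What a delightful film!"]))
print(clf.predict(X_test))  # expected: [1]
```

In practice you would use on the order of a hundred labeled examples, as the post suggests; four are shown only to keep the sketch short.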

Today, we are pleased to announce the release of three new USE multilingual modules with additional features and potential applications. The first two modules provide multilingual models for retrieving semantically similar text, one optimized for retrieval performance and the other for speed and lower memory usage. The third model is specialized for question-answer retrieval in sixteen languages (USE-QA), and represents an entirely new application of USE. All three multilingual modules are trained using a multi-task dual-encoder framework, similar to the original USE model for English, together with techniques we developed for improving dual-encoder training with an additive margin softmax. They are designed not only to maintain good transfer learning performance, but also to perform well on semantic retrieval tasks.
Multi-task training structure of the Universal Sentence Encoder. A variety of tasks and task structures are joined by shared encoder layers/parameters (pink boxes).
Semantic Retrieval Applications
The three new modules are all built on semantic retrieval architectures, which typically encode queries and candidates with separate neural networks, making it possible to search among billions of potential answers within milliseconds. The key to using dual encoders for efficient semantic retrieval is to pre-encode all candidate answers to expected input queries and store them in a vector database that is optimized for solving the nearest neighbor problem, which allows a large number of candidates to be searched quickly with good precision and recall. At query time, the input query is encoded into a vector, and an approximate nearest neighbor search is performed over the stored candidates. Together, this enables good results to be found quickly without needing to do a direct query/candidate comparison for every candidate. The prototypical pipeline is illustrated below:
A prototypical semantic retrieval pipeline, used for textual similarity.
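
To make the pipeline concrete, the sketch below pre-encodes a small candidate set once and then answers queries with a nearest-neighbor lookup. Exact dot-product search over a NumPy matrix stands in for the approximate nearest-neighbor index a production system would use; the module URL and candidate sentences are illustrative assumptions.

```python
# Sketch of the retrieval pipeline: pre-encode all candidates once, then
# answer each query with a nearest-neighbor lookup over the stored vectors.
# Exact dot-product search stands in for the approximate nearest-neighbor
# index a real system would use; module URL and sentences are illustrative.
import numpy as np
import tensorflow_hub as hub
import tensorflow_text  # noqa: F401  (registers ops needed by the module)

embed = hub.load(
    "https://tfhub.dev/google/universal-sentence-encoder-multilingual/3")

candidates = [
    "You can reset your password from the account settings page.",
    "Our store opens at 9am on weekdays.",
    "Refunds are processed within five business days.",
]

# Offline step: encode every candidate and L2-normalize, so that a dot
# product between unit vectors equals cosine similarity.
cand_vecs = np.asarray(embed(candidates))
cand_vecs /= np.linalg.norm(cand_vecs, axis=1, keepdims=True)

def retrieve(query, top_k=1):
    """Encode the query and return the top_k most similar candidates."""
    q = np.asarray(embed([query]))[0]
    q /= np.linalg.norm(q)
    scores = cand_vecs @ q                  # one dot product per candidate
    best = np.argsort(-scores)[:top_k]
    return [(candidates[i], float(scores[i])) for i in best]

print(retrieve("How do I change my password?"))
# Expected: the password-reset candidate scores highest.
```
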
Semantic Similarity Modules
For semantic similarity tasks, the query and candidates are encoded using the same neural network. Two common semantic retrieval tasks made possible by the new modules include Multilingual Semantic Textual Similarity Retrieval and Multilingual Translation Pair Retrieval.
  • Multilingual Semantic Textual Similarity Retrieval
    Most existing approaches for finding semantically similar text require being given a pair of texts to compare. However, using the Universal Sentence Encoder, semantically similar text can be extracted directly from a very large database. For example, in an application like FAQ search, a system can first index all possible questions with associated answers. Then, given a user’s question, the system can search for known questions that are semantically similar enough to provide an answer. A similar approach was used to find comparable sentences among 50 million sentences from Wikipedia. With the new multilingual USE models, this can be done in any of the supported languages.
  • Multilingual Translation Pair Retrieval
    The newly released modules can also be used to mine translation pairs to train neural machine translation systems. Given a source sentence in one language (“How do I get to the restroom?”), they can find the potential translation target in any other supported language (“¿Cómo llego al baño?”).
Both new semantic similarity modules are cross-lingual. Given an input in Chinese, for example, the modules can find the best candidates regardless of the language in which they are expressed. This versatility can be particularly useful for languages that are underrepresented on the internet. For example, an early version of these modules has been used by Chidambaram et al. (2018) to provide classifications in circumstances where the training data is only available in a single language, e.g. English, but the end system must function in a range of other languages.
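
Here is a minimal sketch of that translation-pair matching, under the same assumptions as the sketches above (illustrative module URL and sentence pairs): because the encoder places all supported languages in one shared vector space, a sentence and its translation should be near neighbors.

```python
# Sketch of translation-pair mining in the shared multilingual space: for
# each English source sentence, pick the Spanish candidate whose embedding
# is closest. Module URL and sentence pairs are illustrative assumptions.
import numpy as np
import tensorflow_hub as hub
import tensorflow_text  # noqa: F401  (registers ops needed by the module)

embed = hub.load(
    "https://tfhub.dev/google/universal-sentence-encoder-multilingual/3")

sources = ["How do I get to the restroom?", "The weather is lovely today."]
targets = ["Hace un tiempo precioso hoy.", "¿Cómo llego al baño?"]

src = np.asarray(embed(sources))
tgt = np.asarray(embed(targets))
src /= np.linalg.norm(src, axis=1, keepdims=True)
tgt /= np.linalg.norm(tgt, axis=1, keepdims=True)

# Rows index source sentences, columns index candidate translations.
sim = src @ tgt.T
for i, sentence in enumerate(sources):
    j = int(np.argmax(sim[i]))
    print(f"{sentence!r} -> {targets[j]!r} (score {sim[i, j]:.2f})")
```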

USE for Question-Answer Retrieval
The USE-QA module extends the USE architecture to question-answer retrieval applications, which generally take an input query and find relevant answers from a large set of documents that may be indexed at the document, paragraph, or even sentence level. The input query is encoded with the question encoding network, while the candidates are encoded with the answer encoding network.
Visualizing the action of a neural answer retrieval system. The blue point at the north pole represents the question vector. The other points represent the embeddings of various answers. The correct answer, highlighted here in red, is “closest” to the question, in that it minimizes the angular distance. The points in this diagram are produced by an actual USE-QA model; however, they have been projected down from ℝ⁵⁰⁰ to ℝ³ to assist the reader’s visualization.
Question-answer retrieval systems also rely on the ability to understand semantics. For example, consider a possible query to one such system, Google Talk to Books, which was launched in early 2018 and backed by a sentence-level index of over 100,000 books. A query, “What fragrance brings back memories?”, yields the result, “And for me, the smell of jasmine along with the pan bagnat, it brings back my entire carefree childhood.” Without specifying any explicit rules or substitutions, the vector encoding captures the semantic similarity between the terms fragrance and smell. The advantage provided by the USE-QA module is that it can extend question-answer retrieval tasks such as this to multilingual applications.
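
A sketch of this asymmetric setup follows. The module URL, the question_encoder and response_encoder signature names, and the context input mirror the published TF Hub documentation for the USE-QA module, but verify them against tfhub.dev; the question and candidate answers are illustrative.

```python
# Sketch of question-answer retrieval with the USE-QA dual encoder:
# questions and answers pass through different encoder heads. Signature
# names and the context argument are taken from the TF Hub module docs;
# verify against tfhub.dev before relying on them.
import numpy as np
import tensorflow as tf
import tensorflow_hub as hub
import tensorflow_text  # noqa: F401  (registers ops needed by the module)

qa = hub.load(
    "https://tfhub.dev/google/universal-sentence-encoder-multilingual-qa/3")

questions = ["What fragrance brings back memories?"]
answers = [
    "The smell of jasmine brings back my entire carefree childhood.",
    "The train leaves at noon from platform two.",
]
# Context is normally the text surrounding each answer in its document;
# for this sketch we simply reuse the answer sentences.
contexts = answers

q_vecs = qa.signatures["question_encoder"](
    tf.constant(questions))["outputs"].numpy()
a_vecs = qa.signatures["response_encoder"](
    input=tf.constant(answers),
    context=tf.constant(contexts))["outputs"].numpy()

scores = q_vecs @ a_vecs.T  # higher score = better answer to the question
print(answers[int(np.argmax(scores[0]))])
# Expected: the jasmine sentence, matched on meaning rather than keywords.
```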

For Researchers and Developers
We're pleased to share the latest additions to the Universal Sentence Encoder family with the research community, and are excited to see what other applications will be found. These modules can be used as-is, or fine-tuned using domain-specific data. Lastly, we will also host the semantic similarity for natural language page on Cloud AI Workshop to further encourage research in this area.

Acknowledgements
Mandy Guo, Daniel Cer, Noah Constant, Jax Law, Muthuraman Chidambaram for core modeling, Gustavo Hernandez Abrego, Chen Chen, Mario Guajardo-Cespedes for infrastructure and colabs, Steve Yuan, Chris Tar, Yunhsuan Sung, Brian Strope, Ray Kurzweil for discussion of the model architecture.

Source: Google AI Blog


VidCon 2019

Happy 10th annual VidCon, creators! We're here with our Chief Product Officer, Neal Mohan, who's keynoting it up. His main message? All the ways YouTube will continue to support and help drive new opportunities for you in the next decade and beyond. Read how YouTube's planning to do all this — thanks to some of our new initiatives — here.

— The YouTube Team

Live from VidCon: Creating new opportunities for creators

For the last decade, VidCon has brought fans, creators and industry leaders together to celebrate the power of online video. In honor of VidCon’s 10th anniversary, I took the stage to highlight how YouTube will continue to support and spark new opportunities for creators for the next ten years and beyond.

More revenue streams, more money for creators

Last year at VidCon, I announced our next big step for creator monetization with new ways for creators to engage with their community while generating revenue. We've built on a number of these initiatives and added a few more.
  • Super Chat allows fans to purchase messages that stand out within a live chat during live streams and Premieres. There are now over 90,000 channels that have received Super Chats, with some streams earning more than $400 per minute. And Super Chat is now the number one revenue stream on YouTube for nearly 20,000 channels, an increase of over 65% over the last year.
  • Leaning into this momentum, we’re introducing Super Stickers. This new feature will allow fans to purchase animated stickers during live streams and Premieres to show their favorite creators just how much they enjoy their content. Stickers will come in a variety of designs across different languages and categories, such as gaming, fashion and beauty, sports, music, food, and more. These stickers are fun, and we can’t wait for you to use them in the coming months!
  • With Channel Memberships, fans pay a monthly fee of $4.99 to get unique badges, new emojis, and access to special perks, such as exclusive live streams, extra videos, or shoutouts. Today, we’re adding one of the most-requested features: membership levels. With levels, creators can now set up to five different price points for channel memberships, each with varying perks. We've been testing levels with creators like Fine Brothers Entertainment on their REACT channel, who have seen their membership revenue increase six-fold after introducing two higher-priced tiers.
  • Our Merch shelf with Teespring allows creators to sell merch to their fans directly from their channel. And today, we are adding 5 new partners, so eligible creators merchandising with Crowdmade, DFTBA, Fanjoy, Represent, and Rooster Teeth can also use the Merch shelf.
Early last year, creator revenue on YouTube from Super Chat, Channel Memberships and merch was nearly zero. Today, these products are generating meaningful results for creators across the globe. In fact, thousands of channels have more than doubled their total YouTube revenue by using these new tools in addition to advertising.

Helping creators amplify their positive impact

Every day, people from around the world come to YouTube to learn something new, from math, science and literature to language lessons, music tutorials and test prep. Today, we’re introducing Learning Playlists to provide a dedicated learning environment for people who come to YouTube to learn. New organizational features will provide more structure, dividing a collection of videos into chapters around key concepts, ordered from beginner to more advanced. Additionally, recommendations will be hidden from the watch page, allowing the viewer to focus on the lesson at hand. We understand the importance of getting this right, so we will start with content from a handful of our most trusted partners, like Khan Academy, TED-Ed and Crash Course, testing a variety of categories from professional skills like working in Java, to academic topics such as chemistry.
We’ve also seen creators use their megaphone to inspire their communities to join them in supporting those in need. To make that even easier, last year, we began to test YouTube Giving, our fundraising tool that allows creators to use their voice on YouTube to support the charitable causes they care about. YouTube Giving is moving out of beta and will be available to thousands of creators in the U.S. in the coming months! Creators simply select a nonprofit to create a fundraising campaign right next to their videos and live streams. Fans can donate directly on YouTube via a “Donate” button, making it easier than ever for creators and fans to raise funds for causes they care about on the platform.
YouTube creators are living proof that an open and responsible internet can change the world for the better. We’re going to continue working to give them the tools they need to do that.

Posted by Neal Mohan, Chief Product Officer

Source: YouTube Blog


Making Google Voice easier to use on your computer

What’s changing

We’re making some improvements to the Google Voice web app. These will make it easier to find the right contact, quicker to place calls, and simpler to control audio settings. Specific improvements include:

  • Always-visible call panel
  • One-click calling
  • Quick access to mic and audio settings

See more information below.

Who’s impacted

End users

Why you’d use it

It’s important for a telephony system to be quick and intuitive to use. These improvements will make it simpler to use Google Voice, so users spend less time navigating the product interface and more time communicating through it.

How to get started



Additional details


Always-visible call panel
The new call panel will be in the same place regardless of what you’re doing in the Google Voice app—checking messages, listening to voicemails, or something else. This will make it quicker and easier to place calls when you need to.

One-click calling
A new quick call option will appear when users hover over a contact in their call list. This will allow users to make calls faster.

Quick access to mic and audio settings
A new icon in the main action bar will give instant access to common audio settings. These include what microphone and audio output to use before or during a call, as well as what device should ring for incoming calls.


Helpful links

Help Center: Call someone with Google Voice
Help Center: Change the microphone or speakers on your computer

Availability


Rollout details



G Suite editions
Available to all G Suite editions with Google Voice licenses


On/off by default?
These features will be ON by default.


Stay up to date with G Suite launches