What’s new with Fast Pair

Posted by Catherina Xu (Product Manager)

Last November, we released Fast Pair with the Jaybird Tarah Bluetooth headphones. Since then, we’ve engaged with dozens of OEMs, ODMs, and silicon partners to bring Fast Pair to even more devices. Last month, we held a talk at I/O announcing 10+ certified devices, support for Qualcomm’s Smart Headset Development Kit, and upcoming experiences for Fast Pair devices.

The Fast Pair team presenting at I/O 2019.


Upcoming experiences

Fast Pair makes pairing seamless across Android phones - this year, we are introducing additional features to improve Bluetooth device management.

  • True Wireless Features. As True Wireless Stereo (TWS) headphones continue to gain momentum in the market and with users, it is important to build system-wide support for TWS. Later this year, TWS headsets with Fast Pair will be able to broadcast individual battery information for the case and buds. This enables features such as case open and close battery notifications and per-component battery reporting throughout the UI.

     Detailed battery level notifications surfaced during “case open” for TWS headphones.


    • Find My Device. Fast Pair devices will soon be surfaced in the Find My Device app and website, allowing users to easily track down lost devices. Headset owners can view the location and time of last use, as well as unpair the buds or ring them to locate them when they are in range.

    • Connected Device Details. In Android Q, Fast Pair devices will have an enhanced Bluetooth device details page to centralize management and key settings. This includes links to Find My Device, Assistant settings (if available), and additional OEM-specified settings that will link to the OEM’s companion app.

    The updated Device details screen in Q allows easy access to key settings and the headphone’s companion app.
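The per-component battery reporting described under True Wireless Features above can be sketched in a few lines of Python. This is an illustrative decoder only: the three-byte payload layout assumed here (one byte per component, with a charging flag in the top bit and a 0-100 level in the low seven bits) is a simplifying assumption made for the example, not the published Fast Pair format.

```python
# Illustrative sketch: decoding per-component battery bytes from a
# TWS headset advertisement. The payload layout assumed here (one
# byte per component: bit 7 = charging flag, bits 0-6 = level
# percentage) is a simplification for illustration, not the spec.

def decode_battery(payload: bytes) -> dict:
    """Decode a [left bud, right bud, case] battery payload."""
    components = ["left_bud", "right_bud", "case"]
    result = {}
    for name, raw in zip(components, payload):
        result[name] = {
            "charging": bool(raw & 0x80),  # top bit: charging indicator
            "level": raw & 0x7F,           # low 7 bits: 0-100 percent
        }
    return result

# Sample payload: left bud 65% and charging, right bud 70%,
# case 90% and charging.
status = decode_battery(bytes([0x80 | 65, 70, 0x80 | 90]))
```

A decoder along these lines is what would feed the “case open” notifications and per-component battery levels shown in the UI.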


    Compatible Devices

    Below is a list of devices that were showcased during our I/O talk:

    • Anker Spirit Pro GVA
    • Anker SoundCore Flare+ (Speaker)
    • JBL Live 220BT
    • JBL Live 400BT
    • JBL Live 500BT
    • JBL Live 650BT
    • Jaybird Tarah
    • 1More Dual Driver BT ANC
    • LG HBS-SL5
    • LG HBS-PL6S
    • LG HBS-SL6S
    • LG HBS-PL5
    • Cleer Ally Plus

    Interested in Fast Pair?

    If you are interested in creating Fast Pair compatible Bluetooth devices, please take a look at:

    Once you have selected devices to integrate, head to our Nearby Devices console to register your product. Reach out to us at fast-pair-integrations@google.com if you have any questions.

    2019 Scholar Metrics Released


    Scholar Metrics provide an easy way for authors to quickly gauge the visibility and influence of recent articles in scholarly publications. Today, we are releasing the 2019 version of Scholar Metrics. This release covers articles published in 2014–2018 and includes citations from all articles that were indexed in Google Scholar as of July 2019.

    Scholar Metrics include journals from websites that follow our inclusion guidelines and selected conferences in Engineering & Computer Science. Publications with fewer than 100 articles in 2014–2018, or that received no citations over these years, are not included.

    You can browse publications in specific categories such as Ceramic Engineering, High Energy & Nuclear Physics, or Film as well as broad areas like Engineering & Computer Science or Humanities, Literature & Arts. You will see the top 20 publications ordered by their five-year h-index and h-median metrics. You can also browse the top 100 publications in several languages - for example, Portuguese and Spanish. For each publication, you can view the top papers by clicking on the h5-index.

    Scholar Metrics include a large number of publications beyond those listed on the per-category and per-language pages. You can find these by typing words from the title in the search box, e.g., [security], [soil], [medicina].

    For more details, see the Scholar Metrics help page.

    Posted by: Anurag Acharya, Distinguished Engineer

    Building SMILY, a Human-Centric, Similar-Image Search Tool for Pathology



    Advances in machine learning (ML) have shown great promise for assisting in the work of healthcare professionals, such as aiding the detection of diabetic eye disease and metastatic breast cancer. Though high-performing algorithms are necessary to gain the trust and adoption of clinicians, they are not always sufficient—what information is presented to doctors and how doctors interact with that information can be crucial determinants in the utility that ML technology ultimately has for users.

    The medical specialty of anatomic pathology, which is the gold standard for the diagnosis of cancer and many other diseases through microscopic analysis of tissue samples, can greatly benefit from applications of ML. Though diagnosis through pathology is traditionally done on physical microscopes, there has been a growing adoption of “digital pathology,” where high-resolution images of pathology samples can be examined on a computer. With this movement comes the potential to much more easily look up information, as is needed when pathologists tackle the diagnosis of difficult cases or rare diseases, when “general” pathologists approach specialist cases, and when trainee pathologists are learning. In these situations, a common question arises, “What is this feature that I’m seeing?” The traditional solution is for doctors to ask colleagues, or to laboriously browse reference textbooks or online resources, hoping to find an image with similar visual characteristics. The general computer vision solution to problems like this is termed content-based image retrieval (CBIR), one example of which is the “reverse image search” feature in Google Images, in which users can search for similar images by using another image as input.

    Today, we are excited to share two research papers describing further progress in human-computer interaction research for similar image search in medicine. In “Similar Image Search for Histopathology: SMILY” published in Nature Partner Journal (npj) Digital Medicine, we report on our ML-based tool for reverse image search for pathology. In our second paper, “Human-Centered Tools for Coping with Imperfect Algorithms During Medical Decision-Making” (preprint available here), which received an honorable mention at the 2019 ACM CHI Conference on Human Factors in Computing Systems, we explored different modes of refinement for image-based search, and evaluated their effects on doctor interaction with SMILY.

    SMILY Design
    The first step in developing SMILY was to apply a deep learning model, trained using 5 billion natural, non-pathology images (e.g., dogs, trees, man-made objects, etc.), to compress images into a “summary” numerical vector, called an embedding. The network learned during the training process to distinguish similar images from dissimilar ones by computing and comparing their embeddings. This model is then used to create a database of image patches and their associated embeddings using a corpus of de-identified slides from The Cancer Genome Atlas. When a query image patch is selected in the SMILY tool, the query patch’s embedding is similarly computed and compared with the database to retrieve the image patches with the most similar embeddings.
    Schematic of the steps in building the SMILY database and the process by which input image patches are used to perform the similar image search.
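At its core, the search loop described above (embed every patch once, then rank patches by how close their embeddings are to the query's) can be sketched in plain Python. The embed function here is a stand-in that you would replace with a real trained model; everything else is generic similarity retrieval:

```python
import math

# Minimal sketch of embedding-based similar image search, as in the
# SMILY pipeline described above. A real system computes embeddings
# with a trained deep network over billions of patches; here the
# caller supplies an embed() function, purely for illustration.

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def build_database(patches, embed):
    """Precompute an embedding for every image patch, once."""
    return {patch_id: embed(patch) for patch_id, patch in patches.items()}

def search(query_patch, database, embed, top_k=3):
    """Embed the query patch and rank database patches by similarity."""
    q = embed(query_patch)
    ranked = sorted(database.items(),
                    key=lambda item: cosine_similarity(q, item[1]),
                    reverse=True)
    return [patch_id for patch_id, _ in ranked[:top_k]]
```

With a toy identity embedding (patches already represented as vectors), `search([1, 0.05], db, ...)` would rank patches whose vectors point in a similar direction first; a production system would additionally restrict the database to patches at the query's magnification, as SMILY does.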
    The tool allows a user to select a region of interest, and obtain visually-similar matches. We tested SMILY’s ability to retrieve images along a pre-specified axis of similarity (e.g. histologic feature or tumor grade), using images of tissue from the breast, colon, and prostate (3 of the most common cancer sites). We found that SMILY demonstrated promising results despite not being trained specifically on pathology images or using any labeled examples of histologic features or tumor grades.
    Example of selecting a small region in a slide and using SMILY to retrieve similar images. SMILY efficiently searches a database of billions of cropped images in a few seconds. Because pathology images can be viewed at different magnifications (zoom levels), SMILY automatically searches images at the same magnification as the input image.
    Second example of using SMILY, this time searching for a lobular carcinoma, a specific subtype of breast cancer.
    Refinement tools for SMILY
    However, a problem emerged when we observed how pathologists interacted with SMILY. Specifically, users were trying to answer the nebulous question of “What looks similar to this image?” so that they could learn from past cases containing similar images. Yet, there was no way for the tool to understand the intent of the search: Was the user trying to find images that have a similar histologic feature, glandular morphology, overall architecture, or something else? In other words, users needed the ability to guide and refine the search results on a case-by-case basis in order to actually find what they were looking for. Furthermore, we observed that this need for iterative search refinement was rooted in how doctors often perform “iterative diagnosis”—by generating hypotheses, collecting data to test these hypotheses, exploring alternative hypotheses, and revisiting or retesting previous hypotheses in an iterative fashion. It became clear that, for SMILY to meet real user needs, it would need to support a different approach to user interaction.

    Through careful human-centered research described in our second paper, we designed and augmented SMILY with a suite of interactive refinement tools that enable end-users to express what similarity means on-the-fly: 1) refine-by-region allows pathologists to crop a region of interest within the image, limiting the search to just that region; 2) refine-by-example gives users the ability to pick a subset of the search results and retrieve more results like those; and 3) refine-by-concept sliders can be used to specify that more or less of a clinical concept be present in the search results (e.g., fused glands). Rather than requiring that these concepts be built into the machine learning model, we instead developed a method that enables end-users to create new concepts post-hoc, customizing the search algorithm towards concepts they find important for each specific use case. This enables new explorations via post-hoc tools after a machine learning model has already been trained, without needing to re-train the original model for each concept or application of interest.
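As a rough sketch of how refine-by-concept can work post-hoc, one can derive a concept direction in embedding space from a handful of user-picked positive and negative examples, then shift the query embedding along that direction according to the slider position. The difference-of-means construction below is our illustrative stand-in, not necessarily the exact method used in the paper:

```python
# Illustrative post-hoc concept refinement: build a concept vector
# from user-labeled example embeddings, then nudge the query
# embedding along it. The difference-of-means construction is a
# simplifying assumption for this sketch.

def concept_direction(positive_embeddings, negative_embeddings):
    """Concept vector: mean of positive examples minus mean of negatives."""
    dim = len(positive_embeddings[0])
    pos_mean = [sum(e[i] for e in positive_embeddings) / len(positive_embeddings)
                for i in range(dim)]
    neg_mean = [sum(e[i] for e in negative_embeddings) / len(negative_embeddings)
                for i in range(dim)]
    return [p - n for p, n in zip(pos_mean, neg_mean)]

def refine_query(query_embedding, concept, slider):
    """Shift the query along the concept direction; slider in [-1, 1]."""
    return [q + slider * c for q, c in zip(query_embedding, concept)]
```

The key property, as described above, is that no retraining is needed: the concept lives entirely in the already-computed embedding space, so end-users can define new concepts on the fly.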
    Through our user study with pathologists, we found that the tool-based SMILY not only increased the clinical usefulness of search results, but also significantly increased users’ trust and likelihood of adoption, compared to a conventional version of SMILY without these tools. Interestingly, these refinement tools appeared to have supported pathologists’ decision-making process in ways beyond simply performing better on similarity searches. For example, pathologists used the observed changes to their results from iterative searches as a means of progressively tracking the likelihood of a hypothesis. When search results were surprising, many re-purposed the tools to test and understand the underlying algorithm, for example, by cropping out regions they thought were interfering with the search or by adjusting the concept sliders to increase the presence of concepts they suspected were being ignored. Beyond being passive recipients of ML results, doctors were empowered with the agency to actively test hypotheses and apply their expert domain knowledge, while simultaneously leveraging the benefits of automation.
    With these interactive tools enabling users to tailor each search experience to their desired intent, we are excited for SMILY’s potential to assist with searching large databases of digitized pathology images. One potential application of this technology is to index textbooks of pathology images with descriptive captions, and enable medical students or pathologists in training to search these textbooks using visual search, speeding up the educational process. Another application is for cancer researchers interested in studying the correlation of tumor morphologies with patient outcomes, to accelerate the search for similar cases. Finally, pathologists may be able to leverage tools like SMILY to locate all occurrences of a feature (e.g. signs of active cell division, or mitosis) in the same patient’s tissue sample to better understand the severity of the disease to inform cancer therapy decisions. Importantly, our findings add to the body of evidence that sophisticated machine learning algorithms need to be paired with human-centered design and interactive tooling in order to be most useful.

    Acknowledgements
    This work would not have been possible without Jason D. Hipp, Yun Liu, Emily Reif, Daniel Smilkov, Michael Terry, Craig H. Mermel, Martin C. Stumpe and members of Google Health and PAIR. Preprints of the two papers are available here and here.

    Source: Google AI Blog


    Get the scoop: The ice cream America is searching for

    Nothing says summer like the jingle of an ice cream truck—and cooling off with a (quickly melting) tasty treat. But these days, Americans aren’t just settling for chocolate and vanilla.  To celebrate National Ice Cream Day on July 21, we’ve rounded up this year’s top trending ice cream-related searches across the U.S.—and found more people are looking to experience new flavors, types, forms and even temperatures. 

    Global treats    

    This year, searches for ice cream have moved away from your typical neighborhood ice cream truck and gone international. Searches for Mexican ice cream have gone up, thanks to people looking to have a taste of the raw milk, hand-churned, wooden-barrelled, sweet and spicy creation. Japan’s creations are also trending, with chewy and colorful mochi sparking interest, along with “fish ice cream,” or taiyaki, fish-shaped cakes that make tasty ice cream cones. And the Italian classic, gelato, has U.S. searchers craving its dense, silky texture. 

    Gym worthy

    “Keto ice cream” has reached the dessert menu, with people searching for options that cut out carbs. Similarly, Americans are searching for “protein ice cream,” which boosts protein levels by using milk protein concentrate or whey protein. Others who aren’t so diet-conscious are searching for fried ice cream: a breaded scoop is quickly deep-fried to create a crispy shell that’s warm on the outside with a cold, sweet center.

    Unconventionally frosty   

    Chocolate, vanilla and strawberry are still ice cream royalty when it comes to searches. But they have some competition. Filipino Ube ice cream has warmed up to Americans with its intense purple color. And green ice cream, like matcha and avocado varieties, has also seen searches grow this year. Snow ice cream is also a big thing this year, and you won’t believe its main ingredient: actual snow!

    In case you need a little push to decide what to order, here’s the full list of trending searches on this tasty topic:  

    Top trending ice cream types in 2019 in the U.S.: 

    1. Snow ice cream

    2. Keto ice cream

    3. Mexican ice cream

    4. Ice cream bars

    5. Fish ice cream

    6. Mochi ice cream

    7. Gelato

    8. Ice cream sundae

    9. Fried ice cream

    10. Protein ice cream

    Top trending ice cream flavors in 2019 in the U.S.: 

    1. Strawberry ice cream

    2. Ube ice cream

    3. Chocolate ice cream

    4. Coffee ice cream

    5. Vanilla ice cream

    6. Oreo ice cream

    7. Mango ice cream

    8. Coconut ice cream

    9. Matcha ice cream

    10. Avocado ice cream

    Source: Search


    Official Google Australia Blog 2019-07-19 08:06:00

    Google’s free digital skills training program Grow with Google will visit all states and territories in 2020, making training available for people right across Australia.

    Grow with Google aims to give all Australians access to digital skills training, both online and in-person, to help them make the most of the Web.

    I shared this news in front of 200 small businesses, community organisations and individuals today at a Grow with Google event at the Cronulla Sharks Leagues Club.
    Caption: More than 200 small businesses, community organisations and individuals joined the Grow with Google Sutherland Shire event. 

    At today’s event, local Sutherland Shire businesses learned how to grow their presence online and find new customers, and individuals at all stages of the digital journey picked up new skills and tips. 

    Caption: Google Australia Country Director Mel Silva with Kirsty Tilla, owner of Cronulla business LOAF Sandwiches

    It was great to bring these workshops to the Sutherland Shire to help more people get the digital skills they need to grow and thrive.

    We are thrilled to be taking Grow with Google national in 2020, visiting metropolitan and regional centres across Australia so that everyone has the opportunity to participate.

    Since 2014, Google has trained more than half a million people across Australia through online and in-person digital skills training, as well as curriculum integrated through school and partner programs.

    Grow with Google aims to create opportunity for all Australians to grow their skills, careers, and businesses with free tools, training, and events. The next Grow with Google event will be held in Canberra on 11 September. Find out more at: g.co/GrowAustralia

    Posted by Mel Silva, Country Director, Google Australia

    “We did it”: Today’s Doodle for the 50th anniversary of the moon landing

    Fifty years ago on July 20, 1969, astronauts from NASA’s Apollo 11 mission set foot on the moon. Today, you can relive the Apollo 11 journey from blast-off to re-entry in an epic video Doodle narrated by former astronaut and Apollo 11 command module pilot Michael Collins. 


    Collins was one of three astronauts on the mission, along with Neil Armstrong and Edwin “Buzz” Aldrin. While Armstrong and Aldrin “frolicked” on the moon’s surface (Collins’ words, not ours!), he was the one who stayed behind in the command module, which would eventually bring all three astronauts back home to Earth. In the Doodle, you can hear him describe their “adventure,” beginning when a Saturn V rocket blasted off from Florida’s Kennedy Space Center on July 16. Four days later, the lunar module, known as “the Eagle,” made its 13-minute journey to the “Sea of Tranquility” on the moon’s surface. And the rest, as they say, was history.


    To create today’s Doodle, the team worked closely with NASA to understand the ins and outs of the mission and ensure the most accurate representation possible. In the Doodle, you can learn about the 400,000 people who worked on the Apollo project, the onboard computer, and the "barbecue roll" which was used to regulate the spacecraft’s temperature. Learn more about the process of creating the Doodle in our behind-the-scenes video:

    Behind the Doodle: 50th Anniversary of the Moon Landing

    Apollo 11 archival audio clips courtesy of NASA

    You can also see early storyboard sketches and concept art from Doodler Pedro Vergani:

    The moon landing radically reshaped the way people thought about our world and what is possible. To this day, it is an inspiration for doers and dreamers around the globe—the very Earth that Collins describes in the Doodle as “the main show.” We hope today’s Doodle is a fitting tribute to this monumental human achievement. To quote Collins:


    “We, you and me, the inhabitants of this wonderful Earth. We did it!"

    Beta Channel Update for Desktop

    The beta channel has been updated to 76.0.3809.71 for Windows & Linux, and 76.0.3809.71 or 76.0.3809.72 for Mac.

    A full list of changes in this build is available in the log. Interested in switching release channels?  Find out how here. If you find a new issue, please let us know by filing a bug. The community help forum is also a great place to reach out for help or learn about common issues.



    Abdul Syed
    Google Chrome

    Kotlin named Breakout Project of the Year at OSCON

    Posted by Wojtek Kaliciński, Developer Advocate, Android

    Stephanie on Stage with Kotlin on screen

    Stephanie Saad Cuthbertson announces support for Kotlin during the Developer Keynote at I/O 2017.

    Today at OSCON (the O'Reilly Open Source Software Conference), Kotlin was awarded the Open Source Award for Breakout Project of the Year.

    It’s no mystery to us why Kotlin received this award: it’s a fast-moving (but thoughtfully developed) programming language that lets you write better code, faster. It’s great to see Kotlin continue to receive this kind of recognition, building on other honors like being named the #1 fastest-growing language on GitHub.

    We’re big fans of Kotlin, and we’ve heard that you are too – feedback from you is in part why we announced support for the language over two years ago. This meant bundling the Kotlin plugin in Android Studio, along with promising to support Kotlin-built apps going forward.

    But there was a long way to go for many teams at Google to provide a first class experience with Kotlin in the Android ecosystem, and to convince developers that Kotlin on Android is not just a fad, but is here to stay.

    If you haven’t tried Kotlin yet, now is a great time to start! In fact, in the past two years, we’ve been adding a number of new features and upgrades to the Kotlin for Android experience, including:

    • Android Jetpack APIs now have first class support for Kotlin Coroutines, transforming the way we do async operations on Android. This includes Room, LiveData, ViewModels, WorkManager and more coming in the future.

    • Many Jetpack libraries have Kotlin extension libraries (KTX) to make using them even more fluent with Kotlin.
    • The compilation toolchain has received many improvements for Kotlin, including compiler enhancements, incremental annotation processing with KAPT, and Kotlin-specific R8 optimizations.
    • All of our documentation pages now contain Kotlin code snippets, so you can easily compare how our APIs work in both languages.
    • Most of our flagship samples are also written in Kotlin (including IOSched, Plaid, Sunflower and many more), along with any new samples that we make in the future.
    • We've added a language switcher to our API reference pages, so you can have a Kotlin view of the AndroidX library and the Android framework.
    • We doubled down on providing guidance to developers and teams who want to switch to Kotlin on our developers.android.com/kotlin pages.
    • Our Developer Relations engineers are posting real life examples and guides on integrating Kotlin in your apps on our Medium publication, such as the great intro to Coroutines on Android series and many more.
    • If you prefer to learn Kotlin in person, you can join one of the many Kotlin/Everywhere events happening around the world. If you are an organizer in a local developer community, consider signing up to host your own event!
      This initiative is a cooperation between JetBrains and Google.
    • For those of you who don't have access to in-person training, we added a new, free course on Udacity for Developing Android apps in Kotlin. Our Kotlin Bootcamp for Programmers course is still available as well!
    • We have worked with many external partners to gather feedback and learn about their experiences with Kotlin, such as this case study with Square.
    • And lastly, we've enabled Kotlin as a supported language for Android app teams at Google. We're already seeing adoption in apps such as Google Home, Google Drive, Android System UI, and Nest, with many more to follow.

    The road to fully supporting Kotlin on Android was not always easy, but it was truly rewarding seeing Kotlin adoption among professional Android developers rise from a handful of early adopters to around 50% since the original announcement!

    We were confident when we announced earlier this year at Google I/O 2019 that Android is going increasingly Kotlin-first, opening up the possibility for APIs built specifically around Kotlin and for Kotlin users, starting with the new, declarative UI toolkit - Jetpack Compose (still in early development).

    We want to congratulate JetBrains, our partners through the Kotlin Foundation and creators of Kotlin, on receiving the OSCON Open Source Award today. It shows how disruptive and transformative Kotlin has been, and not just for the Android developer community, but beyond.

    We know one thing: on Android, Kotlin is here to stay.

    Protecting private browsing in Chrome

    Chrome’s Incognito Mode is based on the principle that you should have the choice to browse the web privately. At the end of July, Chrome will remedy a loophole that has allowed sites to detect people who are browsing in Incognito Mode. This will affect some publishers who have used the loophole to deter metered paywall circumvention, so we’d like to explain the background and context of the change.

    Private browsing principles

    People choose to browse the web privately for many reasons. Some wish to protect their privacy on shared or borrowed devices, or to exclude certain activities from their browsing histories. In situations such as political oppression or domestic abuse, people may have important safety reasons for concealing their web activity and their use of private browsing features.

    We want you to be able to access the web privately, with the assurance that your choice to do so is private as well. These principles are consistent with emerging web standards for private browsing modes.

    Closing the FileSystem API loophole

    Today, some sites use an unintended loophole to detect when people are browsing in Incognito Mode. Chrome’s FileSystem API is disabled in Incognito Mode to avoid leaving traces of activity on someone’s device. Sites can check for the availability of the FileSystem API and, if they receive an error message, determine that a private session is occurring and give the user a different experience.  

    With the release of Chrome 76 scheduled for July 30, the behavior of the FileSystem API will be modified to remedy this method of Incognito Mode detection. Chrome will likewise work to remedy any other current or future means of Incognito Mode detection.

    Publisher impact and strategies

    The change will affect sites that use the FileSystem API to intercept Incognito Mode sessions and require people to log in or switch to normal browsing mode, on the assumption that these individuals are attempting to circumvent metered paywalls. 

    Unlike hard paywalls or registration walls, which require people to log in to view any content, meters offer a number of free articles before a reader must log in. This model is inherently porous, as it relies on a site’s ability to track the number of free articles someone has viewed, typically using cookies. Private browsing modes are one of several tactics people use to manage their cookies and thereby "reset" the meter count.
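The meter mechanics described above, and why private browsing resets them, can be illustrated with a toy counter keyed to a cookie jar. The cookie name and free-article limit here are hypothetical; real meters vary in implementation:

```python
# Toy sketch of a cookie-based article meter. "meter_count" is a
# hypothetical cookie name chosen for illustration.
FREE_ARTICLES = 3

def serve_article(cookies: dict) -> str:
    """Serve the article if the meter allows it, else show the paywall."""
    count = int(cookies.get("meter_count", 0))
    if count >= FREE_ARTICLES:
        return "paywall"
    cookies["meter_count"] = str(count + 1)
    return "article"

# A normal session accumulates a count in its cookie jar...
jar = {}
results = [serve_article(jar) for _ in range(5)]
# ...while a private session starts with an empty jar, so the
# count (and therefore the meter) effectively resets.
```

This is why the model is described as inherently porous: any mechanism that clears or isolates cookies, of which private browsing is only one, starts the count over.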

    Sites that wish to deter meter circumvention have options such as reducing the number of free articles someone can view before logging in, requiring free registration to view any content, or hardening their paywalls. Other sites offer more generous meters as a way to develop affinity among potential subscribers, recognizing some people will always look for workarounds.  We suggest publishers monitor the effect of the FileSystem API change before taking reactive measures since any impact on user behavior may be different than expected and any change in meter strategy will impact all users, not just those using Incognito Mode.

    Our News teams support sites with meter strategies and recognize the goal of reducing meter circumvention. However, any approach based on private browsing detection undermines the principles of Incognito Mode. We remain open to exploring solutions that are consistent with user trust and private browsing principles.


    Source: Google Chrome


    Beta Channel Update for Chrome OS

    The Beta channel has been updated to 76.0.3809.68 (Platform version: 12239.44.0) for most Chrome OS devices. This build contains a number of bug fixes, security updates and feature enhancements. Changes can be viewed here.

    If you find new issues, please let us know by visiting our forum or filing a bug. Interested in switching channels? Find out how. You can submit feedback using ‘Report an issue...’ in the Chrome menu (3 vertical dots in the upper right corner of the browser).


    Cindy Bayless
    Google Chrome