4 ways to use Touch to Search on Chrome

Whether you’re on vacation or just running to your next work meeting, we want to make it easier for you to quickly find information without having to type things out on a cramped smartphone keyboard. That’s why we introduced “Touch to Search” in the Chrome app for Android a few years ago. Touch to Search lets you do things like press on specific words on web pages to quickly search them, whether it’s to learn more about a place of interest or get help with a translation.

Here are four helpful tips on how to make the most of Touch to Search when you’re on the go:

1. Get faster translations.

Let’s say you're checking out the menu of a new restaurant and you come across a French phrase you don’t understand. Press on the word, and if you’ve fully enabled Touch to Search in Settings, you’ll see an immediate translation in the bar at the bottom of your screen. You can also tap or swipe up on the bar to visit a search results page for the word you selected. Of course, Chrome can also automatically translate a full page into your desired language.

Image of Pixel 6 device with a French word “nouvelles fonctionnalites” being translated into English in Chrome browser

2. Get helpful info in context.

In addition to translations, if you stumble across a person, word or place you’re unfamiliar with, Touch to Search can get you up to speed — right in context. Press on the word in question, and you’ll see an informative card from Touch to Search.

Image of Pixel 6 device with the word “Oakland” highlighted in Chrome browser. At the bottom of the screen, “Oakland” is described as a “City in California.”

3. Tap or hold down to get results.

When we say “press” on words, that means you can usually tap or hold down on words to activate Touch to Search, so it’s simple for you to search in context. Recently, we standardized what happens when you tap and when you hold down, so you’ll get the same Touch to Search experience regardless of which gesture you prefer. One caveat: there are some sites on the web with unique interfaces that disable tapping on words, so if you encounter that, try holding down instead.

4. Use simpler settings.

We want controlling your settings in Chrome to feel intuitive, so we updated Touch to Search’s settings to give you more fine-grained control over how you want to use the feature. Now when you enable “Include surrounding text in Google searches,” you’ll be more likely to get high-quality results, including translations and definitions right on the page.

Image of Pixel 6 devices side by side showing the setting options for the “Touch to Search” feature.

There’s more to come

We’re always exploring ways to make it easier to search on the go in Chrome. One feature we’re testing out is related searches. This adds suggestions into Touch to Search that are based on what you’ve selected, making it simple to learn even more about what you’ve just seen. For example, if you select the words “San Francisco,” we could show helpful suggestions like “San Francisco population” or “San Francisco events.” We’re also looking at the unique needs people have when searching on their Android or iOS phones. Look out for more features soon.

The handwriting fonts that help Australian students learn how to read and write are now available in Google Workspace

Foundation fonts for Australian schools, 4 styles, 5 fonts


Google for Education Australia and Google Fonts partnered to make Foundation Fonts for Australian Schools available on Google Workspace, including Google Workspace for Education. The fonts are also available for download from the Google Fonts website. 

 

Australian teachers are required to use state-mandated handwriting styles to teach reading and writing to school children from ages four to nine. Designed by Tina and Corey Anderson, the five Foundation Fonts exemplify proper handwriting for English and other languages using the Latin writing system, and include common math symbols. The regular weight of each font imitates the pencil thickness of handwriting, making the fonts easy for students to recognize as they learn how to write letter shapes. 

 

The availability of these fonts on Google Docs, Sheets, and Slides is also important for the adoption of Chromebooks and Google Workspace for Education in Australian schools.

“Using the state-prescribed fonts in Google Workspace makes it easier for students and teachers to collaborate and create documents and projects using Chromebooks and Google Workspace for Education. We are thrilled to improve our platforms to align with Australian education standards,” explained Kimberley Hall, Australian Teaching and Learning Lead, Google for Education.

 

Google for Education Australia received many requests for these fonts to be added to Google’s products and since their release, teachers have expressed their excitement that the fonts are finally available.

“Having the Foundation Fonts available on Google Workspace and Google Fonts is important to Early Stage 1 (Kindergarten, ages 4-5), Stage 1 (Years 1-2, aged 5-7) and Stage 2 (Years 3-4, ages 7-9) teachers. These fonts are required in our English syllabus. To expose students to the correct ways to write, we use these fonts in worksheets, wall displays, posters, and other written materials. I did professional development with an occupational therapist who stated that exposure to Foundation Fonts in the early years is essential for children to recognize letter shapes so they can read and write,” explained Alfina Jackson, a teacher in New South Wales. 

 

The Foundation Fonts are in the OpenType variable font format and are available in weights ranging from 400 to 700.

 

To use these fonts in Workspace products, select “More” in the Fonts menu and type the name of the font, or “Edu” in the search bar.  


Cursor selecting "hello world" text, selecting More in fonts menu, typing Edu in search

Select the “More” menu to find the Foundation Fonts for Australian Schools.

To download the fonts from Google Fonts, visit:


 

For more information, visit the AU School Handwriting GitHub page.

 

To see how these fonts are used to teach children to write, visit the New South Wales Department of Education’s Handwriting guide for parents.

 

Posted by Susanna Zaraysky, Google Fonts Content Strategist

How we’re improving search results when you use quotes

Sometimes people know they absolutely, positively only want webpages that mention a particular word or phrase. For example, maybe you want to find out about phone chargers but only those that support wireless charging. Fortunately, Google Search has a special operator for that: quotation marks. Put quotes around any word or phrase, such as [“wireless phone chargers”], and we’ll only show pages that contain those exact words or phrases.

Now we’re making quoted searches better. The snippets we display for search results (meaning the text you see describing web content) will be formed around where a quoted word or phrase occurs in a web document. That means you can more easily identify where to find the quoted text after you click the link and visit the page. On desktop, we’ll also bold the quoted material.

For example, if you did a search such as [“google search”], the snippet will show where that exact phrase appears:

Picture of Google search results for ["google search"] showing two listings and how the words "google search" are bolded in the snippets for each listing.

In the past, we didn’t always do this because sometimes the quoted material appears in areas of a document that don’t lend themselves to creating helpful snippets. For example, a word or phrase might appear in a menu item of a page, the kind you’d use to navigate to different sections of the site. Creating a snippet around sections like that might not produce an easily readable description.

We’ve heard feedback that people doing quoted searches value seeing where the quoted material occurs on a page, rather than an overall description of the page. Our improvement is designed to help address this.

Things to keep in mind about quoted searches

For those doing quoted searches, here are some more tips, along with caveats on how quoted searching works.

Quoted searches may match content not readily visible on a page. As referenced above, sometimes quoted searches match content contained within a web page that isn’t readily visible, making it seem like the content isn’t on the page when it actually is present.

For example, content in a meta description tag is looked at for matches, even though that content isn’t visible on the web page itself. ALT text that describes images is considered, as is the text within a page’s URL. Material brought in through inline frames (iframes) is also matched. Google may also see content that doesn’t initially load on a page when you go to it, such as content rendered through JavaScript that only appears if you click to make it display.
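To make this concrete, here is a minimal Python sketch. It is an illustration only, not how Google Search works internally; the page content, the URL and the helper names are made up. It pulls the kinds of non-visible text mentioned above out of a page and checks whether a quoted phrase appears in any of it:

```python
# Illustration: a quoted phrase can be matchable even when it never appears
# as visible body copy, because it lives in a meta description, image ALT
# text, or the page URL. Page content and URL below are hypothetical.
from html.parser import HTMLParser

PAGE_URL = "https://example.com/wireless-phone-chargers"
HTML = """
<html>
  <head><meta name="description" content="Our guide to wireless phone chargers."></head>
  <body>
    <img src="pad.jpg" alt="wireless phone chargers on a desk">
    <p>Charging pads compared.</p>
  </body>
</html>
"""

class HiddenTextCollector(HTMLParser):
    """Collects text that is in the HTML but not shown as visible body copy."""
    def __init__(self):
        super().__init__()
        self.hidden = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "meta" and attrs.get("name") == "description":
            self.hidden.append(attrs.get("content", ""))
        if tag == "img" and attrs.get("alt"):
            self.hidden.append(attrs["alt"])

collector = HiddenTextCollector()
collector.feed(HTML)

phrase = "wireless phone chargers"
sources = collector.hidden + [PAGE_URL.replace("-", " ")]
print(any(phrase in text.lower() for text in sources))  # True: matchable, but not visible on the page
```

If a quoted phrase only lives in sources like these, it can match in Search even though browsing the page normally never shows it.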

Pro tip: Sometimes people use the standard Find command in a browser to jump to the phrase they want, after arriving on a page. If that doesn’t work, though, you can try using a developer tools option. For instance, in Chrome, you can search from within Developer Tools to match against all rendered text, which would include the text in drop-down menus and other areas of the site.

Pages may have changed since Google last visited them. While Google revisits pages across the web regularly, they can change in between visits. This means quoted material might appear on a page when we saw it, but it no longer exists on the current page. If available, viewing the Google cached copy may show where the quoted content appeared on the version of the page we visited.

Quoted terms may only appear in title links and URLs. Quoted terms won’t appear in web page snippets if they only appear within title links or URLs of a web page. We also do not bold matches that happen in title links and URLs.

Punctuation is sometimes seen as spaces. Our systems see some punctuation as spaces, which impacts quoted searches. For example, a search for [“don’t doesn’t”] tells our systems to find content that contains all these letters in this order:

don t doesn t

As a result, we’ll match content like the examples below (see the short sketch after this list), where punctuation like commas or hyphens breaks up words; when you remove the punctuation, the letter patterns are the same:

  • don’t, doesn’t
  • don’t / doesn’t
  • don’t - doesn’t
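Here is the short sketch referenced above, a minimal Python illustration (not Google's actual matching logic) of treating punctuation as a word break and then requiring the quoted tokens to appear contiguously and in order:

```python
import re

def tokens(text: str) -> list:
    """Lower-case and split on anything that isn't a letter or digit, so
    punctuation such as apostrophes, commas, or hyphens acts like a space."""
    return [t for t in re.split(r"[^a-z0-9]+", text.lower()) if t]

def quoted_match(query: str, content: str) -> bool:
    """True if the query's tokens appear contiguously, in order, in the content."""
    q, c = tokens(query), tokens(content)
    return any(c[i:i + len(q)] == q for i in range(len(c) - len(q) + 1))

print(quoted_match("don’t doesn’t", "don’t, doesn’t"))     # True
print(quoted_match("don’t doesn’t", "don’t - doesn’t"))    # True
print(quoted_match("don’t doesn’t", "doesn’t ... don’t"))  # False: order matters
```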

Snippets might not show multiple quoted terms. If a search involves multiple quoted terms, the snippet may not show all of them if they are far apart from each other. Similarly, if quoted material appears several times on a page, a snippet will show what seems to be the most relevant occurrence.

We mainly bold quoted content for web page snippets on desktop. Bolding won’t appear in snippets for recipe or video boxes, and it also won’t appear when using some special modes such as image or news search. However, anything listed in these boxes or special modes will contain the quoted terms. Bolding also doesn’t work for mobile results.

Quoted searches don’t work for local results. Quote restriction does not work for results in our local box where listings usually appear with a map; we’ll be looking more at this area in the future.

To quote or not to quote?

Using quotes can definitely be a great tool for power users. We generally recommend first doing any search in natural language without resorting to operators like quotation marks. Years ago, many people used operators because search engines sometimes needed additional guidance. Things have advanced since then, so operators are often no longer necessary.

By default, our systems are designed to look for both the exact words and phrases entered and related terms and concepts, which is often useful. If you use a quoted search, you might miss helpful content that uses closely related words.

Of course, there are those times when the exact word being on a page makes all the difference. For those situations, quoted searches remain available and are now even better.

Source: Search


Chrome Beta for Android Update

Hi everyone! We've just released Chrome Beta 105 (105.0.5195.17) for Android. It's now available on Google Play.

You can see a partial list of the changes in the Git log. For details on new features, check out the Chromium blog, and for details on web platform updates, check here.

If you find a new issue, please let us know by filing a bug.

Krishna Govind
Google Chrome

How a love of art and engineering led Nichole to YouTube

Welcome to the latest edition of “My Path to Google,” where we talk to Googlers, interns, apprentices and alumni about how they got to Google, what their roles are like and even some tips on how to prepare for interviews.

Today’s post is all about Nichole Lasater, a software engineer at YouTube, whose background in both art and engineering led her to Google.

How did you first get interested in software engineering?

I originally planned to study veterinary medicine, but I took a computer science course in college (practically on a whim) and fell in love with software engineering. After graduating with a degree in computer science and game design, I built video games with a group of my former classmates before joining Google in 2019.

What do you do here at Google?

I've worked on a few different teams at YouTube, including Trust and Safety and YouTube Kids Web. Right now, I work for YouTube on TV, where I help bring YouTube to living room devices, game consoles and all sorts of entertainment systems. It’s inspiring to work on a product that so many people (including myself) use every day. I also have a background in art — I grew up painting and took digital art classes in college — and I like how this role allows me to bring that passion into my work.

Tell us more about how you bring art into your engineering work.

I care a lot about user experience and user interface (UI). I've helped several Google teams revamp their internal websites using Material Design, a set of design tools and best practices from Google. I even built a brand identity for an internal tool — I came up with a color scheme, typography and iconography to help it look and feel more like a modern app. All these projects helped me flex both my technical and design skills and gave my teammates a better experience using these resources.

Anything you wish you’d known when you started the recruiting process?

I have a very different background from many of my teammates — I grew up studying art, planned to major in microbiology and didn’t write any code until college. I was concerned that I wasn’t as knowledgeable as my peers and that I wouldn’t be taken seriously as a software engineer. But I’ve found the opposite is true. My recruiter shared that my background in both art and engineering actually helped me stand out in the interview process. And my team values the unique perspective I bring to this role. I’m not only building products and writing code, I’m helping them look good too.

What did you learn from your job search?

I applied to every opportunity I spotted, even if it wasn’t something I was entirely interested in. Every application was worth the practice. I sent out many more resumes than I got interviews — but looking back, I’m OK with that. It helped me build my confidence and made me less afraid of rejection.

Any tips to share with aspiring Googlers?

I was really afraid at first. I was scared that I wouldn’t fit in since I didn’t have a coding background. But I’ve learned that if something fascinates you, whether it’s art or software engineering, just go for it. Anyone who is passionate and genuinely enjoys the work can be successful. You will find your community.

Building Efficient Multiple Visual Domain Models with Multi-path Neural Architecture Search

Deep learning models for visual tasks (e.g., image classification) are usually trained end-to-end with data from a single visual domain (e.g., natural images or computer generated images). Typically, an application that completes visual tasks for multiple domains would need to build a separate model for each domain, train them independently (meaning no data is shared between domains), and then at inference time each model would process domain-specific input data. However, the early layers of these models generate similar features, even for different domains, so it can be more efficient (lower latency and power consumption, and less memory overhead for storing each model's parameters) to jointly train multiple domains, an approach referred to as multi-domain learning (MDL). Moreover, an MDL model can also outperform single-domain models due to positive knowledge transfer, which is when additional training on one domain actually improves performance for another. The opposite, negative knowledge transfer, can also occur, depending on the approach and specific combination of domains involved. While previous work on MDL has proven the effectiveness of jointly learning tasks across multiple domains, it involved a hand-crafted model architecture that is inefficient to apply to other work.

In “Multi-path Neural Networks for On-device Multi-domain Visual Classification”, we propose a general MDL model that can: 1) achieve high accuracy efficiently (keeping the number of parameters and FLOPS low), 2) learn to enhance positive knowledge transfer while mitigating negative transfer, and 3) effectively optimize the joint model while handling various domain-specific difficulties. To that end, we propose a multi-path neural architecture search (MPNAS) approach to build a unified model with a heterogeneous network architecture for multiple domains. MPNAS extends the efficient neural architecture search (NAS) approach from single-path search to multi-path search by finding an optimal path for each domain jointly. We also introduce a new loss function, called adaptive balanced domain prioritization (ABDP), that adapts to domain-specific difficulties to help train the model efficiently. The MPNAS approach is efficient and scalable, and the resulting model maintains performance while reducing the model size and FLOPS by 78% and 32%, respectively, compared to a single-domain approach.

Multi-Path Neural Architecture Search
To encourage positive knowledge transfer and avoid negative transfer, traditional solutions build an MDL model so that domains share most of the layers that learn the shared features across domains (called feature extraction), then have a few domain-specific layers on top. However, such a homogeneous approach to feature extraction cannot handle domains with significantly different features (e.g., objects in natural images and art paintings). On the other hand, handcrafting a unified heterogeneous architecture for each MDL model is time-consuming and requires domain-specific knowledge.

NAS is a powerful paradigm for automatically designing deep learning architectures. It defines a search space, made up of various potential building blocks that could be part of the final model. The search algorithm finds the best candidate architecture from the search space that optimizes the model objectives, e.g., classification accuracy. Recent NAS approaches (e.g., TuNAS) have meaningfully improved search efficiency by using end-to-end path sampling, which enables us to scale NAS from single domains to MDL.

Inspired by TuNAS, MPNAS builds the MDL model architecture in two stages: search and training. In the search stage, to find an optimal path for each domain jointly, MPNAS creates an individual reinforcement learning (RL) controller for each domain, which samples an end-to-end path (from input layer to output layer) from the supernetwork (i.e., the superset of all the possible subnetworks between the candidate nodes defined by the search space). Over multiple iterations, all the RL controllers update the path to optimize the RL rewards across all domains. At the end of the search stage, we obtain a subnetwork for each domain. Finally, all the subnetworks are combined to build a heterogeneous architecture for the MDL model, shown below.
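As a rough intuition for the search stage, the sketch below is a deliberately simplified Python illustration, not the paper's implementation: the block names, the reward stub, and the update rule are assumptions. Each domain's controller holds per-layer sampling probabilities over candidate blocks, samples an end-to-end path, and is nudged toward blocks that appear on well-rewarded paths.

```python
# Simplified sketch of multi-path search: one controller per domain samples a
# path through a shared supernetwork; controllers are updated from a reward.
import random

NUM_LAYERS = 4
CANDIDATE_BLOCKS = ["conv3x3", "conv5x5", "dwbottleneck", "skip"]  # illustrative
DOMAINS = ["imagenet", "textures"]

# Each domain's "controller" here is just per-layer sampling probabilities.
controllers = {
    d: [{b: 1.0 / len(CANDIDATE_BLOCKS) for b in CANDIDATE_BLOCKS} for _ in range(NUM_LAYERS)]
    for d in DOMAINS
}

def sample_path(controller):
    """Sample one candidate block per layer, forming an end-to-end path."""
    path = []
    for layer in controller:
        blocks = list(layer)
        weights = [layer[b] for b in blocks]
        path.append(random.choices(blocks, weights=weights)[0])
    return path

def reward_for(domain, path):
    """Placeholder for briefly training the sampled subnetwork on the domain
    and returning its validation accuracy (hypothetical stub)."""
    return random.random()

def update(controller, path, reward, lr=0.1):
    """Crude stand-in for an RL (REINFORCE-style) controller update: raise the
    probability of blocks that appeared on a well-rewarded path."""
    for layer, block in zip(controller, path):
        layer[block] += lr * reward
        total = sum(layer.values())
        for b in layer:
            layer[b] /= total

for _ in range(200):  # search iterations
    for domain in DOMAINS:
        path = sample_path(controllers[domain])
        update(controllers[domain], path, reward_for(domain, path))

# After search, the most probable block per layer forms each domain's subnetwork;
# the union of selected blocks across domains is the heterogeneous MDL model.
final_paths = {d: [max(layer, key=layer.get) for layer in controllers[d]] for d in DOMAINS}
print(final_paths)
```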

Since the subnetwork for each domain is searched independently, the building block in each layer can be shared by multiple domains (i.e., dark gray nodes), used by a single domain (i.e., light gray nodes), or not used by any subnetwork (i.e., dotted nodes). The path for each domain can also skip any layer during search. Given that each subnetwork can freely select which blocks to use along the path in a way that optimizes performance (rather than, e.g., arbitrarily designating which layers are homogeneous and which are domain-specific), the output network is both heterogeneous and efficient.

Example architecture searched by MPNAS. Dashed paths represent all the possible subnetworks. Solid paths represent the selected subnetworks for each domain (highlighted in different colors). Nodes in each layer represent the candidate building blocks defined by the search space.

The figure below demonstrates the searched architecture of two visual domains among the ten domains of the Visual Domain Decathlon challenge. One can see that the subnetworks of these two highly related domains (one red, the other green) share a majority of building blocks along their overlapping paths, but there are still some differences.

Architecture blocks of two domains (ImageNet and Describable Textures) among the ten domains of the Visual Domain Decathlon challenge. The red and green paths represent the subnetworks of ImageNet and Describable Textures, respectively. Dark pink nodes represent the blocks shared by multiple domains. Light pink nodes represent the blocks used by each path. The model is built based on a MobileNet V3-like search space. The “dwb” block in the figure represents the dwbottleneck block. The “zero” block in the figure indicates the subnetwork skips that block.

Below we show the path similarity between domains among the ten domains of the Visual Domain Decathlon challenge. The similarity is measured by the Jaccard similarity score between the subnetworks of each domain, where higher means the paths are more similar. As one might expect, domains that are more similar share more nodes in the paths generated by MPNAS, which is also a signal of strong positive knowledge transfer. For example, the paths for similar domains (like ImageNet, CIFAR-100, and VGG Flower, which all include objects in natural images) have high scores, while the paths for dissimilar domains (like Daimler Pedestrian Classification and UCF101 Dynamic Images, which include pedestrians in grayscale images and human activity in natural color images, respectively) have low scores.

Confusion matrix for the Jaccard similarity score between the paths for the ten domains. Score value ranges from 0 to 1. A greater value indicates two paths share more nodes.
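For reference, the path-similarity score itself is straightforward to compute. The sketch below (with made-up block choices) treats each domain's subnetwork as a set of (layer, block) selections and applies the Jaccard similarity:

```python
# Jaccard similarity between two subnetwork paths. Block choices are
# illustrative, not taken from the searched architectures.
def jaccard(path_a: set, path_b: set) -> float:
    """|A intersect B| / |A union B|: 1.0 means identical paths, 0.0 means nothing shared."""
    if not path_a and not path_b:
        return 1.0
    return len(path_a & path_b) / len(path_a | path_b)

imagenet_path = {(0, "conv3x3"), (1, "dwbottleneck"), (2, "dwbottleneck"), (3, "conv5x5")}
textures_path = {(0, "conv3x3"), (1, "dwbottleneck"), (2, "conv5x5"), (3, "conv5x5")}

print(jaccard(imagenet_path, textures_path))  # 0.6: the two domains share 3 of 5 distinct blocks
```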

Training a Heterogeneous Multi-domain Model
In the second stage, the model resulting from MPNAS is trained from scratch for all domains. For this to work, it is necessary to define a unified objective function for all the domains. To successfully handle a large variety of domains, we designed an algorithm that adapts throughout the learning process such that losses are balanced across domains, called adaptive balanced domain prioritization (ABDP).
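The post does not reproduce the ABDP formula, so the sketch below is only a hedged illustration of the underlying idea: weight each domain's loss by how difficult the domain currently is, so that no single domain dominates the joint objective.

```python
# Hedged sketch of difficulty-adaptive loss balancing. This is NOT the ABDP
# formula from the paper; it only illustrates weighting each domain's loss by
# its current difficulty when forming the unified objective.
def combined_loss(domain_losses: dict) -> float:
    """Weight each per-domain loss by its share of the total loss, so domains
    that are currently harder (higher loss) are prioritized during training."""
    total = sum(domain_losses.values())
    weights = {d: loss / total for d, loss in domain_losses.items()}
    return sum(weights[d] * domain_losses[d] for d in domain_losses)

# The harder domain contributes more to the joint objective:
print(combined_loss({"imagenet": 2.0, "textures": 0.5}))  # 1.7
```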

Below we show the accuracy, model size, and FLOPS of the model trained in different settings. We compare MPNAS to three other approaches:

  • Domain independent NAS: Searching and training a model for each domain separately.
  • Single path multi-head: Using a pre-trained model as a shared backbone for all domains with separated classification heads for each domain.
  • Multi-head NAS: Searching a unified backbone architecture for all domains with separated classification heads for each domain.

From the results, we can observe that domain independent NAS requires building a bundle of models for each domain, resulting in a large model size. Although single path multi-head and multi-head NAS can reduce the model size and FLOPS significantly, forcing the domains to share the same backbone introduces negative knowledge transfer, decreasing overall accuracy.

Model                      Number of parameters ratio   GFLOPS   Average Top-1 accuracy (%)
Domain independent NAS     5.7x                         1.08     69.9
Single path multi-head     1.0x                         0.09     35.2
Multi-head NAS             0.7x                         0.04     45.2
MPNAS                      1.3x                         0.73     71.8

Number of parameters, gigaFLOPS, and Top-1 accuracy (%) of MDL models on the Visual Decathlon dataset. All methods are built based on the MobileNetV3-like search space.

MPNAS can build a small and efficient model while still maintaining high overall accuracy. The average accuracy of MPNAS is even 1.9% higher than the domain independent NAS approach since the model enables positive knowledge transfer. The figure below compares per domain top-1 accuracy of these approaches.

Top-1 accuracy of each Visual Decathlon domain.

Our evaluation shows that top-1 accuracy is improved from 69.96% to 71.78% (delta: +1.81%) by using ABDP as part of the search and training stages.

Top-1 accuracy for each Visual Decathlon domain trained by MPNAS with and without ABDP.

Future Work
We find MPNAS is an efficient solution to build a heterogeneous network to address the data imbalance, domain diversity, negative transfer, domain scalability, and large search space of possible parameter sharing strategies in MDL. By using a MobileNet-like search space, the resulting model is also mobile friendly. We are continuing to extend MPNAS for multi-task learning for tasks that are not compatible with existing search algorithms and hope others might use MPNAS to build a unified multi-domain model.

Acknowledgements
This work is made possible through a collaboration spanning several teams across Google. We’d like to acknowledge contributions from Junjie Ke, Joshua Greaves, Grace Chu, Ramin Mehran, Gabriel Bender, Xuhui Jia, Brendan Jou, Yukun Zhu, Luciano Sbaiz, Alec Go, Andrew Howard, Jeff Gilbert, Peyman Milanfar, and Ming-Hsuan Yang.

Source: Google AI Blog


Firebase Stories: Celebrating our developer community

Posted by Akua Prempeh, Developer Marketing

When we ask you what you like best about Firebase, a lot of you tell us it’s the community that makes Firebase special. We are excited to highlight developers in the community who are using Firebase in their apps through a new series called Firebase Stories.

Firebase Stories celebrates developers whose apps are helping promote positive change in their communities. Starting today, and over the coming months, you'll hear from developers and founders from around the world about their app development journeys. Additionally, these developers will demo how they are using Firebase tools in their projects so you can apply Firebase to your own apps. Everyone can watch the demos together and chat with both the developers and members of the Firebase team along the way. We’ll also share guided codelabs on these Firebase features so you can get hands-on experience with them. Stay tuned for more details!

Lastly, we’d love to hear from you too. Use the hashtag #FirebaseStories on your social channels to share how Firebase empowers you throughout your app development journey. We will regularly select and share some stories on our channels.

To learn more about this campaign, visit our website, follow us on Twitter and subscribe to the Firebase YouTube channel.