Getting to know a research intern: Cathin Wong

Google Research tackles the most challenging problems in CS and related fields. Being bold and taking risks is essential to what we do, and research teams are embedded throughout Google, allowing our discoveries to affect billions of users each day.

The compelling benefit for researchers is that their innovations can be implemented quickly and at scale. Google’s unique infrastructure speeds ideas to market, allowing them to be trialled by millions of users before a paper is even published.

Today we’re talking to Cathin Wong, a former Research intern. Read on!
Left: Cathin; Right: her fellow intern

Can you tell us about yourself and your master’s topic?
I’m a master’s student at Stanford University, where I’m a part of the Computational Vision and Geometry Lab — I actually just joined this October, and I’m working on projects related to semantic segmentation. I also studied at Stanford as an undergrad, and previously I worked under Sebastian Thrun with Andre Esteva and Brett Kuprel on deep learning for skin cancer detection. So I work on a lot of vision projects, and I’m especially interested in projects that lie at the intersection of machine learning and healthcare. I’m also really interested in human cognition! I loved reading books by Oliver Sacks and other neuroscientists as a kid, but when I first started in computer science, I never considered that there would be much of a direct overlap where I’d get to actually mess around in both fields. Within artificial intelligence research, though, it seems like we still have a lot to learn from actual human brains.

How did you get to work in this area?
There’s this class at Stanford, CS231N, on deep learning for computer vision. On the very first day, I remember that the professor who co-taught the class — Dr. Fei-Fei Li — went through this presentation, and one of the slides was about how the initial layers of convolutional neural networks learn basic edge-detecting filters that closely parallel the edge detectors found in cat and human visual cortices, suggesting a deeper, more fundamental connection between these two vision systems. I thought that was insane, and also insanely cool. I joined Sebastian Thrun’s lab a little later, and have been working on AI research since then.

Why did you apply for an internship at Google, and how supportive was your master’s advisor?
I’d heard really great things about research at Google, and even in my classes and labs, read lots of very impressive work coming from teams in Mountain View, London, and Zurich. I was hoping to get a better sense of what research looks like outside of an academic setting, and the scope of projects and expertise was a huge draw. Also, zillions of GPUs.

My master’s advisor at Stanford is Dan Jurafsky, who is the man. He’s a computer scientist and linguist, has written a book about the language of food, and is basically the best, as far as I’m concerned. He was super supportive.

What project was your internship focused on?
I worked under Andrea Gesmundo from the Applied Machine Intelligence team on Multitask Neural Model Search, a framework that automates deep learning architecture design using reinforcement learning. This work builds on the Neural Architecture Search research done by Barret Zoph and Quoc Le from the Google Brain team; that framework was one of the first to successfully apply reinforcement learning to automatically generate convolutional neural networks.

Our project focused on extending that framework so that it could automatically design architectures for multiple tasks simultaneously: for example, designing one model that worked well for sentiment analysis and, at the same time, another that worked well for language identification.

We then showed that the framework itself could transfer learn, so that knowledge gained from designing architectures for previous tasks could be reused in totally new, unseen settings. When humans design machine learning models, we don’t start completely from scratch every time: we draw on general design patterns we’ve observed before, and we remember which models did and didn’t work on similar tasks in the past. This research takes a step toward doing the same thing in automated model design.
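[Editor’s note: for readers curious what reinforcement-learning-driven architecture search looks like in code, here is a toy sketch of the core controller loop: sample an architecture, score it, and nudge the controller toward higher-scoring choices. It is deliberately simplified and is not the team’s Multitask Neural Model Search implementation; the search space, reward function, and tabular controller below are invented purely for illustration.]

```python
# Toy REINFORCE-style architecture search (illustrative only).
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical search space: two independent decisions per architecture.
SEARCH_SPACE = {
    "num_layers": [1, 2, 4],
    "hidden_units": [32, 64, 128],
}

# Controller "policy": one logit vector per decision. (A real controller
# would be an RNN conditioned on earlier choices and, in the multitask
# setting, on a task representation.)
logits = {k: np.zeros(len(v)) for k, v in SEARCH_SPACE.items()}

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def sample_architecture():
    """Sample one architecture; return the chosen option index per decision."""
    return {k: rng.choice(len(v), p=softmax(logits[k]))
            for k, v in SEARCH_SPACE.items()}

def reward(choices):
    """Stand-in for 'train the child model and return validation accuracy'.
    Here we simply pretend deeper and wider is better, plus noise."""
    arch = {k: SEARCH_SPACE[k][i] for k, i in choices.items()}
    return (0.5 + 0.1 * np.log2(arch["num_layers"])
                + 0.05 * np.log2(arch["hidden_units"] / 32)
                + rng.normal(0, 0.02))

baseline, lr = 0.0, 0.1
for step in range(200):
    choices = sample_architecture()
    r = reward(choices)
    baseline = 0.9 * baseline + 0.1 * r      # moving-average baseline
    advantage = r - baseline
    for k, idx in choices.items():
        grad = -softmax(logits[k])           # d log p(idx) / d logits
        grad[idx] += 1.0
        logits[k] += lr * advantage * grad   # REINFORCE update

best = {k: SEARCH_SPACE[k][int(np.argmax(logits[k]))] for k in SEARCH_SPACE}
print("Controller's preferred architecture:", best)
```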

Did you publish at Google during your internship?
Yes! We submitted our work to ICML, where it’s currently under review (so fingers crossed). The pre-print is also up on arXiv.

How closely connected was the work you did during your internship to your master’s topic?
Although Andrea and I discussed a bunch of project ideas in the months before the internship, this project was actually a chance to try something fairly different from my master’s research at Stanford. For me, at least, that turned out to be one of the best things about this internship — I really loved the chance to explore a very different aspect of AI research, especially one that benefited from the guidance and computational resources available within Google, and I left with a much deeper interest in reinforcement learning that I’ve continued to explore back at Stanford.

Did you write your own code?
Heck, yeah! And then I deployed it with enormous care across tons of GPUs. One really awesome thing about interning, though, is the chance to build on the collaborative engineering effort of other incredibly talented engineers and researchers. I worked pretty closely with code that was being updated almost daily by researchers on the Brain team over in Mountain View, and that kind of cross-continental engineering work feels really neat.

This is your third internship at Google. What were the reasons to come back to Google Zurich?
Third time’s the charm? But actually, I’ve been lucky enough to work at a different office, on very different projects, during all three internships at Google — after my freshman year, I worked with the Glass team in Mountain View, and later I worked in New York on Google Classroom. Each time, I left with a much deeper understanding and appreciation for that particular field, and the care and expertise each of those teams brought to those particular domains. This summer, though, I wanted to come back to work on research in particular. Both of my previous internships had been very software engineering focused, and I was excited to work on AI research that more closely parallels the work I’m excited about at Stanford.

Also, Zurich! I’d never been to Switzerland before, and this summer one of my fellow interns and I took a train out to hike past the Matterhorn. She wisely remembered to bring along a Toblerone bar for comparison. The real thing is much more breathtaking (but a lot less chocolatey).
[Editor’s note: the photo referenced is the photo at the beginning of this post!]

What key skills have you gained from your time at Google?
My team held a weekly reading group, where we’d gather to read and discuss cutting-edge AI papers chosen by different members of the team. This turned out to be one of the very best experiences of the internship — it was incredibly helpful to step back and get a better sense of what’s happening within a very rapidly changing field. Listening to colleagues step through these papers helped me learn to more rigorously assess any given paper — to ask what the experiments really mean, and how its conclusions could generalize to our own current and future projects. Those are questions I’ve tried to ask of any work I’ve read since the summer. That commitment to keeping up with the very coolest things happening within the field also just serves to remind me, often, of what exactly I love about this work and how much there is left to tackle.

What impact has this internship experience had on your master’s?
A ton. I really enjoyed diving deeply into research that was largely outside of my own master’s expertise. So much is changing within reinforcement learning right now, and I’ve definitely brought back what I learned — and a sparked interest in related work — to my research here.

Looking back on your experiences now: why should a master’s student apply for an internship at Google? Any advice to offer?
There’s a kind of magical combination of people and resources that means you can work and learn so much within so short a time. Especially if you love research and haven’t yet done a PhD, like me, the internship offers that same rigor and breadth of very cool projects in a very compressed package.

When you’re here, definitely definitely ask questions. Talk to other people about their research, because it’s going to be very awesome and maybe even directly relevant. Join a reading group. Or start a reading group. And get someone to show you how to actually use the espresso machines. That milk frothy thingie? Life changing.

Google Pay’s got your transit ticket, starting in Las Vegas

Crowded public transportation can completely derail your day—especially when you're standing in line to buy a ticket and the train whizzes by. But the next time you’re traveling around Las Vegas, you can skip the line and get there faster with Google Pay. Today, we’re launching mobile tickets for the Las Vegas Monorail, which is powered by NXP’s MIFARE contactless technology. Now you’ll be able to purchase your ticket online, save it to Google Pay instantly, and use your phone to ride—no need to open the app.


The Las Vegas Monorail is the first transit agency where you can use prepaid tickets or passes with Google Pay instead of a credit or debit card, and the feature is coming to more cities soon. Once you’ve saved your ticket, you’ll find info in the app to guide you along your journey—you can see recent transactions, trips, or the location of the nearest Monorail station.


Ready to give it a go? Make sure you have the latest version of Google Pay, then purchase a ticket on the Las Vegas Monorail site and save it to the app. If you bought your ticket on a mobile device, you’re ready to ride! Just hold your phone near the fare gate. Once you see a check mark, you’re good to go.

Launchpad Accelerator Africa: growing a community of startup influencers in Africa

Last year at Google for Nigeria, Google’s CEO Sundar Pichai announced the Launchpad Accelerator Africa program, which includes over $3 million in equity-free support for more than 60 African tech startups over three years, including mentorship, working space, and access to technology and startup experts from Google and our external communities all over the world.

Launchpad Accelerator Africa is based on Google’s global Launchpad Accelerator program, tailored to the African market. Nine African startups have participated to date in Launchpad Accelerator, the global accelerator for growth-stage startups in Silicon Valley. We are delighted to now bring Launchpad to Africa, to benefit African startups on their own continent, and we wish the first Launchpad Accelerator Africa class all the best for the program and the future.
In November 2017 we opened applications for the first class of Launchpad Accelerator Africa, and we’re proud to announce that the first class begins today. This inaugural class includes 12 startups from six countries: Ghana, Kenya, Nigeria, South Africa, Tanzania, and Uganda. The startups for Class 1 are:
  • Babymigo (Nigeria) - a trusted social community for expecting mothers and young parents.
  • Flexpay (Kenya) - an automated and secured layaway e-commerce system.
  • Kudi (Nigeria) - payment for Africa through messaging. 
  • OkadaBooks (Nigeria) - a social platform that allows users to easily create, distribute and sell their stories/books/documents in a matter of minutes.
  • OMG Digital (Ghana) - a media platform which produces hyper-local, engaging and entertaining content that African millennials love to consume and share.
  • Pezesha (Kenya) - a scalable peer-to-peer microlending marketplace that allows Kenyans to lend to Kenyans via mobile money, using big data and credit analytics.
  • Piggybank.ng (Nigeria) - allows Africans to put aside small amounts of money periodically until they reach a savings target.
  • Riby (Nigeria) - a peer-to-peer banking platform for cooperatives and their members that allows them to save, borrow and invest, together.
  • swiftVEE (South Africa) - a platform for connecting livestock agencies to a network of buyers and sellers.
  • TangoTv (Tanzania) - a media streaming and video-on-demand service for local African content: films and shows.
  • Teheca (Uganda) - helps families and individuals find the right health care providers/workers in Uganda.
  • Thrive Agric (Nigeria) - crowdfunds investment for smallholder farmers and provides it to them in the form of inputs, tech-driven advisory, and access to markets.
Google is committed to the Sub-Saharan Africa developer and startup ecosystem, and has hosted 13 Launchpad Build and Start events across Kenya, Nigeria and South Africa since April 2016, featuring 228 speakers and mentors, engaging 590 attendees from local startups in each country.

Google also supports developer communities across Sub-Saharan Africa, including Google Developer Groups and Women Techmakers, providing training and support for developers aligned with real-life job competency requirements. Community groups engage in activities like Study Jams: study groups facilitated by developers, for developers. Today there are over 120 active developer communities across 25 countries in Sub-Saharan Africa. As part of their activities, 61 of these groups hosted 81 Study Jams for mobile web and Google Cloud developers in 10 countries, reaching over 5,000 developers in the last year.

Posted by Andy Volk, Head of Sub-Saharan Africa Developer Ecosystem and Fola Olatunji-David, Head of Launchpad Accelerator Africa Startup Success and Services



Help shoppers take action, wherever and however they choose to shop

Today’s consumers don’t just want answers; more and more, they’re craving relevant, meaningful, and immediate assistance in completing their day-to-day shopping tasks. We see this in our data: mobile searches for “where to buy” grew over 85% over the past two years.[1] Moreover, 44% of those who use their voice-activated speaker at least weekly say they use the device to order products they need, like groceries and household items, at least once a week.[2]

It’s clear that people want helpful, personal, and frictionless interactions that allow them to shop wherever and however they want -- from making decisions on what to buy, to building baskets, to checking out more quickly than ever before. Put simply, they want an easier way to get their shopping tasks done.

Introducing Shopping Actions

That’s why we’re introducing our Shopping Actions program. It gives customers an easy way to shop your products on the Google Assistant and Search with a universal cart, whether they’re on mobile, desktop or even a Google Home device.

By participating in the Shopping Actions program, you’ll be able to:
  • Surface your products on new platforms like the Google Assistant with voice shopping. Leverage our deep investments in machine learning, AI and natural language processing to offer your customers a hands-free, voice-driven shopping experience. 
  • Help your customers shop effortlessly with you, across Google. A shareable list, universal shopping cart and instant checkout with saved payment credentials work across Google.com, Google Express, and the Google Assistant -- allowing your customers to seamlessly turn browsing into buying. For example, shopper Kai can do a search on Google for moisturizing hand soap, see a listing for up & up brand soap from Target, and add it to a Google Express cart. Later, in the kitchen, Kai can reorder foil by voice, add it to the same cart using Google Home, and purchase all items at once through a Google-hosted checkout flow.
  • Increase loyalty and engagement with your highest value customers. 1-click re-ordering, personalized recommendations, and basket-building turn one-time shoppers into repeat customers. If Kelly does a search for “peach blush,” for example, and she has opted to link her Google account with her Ultamate Rewards status, we’ll recognize this and surface relevant blush results as well as related items -- like makeup brushes -- from Ulta Beauty to help her build a basket with her preferred retailer. If we know she purchases makeup remover monthly, we’ll surface the same brand of makeup remover to her right when she has the highest intent to re-order.
In addition, Shopping Actions uses a pay-per-sale model, meaning you only pay when a sale actually takes place.
Three properties where transactable inventory will surface

Early retail partners are realizing results

Across the board, our focus on surfacing highly relevant offers has not only delivered a positive experience to consumers, but has helped our retail partners as well. Early testing indicates that participating retailers on average see an increase in total conversions at a lower cost, compared to running Shopping ads alone.[3] We have also seen an approximately 30% average increase in basket size for merchants participating in Shopping Actions.[4]

Furthermore, we partnered with Mastercard on a pre/post study among a representative subset of five Shopping Actions merchants. We learned that after using Shopping Actions, customers spent more with that group of merchants over the four-month post period.[5]

Target and Google have a long-standing partnership, and a history of innovating together to make shopping easier and more inspiring. Target was one of the first retailers to test Google Express (now Shopping Actions), and last year expanded the offering nationwide. Over the last six months, Target has seen the size of guests’ Express baskets increase by nearly 20%, with strong adoption in new markets.

“Our guests love the ease and convenience of making their Target Run without lifting a finger by using voice interface. And since the orders are shipped from a nearby Target store, they’ll have their items delivered to their home in just two days,” said Mike McNamara, Target’s Chief Information and Digital Officer. “This is just the beginning for Target and Google. Through our partnership, we’ll continue to add new benefits to help guests save time and money. Shoppers will soon be able to link their Target.com and Google accounts, creating a more personalized and intuitive shopping experience. And later this year, Target guests will be able to use Target’s REDcard when shopping through Google, providing 5 percent off purchases and free shipping.”

Ulta Beauty has seen improvements in loyalty and customer engagement on Google Express, with average order value (AOV) increasing 35% since 2016. “Ulta Beauty looks to offer a seamless shopping experience to our guests wherever they are – in-store or online at Ulta.com. As a long-term strategic partner, Google helps us build bridges between the digital and physical experiences we offer, with last-mile fulfillment that leverages our stores and expands our inventory across the Google Assistant and Search,” said Mary Dillon, CEO of Ulta Beauty. “When our guests buy through Shopping Actions, they can enjoy the immediate gratification they experience in-store and are coming to expect online. They also spend more with us and enjoy other benefits of being an Ulta Beauty shopper, like earning loyalty points through our award-winning Ultamate Rewards program.”

Floral and gift retailer 1-800-Flowers.com sees Shopping Actions as a way to get back to its roots of delivering immediate, personalized customer service. “Our job is not to tell customers they have to call us or visit us in a certain way, but to actually be where the customers have chosen to be. If we can make it a one-stop shopping experience, we must,” said Amit Shah, CMO of 1-800-Flowers.com. “On Shopping Actions, you can buy something from Costco for yourself but at the same time deliver a gift from 1-800-Flowers.com to your niece who is graduating high school. From the customer's point of view, it provides a very seamless multi-channel and multi-mindset experience.”

Ready to get started?

To learn more about Shopping Actions, sign up through our interest form to be contacted with more information, or check out our Help Center.

We look forward to making the shopping journey more helpful, personal, and frictionless with you!

1. Google Data, US, Jan-Jun 2015 vs. Jan-Jun 2017.
2. Google/Peerless Insights, ‘Voice-Activated Speakers: People’s Lives Are Changing’, Aug. 2017, n=1,642, U.S. monthly active voice-activated speaker owners (Amazon Echo/Dot and Google Home), A18+.
3. Google internal data, Feb - March 2018.
4. Google internal data, Q1 2017 vs 2018 YTD.
5. Mastercard Pre/Post Analysis of Shopping Actions Performance, March 2018. Analysis conducted among Mastercard users of 5 high-frequency Shopping Actions merchants from 4/26/17 – 8/24/17 (pre) compared to 9/30/17 – 1/28/18 (post), among 660,171 existing merchant customers (shopped in the last 6 months), 703,848 new merchant customers (did not shop in the last 6 months), and the subset of Shopping Actions customers: 10,963 (pre) and 11,385 (post). Additional $38 spent at merchants among Google Express users in the 4-month post period.

Source: Inside AdWords

Dev Channel Update for Desktop

The dev channel has been updated to 67.0.3371.0 for Windows, Mac and Linux.

A partial list of changes is available in the log. Interested in switching release channels? Find out how. If you find a new issue, please let us know by filing a bug. The community help forum is also a great place to reach out for help or learn about common issues.

Krishna Govind
Google Chrome

The High Five: “A Brief History” of this week’s searches

Sifting through the week’s news can feel like sinking into a black hole. Luckily, we have some standout trends this week, gathered with data from Google News Lab. They start with a tribute to legendary physicist and black hole escape artist Stephen Hawking, who passed away Wednesday at age 76.

“Look up”
Stephen Hawking’s intelligence was a cut above the rest, in life and in Search: interest in “Stephen Hawking IQ” was 170 percent higher than “Stephen Hawking quote” over the past week. But of his many memorable quotes, here’s the most searched: “Look up at the stars and not down at your feet. Be curious. And however difficult life may seem, there is always something you can do and succeed at.”

Turbulent times
“What happened on United Airlines?” was a trending question this week. The company faced scrutiny after a French bulldog—the second most searched dog breed this week—suffocated in an overhead compartment and a pet German Shepherd was accidentally shipped to Japan. For those searching for canine breeds this week, Rhodesian Ridgebacks were top dog.

A cue from teens
Search interest in “walkout” has reached an all-time high in the U.S. this month. On Wednesday, students around the country participated in a walkout to call on elected officials to take action on gun laws—the top cities searching for “walkout” were Charlottesville, VA, Fort Smith, AR, and Madison, WI.

It’s bracket season 
March Madness is in full swing, especially for North Carolina, Duke and Kentucky fans, whose teams have been the most searched in the past week. The top-searched celebrity brackets are from basketball commentator Jay Bilas, former President Barack Obama, and Warren Buffett. And the winner is anyone’s guess: Michigan State, favored by both Bilas and Obama, wasn’t among the top 10 teams being searched this week.

Go green
Saturday marks St. Patrick’s Day and, in true spirit, corned beef and cabbage is the top trending St. Patrick’s Day recipe this week, followed by … jello shots 🤔. If you’re feeling lucky, you might be among those searching for lucky horseshoes, lucky cats and lucky clovers (the top searched “lucky” items in the past week). And although New York has the biggest parade and Boston the biggest reputation, the top states searching for the holiday are Connecticut, Kansas, and Delaware. Illinois, where Chicagoans annually dye their river green, comes in at number four.

Team Pixel is in bloom this spring

Our community of photographers is on the rise, and the #teampixel tribe is officially 35,000 members strong (and counting)! This week’s highlights range from colorful plum blossoms in Sakura, Japan, to a confetti-filled wedding.

If you’re looking for a daily dose of #teampixel photos, follow our feed on Instagram and keep spreading the loves and likes with fellow Pixel photographers.

Introducing Skaffold: Easy and repeatable Kubernetes development

As companies onboard to Kubernetes, one of their goals is to provide developers with an iteration and deployment experience that closely mirrors production. To help companies achieve this goal, we recently announced Skaffold, a command line tool that facilitates continuous development for Kubernetes applications. With Skaffold, developers can iterate on application source code locally while having it continually updated and ready for validation or testing in their local or remote Kubernetes clusters. Automating the development workflow saves time in development and increases the quality of the application through its journey to production.

Kubernetes provides operators with APIs and methodologies that increase their agility and facilitate reliable deployment of their software. Kubernetes takes bespoke deployment methodologies and provides programmatic ways to achieve similar, if not more robust, procedures. Kubernetes’ functionality helps operations teams apply common best practices like infrastructure as code, unified logging, immutable infrastructure and safer API-driven deployment strategies like canary and blue/green. Operators can now focus on the parts of infrastructure management that are most critical to their organizations, supporting high release velocity with a minimum of risk to their services.

But in some cases, developers are the last people in an organization to be introduced to Kubernetes, even as operations teams are well versed in the benefits of its deployment methodologies. Developers may have already taken steps to create reproducible packaging for their applications with Linux containers, like Docker. Docker allows them to produce repeatable runtime environments where they can define the dependencies and configuration of their applications in a simple and repeatable way. This allows developers to stay in sync with their development runtimes across the team; however, it doesn’t introduce a common deployment and validation methodology. For that, developers will want to use the Kubernetes APIs and methodologies that are used in production to create a similar integration and manual testing environment.

Once developers have figured out how Kubernetes works, they need to actuate Kubernetes APIs to accomplish their tasks. In this process they'll need to:
  1. Find or deploy a Kubernetes cluster 
  2. Build and upload their Docker images to a registry that's enabled in their cluster 
  3. Use the reference documentation and examples to create their first Kubernetes manifest definitions 
  4. Use the kubectl CLI or Kubernetes Dashboard to deploy their application definitions 
  5. Repeat steps 2-4 until their feature, bug fix or changeset is complete 
  6. Check in their changes and run them through a CI process that includes:
    • Unit testing
    • Integration testing
    • Deployment to a test or staging environment

Steps 2 through 5 require developers to use many tools via multiple interfaces to update their applications. Most of these steps are undifferentiated for developers and can be automated, or at the very least guided by a set of tools that are tailored to a developer’s experience.

Enter Skaffold, which automates the workflow for building, pushing and deploying applications. Developers can start Skaffold in the background while they're developing their code, and have it continually update their application without any input or additional commands. It can also be used in an automated context such as a CI/CD pipeline to leverage the same workflow and tooling when moving applications to production.
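To make that workflow concrete, here is a minimal sketch of what a skaffold.yaml might look like. The image name and manifest paths are placeholders, and the schema shown follows the early v1alpha1 format, which may change in later releases, so treat this as an illustration rather than a reference:

```yaml
# Illustrative skaffold.yaml (v1alpha1-era schema; all names are placeholders).
apiVersion: skaffold/v1alpha1
kind: Config
build:
  artifacts:
    # One artifact per Docker image built from local source.
  - imageName: gcr.io/your-project/example-app
deploy:
  kubectl:
    manifests:
      # Kubernetes manifests whose image references Skaffold keeps
      # pointed at the freshly built tag on each deploy.
    - paths:
      - k8s/*.yaml
```

With a file like this in place, running `skaffold dev` watches the source tree and re-runs the build/push/deploy cycle on every change, collapsing steps 2 through 5 above into a single background process.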

Skaffold features

Skaffold is an early phase open-source project that includes the following design considerations and capabilities:
  • No server-side components mean no overhead to your cluster. 
  • Allows you to detect changes in your source code and automatically build/push/deploy. 
  • Image tag management. Stop worrying about updating the image tags in Kubernetes manifests to push out changes during development. 
  • Supports existing tooling and workflows. Build and deploy APIs make each implementation composable to support many different workflows. 
  • Support for multiple application components. Build and deploy only the pieces of your stack that have changed. 
  • Deploy regularly when saving files, or run one-off deployments using the same configuration.


Skaffold has a pluggable architecture that allows you to choose the tools in the developer workflow that work best for you.

Get started with Skaffold on Kubernetes Engine by following the Getting Started guide or use Minikube by following the instructions in the README. For discussion and feedback join the mailing list or open an issue on GitHub.

If you haven’t tried GCP and Kubernetes Engine before, you can quickly get started with our $300 free credits.


Using Deep Learning to Facilitate Scientific Image Analysis

Many scientific imaging applications, especially microscopy, can produce terabytes of data per day. These applications can benefit from recent advances in computer vision and deep learning. In our work with biologists on robotic microscopy applications (e.g., to distinguish cellular phenotypes) we've learned that assembling high quality image datasets that separate signal from noise is a difficult but important task. We've also learned that there are many scientists who may not write code, but who are still excited to utilize deep learning in their image analysis work. A particular challenge we can help address involves dealing with out-of-focus images. Even with the autofocus systems on state-of-the-art microscopes, poor configuration or hardware incompatibility may result in image quality issues. Having an automated way to rate focus quality can enable the detection, troubleshooting and removal of such images.

Deep Learning to the Rescue
In “Assessing Microscope Image Focus Quality with Deep Learning”, we trained a deep neural network to rate the focus quality of microscopy images with higher accuracy than previous methods. We also integrated the pre-trained TensorFlow model with plugins in Fiji (ImageJ) and CellProfiler, two leading open source scientific image analysis tools that can be used with either a graphical user interface or invoked via scripts.
A pre-trained TensorFlow model rates focus quality for a montage of microscope image patches of cells in Fiji (ImageJ). Hue and lightness of the borders denote predicted focus quality and prediction uncertainty, respectively.
Our publication and source code (TensorFlow, Fiji, CellProfiler) illustrate the basics of a machine learning project workflow: assembling a training dataset (we synthetically defocused 384 in-focus images of cells, avoiding the need for a hand-labeled dataset), training a model using data augmentation, evaluating generalization (in our case, on unseen cell types acquired by an additional microscope) and deploying the pre-trained model. Previous tools for identifying image focus quality often require a user to manually review images for each dataset to determine a threshold between in-focus and out-of-focus images; our pre-trained model requires no user-set parameters and can rate focus quality more accurately as well. To help improve interpretability, our model evaluates focus quality on 84×84 pixel patches which can be visualized with colored patch borders.
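As a rough illustration of the synthetic-defocus idea, one in-focus patch can be turned into a whole ladder of labeled training examples. The blur model, noise level, and number of defocus classes below are placeholders of our own, not the published pipeline:

```python
# Sketch: generate labeled defocus examples from an in-focus patch.
import numpy as np
from scipy.ndimage import gaussian_filter

NUM_DEFOCUS_LEVELS = 11  # assumed: class 0 = in focus, 10 = most defocused

def synthetic_defocus_examples(image, rng):
    """Yield (blurred_patch, defocus_level) pairs from one in-focus image.
    Defocus is approximated with a Gaussian blur whose sigma grows with
    the class label (an assumption made for this sketch)."""
    for level in range(NUM_DEFOCUS_LEVELS):
        sigma = 1.5 * level
        blurred = gaussian_filter(image.astype(np.float32), sigma=sigma)
        # Data augmentation: add a little sensor-like noise so the model
        # cannot key on noise statistics alone.
        noisy = blurred + rng.normal(0.0, 2.0, size=blurred.shape)
        yield noisy, level

rng = np.random.default_rng(0)
patch = rng.uniform(0, 255, size=(84, 84))  # 84x84 patches, as in the post
for example, label in synthetic_defocus_examples(patch, rng):
    print(f"defocus level {label}: mean intensity {example.mean():.1f}")
```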

What about Images without Objects?
An interesting challenge we overcame was that there are often "blank" image patches with no objects, a scenario where no notion of focus quality exists. Instead of explicitly labeling these "blank" patches and teaching our model to recognize them as a separate category, we configured our model to predict a probability distribution across defocus levels, allowing it to learn to express uncertainty (dim borders in the figure) for these empty patches (e.g. predict equal probability in/out-of-focus).
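To sketch why that design works (the class count and probabilities below are made up for illustration): when a prediction spreads out across many defocus levels, its normalized entropy is high and a derived certainty score falls toward zero, which is exactly the dim-border behavior in the figure above:

```python
# Sketch: turning a predicted defocus distribution into a certainty score.
import numpy as np

def certainty(probs):
    """Map a distribution over defocus classes to a 0..1 certainty score:
    1 minus normalized entropy. Peaked prediction -> near 1; uniform -> 0."""
    probs = np.asarray(probs, dtype=np.float64)
    entropy = -np.sum(probs * np.log(probs + 1e-12))
    return 1.0 - entropy / np.log(len(probs))

# A patch full of cells: the model is confident it sits near one level.
confident = [0.05, 0.85, 0.05, 0.03, 0.02]
# A blank patch: nothing to judge, so probability spreads out.
blank = [0.22, 0.20, 0.19, 0.20, 0.19]

print(f"cell patch certainty:  {certainty(confident):.2f}")  # ~0.61
print(f"blank patch certainty: {certainty(blank):.2f}")      # ~0.00
```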

What's Next?
Deep learning-based approaches for scientific image analysis will improve accuracy, reduce manual parameter tuning and may reveal new insights. Clearly, the sharing and availability of datasets and models, and implementation into tools that are proven to be useful within respective communities, will be important for widespread adoption.

We thank Claire McQuin, Allen Goodman, Anne Carpenter of the Broad Institute and Kevin Eliceiri of the University of Wisconsin at Madison for assistance with CellProfiler and Fiji integration, respectively.

Chrome Beta for Android Update

Ladies and gentlemen, behold! Chrome Beta 66 (66.0.3359.30) for Android has been released and is available in Google Play. A partial list of the changes in this build is available in the Git log. Details on new features are available on the Chromium blog, and developers should check out our updates related to the web platform here.

If you find a new issue, please let us know by filing a bug. More information about Chrome for Android is available on the Chrome site.

Estelle Yomba
Google Chrome