Monthly Archives: July 2018

Accelerated Training and Inference with the TensorFlow Object Detection API

Last year we announced the TensorFlow Object Detection API, and since then we’ve released a number of new features, such as models learned via Neural Architecture Search, instance segmentation support and models trained on new datasets such as Open Images. We have been amazed at how it is being used – from finding scofflaws on the streets of NYC to diagnosing diseases on cassava plants in Tanzania.
Today, as part of Google’s commitment to democratizing computer vision, and using feedback from the research community on how to make this codebase even more useful, we’re excited to announce a number of additions to our API. Highlights of this release include:
  • Support for accelerated training of object detection models via Cloud TPUs
  • An improved mobile deployment process, with accelerated inference and easy export of models to mobile via the TensorFlow Lite format
  • Several new model architecture definitions
Additionally, we are releasing pre-trained weights for each of these new models, based on the COCO dataset.

Accelerated Training via Cloud TPUs
Users spend a great deal of time optimizing hyperparameters and retraining object detection models, so fast turnaround times on experiments are critical. The models released today belong to the single shot detector (SSD) class of architectures that are optimized for training on Cloud TPUs. For example, we can now train a ResNet-50 based RetinaNet model to achieve 35% mean Average Precision (mAP) on the COCO dataset in less than 3.5 hours.
Accelerated Inference via Quantization and TensorFlow Lite 
To better support low-latency requirements on mobile and embedded devices, the models we are providing are now natively compatible with TensorFlow Lite, which enables on-device machine learning inference with low latency and a small binary size. As part of this, we have implemented (1) model quantization and (2) detection-specific operations natively in TensorFlow Lite. Our model quantization follows the strategy outlined in Jacob et al. (2018) and the whitepaper by Krishnamoorthi (2018), which applies quantization to both model weights and activations at training and inference time, yielding smaller models that run faster.
Quantized detection models are faster and smaller (e.g., a quantized 75% depth-reduced SSD MobileNet model runs at over 15 fps on a Pixel 2 CPU with a 4.2 MB footprint), with minimal loss in detection accuracy compared to the full floating point model.
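For readers who want a concrete picture of the rewrite, the sketch below shows quantization-aware training in plain TensorFlow 1.x using tf.contrib.quantize. It is not the Object Detection API's own code path (the API applies an equivalent graph rewrite internally when quantization is enabled in the training config), and the tiny convolutional "model" is only a stand-in to keep the example self-contained.

# Minimal sketch of quantization-aware training in TensorFlow 1.x.
# The tiny conv net below is a placeholder for a real detection backbone;
# the Object Detection API performs an equivalent rewrite for you.
import tensorflow as tf  # TensorFlow 1.x (tf.contrib is unavailable in 2.x)

def tiny_model(images):
    # Stand-in for a real model so the sketch runs end to end.
    net = tf.layers.conv2d(images, 8, 3, padding='same', activation=tf.nn.relu)
    return tf.layers.conv2d(net, 4, 1)

train_graph = tf.Graph()
with train_graph.as_default():
    images = tf.placeholder(tf.float32, [None, 300, 300, 3])
    logits = tiny_model(images)
    loss = tf.reduce_mean(tf.square(logits))  # dummy loss for the sketch

    # Insert fake-quantization ops that simulate 8-bit weights and
    # activations during training, after a 2,000-step float-only warm-up.
    tf.contrib.quantize.create_training_graph(input_graph=train_graph,
                                              quant_delay=2000)
    train_op = tf.train.AdamOptimizer(1e-4).minimize(loss)

# At export time, the eval graph gets the same rewrite so the frozen model
# carries the value ranges TensorFlow Lite needs for quantized inference.
eval_graph = tf.Graph()
with eval_graph.as_default():
    images = tf.placeholder(tf.float32, [None, 300, 300, 3])
    logits = tiny_model(images)
    tf.contrib.quantize.create_eval_graph(input_graph=eval_graph)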
Try it Yourself with a New Tutorial!
To get started training your own model on Cloud TPUs, check out our new tutorial! This walkthrough will take you through training a quantized pet face detector on Cloud TPU, then exporting it to an Android phone for inference via TensorFlow Lite conversion.
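If you prefer to script the final conversion step yourself rather than follow the tutorial end to end, the sketch below shows roughly what it looks like in Python. The file name, tensor names and input shape are assumptions based on the Object Detection API's export_tflite_ssd_graph.py flow; double-check them against the tutorial, and note that mid-2018 TensorFlow releases expose the converter as tf.contrib.lite.TocoConverter rather than tf.lite.TFLiteConverter.

# Rough sketch: turn a frozen SSD graph exported for TensorFlow Lite into a
# .tflite flatbuffer. Paths, tensor names and shapes are assumptions taken
# from the usual export_tflite_ssd_graph.py flow; verify them for your model.
import tensorflow as tf  # TensorFlow 1.x with the TFLiteConverter API

GRAPH_DEF_FILE = 'tflite_graph.pb'                # assumed export file name
INPUT_ARRAYS = ['normalized_input_image_tensor']  # assumed input tensor name
OUTPUT_ARRAYS = [                                 # assumed detection outputs
    'TFLite_Detection_PostProcess',
    'TFLite_Detection_PostProcess:1',
    'TFLite_Detection_PostProcess:2',
    'TFLite_Detection_PostProcess:3',
]

converter = tf.lite.TFLiteConverter.from_frozen_graph(
    GRAPH_DEF_FILE,
    input_arrays=INPUT_ARRAYS,
    output_arrays=OUTPUT_ARRAYS,
    input_shapes={'normalized_input_image_tensor': [1, 300, 300, 3]})

# The detection post-processing op ships with TensorFlow Lite as a custom op,
# so custom ops must be allowed during conversion.
converter.allow_custom_ops = True

with open('detect.tflite', 'wb') as f:
    f.write(converter.convert())

For the fully quantized model, the converter additionally needs 8-bit inference settings (inference type and input statistics); the tutorial covers the exact values for the pet detector.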

We hope that these new additions will help make high-quality computer vision models accessible to anyone wishing to solve an object detection problem, and provide a more seamless user experience, from training a model with quantization to exporting it as a TensorFlow Lite model ready for on-device deployment. We would like to thank everyone in the community who has contributed features and bug fixes. As always, contributions to the codebase are welcome, and please stay tuned for more updates!

Acknowledgements
This post reflects the work of the following group of core contributors: Derek Chow, Aakanksha Chowdhery, Jonathan Huang, Pengchong Jin, Zhichao Lu, Vivek Rathod, Ronny Votel and Xiangxin Zhu. We would also like to thank the following colleagues: Vasu Agrawal, Sourabh Bajaj, Chiachen Chou, Tom Jablin, Wenzhe Li, Tsung-Yi Lin, Hernan Moraldo, Kevin Murphy, Sara Robinson, Andrew Selle, Shashi Shekhar, Yash Sonthalia, Zak Stone, Pete Warden and Menglong Zhu.

Source: Google AI Blog


FCC Supports OTMR – Faster and Fairer Rules for Pole Attachments

When we started Google Fiber eight years ago, we knew that building a new fiber network was going to be hard, slow and expensive. But what we didn’t fully appreciate were the obstacles we would face around a key part of the process: gaining timely access to space on utility and telephone poles to place new communications equipment.

One particular challenge revolves around making poles ready for new attachments. This “make ready” work has to be done to make room for new attachers’ equipment. Under the current system, make ready work is done sequentially and often involves multiple crews visiting the same pole several times over many months. This results in long delays, inflated costs and a frustrated community.

Fortunately, there is a better way. It is called One Touch Make Ready (OTMR), a system in which a new attacher does much of the make ready work itself, all at one time. OTMR is a common-sense policy that will dramatically improve the ability of new broadband providers to enter the market and offer competitive service. By allowing the necessary work on utility poles to be done much more efficiently, it reduces delays and lowers costs. It also means fewer crews coming through neighborhoods and disrupting traffic, making the process safer for both workers and residents.

That’s why we’re so excited by the news that the FCC is poised to pass a rule that would institute a national One Touch Make Ready system, with the goal of significantly increasing the deployment of high-speed broadband across the United States. As the FCC stated, “OTMR speeds and reduces the cost of broadband deployment by allowing the party with the strongest incentive — the new attacher — to prepare the pole quickly by performing all of the work itself, rather than spreading the work across multiple parties.”

We fully support this effort by the FCC and applaud the efforts of Chairman Pai to remove obstacles that reduce choice and competition for broadband consumers. As the FCC says in its order, One Touch Make Ready “will serve the public interest through greater broadband deployment and competitive entry” — we couldn’t agree more.

By John Burchett, Director of Public Policy

High Fry-ve: sundaes for everyone this weekend

I've been told I have a flair for the dramatic (see byline), but if I were you, I'd stay away from ladders and look out for black cats today. Friday the 13th inspired lots of searches this week, and here’s a look at a few of the other trending searches, with data from the Google News Lab.


It’s Fry Day Fry Day, gotta get down on Fry Day

Today is Friday but also National Fry Day, as if I need another excuse to shove fried potatoes down my throat. If search interest is any indication of America’s fav fry, McDonald’s would take the top spot, followed by Burger King, Wendy’s and Five Guys. If you’re waffling over the best type of fry, curly fries are a cut above—they’re searched 14 percent more than waffle fries. Oh but wait, there’s more healthy food to celebrate as Sunday (or shall I say Sundae) is National Ice Cream Day. Search interest in frozen treats spikes every summer, but July 2018 has recorded the highest search interest ever for ice cream in the U.S. Great work, everyone! I’m more of a fro-yo guy myself.

I’m not superstitious, but I’m a little stitious

The planets are in retrograde, plus it’s Friday the 13th which explains my wild hormonal swings this week. Looks like Nevada is the most intrigued by this spooky day as it holds the top spot for all-time search interest in Friday the 13th. You do you, Nevada. Looking further into the data, one of the top-searched questions around this trend is, “Is today Friday the 13th?” (Might be quicker to glance at a calendar). If you’re really into Friday the 13th and you live in Arizona, you can get inked for $13, and people have been flocking to Google to figure out where they can get those services.

All about that cash 

You know what they say, mo money, mo problems, but my life is already pretty problematic so might as well sprinkle a few million on top. We’ve got a massive lotto drawing coming up, and people from New Jersey, Maryland and Massachusetts are searching the most for that $340 million cash prize. One of the top-searched questions about the lottery was, “What to do when you win the lottery?” Glad you asked. I’d buy an island, and plant a bunch of bushes, then trim those bushes to depict scenes from my favorite “Friends” episodes and run around on the island drinking mimosas listening to Taylor Swift. Ugh, one can dream.

A miraculous rescue

The world watched with intrigue and optimism as the Thai soccer team, trapped in a flooded cave, was rescued in a three-day operation involving 19 divers. Search interest in Thailand is at an all-time high in the U.S., having spiked by 600 percent this month, but Singapore, New Zealand and Australia had the most searches worldwide. Everyone made it out safely through the maze of rock and rope, complete with plastic cocoons and floating stretchers.

Blessed be the fruit

It’s been said that we’re living in the golden age of television, lucky us! Emmy noms are hot off the press, and one of this week’s top-searched questions about the awards was, “Who won the most Emmys?” We’ll have to wait until September to find out, but here are the winners for the week’s most-searched shows: of the nominees for Outstanding Drama, it’s “Game of Thrones.” And for a comedy series, it’s “Atlanta.” And for best show of all time, “Friends.” Okay, I made that one up, but man, that show is great.

#teampixel’s cool inspiration on hot summer days

Have you ever dreamed of the perfect summer vacation? If it includes endless blue skies, colorful cafes, or ancient cobblestone streets, check out the latest round of shots from #teampixel’s favorite summer spots.

When you go on your next adventure, remember to take us with you by tagging #teampixel. You might find yourself featured on The Keyword, @google or @madebygoogle the next time we’re looking for some cool inspiration on these hot summer days.

Googlers on the road: CLS and OSCON 2018

Next week a veritable who’s who of free and open source software luminaries, maintainers and developers will gather to celebrate the 20th annual OSCON and the 20th anniversary of the Open Source Definition. Naturally, the Google Open Source and Google Cloud teams will be there too!

Program chairs at OSCON 2017, left to right:
Rachel Roumeliotis, Kelsey Hightower, Scott Hanselman.
Photo used with permission from O'Reilly Media.
This year OSCON returns to Portland, Oregon and runs from July 16-19. As usual, it is preceded by the free-to-attend Community Leadership Summit on July 14-15.

If you’re curious about our outreach programs, our approach to open source, or any of the open source projects we’ve released, please find us! We’re eager to chat. You’ll find us and many other Googlers throughout the week on stage, in the expo hall, and at several special events that we’re running.
Here’s a rundown of the sessions we’re hosting this year:

Sunday, July 15th (Community Leadership Summit)

11:45am   Asking for time and/or money by Cat Allman

Monday, July 16th (Tutorials)

9:00am    Getting started with TensorFlow by Josh Gordon
1:30pm    Introduction to natural language processing with Python by Barbara Fusinska

Tuesday, July 17th (Tutorials)

9:00am    Istio Day opening remarks by Kelsey Hightower
9:00am    TensorFlow Day opening remarks by Edd Wilder-James
9:05am    Sailing to 1.0: Istio community update by April Nassi
9:05am    The state of TensorFlow by Sandeep Gupta
9:30am    Introduction to fairness in machine learning by Hallie Benjamin
9:55am    Farm to table: A TensorFlow story by Gunhan Gulsoy
11:00am  Hassle-free, scalable machine learning with Kubeflow by Barbara Fusinska
11:05am  Istio: Zero-trust communication security for production services by Samrat Ray, Tao Li, and Mak Ahmad
12:00pm  Project Magenta: Machine learning for music and art by Sherol Chen
1:35pm    Istio à la carte by Daniel Ciruli

Wednesday, July 18th (Sessions)

9:00am    Wednesday opening welcome by Kelsey Hightower
11:50am  Machine learning for continuous integration by Joseph Gregorio
1:45pm    Live-coding a beautiful, performant mobile app from scratch by Emily Fortuna and Matt Sullivan
2:35pm    Powering TensorFlow with big data using Apache Beam, Flink, and Spark by Holden Karau
5:25pm    Teaching the Next Generation to FLOSS by Josh Simmons

Thursday, July 19th (Sessions)

9:00am    Thursday opening welcome by Kelsey Hightower
9:40am    20 years later, open source is as important as ever by Sarah Novotny
11:50am  Google’s approach to distributed systems observability by Jaana B. Dogan
2:35pm    gRPC versus REST: Let the battle begin by Alex Borysov
5:05pm    Shenzhen Go: A visual Go environment for everybody, even professionals by Josh Deprez

We look forward to seeing you and the rest of the community there!

By Josh Simmons, Google Open Source

Lifelong learning for everyone: What we’ve learned from our European social innovation partners

The skills needed in today's workplace are changing fast. A recent report from McKinsey forecasts that the demand for technological, cognitive, creative and interpersonal skills will accelerate by 2030.

Despite technology offering more learning options than ever before, we haven’t yet figured out a way that this can truly benefit everyone. Take, for example, the early enthusiasm for bringing university content online to democratize higher education—and the sobering reality that these online courses are overwhelmingly used by people who already have a higher education. We learned from a recent IPPR report that it is vital for digital skills programs to address a diverse audience and provide skills for the future as well as skills for immediate use. And as the labor market transforms, it's clear we need a more flexible model for facilitating reskilling and lifelong learning for both current and future workers.

So how can we make sure that technology supports lifelong learning for those who need it most? About a year ago, we launched the Google.org Work Initiative, a $50 million fund to support social innovators tackling this question. In addition, since 2015 our Grow with Google programs have been equipping people with the digital skills they need to succeed in the digital economy. What we’ve learned from our Google.org and Grow with Google collaboration with European partners and social entrepreneurs is that making lifelong learning a success requires four tactics: working with organizations who are on the frontline of serving the most disadvantaged, developing clearer signals about the pay-off of engaging in learning new skills, using technology to drive incentives to persevere throughout the learning experience, and developing better ways to signal skills to employers.  

Technology can make learning more accessible

First and foremost, learning must continue to become more accessible. The biggest opportunities for people to upgrade their skills are at work, but the options to retrain are few for those without a workplace. According to research from the European Commission, only 9 percent of people who are out of work have access to upskilling opportunities, compared to almost one in two people on permanent contracts.


To tackle this, public institutions and nonprofits must integrate skill-building opportunities into their programs. Google.org grantee Bayes Impact, a tech nonprofit in France, is an example of this approach in action. Bayes Impact's machine learning-powered search assistant recommends training resources and learning opportunities for people who are out of work. With unemployment in France at nearly 10 percent, Bayes Impact helps millions of job seekers fill their skill gaps through smart technology and a partnership with the country's national unemployment agency.


The pay-off of learning needs to be clear

We know that time and financial investment are key considerations when people think about engaging with new learning opportunities. This means that the pay-off must be clear for learners from the outset. If people don't feel like training will lead them to a better life, there's little chance that they'll take advantage of it.

A practical way to address this is by helping learners clearly identify the benefits of engaging in a particular course at the beginning of their journey. OpenClassrooms, one of Europe's leading providers of vocational education online and another organization we’re supporting, does this by promising to reimburse course fees if learners haven't found a job six months after completing their certificate. They've also partnered with European government agencies to get official accreditation for several of their courses, further ensuring that the value of a commitment to learning is clear.

Targeting completion is key

The most effective learning experiences are those built with completion in mind. A couple of years ago, the online learning platform Coursera shared that only 4 percent of its users completed their courses and earned a credential. When we built the Google IT Support Professional Certificate—a Grow with Google program that enables anyone to become an IT support specialist in eight to 12 months without a college degree—we thought carefully about how to ensure as many people as possible complete the course.

One thing we found to be useful is to support blended learning experiences, where online learning is complemented by in-person coaching and meetings with other students. To achieve this, we've partnered with nonprofit organizations to bring an additional layer of support for students to the Google IT Support Professional Certificate. In Germany, we've piloted this approach with Kiron, a nonprofit that is supporting refugees to continue their education and provide access to employment opportunities.

We’ve also been experimenting with using machine learning to identify when students might need extra support to keep going with a course. For example, we’ve rolled out machine learning prompts that show up at key moments to motivate learners.

Capabilities must be expressed in new ways

The final piece of the puzzle is to enable people to showcase their abilities in a format that's convincing to employers. French social enterprise Chance, another one of our grantees, uses a semi-automated system to match companies with candidates who have the capabilities they require but who lack the professional networks that can help their resumes stand out from the pile, or who don't know what employers are looking for and therefore struggle to express their own abilities. Backed by AI, projects like these enable a wider pool of job seekers to find the right opportunities for their skill set.

The labor market will continue to evolve, and technology can ensure we keep pace with the growing demands of the future workplace. As our European partners show, it's possible to improve the access, design and experience of learning new skills—putting lifelong learning for everyone firmly within our grasp.

Dev Channel Update for Chrome OS

The Dev channel has been updated to 69.0.3486.0 (Platform version: 10866.1.0) for most Chrome OS devices. This build contains a number of bug fixes, security updates and feature enhancements. A list of changes can be found here.

If you find new issues, please let us know by visiting our forum or filing a bug. Interested in switching channels? Find out how. You can submit feedback using ‘Report an issue...’ in the Chrome menu (3 vertical dots in the upper right corner of the browser).

Cindy Bayless
Google Chrome

AdWords click measurement improvements and migration

Earlier this month, the following improvements were made available to all users:
  • Setting finalUrlSuffix at the customer, campaign, ad group, ad, and extension level in all AdWords accounts. Previously, this feature was only available in test accounts.
  • Specifying up to eight custom parameters. Previously, the maximum number of custom parameters was three.
As announced earlier this year, starting October 30, 2018, parallel tracking will be required for all AdWords accounts. With the above features in place, the AdWords API now supports everything you need to migrate your accounts to parallel tracking, so we encourage you to get started on the migration as soon as possible. The detailed AdWords API guide and accompanying implementation checklist will walk you through the required changes.
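As a rough illustration of the campaign-level case from the first item above, here is how setting finalUrlSuffix might look with the googleads Python client library. This is a minimal sketch, not the official migration code: it assumes AdWords API v201806, a configured googleads.yaml, and a placeholder campaign ID and suffix; the migration guide above remains the authoritative reference.

# Hedged sketch: set a campaign-level finalUrlSuffix for parallel tracking
# using the googleads Python client library. The version, campaign ID and
# suffix values are placeholders for illustration only.
from googleads import adwords

CAMPAIGN_ID = 1234567890  # placeholder campaign ID

client = adwords.AdWordsClient.LoadFromStorage()  # reads googleads.yaml
campaign_service = client.GetService('CampaignService', version='v201806')

operations = [{
    'operator': 'SET',
    'operand': {
        'id': CAMPAIGN_ID,
        # Appended to the landing page URL when parallel tracking sends the
        # click straight to your site.
        'finalUrlSuffix': 'param1=value1&param2=value2',
    },
}]

result = campaign_service.mutate(operations)
print('Updated campaign %s' % result['value'][0]['id'])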

If you have questions or need help with the migration, please email us at ads-clicktracking-support@google.com.

DFP API is becoming Google Ad Manager API

With DoubleClick for Publishers becoming Google Ad Manager, the API will be undergoing changes to follow suit. Over the next month, there will be changes to our documentation and client libraries, but no API entities are changing yet.

Documentation

In late July, references to “DFP” will become “Google Ad Manager” or just “Ad Manager”. Also, the documentation URL will change from https://developers.google.com/doubleclick-publishers/ to https://developers.google.com/ad-manager/.

We will support redirects to all documentation pages. For example, if you have a bookmark for the ReportService, you don’t need to do anything for this bookmark to continue working.

Client Libraries

Each of our client libraries will be updated to remove references to DFP in the v201808 release, which is currently scheduled for August 14, 2018. For example, in the Java client library, the DfpServices class will be renamed to AdManagerServices.

Each client library will have its own guidelines for what needs to be updated, and we will link to these guides in the announcement blog post for v201808. Keep in mind that you'll only need to refactor your code when you upgrade to v201808 or later; the DFP names will continue to work with earlier versions of the client libraries until August 2019.
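To make the shape of the refactor concrete, here is an illustrative before/after sketch using the googleads Python client library. The pre-rename names reflect current releases; the post-rename names are assumptions based on the "DFP" to "Ad Manager" pattern described above, so check each library's v201808 guide for the exact changes.

# Before (current releases): the DFP-flavored names still work.
from googleads import dfp

client = dfp.DfpClient.LoadFromStorage()  # reads googleads.yaml
network_service = client.GetService('NetworkService', version='v201805')
print(network_service.getCurrentNetwork().displayName)

# After upgrading to v201808 or later (assumed post-rename names):
# from googleads import ad_manager
# client = ad_manager.AdManagerClient.LoadFromStorage()
# network_service = client.GetService('NetworkService', version='v201808')
# print(network_service.getCurrentNetwork().displayName)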


As always, if you have any questions or suggestions, feel free to reach out to us on our forum.

Beta Channel Update for Chrome OS

The Beta channel has been updated to 68.0.3440.59 (Platform version: 10718.50.0) for most Chrome OS devices. This build contains a number of bug fixes, security updates and feature enhancements. A list of changes can be found here.

If you find new issues, please let us know by visiting our forum or filing a bug. Interested in switching channels? Find out how. You can submit feedback using ‘Report an issue...’ in the Chrome menu (3 vertical dots in the upper right corner of the browser).

Kevin Bleicher
Google Chrome