Monthly Archives: June 2019

Four tips for taking delectable food photos with Pixel 2

For Samira Kazan—best known to her online followers as @AlphaFoodie—food art was an unexpectedly welcome addition to the equation.

Samira, who holds a Ph.D. in engineering from Oxford, began experimenting with creating beautiful dishes in her free time as a way to explore healthier eating. Since then, she’s taken and shared thousands of photos of her food art creations on her blog and Instagram. She’s also a Pixel 2 user, and has learned a few tips along the way about what works best for her food shots. Below are her top four, in her own words:

1. Document every delicious detail with Portrait Mode

From a single currant to the perfect drop of icing, Portrait Mode makes it possible to capture the finest details of your food concoctions. Try getting close to the subject—this amplifies the background blur effect to create a striking photo.

2. Use Slow Motion to capture mouthwatering videos

Slow Motion shoots a high number of frames per second, and you can adjust which parts of the video to slow down, so a lot less editing is required. I love shooting videos of pouring drinks or chopping fruit—between the high sound quality and slow motion, I can capture enticing food-envy footage for Instagram!

3. Shoot in all kinds of lighting

If you’re taking photos outdoors on a sunny day, shoot in the shade so that the photo color is more saturated and less overexposed. In low-light settings, the wider aperture lets more light into the camera, making it a breeze to get a clear photo. You often don’t even need to use flash at night.

4. Easily rediscover your best shots

It’s easy to capture amazing smartphone photos with one click, meaning you often have a lot of options to choose from. Pixel 2 has unlimited storage via Google Photos, which is a lifesaver for me as a food blogger. It’s easy for me to search Google Photos to find specific past photos—for example, if I’m looking for photos with bananas, I can find all of the photos I’ve taken that have bananas in them through a quick search.

#PrideForever: Seven Googlers on the fight for LGBTQ+ rights

Earlier this month, we launched Pride Forever, celebrating the past, present and future of the LGBTQ+ community by elevating stories from around the world, like the ones from the Stonewall Forever living monument. This interactive digital monument was created by the LGBT Community Center of New York City (“the Center”) with support from Google, and it connects diverse voices and stories from the 50 years since the Stonewall riots to the modern-day movement for LGBTQ+ rights. 

Those voices include members of Google’s LGBTQ+ community, too. In offices all over the world, Googlers are reflecting on their own journeys and sharing their stories with the world. Here’s a glimpse of what seven Googlers say pride means to them.


“For over a decade, I struggled to accept that I could possibly be trans. Then in 2012, Argentina passed its gender identity law–the first in the world to allow gender self-determination. While far removed from my home in Indonesia, it meant that people like me might finally have a chance at transitioning and living without harmful legal and medical gatekeeping. It gave me the courage to accept myself and start standing up for my right to be.” — Jean, Singapore


“Two years ago, I was honored to create a Google Doodle for Gilbert Baker, creator of the rainbow flag representing diversity, unity, acceptance and pride. The first flag was made by hand, so I wanted to create a Doodle with the same handmade feeling. I learned to sew (not easy!) and recreated the flag in my tiny kitchen just a few blocks from where Baker made his original eight-color flag back in 1978. As an LGBTQ+ person, the flag and this Doodle were beyond personal to me, and it’s part of why I joined the Google Doodle team, in hopes of having opportunities to brighten and strengthen people’s days.” — Nate, San Francisco


“We were both engineers working in male-dominated industries where being a lesbian was difficult. We were asked on a regular basis about husbands or why we weren’t married. California’s Prop 8 in 2008 (banning same-sex marriage) was an eye-opening moment for us. Although the prop passed, there was a large public opposition campaign standing up for the rights of the LGBTQ+ community. It felt like a turning point for people across the United States showing that it was OK to support the LGBTQ+ cause without substantial retribution.”  — Candace and Michelle, South Carolina


William, left, with his family. 

"Many people have shaped my life—but perhaps the most meaningful people in my life are my husband, whom I have been with for nearly 30 years, and my son, who gives me more joy (and a fair amount of frustration) than I could have ever imagined. For them, I owe thanks in large part to a valiant handful of New Yorkers whom I've never met. Their act of defiance at the Stonewall Inn 50 years ago ultimately enabled me to live, love and be who I am." — William, New York


“When I first came out to my parents, my dad told me I’d never get a good job, and I’d lose all my friends unless I ‘changed my mind’ about being gay. That really hurt—that being gay is still seen as different, even to well-meaning people. Marriage equality in the U.K. in 2013 felt like a huge validation. The fact that this was part of an international wave, it was really a feeling of progressive acceptance.” — Nick, London


“The original LGBTQ+ initialism was created in the late 1980s to introduce a more inclusive name for the gay community. To me, the LGBTQ+ acronym represents a diverse group of people that are unique and resilient. I am so proud to be a part of a community that is constantly evolving its boundaries for inclusion and actively championing societal equality. Even though there is still more to be done, being able to lean on one another for support—no matter where in the LGBTQ+ spectrum you fall—binds us together and has enabled us to make impressive progress across the globe.” — Andrew, Sydney


Announcing the YouTube-8M Segments Dataset



Over the last two years, the First and Second YouTube-8M Large-Scale Video Understanding Challenge and Workshop have collectively drawn 1000+ teams from 60+ countries to further advance large-scale video understanding research. While these events have enabled great progress in video classification, the YouTube-8M dataset on which they were based used only machine-generated, video-level labels and lacked fine-grained, temporally localized information, which limited the ability of machine learning models to predict where in a video a given concept appears.

To accelerate the research of temporal concept localization, we are excited to announce the release of YouTube-8M Segments, a new extension of the YouTube-8M dataset that includes human-verified labels at the 5-second segment level on a subset of YouTube-8M videos. With the additional temporal annotations, YouTube-8M is now both a large-scale classification dataset as well as a temporal localization dataset. In addition, we are hosting another Kaggle video understanding challenge focused on temporal localization, as well as an affiliated 3rd Workshop on YouTube-8M Large-Scale Video Understanding at the 2019 International Conference on Computer Vision (ICCV’19).



YouTube-8M Segments
Video segment labels provide a valuable resource for temporal localization not possible with video-level labels, and enable novel applications, such as capturing special video moments. To create the YouTube-8M Segments extension, instead of exhaustively labeling all segments in a video, we manually labeled an average of five segments per randomly selected video from the YouTube-8M validation dataset, totaling ~237K segments covering 1000 categories.

This dataset, combined with the previous YouTube-8M release containing a very large number of machine generated video-level labels, should allow learning temporal localization models in novel ways. Evaluating such classifiers is of course very challenging if only noisy video-level labels are available. We hope that the newly added human-labeled annotations will help ensure that researchers can more accurately evaluate their algorithms.
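For readers who want a concrete starting point, here is a minimal sketch of reading segment annotations of this kind with TensorFlow, assuming the segments are distributed in the same TFRecord/SequenceExample layout as the original YouTube-8M release. The feature keys (segment_labels, segment_start_times, segment_scores) and the file name are assumptions to verify against the official starter code, not a definitive spec.

```python
# Minimal sketch: reading segment-level labels from a hypothetical
# YouTube-8M Segments TFRecord file. Feature names and the file path are
# assumptions; check them against the official starter code.
import tensorflow as tf

CONTEXT_FEATURES = {
    "id": tf.io.FixedLenFeature([], tf.string),
    "segment_labels": tf.io.VarLenFeature(tf.int64),       # class id per labeled segment
    "segment_start_times": tf.io.VarLenFeature(tf.int64),  # segment start, in seconds
    "segment_scores": tf.io.VarLenFeature(tf.float32),     # 1.0 = positive, 0.0 = negative
}

def parse_example(serialized):
    # Each record is assumed to be a tf.train.SequenceExample; here we read
    # only the video-level context, which carries the segment annotations.
    context, _ = tf.io.parse_single_sequence_example(
        serialized, context_features=CONTEXT_FEATURES)
    return {k: tf.sparse.to_dense(v) if isinstance(v, tf.SparseTensor) else v
            for k, v in context.items()}

dataset = (tf.data.TFRecordDataset("validate0000.tfrecord")  # hypothetical path
           .map(parse_example))
for example in dataset.take(1):
    print(example["id"], example["segment_labels"], example["segment_start_times"])
```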

The 3rd YouTube-8M Video Understanding Challenge
This year the YouTube-8M Video Understanding Challenge focuses on temporal localization. Participants are encouraged to leverage noisy video-level labels together with a small segment-level validation set in order to better annotate and temporally localize concepts of interest. Unlike last year, there is no model size restriction. Each of the top 10 teams will be awarded $2,500 to support their travel to Seoul to attend ICCV’19. For details, please visit the Kaggle competition page.

The 3rd Workshop on YouTube-8M Large-Scale Video Understanding
Continuing in the tradition of the previous two years, the 3rd workshop will feature four invited talks by distinguished researchers as well as presentations by top-performing challenge participants. We encourage those who wish to attend to submit papers describing their research, experiments, or applications based on the YouTube-8M dataset, including papers summarizing their participation in the challenge above. Please refer to the workshop page for more details.

It is our hope that this newest extension will serve as a unique playground for temporal localization that mimics real world scenarios. We also look forward to the new challenge and workshop, which we believe will continue to advance research in large-scale video understanding. We hope you will join us again!

Acknowledgements
This post reflects the work of many machine perception researchers including Ke Chen, Nisarg Kothari, Joonseok Lee, Hanhan Li, Paul Natsev, Joe Yue-Hei Ng, Naderi Parizi, David Ross, Cordelia Schmid, Javier Snaider, Rahul Sukthankar, George Toderici, Balakrishnan Varadarajan, Sudheendra Vijayanarasimhan, Yexin Wang, Zheng Xu, as well as Julia Elliott and Walter Reade from Kaggle. We are also grateful for the support and advice from our partners at YouTube.

Source: Google AI Blog


Ready for takeoff: Meet the Doodle for Google national finalists

This January, we kicked off the 11th year of Doodle for Google, our annual art contest for students across the country. We challenged kids across the U.S. to create a visual interpretation of this year’s theme: “When I grow up, I hope…” And it’s clear this year’s students have a lot of hopes, whether that’s becoming a cartoonist, growing their own food or simply never growing up.

At the beginning of June, we asked you to help us judge this year’s state and territory winners’ Doodles, and after a million public votes, we’re ready to introduce our five national finalists, one from each grade group. Meet these talented students and learn about their hopes, how they came up with their Doodles and what age they think they’ll be when they’re officially “grown up.”

Natalia Pepe, Grade Group K-3

Hometown: Cheshire, Connecticut

Doodle Title: Farmers

Doodle for Google, Connecticut

How did you come up with the idea for your Doodle? I was inspired by my town of Cheshire, Connecticut, where there are a lot of farms and orchards, and where a lot of people have gardens to grow their own food. I thought that if there were more of this in the world, people would be healthier and it would be better for the planet. Plus, it's just really cool to see things grow!

What else do you like to draw and doodle for fun? I like drawing little monsters and all kinds of dogs. I especially like drawing comics and illustrating fun stories.

What age do you think you'll be when you're officially a "grownup?" I think that I will officially be a grownup when I am 20 years old, because this is the age when I will be out of my teens. That's not for a long time!

Amadys Lopez Velasquez, Grade Group 4-5

Hometown: Dorado, Puerto Rico

Doodle Title: When I Grow Up, I Hope…  ¡Que Todos Seamos Niños Otra Vez! (That We All Become Children Again!)

Doodle for Google, Puerto Rico

How did you come up with the idea for your Doodle? My family always tells me to enjoy my childhood. That adults would like to be children again. It's funny and weird but it seems like the key to happiness.

What else do you like to draw and doodle for fun? I like to have fun drawing animals and my pets, and then transform them by drawing as if they were human.

What's your favorite thing to learn about in school, and why? My favorite thing to learn is history because I like to know interesting facts about my country and other parts of the world. It is like traveling in time and being able to know the past and understand the things of the present.

Christelle Matildo, Grade Group 6-7

Hometown: Lancaster, Texas

Doodle Title: A Hopeful Future

Doodle for Google, Texas

How did you come up with the idea for your Doodle? I came up with the idea of my Doodle from current issues and topics that stand out the most to me.

What else do you like to draw and doodle for fun? I like to draw mostly dragons for fun. Sometimes I draw made-up creatures because I think they look cool in my imagination. 

What age do you think you'll be when you're officially a "grownup?" I think I'll officially be a "grownup" at the age of 18. I can act like or be a grownup, but my "official title" isn't there yet.  

Jeremy Henskens, Grade Group 8-9

Hometown: Burlington, New Jersey

Doodle Title: Cartooning Doodle

Doodle for Google, New Jersey

How did you come up with the idea for your Doodle? I want to be a cartoonist when I grow up, so I made my Doodle resemble a comic strip from a comic book.

What else do you like to draw and doodle for fun? Random people with big heads and odd objects.

What's your favorite thing to learn about in school, and why? Social studies, because people did some strange things in the past, and it is cool to learn about them.

What age do you think you'll be when you're officially a "grownup?" 108.

Arantza Peña Popo, Grade Group 10-12

Hometown: Lithonia, Georgia

Doodle Title: Once you get it, give it back

Doodle for Google, Georgia

How did you come up with the idea for your Doodle? I came up with the idea at the last minute, actually the day of the deadline. I looked at the photograph of my mother (the real version that inspired the drawing) and thought, "Hey, why don't I reverse it?" I wanted to focus more on a message of helping out my awesome mother, more than anything else.

What's your favorite thing to learn about in school, and why? I like to learn about literature that focuses on more diverse perspectives of our society.

What age do you think you'll be when you're officially a "grownup?" I think at 30 years old I'll feel like a grownup. I'm 18 now and I still feel like a kid.

Responsible AI: Putting our principles into action

Every day, we see how AI can help people from around the world and make a positive difference in our lives—from helping radiologists detect lung cancer, to increasing literacy rates in rural India, to conserving endangered species. These examples are just scratching the surface—AI could also save lives through natural disaster mitigation with our flood forecasting initiative and research on predicting earthquake aftershocks.

As AI expands our reach into the once-unimaginable, it also sparks conversation around topics like fairness and privacy. This is an important conversation and one that requires the engagement of societies globally. A year ago, we announced Google’s AI Principles that help guide the ethical development and use of AI in our research and products. Today we’re sharing updates on our work.

Internal education

We’ve educated and empowered our employees to understand the important issues of AI and think critically about how to put AI into practice responsibly. This past year, thousands of Googlers have completed training in machine learning fairness. We’ve also piloted ethics trainings across four offices and organized an AI ethics speaker series hosted on three continents.

Tools and research

Over the last year, we’ve focused on sharing knowledge, building technical tools and product updates, and cultivating a framework for developing responsible and ethical AI that benefits everyone. This includes releasing more than 75 research papers on topics in responsible AI, including machine learning fairness, explainability, privacy, and security, and developing and open-sourcing 12 new tools. For example:

  • The What-If Tool is a new feature that lets users analyze an ML model without writing code. It enables users to visualize biases and the effects of various fairness constraints as well as compare performance across multiple models.
  • Google Translate reduces gender bias by providing feminine and masculine translations for some gender-neutral words on the Google Translate website.
  • We expanded our work in federated learning, a new approach to machine learning that allows developers to train AI models and make products smarter without raw data ever leaving users’ devices. It’s also now open-sourced as TensorFlow Federated (see the sketch after this list).
  • Our People + AI Guidebook is a toolkit of methods and decision-making frameworks for how to build human-centered AI products. It launched in May and includes contributions from 40 Google product teams. 
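As a rough, self-contained illustration of the idea behind federated learning, and not the TensorFlow Federated API itself, the NumPy sketch below runs a few rounds of federated averaging on simulated clients: each client fits a linear model on its own private data, and only the resulting weights, never the raw data, are sent back to the server for averaging. The data, model, and hyperparameters are all made up for illustration.

```python
# Conceptual sketch of federated averaging (FedAvg).
# Illustration of the idea only, not the TensorFlow Federated API.
import numpy as np

rng = np.random.default_rng(0)

def local_update(weights, X, y, lr=0.1, epochs=5):
    """Run a few steps of local gradient descent on a client's private data."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)  # least-squares gradient
        w -= lr * grad
    return w

# Simulated private datasets for three clients (never shared with the server).
true_w = np.array([1.0, -2.0, 0.5])
clients = []
for _ in range(3):
    X = rng.normal(size=(50, 3))
    y = X @ true_w + rng.normal(scale=0.1, size=50)
    clients.append((X, y))

global_w = np.zeros(3)
for round_num in range(10):
    # Each client trains locally and reports only its updated weights.
    local_ws = [local_update(global_w, X, y) for X, y in clients]
    # The server averages the weights, weighted by each client's data size.
    sizes = np.array([len(y) for _, y in clients], dtype=float)
    global_w = np.average(local_ws, axis=0, weights=sizes)

print("learned:", np.round(global_w, 2), "true:", true_w)
```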

We continue to update the Responsible AI Practices quarterly, as we reflect on the latest technical ideas and work at Google.

Review process

Our review process helps us meet our AI Principles. We encourage all Google employees to consider how the AI Principles affect their projects, and we’re evolving our processes to ensure we’re thoughtfully considering and assessing new projects, products, and deals. In each case we consider benefits and assess how we can mitigate risks. Here are two examples:

Cloud AI Hub

With Cloud AI Hub, enterprises and other organizations can share and more readily access a variety of already-trained machine learning models. Much of AI Hub’s content would be published by organizations outside of Google, which would make it difficult for us to evaluate all of the content against the AI Principles. As a result, we evaluated the ethical considerations around releasing the AI Hub, such as the potential for harmful dual use, abuse, or the presentation of misleading information.

In the course of the review, the team developed a two-tiered strategy for handling potentially risky and harmful content: 

  1. Encouraging community members to weigh in on issues like unfair bias. To support the community, Cloud AI provides resources (like the inclusive ML guide) to help users identify trustworthy content.
  2. Crafting a Terms of Service for Cloud AI Hub, specifically the sections on content and conduct restrictions.

These safeguards made it more likely that the AI Hub’s content ecosystem would be useful and well-maintained, and as a result we went ahead with launching the AI Hub.

Text-to-speech (TTS) research paper

A research group within Google wrote an academic paper that addresses a major challenge in AI research: systems often need to be retrained from scratch, with huge amounts of data, to take on even slightly different tasks. This paper detailed an efficient text-to-speech (TTS) network, which allows a system to be trained once and then adapted to new speakers with much less time and data.


While smarter text-to-speech networks could help individuals with voice disabilities, ALS, or tracheotomies, we recognize the potential for such technologies to be used for harmful applications, like synthesizing an individual’s voice for deceptive purposes.


Ultimately we determined that the technology described in the paper had limited potential for misuse for several reasons, including the quality of data required to make it work. Arbitrary recordings from the internet would not satisfy these requirements. In addition, there are enough differences between samples generated by the network and speakers’ voices for listeners to identify what’s real and what’s not. As a result, we concluded that this paper aligned with our AI Principles, but this exercise reinforced our commitment to identifying and preempting the potential for misuse.

Engaging with external stakeholders

Ongoing dialogue with the broader community is essential to developing socially responsible AI. We’ve engaged with policymakers and the tech community, participated in more than 100 workshops, research conferences and summits, and directly engaged with more than 4,000 stakeholders across the world.


As advances in AI continue, we’ll continue to share our perspectives and engage with academia, industry, and policymakers to promote the responsible development of AI. We support smart regulation tailored to specific sectors and use cases, and earlier this year we published this white paper to help promote pragmatic and forward-looking approaches to AI governance. It outlines five areas where government should work with civil society and AI practitioners to cultivate a framework for AI.


We recognize there’s always more to do and will continue working with leaders, policymakers, academics, and other stakeholders from across industries to tackle these important issues. Having these conversations, doing the proper legwork, and including the widest array of perspectives are critical to ensuring that AI joins the long list of technologies transforming life for the better.

Changes to the user management interface in the Admin console

What’s changing 

We’re making some changes to the interface you use to manage users in the Admin console. Specifically, you may notice the following updates when you go to Admin console > Users:

  • New text buttons for user management. The buttons that appear when you hover over a user in the user list have been changed from icons to text. 
  • New text links to add users. You can now use text buttons at the top of the table. These replace the ‘+’ button that was previously used to add users. 
  • Dynamic table title bar. There are now different options displayed in the table depending on whether you have any rows selected (see image below). 


See below for more details and images of the new interface.

Who’s impacted 

Admins only

Why you’d use it 

These changes will make it easier to find common user management features and therefore manage users more quickly through the Admin console.

How to get started 




Additional details 

New text buttons for user management 

Instead of icon buttons, you’ll now have text buttons to complete common user management functions, such as resetting passwords, renaming users, adding to groups, and more.


New text links to add users 

To add users individually or in bulk, use the text links at the top of the user table. Note that these options change when rows are selected (see ‘dynamic table title bar,’ below).


A new way to add users  

Dynamic table title bar  

Options in the table’s title bar will change when you have user rows selected.


Helpful links 

Help Center: Add and manage users 

Availability 

Rollout details 



G Suite editions 
Available to all G Suite editions

On/off by default? 
This feature will be ON by default.

Stay up to date with G Suite launches

Get in the game: Vote for your favourite AFL players with Search


We know that sport holds a special place in Aussies’ hearts – so we’re always looking for ways to help you get closer to the action. Last year, we made updates to Search and the Google Assistant to give you live scores, match results, upcoming fixtures and ladders for AFL, NRL, Rugby, Cricket and more.
The 2019 AFL season has already brought many moments to remember – like Liam Ryan's stunning mark and Eddie Betts delivering magic in the pocket. And with a cracking season underway, we’ve partnered with the Australian Football League to help bring you a new way to get into the game. Starting today, you’ll be able to vote for your favourite AFL players right on Google Search.


To have your say, just search for “AFL vote”. You can vote once per day when you’re signed into your Google Account. There will be one winner for each category, and the results will be shared on AFL.com.au soon after voting closes.

So you can mark your calendar, here are the categories and timeframes for voting:

  • Best on Ground: Voting will be open for every Friday night match from 8:30pm to 10:45pm AEST from Round 15 for the rest of the home and away season and in the Finals.
  • Player of the Round: You can vote for the best player from the weekend from 3pm Monday to 3pm Wednesday AEST during the home and away season, from Round 15 onwards.
  • Fan Awards: Launching at the end of the home and away season. Stay tuned for more exciting categories!

Once you’ve voted, you can also keep up to date with the latest scores, news and the ladder. Just head to Google and type in “AFL” so you don’t miss any hangers, bumps and goals from the weekend.
Whether you’re cheering for Nat Fyfe, Tim Kelly, Luke Parker or one of my favourite Saints, like Jack Billings, be sure to cast your vote and have your say.

Happy Voting!

Predicting Bus Delays with Machine Learning



Hundreds of millions of people across the world rely on public transit for their daily commute, and over half of the world's transit trips involve buses. As the world's cities continue growing, commuters want to know when to expect delays, especially for bus rides, which are prone to getting held up by traffic. While the public transit directions provided by Google Maps are informed by real-time data from many transit agencies, many agencies can’t provide that data due to technical and resource constraints.

Today, Google Maps introduced live traffic delays for buses, forecasting bus delays in hundreds of cities world-wide, ranging from Atlanta to Zagreb to Istanbul to Manila and more. This improves the accuracy of transit timing for over sixty million people. This system, first launched in India three weeks ago, is driven by a machine learning model that combines real-time car traffic forecasts with data on bus routes and stops to better predict how long a bus trip will take.

The Beginnings of a Model
In the many cities without real-time forecasts from the transit agency, we heard from surveyed users that they employed a clever workaround to roughly estimate bus delays: using Google Maps driving directions. But buses are not just large cars. They stop at bus stops; take longer to accelerate, slow down, and turn; and sometimes even have special road privileges, like bus-only lanes.

As an example, let’s examine a Wednesday afternoon bus ride in Sydney. The actual motion of the bus (blue) is running a few minutes behind the published schedule (black). Car traffic speeds (red) do affect the bus, such as the slowdown at 2000 meters, but a long stop at the 800 meter mark slows the bus down significantly compared to a car.
To develop our model, we extracted training data from sequences of bus positions over time, as received from transit agencies’ real-time feeds, and aligned them to car traffic speeds on the bus's path during the trip. The model is split into a sequence of timeline units—visits to street blocks and stops—each corresponding to a piece of the bus's timeline and forecasting its own duration. A pair of adjacent observations usually spans many units, due to infrequent reporting, fast-moving buses, and short blocks and stops.

This structure is well suited for neural sequence models like those that have recently been successfully applied to speech processing, machine translation, etc. Our model is simpler. Each unit predicts its duration independently, and the final output is the sum of the per-unit forecasts. Unlike many sequence models, our model does not need to learn to combine unit outputs, nor to pass state through the unit sequence. Instead, the sequence structure lets us jointly (1) train models of individual units' durations and (2) optimize the "linear system" where each observed trajectory assigns a total duration to the sum of the many units it spans.
To model a bus trip (a) starting at the blue stop, the model (b) adds up the delay predictions from timeline units for the blue stop, the three road segments, the white stop, etc.
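To make the "sum of per-unit forecasts" idea concrete, here is a hypothetical sketch of the trip-level objective under simplified assumptions: each timeline unit predicts its own duration from a feature vector via a shared linear model, and the squared error is computed between an observed travel time and the sum of the predictions over the units that observation spans. The features, the linear model, and the synthetic observations are placeholders, not the production system.

```python
# Hypothetical sketch of the trip-level objective: a trip's predicted time is
# the SUM of the predicted durations of the timeline units (street blocks and
# stops) it spans, and the loss is computed at the trip level only.
import numpy as np

rng = np.random.default_rng(1)
num_units, num_features = 200, 8

unit_features = rng.normal(size=(num_units, num_features))  # per-unit features (placeholder)
true_w = rng.normal(size=num_features)                      # unknown "true" duration model
weights = np.zeros(num_features)                            # model to be learned

# Each observation: (indices of the units it spans, noisy total travel time),
# mimicking sparse bus position reports that cover many units at once.
observations = []
for _ in range(1000):
    span = rng.choice(num_units, size=int(rng.integers(3, 15)), replace=False)
    total = (unit_features[span] @ true_w).sum() + rng.normal(scale=1.0)
    observations.append((span, total))

lr = 1e-3
for _ in range(50):  # a few passes of stochastic gradient descent
    for span, observed in observations:
        per_unit = unit_features[span] @ weights  # predicted duration per unit
        err = per_unit.sum() - observed           # trip-level residual
        # Gradient of 0.5 * err**2: the trip-level error flows back into the
        # shared weights through every unit the observation spans.
        weights -= lr * err * unit_features[span].sum(axis=0)

preds = np.array([(unit_features[s] @ weights).sum() for s, _ in observations])
targets = np.array([o for _, o in observations])
print("trip-level RMSE:", np.sqrt(np.mean((preds - targets) ** 2)))
```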
Modeling the "Where"
In addition to road traffic delays, in training our model we also take into account details about the bus route, as well as signals about the trip's location and timing. Even within a small neighborhood, the model needs to translate car speed predictions into bus speeds differently on different streets. In the left panel below, we color-code our model's predicted ratio between car speeds and bus speeds for a bus trip. Redder, slower parts may correspond to bus deceleration near stops. As for the fast green stretch in the highlighted box, we learn from looking at it in StreetView (right) that our model discovered a bus-only turn lane. By the way, this route is in Australia, where right turns are slower than left, another aspect that would be lost on a model that doesn’t consider peculiarities of location.
To capture unique properties of specific streets, neighborhoods, and cities, we let the model learn a hierarchy of representations for areas of different size, with a timeline unit's geography (the precise location of a road or a stop) represented in the model by the sum of the embeddings of its location at various scales. We first train the model with progressively heavier penalties on finer-grained location features, and use the results for feature selection. This ensures that fine-grained features are used in areas complex enough that a hundred meters affects bus behavior, but not in open countryside, where such fine-grained features seldom matter.
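Here is a hedged sketch of the multi-scale location idea: a unit's geography is looked up in separate embedding tables at several granularities and the vectors are summed, so coarse levels can generalize while fine levels specialize. The scale names, vocabulary sizes, and embedding dimension below are illustrative assumptions, not the production feature set.

```python
# Hedged sketch: represent a timeline unit's location as the sum of learned
# embeddings at several spatial scales. Names, vocabulary sizes, and the
# embedding dimension are illustrative assumptions.
import tensorflow as tf

EMBED_DIM = 16
SCALES = {"street": 100_000, "neighborhood": 10_000, "city": 1_000}

class MultiScaleLocationEmbedding(tf.keras.layers.Layer):
    def __init__(self):
        super().__init__()
        self.tables = {
            name: tf.keras.layers.Embedding(vocab, EMBED_DIM)
            for name, vocab in SCALES.items()
        }

    def call(self, ids):
        # `ids` is a dict of integer id tensors, one per scale; the unit's
        # geography is the sum of its embeddings across all scales.
        return tf.add_n([self.tables[name](ids[name]) for name in SCALES])

layer = MultiScaleLocationEmbedding()
example_ids = {
    "street": tf.constant([42, 17]),
    "neighborhood": tf.constant([7, 7]),
    "city": tf.constant([3, 3]),
}
print(layer(example_ids).shape)  # (2, 16): one summed vector per unit
```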

At training time, we also simulate the possibility of later queries about areas that were not in the training data. In each training batch, we take a random slice of examples and discard geographic features below a scale randomly selected for each. Some examples keep the exact bus route and street, others keep only neighborhood- or city-level locations, and still others have no geographical context at all. This better prepares the model for later queries about areas where we were short on training data. We expand the coverage of our training corpus by using anonymized inferences about user bus trips from the same dataset that Google Maps uses for popular times at businesses, parking difficulty, and other features. However, even this data does not include the majority of the world's bus routes, so our models must generalize robustly to new areas.
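Below is a minimal sketch of that augmentation, assuming each example carries integer location ids at several scales ordered from coarse to fine: for a random slice of the batch we pick a cutoff scale and replace everything finer than the cutoff with a reserved "unknown" id, so the model also learns to predict from coarse context alone. The scale names and the reserved id are assumptions for illustration.

```python
# Hedged sketch of the training-time augmentation: randomly discard geographic
# features below a randomly chosen scale so the model learns to fall back on
# coarser context. Scale ordering and the reserved OOV id are assumptions.
import numpy as np

SCALES = ["city", "neighborhood", "street"]  # coarse -> fine
OOV_ID = 0  # reserved "unknown location" id

def mask_fine_scales(batch, drop_fraction=0.3, seed=None):
    """batch: dict of scale name -> int array of shape (batch_size,)."""
    rng = np.random.default_rng(seed)
    batch = {k: v.copy() for k, v in batch.items()}
    batch_size = len(next(iter(batch.values())))
    to_mask = rng.random(batch_size) < drop_fraction        # the random slice
    cutoffs = rng.integers(0, len(SCALES) + 1, batch_size)  # keep this many coarsest scales
    for i, scale in enumerate(SCALES):
        # Mask this scale for selected examples whose cutoff is coarser than it.
        masked = to_mask & (cutoffs <= i)
        batch[scale][masked] = OOV_ID
    return batch

batch = {
    "city": np.array([3, 3, 5, 9]),
    "neighborhood": np.array([71, 72, 18, 41]),
    "street": np.array([420, 431, 77, 88]),
}
print(mask_fine_scales(batch, seed=7))
```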

Learning the Local Rhythms
Different cities and neighborhoods also run to a different beat, so we allow the model to combine its representation of location with time signals. Buses have a complex dependence on time — the difference between 6:30pm and 6:45pm on a Tuesday might be the wind-down of rush hour in some neighborhoods, a busy dining time in others, and entirely quiet in a sleepy town elsewhere. Our model learns an embedding of the local time of day and day of week signals, which, when combined with the location representation, captures salient local variations, like rush hour bus stop crowds, that aren't observed via car traffic.

This embedding assigns 4-dimensional vectors to times of the day. Unlike most neural net internals, four dimensions is almost few enough to visualize, so let's peek at how the model arranges times of day in three of those dimensions, via the artistic rendering below. The model indeed learns that time is cyclical, placing time in a "loop". But this loop is not just the flat circle of a clock's face. The model learns wide bends that let other neurons compose simple rules to easily separate away concepts like "middle of the night" or "late morning" that don't feature much bus behavior variation. On the other hand, evening commute patterns differ much more among neighborhoods and cities, and the model appears to create more complex "crumpled" patterns between 4pm-9pm that enable more intricate inferences about the timings of each city's rush hour.
The model's time representation (3 out of 4 dimensions) forms a loop, reimagined here as the circumference of a watch. The more location-dependent time windows like 4pm-9pm and 7am-9am get more complex "crumpling", while big featureless windows like 2am-5am get bent away with flat bends for simpler rules. (Artist's conception by Will Cassella, using textures from textures.com and HDRIs from hdrihaven.)
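For illustration, here is a hedged sketch of such a learned time representation: local time of day and day of week are bucketed, each bucket maps to a small learned vector (four-dimensional, as described above), and the result is combined with the location representation downstream. The bucket sizes and the way the two signals are added are assumptions, not the production design.

```python
# Hedged sketch: a small learned embedding of local time. Bucket sizes and the
# combination method are illustrative assumptions; only the 4-dimensional size
# comes from the post.
import tensorflow as tf

TIME_BUCKETS = 24 * 4      # 15-minute buckets over the day (assumption)
DAY_BUCKETS = 7
TIME_DIM = 4               # as described in the post

time_embedding = tf.keras.layers.Embedding(TIME_BUCKETS, TIME_DIM)
day_embedding = tf.keras.layers.Embedding(DAY_BUCKETS, TIME_DIM)

def time_representation(local_minutes, day_of_week):
    """local_minutes: minutes since local midnight; day_of_week: 0..6."""
    bucket = tf.cast(local_minutes // 15, tf.int32)
    return time_embedding(bucket) + day_embedding(day_of_week)

# Example: Tuesday 6:30pm and 6:45pm map to neighboring, learnable vectors
# whose distance the model is free to stretch where behavior changes quickly.
t = time_representation(tf.constant([18 * 60 + 30, 18 * 60 + 45]),
                        tf.constant([1, 1]))
print(t.shape)  # (2, 4)
```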
Together with other signals, this time representation lets us predict complex patterns even if we hold car speeds constant. On a 10km bus ride through New Jersey, for example, our model picks up on lunchtime crowds and weekday rush hours.
Putting it All Together
With the model fully trained, let's take a look at what it learned about the Sydney bus ride above. If we run the model on that day's car traffic data, it gives us the green predictions below. It doesn't catch everything. For instance, it has the stop at 800 meters lasting only 10 seconds, though the bus stopped for at least 31 seconds. But we stay within 1.5 minutes of the real bus motion, catching a lot more of the trip's nuances than the schedule or car driving times alone would give us.
The Trip Ahead
One thing not in our model for now? The bus schedule itself. So far, in experiments with official agency bus schedules, they haven't improved our forecasts significantly. In some cities, severe traffic fluctuations might overwhelm attempts to plan a schedule. In others, the schedules might be accurate, but perhaps only because transit agencies carefully account for the same traffic patterns we already infer from the data.

We continue to experiment with making better use of schedule constraints and many other signals to drive more precise forecasting and make it easier for our users to plan their trips. We hope we'll be of use to you on your way, too. Happy travels!

Acknowledgements
This work was the joint effort of James Cook, Alex Fabrikant, Ivan Kuznetsov, and Fangzhou Xu from Google Research, and Anthony Bertuca, Julian Gibbons, Thierry Le Boulengé, Cayden Meyer, Anatoli Plotnikov, and Ivan Volosyuk from Google Maps. We thank Senaka Buthpitiya, Da-Cheng Juan, Reuben Kan, Ramesh Nagarajan, Andrew Tomkins, and the greater Transit team for support and helpful discussions; as well as Will Cassella for the inspired reimagining of the model's time embedding. We are also indebted to our partner agencies for providing the transit data feeds the system is trained on.

Source: Google AI Blog


Beta Channel Update for Desktop

The beta channel has been updated to 76.0.3809.46 for Windows, Mac, and Linux.

A full list of changes in this build is available in the log. Interested in switching release channels? Find out how here. If you find a new issue, please let us know by filing a bug. The community help forum is also a great place to reach out for help or learn about common issues.



Abdul Syed
Google Chrome

It’s time for a new international tax deal

Finance ministers from the world’s largest economies recently came together and agreed on the need for the most significant reforms to the global tax system in a century. That’s great news.

We support the movement toward a new comprehensive, international framework for how multinational companies are taxed. Corporate income tax is an important way companies contribute to the countries and communities where they do business, and we would like to see a tax environment that people find reasonable and appropriate.

While some have raised concerns about where Google pays taxes, Google’s overall global tax rate has been over 23 percent for the past 10 years, in line with the 23.7 percent average statutory rate across the member countries of the Organization for Economic Co-operation and Development (OECD). Most of these taxes are due in the United States, where our business originated, and where most of our products and services are developed. The rest we paid in the roughly fifty countries around the world where we have offices helping to sell our services.

We’re not alone in paying most of our corporate income tax in our home country. That allocation reflects long-standing rules about how corporate profits should be split among various countries. American companies pay most of their corporate taxes in the United States—just as German, British, French and Japanese firms pay most of their corporate taxes in their home countries.

For over a century, the international community has developed treaties to tax foreign firms in a coordinated way. This framework has always attributed more profits to the countries where products and services are produced, rather than where they are consumed. But it’s time for the system to evolve, ensuring a better distribution of tax income.

The United States, Germany, and other countries have put forward new proposals for modernizing tax rules, with more taxes paid in countries where products and services are consumed. We hope governments can develop a consensus around a new framework for fair taxation, giving companies operating around the world clear rules that promote sensible business investment.

The need for modernization isn’t limited to the technology sector. Both the OECD and a group of EU experts have concluded that the wider economy is “digitizing,” creating a need for broad-based reform of current rules. Almost all multinational companies use data, computers, and internet connectivity to power their products and services. And many are seeking ways to integrate these technologies, creating “smart” appliances, cars, factories, homes and hospitals. 

But even as this multilateral process is advancing, some countries are considering going it alone, imposing new taxes on foreign companies. Without a new, comprehensive and multilateral agreement, countries might simply impose discriminatory unilateral taxes on foreign firms in various sectors. Indeed, we already see such problems in some of the specific proposals that have been put forward.   

That kind of race to the bottom would create new barriers to trade, slow cross-border investment, and hamper economic growth. We’re already seeing this in a handful of countries proposing new taxes on all kinds of goods—from software to consumer products—that involve intellectual property. Specialized taxes on a handful of U.S. technology companies would do little more than claim taxes that are currently owed in the U.S., heightening trade tensions. But if governments work together, more taxes can be paid where products and services are consumed, in a coordinated and mutually acceptable way. This give-and-take is needed to ensure a better, more balanced global tax system.

We believe this approach will restore confidence in the international tax system and promote more cross-border trade and investment. We strongly support the OECD’s work to end the current uncertainty and develop new tax principles. We call on governments and companies to work together to accelerate this reform and forge a new, lasting, and global agreement.