Check out the Google Assistant talks at I/O 2019

Posted by Mary Chen, Strategy Lead, Actions on Google

This year at Google I/O, the Actions on Google team is sharing new ways developers of all types can use the Assistant to help users get things done. Whether you’re making Android apps, websites, web content, Actions, or IoT devices, you’ll see how the Assistant can help you engage with users in natural and conversational ways.

Tune in to our announcements during the developer keynote, and then dive deeper with our technical talks. We’ve listed the talks below by area of interest. Make sure to bookmark them and reserve your seat if you’re attending live, or check back for livestream details if you’re joining us online.


For anyone new to building for the Google Assistant


For Android app developers


For webmasters, web developers, and content creators


For smart home developers


For anyone building an Action from scratch


For insight and beyond


In addition to these sessions, stay tuned for interactive demos and codelabs that you can try at I/O and at home. Follow @ActionsOnGoogle for updates and highlights before, during, and after the festivities.

See you soon!

Dev Channel Update for Desktop

The Dev channel has been updated to 75.0.3766.2 for Windows, Mac & Linux.


A partial list of changes is available in the log. Interested in switching release channels? Find out how. If you find a new issue, please let us know by filing a bug. The community help forum is also a great place to reach out for help or learn about common issues.
Srinivas Sista
Google Chrome

Take Your Best Selfie Automatically, with Photobooth on Pixel 3



Taking a good group selfie can be tricky—you need to hover your finger above the shutter, keep everyone’s faces in the frame, look at the camera, make good expressions, try not to shake the camera and hope no one blinks when you finally press the shutter! After building the technology behind automatic photography with Google Clips, we asked ourselves: can we bring some of the magic of this automatic picture experience to the Pixel phone?

With Photobooth, a new shutter-free mode in the Pixel 3 Camera app, it’s now easier to shoot selfies—solo, couples, or even groups—that capture you at your best. Once you enter Photobooth mode and click the shutter button, it will automatically take a photo when the camera is steady and it sees that the subjects have good expressions with their eyes open. And in the newest release of Pixel Camera, we’ve added kiss detection to Photobooth! Kiss a loved one, and the camera will automatically capture it.

Photobooth automatically captures group shots when everyone in the photo looks their best.
Photobooth joins Top Shot and Portrait mode in a suite of exciting Pixel camera features that help you take the best pictures possible. Unlike Portrait mode, which takes advantage of specialized hardware in the back-facing camera to provide its most accurate results, Photobooth is optimized for the front-facing camera. To build Photobooth, we had to solve three challenges: how to identify good content for a wide range of user groups, how to time the shutter to capture the best moment, and how to animate a visual element that helps users understand what Photobooth sees and captures.

Models for Understanding Good Content
In developing Photobooth, a main challenge was to determine when there was good content in either a typical selfie, in which the subjects are all looking at the camera, or in a shot that includes people kissing and not necessarily facing the camera. To accomplish this, Photobooth relies on two distinct models to capture good selfies—a model for facial expressions and a model to detect when people kiss.

We worked with photographers to identify five key expressions that should trigger capture: smiles, tongue-out, kissy/duck face, puffy-cheeks, and surprise. We then trained a neural network to classify these expressions. The kiss detection model used by Photobooth is a variation of the Image Content Model (ICM) trained for Google Clips, fine-tuned specifically to focus on kissing. Both models use MobileNets so they can run efficiently on-device while continuously processing images at a high frame rate. The outputs of the models are used to evaluate the quality of each frame for the shutter control algorithm.
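The post describes these models in prose only; as a rough illustration of the kind of on-device setup involved, here is a minimal Python sketch of scoring a face crop with a MobileNet-style TensorFlow Lite classifier. The model file name, label ordering, and preprocessing are hypothetical placeholders, not Photobooth’s actual assets.

```python
# Minimal sketch (not Photobooth's actual code): per-frame expression scoring
# with a MobileNet-style TFLite classifier. Model path, label order, and
# preprocessing are hypothetical placeholders.
import numpy as np
import tensorflow as tf

EXPRESSIONS = ["smile", "tongue_out", "duck_face", "puffy_cheeks", "surprise"]

interpreter = tf.lite.Interpreter(model_path="expression_mobilenet.tflite")
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

def expression_scores(face_crop):
    """Return a confidence for each key expression for one face crop.

    face_crop: HxWx3 uint8 array already resized to the model's input size.
    """
    x = np.expand_dims(face_crop.astype(np.float32) / 255.0, axis=0)
    interpreter.set_tensor(inp["index"], x)
    interpreter.invoke()
    probs = interpreter.get_tensor(out["index"])[0]
    return dict(zip(EXPRESSIONS, probs))
```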

Shutter Control
Once you click the shutter button in Photobooth mode, Photobooth performs a basic quality assessment based on the content scores from the models above. This first stage acts as a filter, rejecting moments that contain closed eyes, talking, or motion blur, or in which the models fail to detect the learned facial expressions or kissing actions. Photobooth temporally analyzes the expression confidence values to detect their presence in the photo, making it robust to variations in the output of machine learning (ML) models. Once the first stage is passed, each frame is subjected to a more fine-grained analysis, which outputs an overall frame score.

The frame score considers both facial expression quality and the kiss score. As the kiss detection model operates on the entire frame, its output can be used directly as a full-frame score value for kissing. The facial expression model outputs a score for each identified expression. Since a variable number of faces may be present in each frame, Photobooth applies an attention model that uses the detected expressions to iteratively compute an expression quality representation and weight for each face. The weighting is important, for example, to emphasize the expressions in the foreground rather than the background. The model then calculates a single, global score for the quality of expressions in the frame.
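The post doesn’t publish the attention model itself; as a hedged sketch of the general idea, the snippet below combines per-face expression scores into one frame-level score with a softmax weighting, using relative face size as a crude stand-in for the learned foreground emphasis.

```python
import numpy as np

def frame_expression_score(face_scores, face_areas, sharpness=4.0):
    """Combine per-face expression scores into one frame-level score.

    face_scores: best expression confidence per detected face, in [0, 1].
    face_areas: relative face area per face; used here as a crude proxy for
        the learned attention that emphasizes foreground faces.
    """
    if len(face_scores) == 0:
        return 0.0
    scores = np.asarray(face_scores, dtype=np.float64)
    areas = np.asarray(face_areas, dtype=np.float64)
    logits = sharpness * (scores + areas)
    weights = np.exp(logits - logits.max())
    weights /= weights.sum()                 # softmax attention weights
    return float(np.dot(weights, scores))    # single global expression score
```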

The final image quality score used for triggering the shutter is computed as a weighted combination of the attention-based facial expression score and the kiss score. In order to detect the peak quality, the shutter control algorithm maintains a short buffer of observed frames and only saves a shot if its frame score is higher than those of the frames that come after it in the buffer. The buffer is short enough to give users a sense of real-time feedback.
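Putting the two stages together, here is a compact sketch of the control loop as the post describes it: gate each frame on basic quality, mix the expression and kiss scores, and save a frame only once it proves to be a local quality peak within a short look-ahead buffer. The gating signals, mixing weights, and buffer length are illustrative assumptions.

```python
from collections import deque
from dataclasses import dataclass

BUFFER_LEN = 5                        # look-ahead length; illustrative only
EXPR_WEIGHT, KISS_WEIGHT = 0.7, 0.3   # assumed mixing weights

@dataclass
class FrameQuality:
    detected: bool      # models detected a key expression or a kiss
    eyes_closed: bool
    talking: bool
    motion_blur: bool

buffer = deque(maxlen=BUFFER_LEN)     # (frame, score) pairs awaiting a verdict

def passes_first_stage(q: FrameQuality) -> bool:
    """Stage 1: reject blinks, talking, motion blur, or no detection."""
    return q.detected and not (q.eyes_closed or q.talking or q.motion_blur)

def process(frame, q, expr_score, kiss_score, save):
    if not passes_first_stage(q):
        buffer.clear()
        return
    score = EXPR_WEIGHT * expr_score + KISS_WEIGHT * kiss_score
    buffer.append((frame, score))
    if len(buffer) == BUFFER_LEN:
        # Save the oldest frame only if no later frame in the buffer beats
        # it, i.e. it is a local peak in quality.
        oldest_frame, oldest_score = buffer[0]
        if all(oldest_score >= s for _, s in list(buffer)[1:]):
            save(oldest_frame)
        buffer.popleft()
```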

Intelligence Indicator
Since Photobooth uses the front-facing camera, the user can see and interact with the display while taking a photo. Photobooth mode includes a visual indicator, a bar at the top of the screen that grows in size when photo quality scores increase, to help users understand what the ML algorithms see and capture. The length of the bar is divided into four distinct ranges: (1) no faces clearly seen, (2) faces seen but not paying attention to the camera, (3) faces paying attention but not making key expressions, and (4) faces paying attention with key expressions.

In order to make this indicator more interpretable, we constrained the bar to these ranges, which prevents its length from scaling too rapidly. The result is smooth variation of the bar length as the quality score changes, which improves its utility. When the indicator bar reaches a length representative of a high quality score, the screen flashes to signify that a photo was captured and saved.
Using ML outputs directly as intelligence feedback results in rapid variation (left), whereas specifying explicit ranges creates a smooth signal (right).
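A small sketch of the quantization idea behind the indicator: instead of mapping the raw ML score directly to bar length (which jitters frame to frame), snap the score to one of the four ranges and ease the displayed bar toward that range’s target length. The thresholds, targets, and smoothing factor below are invented for illustration.

```python
# Illustrative only: quantize a raw quality score in [0, 1] into the four
# indicator ranges, then ease the displayed bar toward the range's target
# length so it varies smoothly. Values are assumptions, not Photobooth's.
RANGE_TARGETS = [0.1, 0.35, 0.65, 1.0]   # bar length per range
THRESHOLDS = [0.25, 0.5, 0.75]           # boundaries between the four ranges

bar_length = 0.0

def update_bar(raw_score, smoothing=0.2):
    global bar_length
    rng = sum(raw_score > t for t in THRESHOLDS)       # range index 0..3
    target = RANGE_TARGETS[rng]
    bar_length += smoothing * (target - bar_length)    # exponential easing
    return bar_length
```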
Conclusion
We’re excited by the possibilities of automatic photography on camera phones. As computer vision continues to improve, in the future we may generally trust smart cameras to select a great moment to capture. Photobooth is an example of how we can carve out a useful corner of this space—selfies and group selfies of smiles, funny faces, and kisses—and deliver a fun and useful experience.

Acknowledgments
Photobooth was a collaboration of several teams at Google. Key contributors to the project include: Kojo Acquah, Chris Breithaupt, Chun-Te Chu, Geoff Clark, Laura Culp, Aaron Donsbach, Relja Ivanovic, Pooja Jhunjhunwala, Xuhui Jia, Ting Liu, Arjun Narayanan, Eric Penner, Arushan Raj, Divya Tyam, Raviteja Vemulapalli, Julian Walker, Jun Xie, Li Zhang, Andrey Zhmoginov, Yukun Zhu.

Source: Google AI Blog


At Tech Day, hundreds of kids dive deep into STEM

On April 13 and 14, Google’s Mountain View campus suddenly had a much younger population. That’s because 875 high school students stopped by for Google’s fourth annual Tech Day. Over 150 Google and Alphabet volunteers joined the kids in 129 interactive STEM (science, technology, engineering and math) activities to empower them with knowledge and inspire them to get started in disciplines like computer science.

But Tech Day isn’t just about fun and games. The event was designed for students who may not have regular access to technology classes at their schools. Many of the students who attend arrive with little experience in technology and computing, but they can leave the event inspired to pursue a new career path.

Software engineer Matthew Dierker started Tech Day in 2016, based on a similar event at his alma mater. He started his university’s program along with a friend at the University of Illinois, and decided to bring the idea to the Bay Area. “I thought it'd be a natural fit here, given the large number of passionate engineers in Silicon Valley, plus I like organizing stuff,” he says. “I gathered a few friends and that effort found a good home in Google's engEDU initiative.”

Students learn technology at Google's Tech Day event.

Since then, Tech Day has expanded to a full weekend, with three times the students it had in 2016. And the list of activities has grown beyond just classes. Kids can now participate in games and breakout sessions that help them loosen up around technology. The event’s organizers say one of the biggest obstacles the kids face is not seeing all the career options they may have. “They might think they can’t work in any role in tech just because they struggle with math. This isn’t the case,” says Melaena Roberts, a software engineer and volunteer team lead.

User experience designer Bingying Xia says she volunteers at Tech Day because she’d like to let students know that there's more to tech than computer science. “The world also needs smart, creative designers to find user problems and come up with innovative design solutions,” she says.

Even if students aren’t interested in pursuing a career in the industry, one of Tech Day’s biggest goals is to make technology seem less intimidating. “Technological skills apply to any job, even outside of the technology industry. Tech isn't all sitting at a desk in front of a computer,” Matthew says. “If that inspires enough curiosity to keep someone learning, the skills they learn will almost certainly be useful regardless of what they wind up doing.”

Organizers and volunteers really invest themselves during Tech Day to give students as much knowledge as they can, but they learn a lot from the students, too. Melaena says student feedback has informed how Tech Day has changed over the years. Volunteer Volker Grabe, a software engineer at Waymo, says he notices kids speak their minds more as the day goes on and they realize the day isn’t as tough or competitive as they expected.

Their main takeaway from the students? They’re curious about tech and excited to learn outside the classroom. “I saw raw passion, curiosity, and excitement in the students,” says volunteer Hannah Huynh, a product design engineer. “I was impressed that these students were dedicated enough to give up their weekend to learn about engineering.”

New research shows how Android helps companies build a digital workforce

IDC reports that by 2022, 75 percent of CIOs who don’t transition their organization to flexible IT product teams that use technology to solve problems in new ways will fall behind the competition. According to IDC, mobility is the key to building a connected workforce that’s agile, particularly when the organization is going through rapid change.  


In new research sponsored by Google, IDC asserts that teams can thrive with platforms that feature a diversity of hardware, offer strong security, and support IT management balanced with user experience. This series of whitepapers, videos, and blog posts details the critical role that mobility plays in achieving these core pillars and the strengths that Android offers as a strategic platform of choice for the enterprise.


Phil Hochmuth, Program Director of IDC Mobility, said that for businesses to transform how their workers do their jobs with mobility, they must address key challenges around mobile computing risk, device capabilities, and form-factor selection, as well as the underlying provisioning and management of mobile end-user technology. IDC sees Android as a strategic platform that addresses each pillar to consider when choosing a mobile platform: overall security, solution breadth, and IT management capabilities balanced with user experience.

Android security extends from the hardware to the application stack, ensuring corporate data is kept secure. Our broad set of OEM partners offers a wide range of price points and form factors that can enable every worker. And Android’s IT management capabilities span from the Work Profile, which separates personal data from corporate data access on a BYOD or personally enabled device, to locked-down modes that restrict the device experience to a set of IT-approved applications. Combined with innovative tools that bring machine learning, immersive experiences, and both native and web apps to users, Android is well suited to powering an organization’s digital transformation efforts.

Explore the IDC findings to discover how Android powers a mobile, connected workforce and can help your company take the next steps toward transitioning to a digital workforce.

Improvements to organizing and finding Team Drives

What’s changing 

As a result of your feedback, we are introducing improvements to how you organize and find files in Team Drives. These improvements include the ability to:
  • Hide Team Drives on web and mobile 
  • Search by Team Drive file creator 

Who’s impacted 

End users

Why you’d use it 

These improvements allow you to quickly access the Team Drives or files within Team Drives that are most important to you by:
  • Slimming down your list of Team Drives by hiding and unhiding as needed. 
  • Searching for items that have been created by a user in a Team Drive, similar to the search by owner in My Drive. 

How to get started 

  • Admins: No action needed. 
  • End users: 
    •  Hiding Team Drives 
      • See our Help Center for details on how to hide and unhide Team Drives 
      • You can select more than one Team Drive to hide on web
    •  Search by Team Drive file creator 
      • On the web: to search for files originally created in a Team Drive by a specific user, use the “creator:” operator followed by their email address (for example, creator:jane@example.com). 

Additional details 

Streamline your list of Team Drives by hiding inactive or irrelevant Team Drives
You may have a long list of Team Drives in Drive’s left-hand panel. Now, you can hide Team Drives for completed projects or ones that aren’t relevant to you. Hide individual Team Drives as needed, or select multiple Team Drives and hide them all at once. Hiding Team Drives is available on web and mobile.



Search for files located in a Team Drive 

People can search for My Drive files by owner, but Team Drive files are owned by the team. This makes them harder to search for. Now, you can search by “creator” for files located in a Team Drive. 

Often you remember who created the content rather than where it’s located; searching by “creator” addresses this. To learn more about finding files in Google Drive, see here.


Helpful links 

To learn more about finding files in Google Drive, see here.
To learn more about sharing files with Team Drives, see here.
To learn more about Team Drives limits, see here.
To learn more about known issues with Team Drives, see here.

Availability 

Rollout details 
G Suite editions 
  • Available to G Suite Business, G Suite Enterprise, G Suite Enterprise for Education and G Suite for Nonprofits. 
  • Not available to G Suite Basic. 
On/off by default? 

  • These features will be ON by default. 

Stay up to date with G Suite launches

Improving the update process with your feedback

Posted by Sameer Samat, VP of Product Management, Android & Google Play

Thank you for all the feedback about updates we’ve been making to Android APIs and Play policies. We’ve heard your requests for improvement as well as some frustration. We want to explain how and why we’re making these changes, and how we are using your feedback to improve the way we roll out these updates and communicate with the developer community.

From the outset, we’ve sought to craft Android as a completely open source operating system. We’ve also worked hard to ensure backwards compatibility and API consistency, out of respect and a desire to make the platform as easy to use as possible. This developer-centric approach and openness have been cornerstones of Android’s philosophy from the beginning. These are not changing.

But as the platform grows and evolves, each decision we make comes with trade-offs. Every day, billions of people around the world use the apps you’ve built to do incredible things like connect with loved ones, manage finances, or communicate with doctors. Users want more control and transparency over how their personal information is being used by applications, and expect Android, as the platform, to do more to provide that control and transparency. This responsibility to users is something we have always taken seriously, and that’s why we are taking a comprehensive look at how our platform and policies reflect that commitment.

Taking a closer look at permissions

Earlier this year, we introduced Android Q Beta with dozens of features and improvements that provide users with more transparency and control, further securing their personal data. Along with the system-level changes introduced in Q, we’re also reviewing and refining our Play Developer policies to further enhance user privacy. For years, we’ve required developers to disclose the collection and use of personal data so users can understand how their information is being used, and to only use the permissions that are really needed to deliver the features and services of the app. As part of Project Strobe, which we announced last October, we are rolling out specific guidance for each of the Android runtime permissions, and we are holding apps developed by Google to the same standard.

We started with changes to SMS and Call Log permissions late last year. To better protect sensitive user data available through these permissions, we restricted access to select use cases, such as when an app has been chosen by the user to be their default text message app. We understood that some app features using this data would no longer be allowed -- including features that many users found valuable -- and worked with you on alternatives where possible. As a result, today, the number of apps with access to this sensitive information has decreased by more than 98%. The vast majority of these were able to switch to an alternative or eliminate minor functionality.

Learning from developer feedback

While these changes are critical to help strengthen privacy protections for our users, we’re sensitive that evolving the platform can lead to substantial work for developers. We have a responsibility to make sure you have the details and resources you need to understand and implement changes, and we know there is room for improvement there. For example, when we began enforcing these new SMS and Call Log policies, many of you expressed frustration about the decision making process. There were a number of common themes that we wanted to share:

  • Permission declaration form. Some of you felt that the use case descriptions in our permissions declaration form were unclear and hard to complete correctly.
  • Timeliness in review and appeals process. For some of you, it took too long to get answers on whether apps met policy requirements. Others felt that the process for appealing a decision was too long and cumbersome.
  • Getting information from a ‘real human’ at Google. Some of you came away with the impression that our decisions were automated, without human involvement. And others felt that it was hard to reach a person who could help provide details about our policy decisions and about new use cases proposed by developers.

In response, we are improving and clarifying the process, including:

  • More detailed communication. We are revising the emails we send for policy rejections and appeals to explain in more detail why a decision was made, how you can modify your app to comply, and how to appeal.
  • Evaluations and appeals. We will include appeal instructions in all enforcement emails, and a detailed appeal form can also be found in our Help Center. We will also be reviewing and improving our appeals process.
  • Growing the team. Humans, not bots, already review every sensitive decision, but we are improving our communication so responses are more personalized -- and we are expanding our team to help accelerate the appeals process.

Evaluating developer accounts

We have also heard concerns from some developers whose accounts have been blocked from distributing apps through Google Play. While the vast majority of developers on Android are well-meaning, some accounts are suspended for serious, repeated violation of policies that protect our shared users. Bad-faith developers often try to get around this by opening new accounts or using other developers’ existing accounts to publish unsafe apps. While we strive for openness wherever possible, in order to prevent bad-faith developers from gaming our systems and putting our users at risk in the process, we can’t always share the reasons we’ve concluded that one account is related to another.

While 99%+ of these suspension decisions are correct, we are also very sensitive to how impactful it can be if your account has been disabled in error. You can immediately appeal any enforcement, and each appeal is carefully reviewed by a person on our team. During the appeals process, we will reinstate your account if we discover that an error has been made.

Separately, we will soon be taking more time (days, not weeks) to review apps by developers that don’t yet have a track record with us. This will allow us to do more thorough checks before approving apps to go live in the store and will help us make even fewer inaccurate decisions on developer accounts.

Thank you for your ongoing partnership and for continuing to make Android an incredibly helpful platform for billions of people around the world.

How useful did you find this blog post?

Avoid double-booking rooms in Calendar

What’s changing

Rooms will no longer accept two Calendar events that overlap in time.

Previously, if an event was created directly on a room’s calendar by someone with manage permissions for the resource, the room would accept this meeting even if another event had already added this room for that same time period.

Now, if the room has already accepted another meeting, creating a new event at the same time directly on the room’s calendar will result in the room declining the conflicting meeting.
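For developers who book rooms programmatically, the new behavior surfaces as the room resource declining the conflicting event. Below is a hedged Python sketch using the Google Calendar API v3; the `service` object is assumed to be an authorized googleapiclient Calendar client, and the room’s resource email is a placeholder. Because the room responds asynchronously, the sketch re-fetches the event rather than reading the status off the insert response.

```python
# Sketch: book a room via the Calendar API v3 and check whether the room
# declined (e.g. due to the new conflict behavior). `service` is assumed to
# be an authorized googleapiclient Calendar client; the room's resource
# email is a placeholder.
import time

ROOM = "room-a@resource.calendar.google.com"  # placeholder resource email

event = {
    "summary": "Design review",
    "start": {"dateTime": "2019-05-07T10:00:00-07:00"},
    "end": {"dateTime": "2019-05-07T11:00:00-07:00"},
    "attendees": [{"email": ROOM}],
}
created = service.events().insert(calendarId="primary", body=event).execute()

time.sleep(5)  # the room accepts or declines asynchronously
fetched = service.events().get(calendarId="primary",
                               eventId=created["id"]).execute()
for attendee in fetched.get("attendees", []):
    if attendee.get("email") == ROOM and \
            attendee.get("responseStatus") == "declined":
        print("Room declined: it is already booked for that time.")
```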

Who’s impacted

End users

Why this matters

This means that you’ll no longer have to scramble to find an alternative room if your meeting room was double-booked.

How to get started


  • Admins: No action required.
  • End users: No action required; this behavior will happen automatically. For situations where you’d like a long room hold (such as an all-day event) in which you also want to schedule individual sessions in the same room, we recommend the following workaround:
    • Create a long hold booking of the desired room.
    • Create the individual sessions, noting the room in each session’s location field or description; for example, “Room A [Separate room hold].”
    • Manually overwrite the Hangout information for the individual sessions with the Hangout ID of the long hold.

Additional details

This change in behavior only impacts future room bookings. Existing events will not be impacted.

Availability

Rollout details


G Suite editions
Available to all G Suite editions.

On/off by default?
This feature will be ON by default.

Stay up to date with G Suite launches

Want to Change the Game? Design your own with Google Play

Calling all future game creators and designers! We’re looking for teens to share their game idea and vision for the future of gaming for a chance to see their game come to life on Google Play.

Today, we’re opening up our second annual Change the Game Design Challenge with Girls Make Games to inspire teens to consider a career in gaming—and celebrate women as players and creators. The Grand Prize Winner will win a $15,000 college scholarship and $15,000 for their school or community center’s technology program.

The top five finalists will serve as the creative directors for their game, teaming up with Girls Make Games and game industry veterans to develop and launch their game on Google Play. They’ll also receive an all-expenses paid trip to Los Angeles to showcase their game design and meet the mentors who will be helping to build their game. The finalists will join a celebration of women in gaming, get a VIP tour of Google Los Angeles, a scholarship to attend Girls Make Games Summer Camp and more.

The contest is open to U.S. residents only. For more information, including submission guidelines and how to enter, please visit g.co/ctgdesignchallenge. Looking for inspiration on what kind of game to create? Check out what last year’s finalists dreamed up.

Gathering insights in Google Analytics can be as easy as A-B-C

Today’s customers are deeply curious, searching high and low for information about a product before making a purchase. And this curiosity applies to purchases big and small: just consider the fact that mobile searches for “best earbuds” have grown by over 130 percent over the last two years (Google Data, US, Oct 2015 - Sep 2016 vs. Oct 2017 - Sep 2018). To keep up with this curious customer, marketers are putting insights at the center of their strategy so that they can understand customers’ intentions and deliver a helpful, timely experience.

In our new guide about linking Google Analytics and Google Ads, we explore the broad range of reports available in Analytics. These reports give you crucial insights about the customer journey that can then be used to inform your campaigns in Google Ads. Here’s what you should know about the A-B-Cs of reporting.


Acquisition reports

How did your customers end up on your site in the first place? Acquisition reports answer this question, offering insights about how effectively your ads drive users to your site, which keywords and search queries are bringing new users to your site, and much more. This video gives you a quick overview of how Acquisition reports work.  


Behavior reports

How do your users engage with your site once they visit? Behavior reports give you valuable insights into how users respond to the content on your site. You can learn how each page is performing, what actions users are taking on your site, and much more about the site experience. Learn more about behavior reporting here.


Conversion reports

What path are users taking toward conversion? Conversion reporting in Analytics gathers valuable insights about the actions that matter most to the success of your business, such as a purchase or a completed sign-up for your email newsletter. Goal Flow reports help you see how users engage as they move toward a conversion, while Ecommerce reports are specifically designed to deliver insights for sites centered around purchases.
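These reports live in the Analytics UI, but the same conversion data can also be pulled programmatically. As a minimal sketch using the Analytics Reporting API v4 (the view ID is a placeholder, and `analytics` is assumed to be an authorized googleapiclient service built for 'analyticsreporting', v4), the query below pulls goal completions broken out by source/medium:

```python
# Sketch: pull goal completions by source/medium via the Analytics
# Reporting API v4. VIEW_ID is a placeholder; `analytics` is assumed to be
# an authorized service from build('analyticsreporting', 'v4', ...).
VIEW_ID = "ga:XXXXXXXX"  # placeholder view ID

response = analytics.reports().batchGet(body={
    "reportRequests": [{
        "viewId": VIEW_ID,
        "dateRanges": [{"startDate": "30daysAgo", "endDate": "today"}],
        "metrics": [{"expression": "ga:goalCompletionsAll"}],
        "dimensions": [{"name": "ga:sourceMedium"}],
    }]
}).execute()

for row in response["reports"][0]["data"].get("rows", []):
    source_medium = row["dimensions"][0]
    completions = row["metrics"][0]["values"][0]
    print(source_medium, completions)
```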


Reports open up a world of actionable insights that help you deeply understand and then quickly enhance a customer journey that is more complex than ever.


Missed the other posts in this series? Catch up now to read how creating effective campaigns for the modern customer journey can be achieved by bringing Google Analytics and Google Ads together.

And, download our new guide and learn how getting started with these reports is easy as A-B-C.