Tag Archives: developers

New security protections to reduce risk from unverified apps

We’re constantly working to secure our users and their data. Earlier this year, we detailed some of our latest anti-phishing tools and rolled out developer-focused updates to our app publishing processes, risk assessment systems, and user-facing consent pages. Most recently, we introduced OAuth apps whitelisting in G Suite to enable admins to choose exactly which third-party apps can access user data.

Over the past few months, we’ve required that some new web applications go through a verification process prior to launch based upon a dynamic risk assessment.

Today, we’re expanding on that foundation and introducing additional protections: bolder warnings to inform users about newly created web apps and Apps Scripts that are pending verification. These changes will also improve the developer experience. In the coming months, we will begin extending the verification process and the new warnings to existing apps as well.

Protecting against unverified apps 

Beginning today, we’re rolling out an “unverified app” screen for newly created web applications and Apps Scripts that require verification. This new screen replaces the “error” page that developers and users of unverified web apps receive today.

The “unverified app” screen precedes the permissions consent screen for the app and lets potential users know that the app has yet to be verified. This will help reduce the risk of user data being phished by bad actors.

The "unverified app" consent flow

This new notice will also help developers test their apps more easily. Since users can choose to acknowledge the “unverified app” alert, developers can now test their applications without having to go through the OAuth client verification process first (see our earlier post for details).

Developers can follow the steps laid out in this help center article to begin the verification process, remove the interstitial, and prepare their apps for launch.

Extending security protections to Google Apps Script 

We’re also extending these same protections to Apps Script. Beginning this week, new Apps Scripts requesting OAuth access to data from consumers or from users in other domains may also see the "unverified app" screen. For more information about how these changes affect Apps Script developers and users, see the verification documentation page.

Apps Script is proactively protecting users from abusive apps in other ways as well. Users will see new cautionary language reminding them to “consider whether you trust” an application before granting OAuth access, as well as a banner identifying web pages and forms created by other users.
Updated Apps Script pre-OAuth alert with cautionary language
Apps Script user-generated content banner

Extending protections to existing apps 

In the coming months, we will continue to enhance user protections by extending the verification process beyond newly created apps, to existing apps as well. As a part of this expansion, developers of some current apps may be required to go through the verification flow.

To help ensure a smooth transition, we recommend developers verify that their contact information is up-to-date. In the Google Cloud Console, developers should ensure that the appropriate and monitored accounts are granted either the project owner or billing account admin IAM role. For help with granting IAM roles, see this help center article.

In the API manager, developers should ensure that their OAuth consent screen configuration is accurate and up-to-date. For help with configuring the consent screen, see this help center article.

We’re committed to fostering a healthy ecosystem for both users and developers. These new notices will automatically inform users if they may be at risk, enabling them to make informed decisions to keep their information safe, and will make it easier for developers to test and develop apps.

After a “close call,” a coding champion

Eighteen-year-old Cameroon resident Nji Collins had just put the finishing touches on his final submission for the Google Code-In competition when his entire town lost internet access. It stayed dark for two months.

“That was a really, really close call,” Nji, who prefers to be called Collins, tells the Keyword, adding that he traveled to a neighboring town every day to check his email and the status of the contest. “It was stressful.”

Google’s annual Code-In contest, an effort to introduce teenagers to the world of open source, invites high school students from around the world to compete. It’s part of our mission to encourage and inspire the next generation of computer scientists, and in turn, the contest allows these young people to play a role in building real technologies.

Over the course of the competition, participants complete open-source coding and design “tasks” administered by an array of tech companies like Wikimedia and OpenMRS. Tasks range from editing webpages to updating databases to making videos; one of Collins’ favorites, for example, was making the OpenMRS home page sensitive to keystrokes. This year, more than 1,300 entrants from 62 countries completed nearly 6,400 assignments.

While Google sponsors and runs the contest, the participating tech organizations, who work most closely with the students, choose the winners. Those who finish the most tasks are named finalists, and the companies each select two winners from that group. Those winners are then flown to San Francisco, CA for an action-packed week involving talks at the Googleplex in Mountain View, office tours, Segway journeys through the city, and a sunset cruise on the SF Bay.

The 2017 Code-In winners

“It’s really fun to watch these kids come together and thrive,” says Stephanie Taylor, Code-In’s program manager. “We bring together students from, say, Thailand and Poland who have something in common: a shared love of computer science. Lifelong friendships are formed on these trips.”

Indeed, many Code-In winners say the community is their main motivator for joining the competition. “The people are what brought me here and keep me here,” says Sushain Cherivirala, a Carnegie Mellon computer science major and former Code-In winner who now serves as a program mentor. Mentors work with Code-In participants throughout the course of the competition to help them complete tasks and interface with the tech companies.

Code-In winners on the Google campus

Code-In also acts as an accessible introduction to computer science and the open source world. Mira Yang, a 17-year-old from New Jersey, learned how to code for the first time this year. She says she never would have even considered studying computer science further before she dabbled in a few Code-In tasks. Now, she plans to major in it.

Code-In winners Nji Collins and Mira Yang

“Code-In changed my view on computer sciences,” she says. “I was able to learn that I can do this. There’s definitely a stigma for girls in CS. But I found out that people will support you, and there’s a huge network out there.”

That network extended to Cameroon, where Collins’ patience and persistence paid off as he waited out his town’s internet blackout. One afternoon, while checking his email a few towns away, he discovered he’d been named a Code-In winner. He had been a finalist the year prior, when he was the only student from his school to compete. This year, he’d convinced a handful of classmates to join in.

“It wasn’t fun doing it alone; I like competition,” says Collins, who learned to code by doing his older sister’s computer science homework assignments alongside her. “It pushes me to work harder.”

Learn more about the annual Code-In competition.

Identifying app usage in your Google Drive audit logs

If you’re a G Suite admin (or a developer creating apps for admins), it’s important to understand the various applications your company’s employees are using and how they’re accessing them. Today, we’re making that easier by introducing app identification (i.e. originating_app_id) in the Google Drive audit logs within the Admin SDK Reports API.

Now, your apps will be able to determine whether an activity logged was performed by a user in the Drive Android app, Drive iOS app, Google Chrome, or through a variety of other third-party apps that leverage, modify or create files within Google Drive, like Smartsheet or Asana. This will give you a better sense of the apps being used in your organization, as well as the extent and context of that usage.

Note that app IDs in the logs will be numeric. Should you want to retrieve app names, a separate request using the Google Drive REST API is needed. If you already retrieve information through the Drive activity request, you should start seeing the originating_app_ids show up in your logs. You can query this information with a pair of HTTP requests: one to the Reports API for the Drive activity log, and one to the Drive REST API to look up app names.

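As a sketch of those two requests in Python (assuming you already have an OAuth access token; the endpoint paths follow the Admin SDK Reports API and Drive v2 docs, and the helper names here are our own):

```python
import urllib.request

def drive_activity_request(access_token, max_results=10):
    # Reports API: list Drive activity for all users; each event's
    # parameters will include originating_app_id where available.
    url = ("https://www.googleapis.com/admin/reports/v1/activity/"
           f"users/all/applications/drive?maxResults={max_results}")
    return urllib.request.Request(
        url, headers={"Authorization": f"Bearer {access_token}"})

def app_name_request(access_token, app_id):
    # Drive REST API: resolve a numeric app ID to a human-readable
    # app name via the apps collection.
    url = f"https://www.googleapis.com/drive/v2/apps/{app_id}"
    return urllib.request.Request(
        url, headers={"Authorization": f"Bearer {access_token}"})
```

Send each request (for example with `urllib.request.urlopen`) and parse the JSON response to join app IDs in the activity log to their names.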
To learn more about this new feature, take a look at the documentation, then integrate into your code so you and other G Suite admins can gain a better understanding of app usage in your domain(s). We look forward to seeing what you build!

Google People API now supports updates to Contacts and Contact Groups

Starting today, the Google People API has new endpoints for contacts and contact groups. Last year, we launched the Google People API with read-only endpoints, with plans to eventually replace the old Contacts API. We’re one step closer to that goal with the addition of write endpoints that allow developers to create, delete, and update a single contact. In addition, new contact group endpoints allow developers to read and write contact groups.

Applications need to be authorized to access the API. To get started, create a project in the Google Developers Console with the People API enabled. All of the steps to do so are here. If you’re new to the Google APIs or the Developers Console, check out this video, the first in a series of videos to help you get up to speed.

Once you’re authorized, you can simply create new contacts like this (using the Google APIs Client Library for Java):
Person contactToCreate = new Person();

List<Name> names = new ArrayList<>();
names.add(new Name().setGivenName("John").setFamilyName("Doe"));
contactToCreate.setNames(names);

Person createdContact =
    peopleService.people().createContact(contactToCreate).execute();
The scope your app needs to authorize with is https://www.googleapis.com/auth/contacts. Full documentation on the people.create method is available here. You can update an existing contact like this:

String resourceName = "people/c12345"; // existing contact resource name
Person contactToUpdate = peopleService.people().get(resourceName)
    .setPersonFields("names,emailAddresses").execute();

List<EmailAddress> emailAddresses = new ArrayList<>();
emailAddresses.add(new EmailAddress().setValue("john.doe@gmail.com"));
contactToUpdate.setEmailAddresses(emailAddresses);

Person updatedContact = peopleService.people()
    .updateContact(resourceName, contactToUpdate)
    .setUpdatePersonFields("emailAddresses").execute();

Full documentation on the people.updateContact method is available here. We look forward to seeing what you can do with these new features allowing you to modify contacts. To learn more about the People API, check out the official documentation here.
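The new contact group endpoints follow the same REST shape. As a rough sketch (plain HTTP rather than the Java client; the path and body shape follow the contactGroups.create documentation, and the helper name is our own):

```python
import json
import urllib.request

PEOPLE_API = "https://people.googleapis.com/v1"

def create_contact_group_request(access_token, group_name):
    # POST /v1/contactGroups with the new group's name; uses the
    # same https://www.googleapis.com/auth/contacts scope.
    body = json.dumps({"contactGroup": {"name": group_name}}).encode("utf-8")
    return urllib.request.Request(
        f"{PEOPLE_API}/contactGroups",
        data=body,
        method="POST",
        headers={"Authorization": f"Bearer {access_token}",
                 "Content-Type": "application/json"})
```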

Removing Place Add, Delete & Radar Search features

Back in 2012, we launched the Place Add / Delete feature in the Google Places API to enable applications to instantly update the information in Google Maps’ database for their own users, as well as submit new places to add to Google Maps. We also introduced Radar Search to help users identify specific areas of interest within a geographic area.

Unfortunately, since we introduced these features, they have not been widely adopted, and we’ve recently launched easier ways for users to add missing places. At the same time, these features have proven incompatible with future improvements we plan to introduce into the Places API.

Therefore, we’ve decided to remove the Place Add / Delete and Radar Search features in the Google Places API Web Service and JavaScript Library. Place Add is also being deprecated in the Google Places API for Android and iOS. These features will remain available until June 30, 2018. After that date, requests to the Places API attempting to use these features will receive an error response.

Next steps

We recommend removing these features from all your applications before they are turned down at the end of June 2018.

Nearby Search can work as an alternative to Radar Search when used with rankby=distance and without keyword or name. For more details, check the Developer’s Guide for the Web Service or for the Places library in the Google Maps JavaScript API.
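For illustration, a Nearby Search request ranked by distance might be built like this (a sketch; note that the Places API requires at least one of keyword, name, or type when rankby=distance is used, so a type is supplied here):

```python
from urllib.parse import urlencode

def nearby_search_url(lat, lng, place_type, api_key):
    # Nearby Search ranked by distance from the given location;
    # "type" satisfies the rankby=distance requirement.
    params = {"location": f"{lat},{lng}",
              "rankby": "distance",
              "type": place_type,
              "key": api_key}
    return ("https://maps.googleapis.com/maps/api/place/nearbysearch/json?"
            + urlencode(params))
```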

The Client Libraries for Google Maps Web Services for Python, Node.js, Java and Go are also being updated to reflect the deprecated status of this functionality.

We apologize for any inconvenience this may cause, but we hope that the alternative options we provide will still help meet your needs. Please submit any questions or feedback to our issue tracker.

author image
Posted by Fontaine Foxworth, Product Manager, Google Maps APIs

Modifying events with the Google Calendar API

You might be using the Google Calendar API, or alternatively email markup, to insert events into your users’ calendars. Thankfully, these tools allow your apps to do this seamlessly and automatically, which saves your users a lot of time. But what happens if plans change? You need your apps to also be able to modify an event.

While email markup does support this update, it’s limited in what it can do, so in today’s video, we’ll show you how to modify events with the Calendar API. We’ll also show you how to create repeating events. Check it out:

Imagine a potential customer being interested in your product, so you set up one or two meetings with them. As their interest grows, they request regularly-scheduled syncs as your product makes their short list—your CRM should be able to make these adjustments in your calendar without much work on your part. Similarly, a “dinner with friends” event can go from a “rain check” to a bi-monthly dining experience with friends you’ve grown closer to. Both of these events can be updated with a JSON request payload like what you see below to adjust the date and make it repeating:
var TIMEZONE = "America/Los_Angeles";
var EVENT = {
  "start": {"dateTime": "2017-07-01T19:00:00", "timeZone": TIMEZONE},
  "end":   {"dateTime": "2017-07-01T22:00:00", "timeZone": TIMEZONE},
  "recurrence": ["RRULE:FREQ=MONTHLY;INTERVAL=2;UNTIL=20171231"]
};

This event can then be updated with a single call to the Calendar API’s events().patch() method, which in Python would look like the following given the request data above, GCAL as the API service endpoint, and a valid EVENT_ID to update:
GCAL.events().patch(calendarId='primary', eventId=EVENT_ID,
sendNotifications=True, body=EVENT).execute()

If you missed it, check out this video that shows how you can insert events into Google Calendar as well as the official API documentation. Also, if you have a Google Apps Script app, you can programmatically access Google Calendar with its Calendar service.

We hope you can use this information to enhance your apps and give your users an even better, more timely experience.

VIDEO: Part 1—Introducing Team Drives for developers

Enterprises are always looking for ways to operate more efficiently, and equipping developers with the right tools can make a difference. We launched Team Drives this year to bring the best of what users love about Drive to enterprise teams. We also updated the Google Drive API, so that developers can leverage Team Drives in the apps they build.

In this latest G Suite Dev Show video, we cover how you can leverage the functionality of Team Drives in your apps. The good news is you don’t have to learn a completely new API—Team Drives features are built into the Drive API so you can build on what you already know. Check it out:

By the end of this video, you’ll be familiar with four basic operations to help you build Team Drives functionality right in your apps:
  1. How to create Team Drives 
  2. How to add members/users to your Team Drives 
  3. How to create folders in Team Drives (just like creating a regular Drive folder) 
  4. How to upload/import files to Team Drives folders (just like uploading files to regular folders) 
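The four operations above can be sketched as plain REST calls (paths and parameters per the Drive v3 documentation; only the URLs and JSON bodies are shown, with auth and transport omitted, and the helper names and sample values are our own):

```python
import json
import uuid

DRIVE_V3 = "https://www.googleapis.com/drive/v3"
FOLDER_MIME = "application/vnd.google-apps.folder"

def create_team_drive(name):
    # 1. Create a Team Drive; requestId makes the call idempotent.
    return (f"{DRIVE_V3}/teamdrives?requestId={uuid.uuid4()}",
            json.dumps({"name": name}))

def add_member(team_drive_id, email):
    # 2. Add a member through the permissions collection.
    url = (f"{DRIVE_V3}/files/{team_drive_id}/permissions"
           "?supportsTeamDrives=true")
    return url, json.dumps(
        {"type": "user", "role": "writer", "emailAddress": email})

def create_folder(team_drive_id, name):
    # 3. Create a folder, just like a regular Drive folder,
    #    with the Team Drive as its parent.
    url = f"{DRIVE_V3}/files?supportsTeamDrives=true"
    return url, json.dumps(
        {"name": name, "mimeType": FOLDER_MIME, "parents": [team_drive_id]})

def upload_file_metadata(folder_id, name):
    # 4. File metadata for an upload, parented under a Team Drive folder.
    url = f"{DRIVE_V3}/files?supportsTeamDrives=true"
    return url, json.dumps({"name": name, "parents": [folder_id]})
```

Each pair is POSTed to the given URL; the supportsTeamDrives=true flag is what opts an ordinary Drive API call into Team Drives.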
The Drive API can help a variety of developers create solutions that work with both Google Drive and Team Drives. Whether you’re an Independent Software Vendor (ISV), System Integrator (SI) or work in IT, there are many ways to use the Drive API to enhance productivity, help your company migrate to G Suite, or build tools to automate workflows.

Team Drives features are available in both Drive API v2 and v3, and more details can be found in the Drive API documentation. We look forward to seeing what you build with Team Drives!

Google I/O session recap: how to build custom apps with App Maker

Every company has workflows and processes that are unique to its business, customers and employees. Often, these are captured manually within large spreadsheets or ad-hoc databases with macros and scripts. But what if they could be turned into custom business apps instead? Apps that provide useful UIs and distinct user roles, while helping to minimize data entry errors and increase productivity?

This year at Google I/O, I shared reasons why businesses should use App Maker—our low-code, application development tool that lets companies quickly build custom apps in G Suite. Check it out here:

And for those who’d like more detail, here is a recap of my presentation.

Closing enterprise “app gaps” with App Maker 

“App gaps” are a reality for most companies, even those that embrace major SaaS products. Think about the edge cases that aren’t addressed with a standard CRM offering like conducting territory planning or tracking asset performance.

We experienced similar gaps at Google. A few years ago, our HR recruiters were overwhelmed with the thousands of monthly interviews that each generated lengthy feedback reports from multiple interviewers. This volume made it difficult for hiring committees to calibrate candidates and make timely decisions, and resulted in delayed responses. To fix this, our IT team decided to build an app by cobbling elements from our own infrastructure.

Over time, more app requests came in from other parts of Google, so we created App Maker. What started as a handful of apps within Google, evolved into nearly 400 internal apps used by thousands. Plus, the majority of these apps were built by non-engineers outside of IT.

Today, App Maker gives software engineers and citizen developers—like business analysts or coding enthusiasts—the ability to quickly build and deploy apps to get around their workflow challenges.

How does it work? 

App Maker makes it easy to build apps in days, not months, because of its easy data-binding and drag-and-drop UI design. You can also integrate your apps with various data sources, Google services or APIs to cover broad legacy assets. Any app you create is also a part of Drive in G Suite so your data never leaves your domain.

Here’s how to build an App Maker app in three steps:
  1. Define your data models, by importing existing Google Sheets to App Maker, connecting to Google Cloud SQL instances, or manually defining custom objects field by field.
  2. Build your UI by adding pre-built components like data entry forms and report templates, and easily create event triggers and application flows. 
  3. Optionally, add open source HTML, CSS and JavaScript to run on the client UI and on the app server, implementing custom functionality that’s not provided out-of-the-box.
App Maker is currently in Early Adopter Program (EAP) for every G Suite Business customer. To get started, apply here.

Ideas to get started 

By now you’re probably wondering what you can build. Well, based on our customers’ experience, here are some good starting points:
  • If you have a large Sheet with more than a handful of users updating it regularly: Sheets usually have an underlying workflow. An App Maker app will provide a better UI for it—showing the workflow visually, prompting for actions and eliminating data entry errors. 
  • If you perform recurring bulk operations in Calendar or Gmail: Say an employee joins or leaves a department; you can build an App Maker app to generate the appropriate bulk operations in a few clicks. 
  • If your company is already using Apps Script and BigQuery: This means you’ve already invested in customizing workflows. App Maker can increase the velocity of developing custom apps.
Go build your apps with App Maker in G Suite—sign up for the EAP today.

Get your users where they need to go on any platform with Google Maps URLs

Last week at Google I/O we announced Google Maps URLs, a new way for developers to link directly to Google Maps from any app. Over one billion people use the Google Maps apps and sites every month to get information about the world, and now we're making it easier to leverage the power of our maps from any app or site.

Why URLs?

Maps can be important to help your users get things done, but we know sometimes maps don't need to be a core part of your app or site. Sometimes you just need the ability to complete your users’ journey—including pointing them to a specific location. Maybe they're ready to buy from you and need to find your nearest store, or they want to set up a meeting place with other users. All of these can be done easily in Google Maps already.

What you can do is use Google Maps URLs to link into Google Maps and trigger the functionality you or your users need automatically. Google Maps URLs are not new. You've probably noticed that copying our URLs out of a browser works—on some platforms. While we have Android Intents and an iOS URL Scheme, they only work on their native platforms. Not only is that more work for developers, it means any multi-user functionality is limited to users on that same platform.

Cross platform

So to start, we needed a universal URL scheme we could support cross-platform—Android, iOS, and web. A messaging app user should be able to share a location to meet up with their friend without worrying about whether the message recipient is on Android or iOS. And for something as easy as that, developers shouldn't have to reimplement the same feature with two different libraries either.

So when a Google Maps URL is opened, it will be handled by the Google Maps app installed on the user's device, whatever device that is. If Google Maps for Android or iOS is available, that's where the user will be taken. Otherwise, Google Maps will open in a browser.

Easy to use

Getting started is simple—just replace some values in the URL based on what you're trying to accomplish. That means we made it easy to construct URLs programmatically. Here are a few examples to get you started:

Say someone has finished booking a place to stay and needs to figure out how to get there or see what restaurants are nearby:
The query parameter does what it says: plugs a query in. Here we've specified a place, but if you do the same link with no location it will search near the user clicking it. Try it out: click here for sushi near you.

This is similar to our query above, but this time we got back a single result, so it gets additional details shown on the page:
The api parameter (mandatory) specifies the version of Maps URLs that you're using. We're launching version 1.

Or if a user has set up their fitness app and wants to try out a new route on their bike:
We can set the travelmode to bicycling, the destination to a nearby bike trail, and we're done!

And we can also open StreetView directly with a focus of our choice to give a real sense of what a place is like:
The viewpoint is a LatLng coordinate we want to get imagery for, and heading, pitch, and fov allow you to specify exactly where to look.
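To make the patterns concrete, here's a sketch of constructing these URLs programmatically (the parameter names api, query, destination, travelmode, map_action, viewpoint, heading, pitch, and fov follow the Maps URLs documentation; the sample values are made up):

```python
from urllib.parse import urlencode

BASE = "https://www.google.com/maps"

def search_url(query):
    # Search for places matching a query; api=1 is the Maps URLs version.
    return f"{BASE}/search/?{urlencode({'api': 1, 'query': query})}"

def directions_url(destination, travelmode="bicycling"):
    # Open directions to a destination with the chosen travel mode.
    params = {"api": 1, "destination": destination, "travelmode": travelmode}
    return f"{BASE}/dir/?{urlencode(params)}"

def streetview_url(lat, lng, heading=0, pitch=0, fov=90):
    # Open Street View imagery at a viewpoint, looking a given direction.
    params = {"api": 1, "map_action": "pano",
              "viewpoint": f"{lat},{lng}",
              "heading": heading, "pitch": pitch, "fov": fov}
    return f"{BASE}/@?{urlencode(params)}"
```

Because these are ordinary URLs, the same string can be shared in a message or embedded in a link on Android, iOS, or the web.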

Need more functionality?

Google Maps URLs are great to help your users accomplish some tasks in Google Maps. However, when you need more flexibility, customization, or control, we recommend integrating Google Maps into your app or site instead. This is where our more powerful Google Maps APIs come into play. With our feature-rich range of APIs, you can access full functionality and can control your camera, draw shapes on the map, or style your maps to match your apps, brand, or just for better UI. And if you want to go beyond the map we have metadata on Places, images, and much more.

Learn more

When you're happy to delegate the heavy lifting and make use of the Google Maps app for your needs, Maps URLs are for you. Check out our new documentation.

Thank you for using Google Maps URLs and the Google Maps APIs! Be sure to share your feedback or any issues in the issue tracker.

author image
Posted by Joel Kalmanowicz, Product Manager, Google Maps APIs

All 101 announcements from Google I/O ‘17

It’s been a busy three days here in Mountain View, as more than 7,000 developers joined us at Shoreline Amphitheatre for this year’s Google I/O. From AI to VR, and everything in between, here’s an exhaustive—we mean that—recap of everything we announced.

Google Assistant

1. The Google Assistant is already available on more than 100 million devices!
2. Soon, with Google Lens—a new way for computers to “see”—you’ll be able to learn more about and take action on the things around you, while you’re in a conversation with your Assistant.
3. We’ve brought your Google Assistant to iPhones.
4. Call me maybe? With new hands-free calling on Google Home, you’ll be able to make calls with the Assistant to landlines and mobile numbers in the U.S. and Canada for free.
5. You can now type to your Google Assistant on eligible Android phones and iPhones.
6. Bonjour. Later this year people in Australia, Canada, France, Germany and Japan will be able to give the Assistant on Google Home a try.
7. And Hallo. Soon the Assistant will roll out to eligible Android phones in Brazilian Portuguese, French, German and Japanese. By the end of the year the Assistant will support Italian, Korean and Spanish.
8. We’re also adding transactions and payments to your Assistant on phones—soon you can order and pay for food and more, with your Assistant.  
9. With 70+ home automation partners, you can water your lawn and check the status of your smoke alarm with the Assistant on Google Home and phones.
10. Soon you’ll get proactive notifications for reminders, flight delays and traffic alerts with the Assistant on Google Home and phones. With multi-user support, you can control the type of notifications to fit your daily life.
11. Listen to all your favorite tunes. We’ve added Deezer and Soundcloud as partners, plus Spotify’s free music offering is coming soon.
12. Bluetooth support is coming to Google Home, so you can play any audio from your iOS or Android device.
13. Don’t know the name of a song, but remember a few of the lyrics? Now you can just ask the Assistant to “play that song that goes like...” and list some of the lyrics.
14. Use your voice to play your favorite shows and more from 20+ new partners (HBO NOW, CBS All Access, and HGTV) straight to your TV.
15. With visual responses from your Assistant on TVs with Chromecast, you’ll be able to see Assistant answers on the biggest screen in your house.
16. You can stream with your voice with Google Home on 50 million Cast and Cast-enabled devices.
17. For developers, we're bringing Actions on Google to the Assistant on phones—on both Android and iOS. Soon you’ll find conversation apps for the Assistant that help you do things like shopping for clothes or ordering food from a lengthy menu.
18. Also for developers, we’re adding ways for you to get data on your app's usage and performance, with a new console.
19. We’re rolling out an app directory, so people can find apps from developers directly in the Google Assistant.
20. People can now also create shortcuts for apps in the Google Assistant, so instead of saying "Ok Google, ask Forecaster Joe what's the surf report for the Outer Banks," someone can just say their personal shortcut, like "Ok Google, is the surf up?"
21. Last month we previewed the Google Assistant SDK, and now we’re updating it with hotword support, so developers can build devices that are triggered by a simple "Ok Google."
22. We’re also adding to the SDK the ability to have both timers and alarms.
23. And finally, we’re launching our first developer competition for Actions on Google.

AI, ML and Cloud

24. With the addition of Smart Reply to Gmail on Android and iOS, we’re using machine learning to make responding to emails easier for more than a billion Gmail users.
25. New Cloud TPUs—the second generation of our custom hardware built specifically for machine learning—are optimized for training ML models as well as running them, and will be available on Google Compute Engine.
26. And to speed up the pace of open machine-learning research, we’re introducing the TensorFlow Research Cloud, a cluster of 1,000 Cloud TPUs available for free to top researchers.
27. Google for Jobs is our initiative to use our products to help people find work, using machine learning. Through Google Search and the Cloud Jobs API, we’re committed to helping companies connect with potential employees and job seekers with available opportunities.
28. The Google Cloud Jobs API is helping customers like Johnson & Johnson recruit the best candidates. Only months after launching, they’ve found that job seekers are 18 percent more likely to apply on its career page now that they’re using the Cloud Jobs API.
29. With Google.ai, we’re pulling all our AI initiatives together to put more powerful computing tools and research in the hands of researchers, developers and companies. We’ve already seen promising research in the fields of pathology and DNA research.
30. We must go deeper. AutoML uses neural nets to design neural nets, potentially cutting down the time-intensive process of setting up an AI system, and helping non-experts build AI for their particular needs.
31. We’ve partnered with world-class medical researchers to explore how machine learning could help improve care for patients, avoid costly incidents and save lives.
32. We introduced a new Google Cloud Platform service called Google Cloud IoT Core, which makes it easy for Google Cloud customers to gain business insights through secure device connections to our rich data and analytics tools.

Photos

33. We first launched Google Photos two years ago, and now it has more than 500 million monthly users.
34. Every day more than 1.2 billion photos and videos are uploaded to Google Photos.
35. Soon Google Photos will give you sharing suggestions by selecting the right photos, and suggesting who you should send them to based on who was in them.
36. Shared libraries will let you effortlessly share photos with a specific person. You can share your full photo library, or photos of certain people or from a certain date forward.
37. With photo books, once you select the photos, Google Photos can curate an album for you with all the best shots, which you can then print for $9.99 (20-page softcover) or $19.99 (20-page hardcover), in the U.S. for now.
38. Google Lens is coming to Photos later this year, so you’ll be able to look back on your photos to learn more or take action—like find more information about a painting from a photo you took in a museum.

Android

39. We reached 2 billion monthly active devices on Android!
40. Android O, coming later this year, is getting improvements to “vitals” like battery life and performance, and bringing more fluid experiences to your smaller screen, from improved notifications to autofill.
41. With picture-in-picture in Android O, you can do two tasks simultaneously, like checking your calendar while on a Duo video call.
42. Smart text selection in Android O improves copy and paste to recognize entities on the screen—like a complete address—so you can easily select text with a double tap, and even bring up an app like Maps to help navigate you there.
43. Our emoji are going through a major design refresh in Android O.
44. For developers, the first beta release of Android O is now available.
45. We introduced Google Play Protect—a set of security protections for Android that’s always on and automatically takes action to keep your data and device safe, so you don’t have to lift a finger.
46. The new Find My Device app helps you locate, ring, lock and erase your lost Android devices—phones, tablets, and even watches.
47. We previewed a new initiative aimed at getting computing into the hands of more people on entry-level Android devices. Internally called Android Go, it’s designed to be relevant for people who have limited data connectivity and speak multiple languages.
48. Android Auto is now supported by 300 car models, and Android Auto users have grown 10x since last year.
49. With partners in 70+ countries, we’re seeing 1 million new Android TV device activations every two months, doubling the number of users since last year.
50. We’ve refreshed the look and feel of the Android TV homescreen, making it easy for people to find, preview and watch content provided by apps.
51. With new partners like Emporio Armani, Movado and New Balance, Android Wear now powers almost 50 different watches.
52. We shared an early look at TensorFlow Lite, which is designed to help developers take advantage of machine learning to improve the user experience on Android.
53. As part of TensorFlow Lite, we’re working on a Neural Network API that TensorFlow can take advantage of to accelerate computation.
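On-device acceleration of this kind typically leans on reduced-precision arithmetic. As a purely illustrative sketch—not the TensorFlow Lite or Neural Network API itself, and with hypothetical function names—here is the standard affine 8-bit quantization scheme that underpins many mobile inference paths:

```python
def quantize(values, num_bits=8):
    """Affine-quantize floats to unsigned ints: v ~= scale * (q - zero_point)."""
    lo, hi = min(values), max(values)
    lo, hi = min(lo, 0.0), max(hi, 0.0)  # make sure 0.0 is exactly representable
    qmax = 2 ** num_bits - 1
    scale = (hi - lo) / qmax or 1.0
    zero_point = round(-lo / scale)
    return [round(v / scale) + zero_point for v in values], scale, zero_point

def dequantize(quantized, scale, zero_point):
    """Recover approximate floats from the quantized integers."""
    return [scale * (q - zero_point) for q in quantized]

weights = [-1.0, 0.0, 0.5, 2.0]
q, scale, zp = quantize(weights)
approx = dequantize(q, scale, zp)
```

Each dequantized value lands within one quantization step (`scale`) of the original, which is the trade-off that lets integer-only hardware run the math quickly.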
54. An incredible 82 billion apps were downloaded from Google Play in the last year.
55. We honored 12 Google Play Awards winners—apps and games that give their fans particularly delightful and memorable experiences.
56. We’re now previewing Android Studio 3.0, focused on speed and Android platform support.
57. We’re making Kotlin an officially supported programming language in Android, with the goal of making Android development faster and more fun.
58. And we’ll be collaborating with JetBrains, the creators of Kotlin, to move Kotlin into a nonprofit foundation.
59. Android Instant Apps are now open to all developers, so anyone can build and publish apps that can be run without requiring installation.
60. Thousands of developers from 60+ countries are now using Android Things to create connected devices that have easy access to services like the Google Assistant, TensorFlow and more.
61. Android Things will be fully released later this year.
62. Over the last year, the number of Google Play developers with more than 1 million installs grew 35 percent.
63. The number of people buying on Google Play grew by almost 30 percent this past year.
64. We’re updating the Google Play Console with new features to help developers improve their apps' performance and quality, and grow their business on Google Play.
65. We’re also adding a new subscriptions dashboard in the Play Console, bringing together data like new subscribers and churn so you can make better business decisions.
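The Play Console's exact metric definitions aren't spelled out here, but as a rough sketch of the kind of data the subscriptions dashboard surfaces, monthly churn is commonly computed as cancellations divided by the subscriber base at the start of the period. The function names below are hypothetical:

```python
def monthly_churn(start_subscribers, cancellations):
    """Fraction of the subscriber base lost this month (a common churn definition)."""
    return cancellations / start_subscribers if start_subscribers else 0.0

def annualized_churn(monthly_rate):
    """Compound a monthly churn rate over 12 months."""
    return 1 - (1 - monthly_rate) ** 12

rate = monthly_churn(1000, 50)   # 5% monthly churn
yearly = annualized_churn(rate)  # roughly 46% of the base lost over a year
```

The compounding step is why a seemingly small monthly churn figure deserves attention: at 5% per month, nearly half the subscriber base turns over in a year.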
66. To make it easier and more fun for developers to write robust apps, we announced a guide to Android app architecture along with a preview of Architecture Components.  
67. We’re adding four new tools to the Complications API for Android Wear, to help give users more informative watch faces.
68. Also for Android Wear, we’re open sourcing some components in the Android Support Library.


69. More Daydream-ready phones are coming soon, including the Samsung Galaxy S8 and S8+, LG’s next flagship phone, and devices from Motorola and ASUS.
70. Today there are 150+ applications available for Daydream.
71. More than 2 million students have gone on virtual reality Expeditions using Google Cardboard, with more than 600 tours available.
72. We’re expanding Daydream to support standalone VR headsets, which don't require a phone or PC. HTC VIVE and Lenovo are both working on devices, based on a Qualcomm reference design.
73. Standalone Daydream headsets will include WorldSense, a new technology based on Tango which enables the headset to track your precise movements in space, without any extra sensors.
74. The next smartphone with Tango technology will be the ASUS ZenFone AR, available this summer.
75. We worked with the Google Maps team to create a new Visual Positioning Service (VPS) for developers, which helps devices quickly and accurately understand their location indoors.
76. We’re bringing AR to the classroom with Expeditions AR, launching with a Pioneer Program this fall.
77. We previewed Euphrates, the latest release of Daydream, which will let you capture what you’re seeing and cast your virtual world right onto the screen in your living room, coming later this year.
78. A new tool for VR developers, Instant Preview, lets developers make changes on a computer and see them reflected on a headset in seconds, not minutes.
79. Seurat is a new technology that makes it possible to render high-fidelity scenes on mobile VR headsets in real time. Somebody warn Cameron Frye.
80. We’re releasing an experimental build of Chromium with an augmented reality API, to help bring AR to the web.


81. Soon you’ll be able to watch and control 360-degree YouTube videos and live streams on your TV, and use your game controller or remote to pan around an immersive experience.
82. Super Chat lets fans interact directly with YouTube creators during live streams by purchasing highlighted chat messages that stay pinned to the top of the chat window. We previewed a developer integration that showed how the Super Chat API can be used to trigger actions in the real world—such as turning the lights on and off in a creator's apartment.
83. A new feature in the YouTube VR app will soon let people watch and discuss videos together.


84. We announced that we will make Fabric’s Crashlytics the primary crash reporting product in Firebase.
85. We’re bringing phone number authentication to Firebase, working closely with the Fabric Digits team, so your users can sign in to your apps with their phone numbers.
86. New Firebase Performance Monitoring will help diagnose issues resulting from poorly performing code or challenging network conditions.
87. We’ve improved Firebase Cloud Messaging.
88. For game developers, we’ve built Game Loop support & FPS monitoring into Test Lab for Android, allowing you to evaluate your game’s frame rate before you deploy.
89. We’ve taken some big steps to open source many of our Firebase SDKs on GitHub.
90. We’re expanding Firebase Hosting to integrate with Cloud Functions, letting you do things like send a notification when a user signs up or automatically create thumbnails when an image is uploaded to Cloud Storage.
91. Developers interested in testing the cutting edge of our products can now sign up for a Firebase Alpha program.
92. We’re adding two new certifications for web developers, in addition to the Associate Android Developer Certification announced last year.
93. We opened an Early Access Program for Chatbase, a new analytics tool in API.ai that helps developers monitor the activity in their chatbots.
94. We’ve completely redesigned AdMob, which helps developers promote, measure and monetize mobile apps, with a new user flow and publisher controls.
95. AdMob is also now integrated with Google Analytics for Firebase, giving developers a complete picture of ads revenue, mediation revenue and in-app purchase revenue in one place.
96. With a new Google Payment API, developers can enable easy in-app or online payments for customers who already have credit and debit cards stored on Google properties.
97. We’re introducing new ways for merchants to engage and reward customers, including the new Card Linked Offers API.
98. We’re introducing new options for ads placement through Universal App Campaigns to help users discover your apps in the Google Play Store.
99. An update to Smart Bidding strategies in Universal App Campaigns helps you gain high-value users of your apps—like players who level-up in your game or the loyal travelers who book several flights a month.
100. A new program, App Attribution Partners, integrates data into AdWords from seven third-party measurement providers so you can more easily find and take action on insights about how users engage with your app.
101. Firebase partnered up with Google Cloud to offer free storage for up to 10 gigabytes in BigQuery so you can quickly, easily and affordably run queries on it.

That’s all, folks! Thanks to everyone who joined us at I/O this year, whether in person, at an I/O Extended event or via the live stream. See you in 2018.