
New OAuth protections to reduce risk from malicious apps



As part of our ongoing efforts to improve Google’s OAuth application ecosystem, we are launching additional protections that can limit the spread of malicious applications. Applications requiring OAuth will be subject to a daily total new user cap and a new user acquisition rate limit. The first restricts the total number of new users that can authorize your application each day, while the second limits how rapidly your application can acquire new users.

Every application will have its own quotas depending on its history, developer reputation, and risk profile; for more details, see User Limits for Applications using OAuth.

These quotas will initially be set to match your application’s status and current usage, so the majority of developers will see no impact. However, if you have received a quota warning about your application, or if you anticipate that your application may exceed its quota (due to, for example, a high-profile launch), you can take the following actions to raise your limits:

  1. If your application has reached its total new user cap, submit the OAuth Developer Verification Form to request OAuth verification. Once granted, verification removes the new user cap. 
  2. If your application is running into the new user authorization rate limit, you can request a rate limit quota increase for the application. 
We will actively monitor every application’s quota usage and take proactive steps to contact any developer whose application is approaching its quota. This should help prevent interruption due to these quotas for non-malicious developers on our platform.

These enhanced protections will help protect our users and create an OAuth ecosystem where developers can continue to grow and thrive in a safer environment.

Introducing the Data Studio Community Connector Codelab

Cross-posted from the Google Developer blog 
Posted by Minhaz Kazi, Developer Advocate, Google Data Studio


Data Studio is Google’s free, next-generation business intelligence and data visualization platform. Community Connectors for Data Studio let you build connectors to any internet-accessible data source using Google Apps Script. You can build Community Connectors for commercial, enterprise, and personal use. Learn how to build Community Connectors using the Data Studio Community Connector Codelab.

Use the Community Connector Codelab 

The Community Connector Codelab explains how Community Connectors work and provides a step-by-step tutorial for creating your first Community Connector. You can get started if you have a basic understanding of JavaScript and web APIs. You should be able to build your first connector in about 30 minutes using the Codelab.

If you have previously imported data into Google Sheets using Apps Script, you can use this Codelab to get familiar with the Community Connectors and quickly port your code to fetch your data directly into Data Studio.
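The connector contract the Codelab builds up to is small: Data Studio calls four functions by name—getAuthType, getConfig, getSchema, and getData. Here is a minimal sketch in Apps Script-style JavaScript. The npm-downloads-style field names and the fetchDownloadCounts stub are illustrative assumptions, not the Codelab's exact code; a real connector would fetch live data with UrlFetchApp.fetch inside Apps Script.

```javascript
// Minimal Community Connector sketch. Data Studio invokes these four
// functions by name when a user adds the connector to a report.

var connectorSchema = [
  {
    name: 'day',
    label: 'Date',
    dataType: 'STRING',
    semantics: { conceptType: 'DIMENSION', semanticType: 'YEAR_MONTH_DAY' }
  },
  {
    name: 'downloads',
    label: 'Downloads',
    dataType: 'NUMBER',
    semantics: { conceptType: 'METRIC' }
  }
];

function getAuthType() {
  return { type: 'NONE' }; // assumes a public API with no credentials
}

function getConfig(request) {
  return {
    configParams: [
      { type: 'TEXTINPUT', name: 'package', displayName: 'Package name' }
    ],
    dateRangeRequired: true
  };
}

function getSchema(request) {
  return { schema: connectorSchema };
}

// Hypothetical stub standing in for a UrlFetchApp.fetch() call to the
// data source's API, followed by JSON.parse() on the response body.
function fetchDownloadCounts(request) {
  return {
    downloads: [
      { day: '2018-05-01', downloads: 120 },
      { day: '2018-05-02', downloads: 98 }
    ]
  };
}

function getData(request) {
  var apiResponse = fetchDownloadCounts(request);
  // Return only the fields Data Studio asked for, in the requested order.
  var requestedSchema = request.fields.map(function (field) {
    return connectorSchema.filter(function (f) {
      return f.name === field.name;
    })[0];
  });
  var rows = apiResponse.downloads.map(function (entry) {
    return {
      values: requestedSchema.map(function (f) {
        // YEAR_MONTH_DAY dimensions expect YYYYMMDD strings.
        return f.name === 'day' ? entry.day.replace(/-/g, '') : entry.downloads;
      })
    };
  });
  return { schema: requestedSchema, rows: rows };
}
```

Paste a file like this into an Apps Script project, add the connector metadata to the manifest, and Data Studio takes care of calling each function at the right point in the user flow.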

Why create your own Community Connector 

Community Connectors can help you quickly deliver an end-to-end visualization solution that is user-friendly and delivers high user value with low development effort. Community Connectors can help you build a reporting solution for personal, public, enterprise, or commercial data, and also create explanatory visualizations.

  • If you provide a web-based service to customers, you can create template dashboards or even let your users create their own visualizations based on their data from your service. 
  • Within an enterprise, you can create serverless and highly scalable reporting solutions where you have complete control over your data and sharing features. 
  • You can create an aggregate view of all your metrics across different commercial platforms and service providers while providing drill down capabilities. 
  • You can create connectors to public and open datasets. Sharing these connectors will enable other users to quickly gain access to these datasets and dive into analysis directly without writing any code. 

By building a Community Connector, you can go from scratch to a push-button customized dashboard solution for your service in a matter of hours.

The following dashboard uses Community Connectors to fetch data from Stack Overflow, GitHub, and Twitter. Try using the date filter to view changes across all sources:


This dashboard is built with multiple Community Connectors. You can build your own connector to any preferred service and publish it in the Community Connector gallery, which now has over 90 Partner Connectors connecting to more than 450 data sources.

Once you have completed the Codelab, view the Community Connector documentation and sample code on the Data Studio open source repository to build your own connector.

From veterinarian to coder: Google Developer Scholar Anne-Christine Lefort’s story

Editor's note: As part of our pledge to help 1 million Europeans find a job or grow their business by 2020, the Developer Scholarship Challenge has awarded over 60,000 Udacity scholarships to aspiring European coders. Anne-Christine Lefort, a former veterinarian from Western France, is among the scholars who have used their newly learned skills to launch a new career. This is her story.

I'm a 51-year-old mother of two young children, and after winning a Google and Udacity Developer Scholarship, I'm finally fulfilling my lifelong ambition to work with computers. Most nights, I stay up late learning how to build apps.

Anne-Christine

Becoming an Android programmer is very different from my professional background in veterinary medicine. As a child, I was always interested in science, math and physics. I used a computer for the first time in high school in the 1980s (it was an Apple II!). I used it to play Space Invaders, but my teacher also taught me some programming basics and I knew instantly that I wanted to pursue computers further.

At that time, however, the only real-life applications of computer programming were in banking or insurance, which didn’t really appeal to me. So instead, I studied veterinary medicine at Ecole Nationale Vétérinaire de Lyon, followed by 15 years of working across a variety of laboratory roles.

After my second maternity leave, I returned to work and found that another company had bought the laboratory. My position had completely changed and I decided to take a leap and find something new. But as a woman over 45 and living in a small village in Western France, finding a job as a vet was difficult, and it was impossible to find a position that was flexible enough to accommodate family life.

Without uprooting my family, the only option was to start my own business. I had never lost my interest in computers, so I started building websites for local businesses from home. This offered huge advantages—it allowed me to be at home for the children, working while they were at school and into the evenings.

Though I was using WordPress to build websites and modifying HTML and CSS, my interest in apps and programming still burned strong. I wanted to learn more. In 2017, I saw an ad announcing the Google Developer Scholarship initiative, sponsored by Google and Udacity. This was exactly the kind of opportunity I had been looking for—to learn new digital skills and apply them in my career. I enrolled in the Android Basics Nanodegree Program, and within the first month, I had built an app!

Today, I’m learning new skills all the time and using them to grow my business. As well as working with clients, I want to build apps for children that will help them read and learn math and other languages. I'm also researching the development of an app for the veterinary industry given my knowledge and experience in this field.

Thanks to the Google and Udacity scholarship, I’ve been able to turn my new skills into the job of my dreams, while still being present for my children when they need me. I wanted to prove that a 51-year-old mom living remotely in the forest can become an Android programmer, and today I can proudly say I realized this goal. Moving forward I want to inspire others and show them that with passion and determination anything is possible.

22 international YouTubers, 15 countries, 4 days: Behind the scenes at #io18

Editor’s note: A few of these videos are in different languages. Luckily, YouTube has automatic closed captioning and you can even magically auto-translate those captions into English. Click on CC, then the settings cog to turn on auto-translate to English.

What happens when you let 22 YouTubers from 15 countries run wild inside Google HQ, behind the scenes at Google I/O and across San Francisco’s urban jungle?

Organized chaos, that’s what. Plus lots of selfies, vlogs and smiles for their millions of followers back home. Last week, we invited a delightful bunch of YouTubers from around the world to join us on an adventure and check out the latest tech from Google. Here’s a glimpse at #io18 through their eyes.

Day 1 — Google HQ tour, conference bikes and more


To get things started, we toured the whole Googleplex campus and met 10 of the top Android developers from around the world.

Gaurav Chaudary (AKA Technical Guruji) from India tours Google HQ campus. Don’t forget to turn on closed captions and select “auto-translate” to English if you don’t speak Hindi

Felix Bahlinger from Germany rides a 7-person conference bike and goes speed-dating with top Android developers

One of our very special guests was a lovely 72-year-old lady known as Korea Grandma—definitely a leading candidate as one of the most energetic and daring grandmothers in the world.

For our friends in China, here’s Chaping’s wrap up on Weibo and for those Spanish speakers out there, check out Topes DeGama’s YouTube video.

Day 2 — I/O keynote and product demos


If you didn’t catch the keynote live stream, here are some quick recaps in English, Hindi, Spanish, Chinese, Arabic and German.

After the keynote wrapped, Mr.Mobile, Tim Schofield and Technical Guruji got hands-on with the new Android P, and then our YouTuber crew explored the product demo sandboxes at I/O—including Liang Fen from China checking out our Accessibility tent and flowers that react to your emotions.

Technical Guruji interacts with our latest Internet of Things tech


Day 3 — Machine learning, Waymo, X, and digital wellbeing


To kick off day three, we went to an inspirational talk on the future of machine learning. Auman gave a detailed summary for his Hong Kong followers while RayDu boiled machine learning down to just two simple lines.
Circles and squares

ML demystified by RayDu

We then visited our Alphabet cousins, Waymo and X, to hear about how machine learning is helping make new technologies like self-driving cars, Project Wing and more possible. In classic grandma style, Korea Grandma even handed out chocolates delivered by a Project Wing delivery drone to the team.

Technical Guruji films the arrival of chocolates delivered by Project Wing


After a busy start to the week, it was time to chill for a bit. Helping people maintain a healthy relationship with the way they want to use tech was a major focus of I/O this year, so we decided to take a break with a meditation session. Google VP Sameer Samat also dropped by to share how we’re building digital wellbeing controls into the next version of Android P. Creative Monkeyz, Flora, and Pierre couldn’t pass up the opportunity for a quick selfie too.

Pierre (The Liu Pei) talks digital wellbeing in Android P with Sameer


We played Emoji Scavenger Hunt... Aaaaaand then we partied. Hard.

I/O after hours

Day 4 — Urban AI digital jungle


For our last day together, we trekked into San Francisco to road-test the latest tech in the real world. We went to San Francisco Zoo for a Google Lens Zoo Safari and then had lunch with menus written only in Japanese—so you had to use Word Lens in Google Translate to decide what you wanted to eat!
SF zoo scavenger hunt

Newrara with her Korea Grandma using Google Lens at the SF Zoo

Then we finished with a #TeamPixel Photo Tour of the city to try out Portrait Mode with our mint-fresh Pixel 2s.

You still want more? Okay! Here are some full trip vlogs with ALL the things.

Coisa de Nerd’s full trip summary


Topes De Gama’s full trip summary


Until our next adventure!

Meet Sara Blevins: mom, Tennessean, and developer

Last October as part of Grow with Google, we announced the Google Developer Scholarship Challenge—a joint effort with Udacity to help people across the U.S. unlock new jobs, new businesses, and new possibilities. The program provided scholarships to tens of thousands of learners across the U.S. to help them strengthen their mobile and web developer skills through curriculum designed with experts from Google and Udacity. This April, 5,000 of the top performers from the initial program also received scholarships toward a six-month Nanodegree program hosted on Udacity.

Sara Blevins is one of these talented individuals, who will complete the Front-End Web Developer Nanodegree program later this year. Last week, we invited Sara and some of her fellow scholars to attend Google I/O as special guests. We caught up with her to find out what I/O was like and what advice she has for other individuals looking to start a new career as a developer.

1. You went to I/O this past week! Tell us about that.

The joy and awe I experienced was overwhelming, it welled up to the point where I couldn’t control it. Google to me isn’t a company, it’s the door in the back of the wardrobe that leads to Narnia. It’s the embodiment of the idea that an open, free, diverse, progressive, inclusive world isn’t too lofty a goal, it’s a reality we can all create together.

2. Raising kids, working a job, and further improving your web development skills as part of this developer scholarship all take a lot of hard work and time. Where do you find your motivation to keep going?

For me, it isn’t that I need to stay motivated, it’s that I’m finally free and the question is, how do I remember that I need to sleep, eat, and relax. For most of my life, I’ve felt like a stallion that couldn’t run, an eagle that couldn’t fly, or a dolphin that couldn’t swim. Now, my cage door has been opened and I’m going to move forward as fast as life will permit me. I see wonder all around me, in even the simplest of things. I now have the ability to meaningfully contribute to that wonder.

3. You’ve talked about being told by others in the past that “it isn’t feminine” to be in science, technology, or math. What would you tell those same people today if they saw what you’re doing now?

In the words of the monk who changed my life, “I open the door of my heart to you.” I understand the social conditioning that implanted that perspective in your mind. I also reject that conditioning, entirely. Now, watch me.

4. What’s one habit that makes you successful?

Anyone who knows me and has for any length of time knows that I play the long game. I’ve been called obsessed and I embrace that—I wear it as a badge of honor.


5. What do you want to get better at?

Right now my next goal is to find someone who’s good with GitHub and beg them to help me understand how to use it correctly. Aside from that, my primary objective for now is to put in the hours it takes to become an expert at web development. It may sound lofty, but I’d like to be so good at it, and combine it with my natural creative abilities, to the point where clients come to me or where when you think web development, you think of my name. I don’t dream small…

6. What advice do you have for others who are starting their journeys to becoming developers?

Embrace fear, self-doubt, discomfort, frustration, and failures. Not just embrace, but hold them close to your heart, nurture them and allow them to be yours. Because they are gifts, the most precious gifts life has to give; in those places are where we grow, push beyond what we are, and learn what we are capable of. This is hard—be harder.

7. Out of everything that happened this week, what new stories, knowledge, or perspectives do you think you’ll carry home with you?

The open sharing of ideas, thoughts, perspectives, and gifts is the height of what we humans can aspire to in my opinion; at I/O, that’s what I witnessed in marvelous abundance. I was especially struck by the diversity and the drive to improve the human experience that seemed to the common threads running through the event. That spirit is now forever locked inside me, I feel renewed toward my overall goal of being a voice for women in tech.

8. And what are you looking forward to most about being back home?

The arms of my babies… I can’t wait to show them the pictures, videos, and answer questions. I tell them as much as possible that if they are brave enough to be people who bring value to the world through their talents, actions, and thoughts, that they can literally create their own reality. I will push myself to my very limits to be the kind of person my babies can look up to. Also, I’m legit going to curl into the fetal position and sob uncontrollably if I don’t get to play my Xbox immediately.

100 things we announced at I/O ‘18

That’s a wrap! After a bustling three days at Google I/O, we have a lot to look back on and a lot to look forward to, from helpful features made possible with AI to updates that help you develop a sense of digital wellbeing. Here are 100 of our Google I/O announcements, in no particular order—because we don’t play favorites.


1. Hey Google, you sound great today! You can now choose from six new voices for your Google Assistant
2. There will even be some familiar voices later this year, with John Legend lending his melodic tones to the Assistant. 
3. The Assistant is becoming more conversational. With AI and WaveNet technology, we can better mimic the subtleties of the human voice—the pitch, pace and, um, the pauses. 
4. Continued Conversation lets you have a natural back-and-forth conversation without repeating “Hey Google” for each follow-up request. And the Google Assistant will be able to understand when you’re talking to it versus someone else, and respond accordingly. 
5. We’re rolling out Multiple Actions so the Google Assistant can understand more complex queries like: “What’s the weather like in New York and in Austin?”
6. Custom Routines allow you to create your own Routine, and start it with a phrase that feels best for you. For example, you can create a Custom Routine for family dinner, and kick it off by saying "Hey Google, dinner's ready" and the Assistant can turn on your favorite music, turn off the TV, and broadcast “dinner time!” to everyone in the house. 
7. Soon you’ll be able to schedule Routines for a specific day or time using the Assistant app or through the Google Clock app for Android.
8. Families have listened to over 130,000 hours of children’s stories on the Assistant in the last two months alone. 
9. Later this year we’ll introduce Pretty Please so the Assistant can understand and encourage polite conversation from your little ones.
10. Smart Display devices will be available this summer, bringing the simplicity of voice and the Google Assistant together with a rich visual experience. 
11. We redesigned the Assistant experience on the phone. The Assistant will give you a quick snapshot of your day, with suggestions based on the time of day, location and recent interactions with the Assistant. 
12. Bon appetit! A new food pick-up and delivery experience for the Google Assistant app will be available later this year. 
13. Keep your eyes on the road—the Assistant is coming to navigation in Google Maps with a low visual profile. You can keep your hands on the wheel while sending text messages, playing music and more. 
14. Google Duplex is a new capability we will be testing this summer within the Google Assistant to help you make reservations, schedule appointments, and get holiday hours from businesses. Just provide the date and time, and your Assistant will call the business to coordinate for you.
15. The Google Assistant will be available in 80 countries by the end of the year.
16. We’re also bringing Google Home and Google Home Mini to seven more countries later this year: Spain, Mexico, Korea, the Netherlands, Denmark, Norway and Sweden.



17. Soon you’ll see Smart Compose in Gmail, a new feature powered by AI that helps save you time by cutting back on repetitive writing, while reducing the chance of spelling and grammatical errors in your emails.
18. ML Kit brings the breadth of Google’s machine learning technology to app developers, including on-device APIs for text recognition, face detection, image labeling and more. It’s available in one mobile SDK, accessible through Firebase, and works on both Android and iOS.
19. Our third-generation TPUs (Tensor Processing Units) are liquid-cooled and much more powerful than the previous generation, allowing us to train and run models faster so more products can be enhanced with AI.
20. We published results in a Nature Research journal showing that our AI model can predict medical events, helping doctors spot problems before they happen.
21. AI is making it easier for Waymo’s vehicles to drive in different environments, whether it’s the snowy streets of Michigan, foggy hills of San Francisco or rainy roads of Kirkland. With these improvements, we’re moving closer to our goal of bringing self-driving technology to everyone, everywhere.



22. We unveiled a beta version of Android P, focused on intelligence, simplicity and digital wellbeing. 
23. We partnered with DeepMind to build Adaptive Battery, which prioritizes battery power for the apps and services you use most.
24. Adaptive Brightness in Android P learns how you like to set the brightness based on your surroundings, and automatically updates it to conserve energy. 
25. App Actions help you get to your next task quickly by predicting what action you’ll take next. So if you connect your headphones to your device, Android will suggest an action to resume your favorite Spotify playlist. 
26. Actions will also show up throughout your Android phone in places like the Launcher, Smart Text Selection, the Play Store, the Google Search app and the Assistant.
27. Slices makes your smartphone even smarter by showing parts of apps right when you need them most. Say for example you search for “Lyft” in Google Search on your phone—you can see an interactive Slice that gives you the price and time for a trip to work, and you can quickly order the ride. 
28. A new enterprise work profile visually separates your work apps. Tap on the work tab to see work apps all in one place, and turn them off with a simple toggle when you get off work. 
29. Less is more! Swipe up on the home button in Android P to see a newly designed Overview, with full-screen previews of recently used apps. Simply tap once to jump back into any app. 
30. If you’re constantly switching between apps, we’ve got good news for you. Smart Text Selection (which recognizes the meaning of the text you’re selecting and suggests relevant actions) now works in Overview, making it easier to perform the action you want.
31. Android P also brings a redesigned Quick Settings, a better way to take and edit screenshots (say goodbye to the vulcan grip that was required before), simplified volume controls, an easier way to manage notifications and more.
32. Technology should help you with your life, not distract you from it. Android P comes with digital wellbeing features built into the platform. 
33. Dashboard gives you a snapshot of how you’re spending time on your phone. It includes information about how long you’ve spent in apps, how many times you unlocked your phone and how many notifications you’ve received.
34. You can take more control over how you engage with your phone. App Timer lets you set time limits on apps, and when you get close to your limit, Android will nudge you that it’s time to do something else.
35. Do Not Disturb (DND) mode has more oomph. Not only does it silence phone calls and texts, but it also hides visual disruptions like notifications that pop up on your display. 
36. We created a gesture to help you focus on being present: If you turn your phone over on the table, it automatically enters DND. 
37. With a new API, you can automatically set your status on messaging apps to “away” when DND is turned on. 
38. Fall asleep a little easier with Wind Down. Set a bedtime and your phone will automatically switch to Night Light mode and fade to grayscale to eliminate distractions. 
39. Android P is packed with security and privacy improvements: updated security protocols, encrypted backups, protected confirmations and more.
40. Thanks to work on Project Treble, an effort we introduced last year to make OS upgrades easier for partners, Android P Beta is available on partner devices including Sony Xperia XZ2, Xiaomi Mi Mix 2S, Nokia 7 Plus, Oppo R15 Pro, Vivo X21, OnePlus 6, and Essential PH‑1, in addition to Pixel and Pixel 2.



41. Say hello to the JBL LINK BAR. We worked with Harman to launch this hybrid device that delivers a full Google Assistant speaker and Android TV experience. 
42. We released a limited edition Android TV dongle device, the ADT-2, for developers to create more with Android TV. 
43. Android Auto is now working with more than 50 OEMs to support more than 400 cars and aftermarket stereos. 
44. Volvo’s next-gen infotainment system powered by Android will integrate with Google apps, including Maps, Assistant and Play Store. 
45. Watch out! You can get more done from your watch with new features from the Google Assistant on Wear OS by Google. 
46. Smart suggestions from the Google Assistant on Wear OS by Google let you continue conversations directly from your watch. Choose from contextually relevant follow-up questions or responses. 
47. Now you can choose to hear answers from your watch speaker or Bluetooth headphones. Just ask Google Assistant on your watch “tell me about my day.” 
48. Actions will be available on all Wear OS by Google watches, so you can use your voice to do tasks like preheat your LG oven while you’re unloading your groceries or ask Bay Trains when the next train is leaving. And we’re working with developers and partners to add more Actions and functionalities.



49. We’ve mapped more than 21 million miles across 220 countries, put hundreds of millions of businesses on the map, and provided access to more than 1 billion people around the world.
50. Google Maps is becoming more assistive and personal. A redesigned Explore tab features everything you need to know about dining, events and activity options in whatever area you’re interested in.
51. Top lists give you information from local experts, Google’s algorithms and trusted publishers so you can see everything that's new and interesting—like the most essential brunches or cheap eats nearby.
52. New features help you easily make plans as a group. You can create a shortlist of places within the app and share it with friends across any platform, so you can quickly vote and decide on a place to go.
53. Your "match" helps you see the world through your lens, suggesting how likely you are to enjoy a food or drink spot based on your preferences.
54. Updated walking directions help you get oriented on your walking journey more quickly and navigate the world on foot with more confidence. So when you emerge out of a subway or reach a crossing with more than four streets, you’ll know which way to go.



55. Suggested actions, powered by machine learning, will start to show up on your photos right as you view them—giving you the option to brighten, share, rotate or archive a picture. Another action on the horizon is the ability to quickly export photos of documents into PDFs. 
56. New color pop creations leave the subject of your photo in color while setting the background to black and white. 
57. We’re also working on the ability for you to change black-and-white photos into color in just a tap.  
58. We announced the Google Photos partner program, giving developers the tools to build smarter, faster and more helpful photo and video experiences in their products, so you can interact with your photos across more apps and devices.



59. The updated Google News uses a new set of AI techniques to find quality reporting and diverse information from around the web in real time, and organize it into storylines so you can make sense of what’s happening, from the world stage to your own backyard. 
60. The “For You” tab makes it easy to keep up to date on what you care about, starting with a “Daily Briefing” of five stories that Google has organized for you—a mix of the most important headlines, local news and the latest on your interests.  
61. With Full Coverage, you can deep-dive on a story with one click. This section is not personalized—everyone will see the same content, including related articles, timelines, opinion and analysis pieces, video, and the ability to see what the impact or reaction has been in real time. 
62. The separate Headlines section, also unpersonalized, lets you stay fully informed across a broad spectrum of news, like world news, business, science, sports, entertainment and more. 
63. Subscribing to your favorite publishers right in the Google News app is super simple using Subscribe with Google—no forms, new passwords or credit cards—and you can access your subscriptions anywhere you’re logged in across Google and the web.



64. Updates to Google Lens help you get answers to the world around you. With smart text, you can copy and paste text from the real world—like recipes or business cards—to your phone. 
65. With style match, if an outfit or a home decor item catches your eye, you can open Lens and not only get info on that specific item (like reviews), but also see similar items.
66. Lens now uses real-time identification so you’ll be able to browse the world around you just by pointing your camera. It’s able to give you information quickly and anchor it to the things you see.
67. Use Lens directly in the camera app on supported devices from the following OEMs: LGE, Motorola, Xiaomi, Sony Mobile, HMD/Nokia, Transsion, TCL, OnePlus, BQ, Asus—and of course the Google Pixel. 
68. Lens is coming to more languages, including French, Italian, German, Spanish and Portuguese. 
69. Tour Creator lets anyone with a story to tell, like teachers or students, easily make a VR tour using imagery from Google Street View or their own 360 photos.  
70. With Sceneform, Java developers can now build immersive, 3D apps without having to learn complicated APIs. They can use it to build AR apps from scratch as well as add AR features to existing ones. 
71. We’ve rolled out ARCore’s Cloud Anchor API across Android and iOS to help developers build more collaborative and immersive augmented reality apps. Cloud Anchors makes it possible to create collaborative AR experiences, like redecorating your home, playing games and painting a community mural—all together with your friends.
72. ARCore now features Vertical Plane Detection which means you can place AR objects on more surfaces, like textured walls. Now you can do things like view artwork above your mantlepiece before buying it. 
73. Thanks to a capability called Augmented Images, you’ll be able to bring images to life just by pointing your phone at them—this works on QR codes, AR markers and static image targets (like maps, products in a store, logos, photos or movie posters).



74. We launched updates to the YouTube mobile app that will help everyone develop their own sense of digital wellbeing. The Take a Break reminder lets you set a reminder to (you guessed it!) take a break while watching videos after a specified amount of time. 
75. You can schedule specific times each day to silence notification sounds and vibrations that are sent to your phone from the YouTube app. 
76. You can also opt in to a scheduled notification digest that combines all of the daily push notifications from the YouTube app into a single, combined notification. 
77. Soon you’ll have access to a time watched profile to give you a better understanding of the time you spend on YouTube.



78. Lookout, a new Android app, gives people who are blind or visually impaired auditory cues as they encounter objects, text and people around them.
79. We’re introducing the ability to type in Morse code in Gboard beta for Android. We partnered with developer Tania Finlayson, an expert in Morse code assistive technology, to build this feature.



80. After launching in beta at Game Developers Conference, Google Play Instant is now open to all game developers. 
81. Updated Google Play Console features help you improve your app’s performance and grow your business. These include improvements to the dashboard statistics, Android vitals, pre-launch report, acquisition report and subscriptions dashboard. 
82. Android Jetpack is a new set of components, tools and architectural guidance that makes it quicker and easier for developers to build great Android apps. 
83. Android KTX, launching as part of Android Jetpack, optimizes the Kotlin developer experience. 
84. Android App Bundle, a new format for publishing Android apps, helps developers deliver great experiences in smaller app sizes and optimize apps for the wide variety of Android devices and form factors available. 
85. The latest canary release of Android Studio 3.2 focuses on supporting the Android P Developer Preview, Android App Bundle and Android Jetpack, plus more features to help you develop fast and easily.  
86. We added Dynamic Delivery so your users download only the code and resources they need to run your app, reducing download times and saving space on their devices.  
87. With Android Things 1.0, developers can build and ship commercial IoT products using the Android Things platform.
88. The latest improvements to Performance Monitoring on Firebase help you easily monitor app performance issues and identify the parts of your app that stutter or freeze. 
89. In the coming months, we're expanding Firebase Test Lab to include iOS to help get your app into a high-quality state—across both Android and iOS—before you even release it.
90. We shipped Flutter Beta 3, the latest version of our mobile app SDK for creating high-quality, native user experiences on iOS and Android. 
91. We launched an early preview of the Android extension libraries (AndroidX) which represents a new era for the Support Library.
92. You can now run Linux apps on your Chromebooks (starting with a preview on the Google Pixelbook), so you can use your favorite tools and familiar commands with the speed, simplicity and security of Chrome OS. 
93. Material Theming, part of the latest update to Material Design, lets developers systematically express a unique style across their product more consistently, so they don’t have to choose between building beautiful and building fast. We also redesigned Material.io.
94. We introduced three Material tools to streamline workflow and address common pain points across design and development: Material Theme Editor, a control panel that lets you apply global style changes to components across your design; Gallery, a platform for sharing, reviewing and commenting on design iterations; and Material Icons in five different themes.
95. With open-source Material Components, you can customize key aspects of an app’s design, including color, shape, and type themes.



96. We’ll launch a beta that allows developers to display relevant content from their apps—such as a product catalog for a shopping app—within ads, giving users more helpful information before they download an app.
97. We started early testing to make Google Play Instant compatible with AdWords, so game developers can use Universal App campaigns to reach potential users and let them try out games directly from ads.
98. Developers using ads to grow their user bases will soon have a more complete picture with view through conversion (VTC) reporting, providing more insight into ad impressions and conversions. 
99. With rewarded reporting in AdMob, developers can understand and fine-tune the performance of their rewarded ads: ads that let users opt in to view ads in exchange for in-app incentives or digital goods, such as an extra life in a game or 15 minutes of ad-free music streaming. 
100. Developers who sell ad placements in their app can now more easily report data back to advertisers with the integration of IAB Tech Lab’s Open Measurement SDK.

Solving problems with AI for everyone

Today, we’re kicking off our annual I/O developer conference, which brings together more than 7,000 developers for a three-day event. I/O gives us a great chance to share some of Google’s latest innovations and show how they’re helping us solve problems for our users. We’re at an important inflection point in computing, and it’s exciting to be driving technology forward. It’s clear that technology can be a positive force and improve the quality of life for billions of people around the world. But it’s equally clear that we can’t just be wide-eyed about what we create. There are very real and important questions being raised about the impact of technology and the role it will play in our lives. We know the path ahead needs to be navigated carefully and deliberately—and we feel a deep sense of responsibility to get this right. It’s in that spirit that we’re approaching our core mission.

The need for useful and accessible information is as urgent today as it was when Google was founded nearly two decades ago. What’s changed is our ability to organize information and solve complex, real-world problems thanks to advances in AI.

Pushing the boundaries of AI to solve real-world problems

There’s a huge opportunity for AI to transform many fields. Already we’re seeing some encouraging applications in healthcare. Two years ago, Google developed a neural net that could detect signs of diabetic retinopathy using medical images of the eye. This year, the AI team showed our deep learning model could use those same images to predict a patient’s risk of a heart attack or stroke with a surprisingly high degree of accuracy. We published a paper on this research in February and look forward to working closely with the medical community to understand its potential. We’ve also found that our AI models are able to predict medical events, such as hospital readmissions and length of stays, by analyzing the pieces of information embedded in de-identified health records. These are powerful tools in a doctor’s hands and could have a profound impact on health outcomes for patients. We’re going to be publishing a paper on this research today and are working with hospitals and medical institutions to see how to use these insights in practice.

Another area where AI can solve important problems is accessibility. Take the example of captions. When you turn on the TV it's not uncommon to see people talking over one another. This makes a conversation hard to follow, especially if you’re hearing-impaired. But using audio and visual cues together, our researchers were able to isolate voices and caption each speaker separately. We call this technology Looking to Listen and are excited about its potential to improve captions for everyone.

Saving time across Gmail, Photos, and the Google Assistant

AI is working hard across Google products to save you time. One of the best examples of this is the new Smart Compose feature in Gmail. By understanding the context of an email, we can suggest phrases to help you write quickly and efficiently. In Photos, we make it easy to share a photo instantly via smart, inline suggestions. We’re also rolling out new features that let you quickly brighten a photo, give it a color pop, or even colorize old black and white pictures.

One of the biggest time-savers of all is the Google Assistant, which we announced two years ago at I/O. Today we shared our plans to make the Google Assistant more visual, more naturally conversational, and more helpful.

Thanks to our progress in language understanding, you’ll soon be able to have a natural back-and-forth conversation with the Google Assistant without repeating “Hey Google” for each follow-up request. We’re also adding half a dozen new voices to personalize your Google Assistant, plus one very recognizable one—John Legend (!). So, next time you ask Google to tell you the forecast or play “All of Me,” don’t be surprised if John Legend himself is around to help.

We’re also making the Assistant more visually assistive with new experiences for Smart Displays and phones. On mobile, we’ll give you a quick snapshot of your day with suggestions based on location, time of day, and recent interactions. And we’re bringing the Google Assistant to navigation in Google Maps, so you can get information while keeping your hands on the wheel and your eyes on the road.

Someday soon, your Google Assistant might be able to help with tasks that still require a phone call, like booking a haircut or verifying a store’s holiday hours. We call this new technology Google Duplex. It’s still early, and we need to get the experience right, but done correctly we believe this will save time for people and generate value for small businesses.

Understanding the world so we can help you navigate yours

AI’s progress in understanding the physical world has dramatically improved Google Maps and created new applications like Google Lens. Maps can now tell you if the business you’re looking for is open, how busy it is, and whether parking is easy to find before you arrive. Lens lets you just point your camera and get answers about everything from that building in front of you ... to the concert poster you passed ... to that lamp you liked in the store window.

Bringing you the top news from top sources

We know people turn to Google to provide dependable, high-quality information, especially in breaking news situations—and this is another area where AI can make a big difference. Using the latest technology, we set out to create a product that surfaces the news you care about from trusted sources while still giving you a full range of perspectives on events. Today, we’re launching the new Google News. It uses artificial intelligence to bring forward the best of human intelligence—great reporting done by journalists around the globe—and will help you stay on top of what’s important to you.


The new Google News uses AI to bring forward great reporting done by journalists around the globe and help you stay on top of what’s important to you.

Helping you focus on what matters

Advances in computing are helping us solve complex problems and deliver valuable time back to our users—which has been a big goal of ours from the beginning. But we also know technology creates its own challenges. For example, many of us feel tethered to our phones and worry about what we’ll miss if we’re not connected. We want to help people find the right balance and gain a sense of digital wellbeing. To that end, we’re going to release a series of features to help people understand their usage habits and use simple cues to disconnect when they want to, such as turning a phone over on a table to put it in “shush” mode, or “taking a break” from watching YouTube when a reminder pops up. We're also kicking off a longer-term effort to support digital wellbeing, including a user education site which is launching today.

These are just a few of the many, many announcements at Google I/O—for Android, the Google Assistant, Google News, Photos, Lens, Maps and more, please see our latest stories.

We’re live (streaming) from Google I/O

This year’s annual developer festival is here, and you can catch all the fun on our Google I/O livestream at google.com/io.

Thousands of developers are joining us in Mountain View, CA, to hear about the latest product and platform updates at Google. But since the more the merrier, we’re making it possible for you to stream the event starting with Sundar’s keynote on Tuesday, May 8 at 10 a.m. PDT.

To stay in the loop throughout the conference, you can also follow @Google and @GoogleDevs on Twitter and @Google on Instagram.

Ready, set, stream!


Introducing Google Maps Platform



It’s been thirteen years since we opened up Google Maps to your creativity and passion. Since then, it's been exciting to see how you've transformed your industries and improved people's lives. You’ve changed the way we ride to work, discover the best schools for our children, and search for a new place to live. We can’t wait to see what you do next. That’s why today we’re introducing a series of updates designed to make it easier for you to start taking advantage of new location-based features and products.
We’re excited to announce Google Maps Platform—the next generation of our Google Maps business—encompassing streamlined API products and new industry solutions to help drive innovation.

In March, we announced our first industry solution for game studios to create real-world games using Google Maps data. Today, we also offer solutions tailored for ridesharing and asset tracking companies. Ridesharing companies can embed the Google Maps navigation experience directly into their apps to optimize the driver and customer experience. Our asset tracking offering helps businesses improve efficiencies by locating vehicles and assets in real-time, visualizing where assets have traveled, and routing vehicles with complex trips. We expect to bring new solutions to market in the future, in areas where we’re positioned to offer insights and expertise.

Our core APIs work together to provide the building blocks you need to create location-based apps and experiences. One of our goals is to evolve our core APIs to make them simpler, easier to use and scalable as you grow. That’s why we’ve introduced a number of updates to help you do so. 

Streamlined products to create new location-based experiences
We’re simplifying our 18 individual APIs into three core products—Maps, Routes, and Places—to make it easier for you to find, explore and add new features to your apps and sites. And, these new updates will work with your existing code—no changes required.
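As a rough sketch of what calling one of these core products looks like, the snippet below builds a request URL for the Geocoding service, which is grouped under the Places product. The address and the `YOUR_API_KEY` placeholder are illustrative, not values from this post.

```python
from urllib.parse import urlencode

# Geocoding is one of the services grouped under the Places product.
GEOCODE_ENDPOINT = "https://maps.googleapis.com/maps/api/geocode/json"

def build_geocode_url(address: str, api_key: str) -> str:
    """Build a Geocoding API request URL for the given address."""
    query = urlencode({"address": address, "key": api_key})
    return f"{GEOCODE_ENDPOINT}?{query}"

# Example (placeholder key shown for illustration only):
url = build_geocode_url("1600 Amphitheatre Parkway, Mountain View, CA",
                        "YOUR_API_KEY")
print(url)
```

Because all three products share one console and one API key, the same key can be reused across Maps, Routes, and Places requests.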

One pricing plan, free support, and a single console
We’ve heard that you want simple, easy to understand pricing that gives you access to all our core APIs. That’s one of the reasons we merged our Standard and Premium plans to form one pay-as-you-go pricing plan for our core products. With this new plan, developers will receive the first $200 of monthly usage for free. We estimate that most of you will have monthly usage that will keep you within this free tier. With this new pricing plan you’ll pay only for the services you use each month with no annual, up-front commitments, termination fees or usage limits. And we’re rolling out free customer support for all. In addition, our products are now integrated with Google Cloud Platform Console to make it easier for you to track your usage, manage your projects, and discover new innovative Cloud products.
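The billing math under this plan is simple enough to sketch: the first $200 of metered usage each month is credited, and only the remainder is billed. The dollar figures in the example below (other than the $200 free tier from this post) are made up for illustration.

```python
def monthly_bill(usage_cost: float, free_tier: float = 200.0) -> float:
    """Pay-as-you-go: the first $200 of monthly usage is free;
    only usage beyond the free tier is billed."""
    return max(0.0, usage_cost - free_tier)

# A month with $150 of metered usage stays within the free tier,
# while $275 of usage bills only the $75 above the free credit.
print(monthly_bill(150.0))  # stays free
print(monthly_bill(275.0))  # bills the excess
```

There are no annual commitments or termination fees, so the bill is recomputed fresh from each month's metered usage.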

Scale easily as you grow
Beginning June 11, you’ll need a valid API key and a Google Cloud Platform billing account to access our core products. Once you enable billing, you will gain access to your $200 of free monthly usage to use for our Maps, Routes, and Places products. As your business grows or usage spikes, our plan will scale with you. And, with Google Maps’ global infrastructure, you can scale without thinking about capacity, reliability, or performance. We’ll continue to partner with Google programs that bring our products to nonprofits, startups, crisis response, and news media organizations. We’ve put new processes in place to help us scale these programs to hundreds of thousands of organizations and more countries around the world.

We’re excited about all the new location-based experiences you’ll build, and we want to be there to support you along the way. If you're currently using our core APIs, please take a look at our Guide for Existing Users to further understand these changes and help you easily transition to the new plan. And if you’re just getting started, you can start your first project here. We're here to help.

Introducing .app, a more secure home for apps on the web

Posted By Ben Fried, VP, CIO, & Chief Domains Enthusiast

Today we're announcing .app, the newest top-level domain (TLD) from Google Registry.

A TLD is the last part of a domain name, like .com in “www.google.com” or .google in “blog.google”. We created the .app TLD specifically for apps and app developers, with added security to help you showcase your apps to the world.
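Since a TLD is simply the last dot-separated label of a domain name, extracting it is a one-liner; this tiny helper just restates the definition above in code.

```python
def tld(hostname: str) -> str:
    """Return the top-level domain: the last dot-separated label,
    e.g. 'com' for 'www.google.com' or 'google' for 'blog.google'."""
    return hostname.rsplit(".", 1)[-1]

print(tld("www.google.com"))  # the classic .com TLD
print(tld("blog.google"))     # Google's own branded TLD
```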

Even if you spend your days working in the world of mobile apps, you can still benefit from a home on the web. With a memorable .app domain name, it's easy for people to find and learn more about your app. You can use your new domain as a landing page to share trustworthy download links, keep users up to date, and deep link to in-app content.

A key benefit of the .app domain is that security is built in—for you and your users. The big difference is that HTTPS is required to connect to all .app websites, helping protect against ad malware and tracking injection by ISPs, in addition to safeguarding against spying on open WiFi networks. Because .app will be the first TLD with enforced security made available for general registration, it's helping move the web to an HTTPS-everywhere future in a big way.
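In practice, enforced HTTPS means a browser will upgrade any plain http:// link to a .app site before connecting. The helper below mimics that upgrade as an illustration (the domain names are hypothetical); real browsers perform this automatically via the HSTS preload list rather than application code.

```python
from urllib.parse import urlsplit, urlunsplit

def upgrade_app_url(url: str) -> str:
    """Rewrite http:// to https:// for .app hosts, mirroring the
    HSTS-style upgrade browsers apply to the .app TLD."""
    parts = urlsplit(url)
    host = parts.hostname or ""
    if parts.scheme == "http" and (host == "app" or host.endswith(".app")):
        # Keep everything but the scheme; only the transport changes.
        return urlunsplit(("https",) + tuple(parts)[1:])
    return url

print(upgrade_app_url("http://example.app/download"))  # upgraded
print(upgrade_app_url("http://example.com/"))          # left alone
```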

Starting today at 9:00am PDT and through May 7, .app domains are available to register as part of our Early Access Program, where, for an additional fee, you can secure your desired domains ahead of general availability. And then beginning on May 8, .app domains will be available to the general public through your registrar of choice.

Just visit get.app to see who's already on .app and choose a registrar partner to begin registering your domain. We look forward to seeing where your new .app domain takes you!