Category Archives: Google Developers Blog

News and insights on Google platforms, tools and events

Developing bots for Hangouts Chat

Posted by Wesley Chun (@wescpy), Developer Advocate, G Suite

We recently introduced Hangouts Chat to general availability. This next-generation messaging platform gives G Suite users a new place to communicate and to collaborate in teams. It features archive & search, tighter G Suite integration, and the ability to create separate, threaded chat rooms. The key new feature for developers is a bot framework and API. Whether it's to automate common tasks, query for information, or perform other heavy-lifting, bots can really transform the way we work.

In addition to plain text replies, Hangouts Chat can also display bot responses with richer user interfaces (UIs) called cards which can render header information, structured data, images, links, buttons, etc. Furthermore, users can interact with these components, potentially updating the displayed information. In this latest episode of the G Suite Dev Show, developers learn how to create a bot that features an updating interactive card.
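To give a concrete sense of the shape of such a response, here is a rough sketch (written as a Python dictionary, since bots can be implemented in any language) of a card message. The field names follow the Hangouts Chat message format, while the titles, text, and action names below are purely illustrative:

# Illustrative card response; the structure mirrors the Hangouts Chat
# message format, but the titles, text, and action names are made up.
message = {
    'cards': [{
        'header': {'title': 'Lunch Vote', 'subtitle': 'Cast your vote below'},
        'sections': [{
            'widgets': [
                {'textParagraph': {'text': 'Tacos: 3 votes so far'}},
                {'buttons': [{
                    'textButton': {
                        'text': 'VOTE FOR TACOS',
                        'onClick': {'action': {
                            'actionMethodName': 'vote',
                            'parameters': [{'key': 'option', 'value': 'tacos'}],
                        }},
                    },
                }]},
            ],
        }],
    }],
}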

As you can see in the video, the most important thing when bots receive a message is to determine the event type and take the appropriate action. For example, a bot will perform any desired "paperwork" when it is added to or removed from a room or direct message (DM), generically referred to as a "space" in the vernacular.

Receiving an ordinary message sent by users is the most likely scenario; most bots do "their thing" here in serving the request. The last event type occurs when a user clicks on an interactive card. Similar to receiving a standard message, a bot performs its requisite work, including possibly updating the card itself. Below is some pseudocode summarizing these four event types and what a bot would likely do in response to each:

function processEvent(req, rsp) {
  var event = req.body; // event type received
  var message;          // JSON response message

  if (event.type == 'REMOVED_FROM_SPACE') {
    // no response as bot removed from room
    return;

  } else if (event.type == 'ADDED_TO_SPACE') {
    // bot added to room; send welcome message
    message = {text: 'Thanks for adding me!'};

  } else if (event.type == 'MESSAGE') {
    // message received during normal operation
    message = responseForMsg(event.message.text);

  } else if (event.type == 'CARD_CLICKED') {
    // user clicked on card UI
    var action = event.action;
    message = responseForClick(
        action.actionMethodName, action.parameters);
  }

  rsp.send(message);
}

The bot pseudocode, as well as the bot featured in the video, responds synchronously. Bots performing more time-consuming operations or issuing out-of-band notifications can send messages to spaces asynchronously. This includes messages such as job-completed notifications, alerts if a server goes down, and pings to the Sales team when a new lead is added to the CRM (Customer Relationship Management) system.
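As a rough sketch of what such an asynchronous message might look like in Python (assuming a service account authorized with the Hangouts Chat bot scope plus the google-auth and google-api-python-client libraries; the space name, key file, and message text are placeholders):

from google.oauth2 import service_account
from googleapiclient.discovery import build

# Service accounts are how bots authorize asynchronous (non-reply) messages.
SCOPES = ['https://www.googleapis.com/auth/chat.bot']
creds = service_account.Credentials.from_service_account_file(
    'service-account.json', scopes=SCOPES)
chat = build('chat', 'v1', credentials=creds)

# 'spaces/AAAAAAAAAAA' is a placeholder; a real bot records the space name
# from the events it receives when added to a room or DM.
chat.spaces().messages().create(
    parent='spaces/AAAAAAAAAAA',
    body={'text': 'Your nightly import job has completed.'},
).execute()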

Hangouts Chat bots aren't limited to JavaScript or Python, nor to Google Apps Script or Google App Engine. While JavaScript running on Apps Script is one of the quickest and simplest ways to get a bot online within your organization, a bot can easily be ported to Node.js for a wider variety of hosting options. Similarly, App Engine allows for more scalability and supports additional languages (Java, PHP, Go, and more) beyond Python, and a Python bot can also be ported to Flask for even more hosting options. One key takeaway is the flexibility of the platform: developers can use any language, any stack, or any cloud to create and host their bot implementations. Bots only need to accept HTTP POST requests coming from the Hangouts Chat service to function.
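For instance, here is a minimal sketch of the same event handling as a Flask app; any framework or host that can accept the HTTP POST will do, and the reply text here is illustrative:

from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route('/', methods=['POST'])
def on_event():
    # Hangouts Chat delivers each event as a JSON payload via HTTP POST.
    event = request.get_json()
    if event['type'] == 'ADDED_TO_SPACE':
        return jsonify({'text': 'Thanks for adding me!'})
    elif event['type'] == 'MESSAGE':
        return jsonify({'text': 'You said: ' + event['message']['text']})
    elif event['type'] == 'CARD_CLICKED':
        action = event['action']
        return jsonify({'text': 'You clicked: ' + action['actionMethodName']})
    return jsonify({})  # e.g. REMOVED_FROM_SPACE needs no reply

if __name__ == '__main__':
    app.run(port=8080)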

At Google I/O 2018 last week, the Hangouts Chat team leads and I delivered a longer, higher-level overview of the bot framework. This comprehensive tour of the framework includes numerous live demos of sample bots in a variety of languages and platforms. Check out our ~40-minute session below.

To help you get started, check out the bot framework launch post. Also take a look at this post for a deeper dive into the Python App Engine version of the vote bot featured in the video. To learn more about developing bots for Hangouts Chat, review the concepts guides as well as the "how to" for creating bots. You can build bots for your organization, your customers, or for the world. We look forward to all the exciting bots you're going to build!

.app is now open for general registration

Posted by Christina Chiou Yeh, Google Registry

On May 1 we announced .app, the newest top-level domain (TLD) from Google Registry. It's now open for general registration so you can register your desired .app name right now. Check out what some of our early adopters are already doing on .app around the globe.

We begin our journey with sitata.app, which provides real-time travel information about events like protests or transit strikes. Looks all clear, so our first stop is the Caribbean, where we use thelocal.app and start exploring. After getting some sun, we fly to the Netherlands, where we're feeling hungry. Luckily, picnic.app delivers groceries right to our hotel. With our bellies full, it's time to head to India, where we use myra.app to order the medicine, hygiene, and baby products that we forgot to pack. Did we mention this was a business trip? Good thing lola.app helped make such a complex trip stress-free. Time to head home now, so we slip on a hoodie we bought on ov.app and enjoy the ride.

We hope these apps inspire you to also find your home on .app! Visit get.app to choose a registrar partner to register your domain.

Introducing ML Kit

Posted by Brahim Elbouchikhi, Product Manager

In today's fast-moving world, people have come to expect mobile apps to be intelligent - adapting to users' activity or delighting them with surprising smarts. As a result, we think machine learning will become an essential tool in mobile development. That's why on Tuesday at Google I/O, we introduced ML Kit in beta: a new SDK that brings Google's machine learning expertise to mobile developers in a powerful, yet easy-to-use package on Firebase. We couldn't be more excited!



Machine learning for all skill levels

Getting started with machine learning can be difficult for many developers. Typically, new ML developers spend countless hours learning the intricacies of implementing low-level models, using frameworks, and more. Even for the seasoned expert, adapting and optimizing models to run on mobile devices can be a huge undertaking. Beyond the machine learning complexities, sourcing training data can be an expensive and time-consuming process, especially when considering a global audience.

With ML Kit, you can use machine learning to build compelling features, on Android and iOS, regardless of your machine learning expertise. More details below!

Production-ready for common use cases

If you're a beginner who just wants to get the ball rolling, ML Kit gives you five ready-to-use ("base") APIs that address common mobile use cases:

  • Text recognition
  • Face detection
  • Barcode scanning
  • Image labeling
  • Landmark recognition

With these base APIs, you simply pass in data to ML Kit and get back an intuitive response. For example: Lose It!, one of our early users, used ML Kit to build several features in the latest version of their calorie tracker app. Using our text recognition API and a custom-built model, their app can quickly capture nutrition information from product labels to input a food's content from an image.

ML Kit gives you both on-device and Cloud APIs, all in a common and simple interface, allowing you to choose the ones that fit your requirements best. The on-device APIs process data quickly and will work even when there's no network connection, while the cloud-based APIs leverage the power of Google Cloud Platform's machine learning technology to give a higher level of accuracy.

See these APIs in action on your Firebase console.

Heads up: We're planning to release two more APIs in the coming months. The first is a smart reply API that allows you to support contextual messaging replies in your app, and the second is a high-density face contour addition to the face detection API. Sign up here to give them a try!

Deploy custom models

If you're seasoned in machine learning and you don't find a base API that covers your use case, ML Kit lets you deploy your own TensorFlow Lite models. You simply upload them via the Firebase console, and we'll take care of hosting and serving them to your app's users. This way you can keep your models out of your APK/bundles, which reduces your app install size. Also, because ML Kit serves your model dynamically, you can always update your model without having to re-publish your apps.
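As a point of reference, producing a TensorFlow Lite model to upload usually means converting an existing TensorFlow model. Here's a minimal sketch using TensorFlow's TFLiteConverter (available in recent TensorFlow releases); the SavedModel path and file names are placeholders:

import tensorflow as tf

# Convert an existing SavedModel to TensorFlow Lite before uploading it
# through the Firebase console. Paths here are illustrative.
converter = tf.lite.TFLiteConverter.from_saved_model('./my_saved_model')
tflite_model = converter.convert()

with open('my_model.tflite', 'wb') as f:
    f.write(tflite_model)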

But there is more. As apps have grown to do more, their size has increased, harming app store install rates and potentially costing users more in data overages. Machine learning can further exacerbate this trend since models can reach tens of megabytes in size. So we decided to invest in model compression. Specifically, we are experimenting with a feature that allows you to upload a full TensorFlow model, along with training data, and receive in return a compressed TensorFlow Lite model. The technology behind this is evolving rapidly and so we are looking for a few developers to try it and give us feedback. If you are interested, please sign up here.

Better together with other Firebase products

Since ML Kit is available through Firebase, it's easy for you to take advantage of the broader Firebase platform. For example, Remote Config and A/B testing let you experiment with multiple custom models. You can dynamically switch values in your app, making it a great fit for swapping, on the fly, the custom models you want your users to use. You can even create population segments and experiment with several models in parallel.


Get started!

We can't wait to see what you'll build with ML Kit. We hope you'll love the product as much as many of our early customers already do.

Get started with the ML Kit beta by visiting your Firebase console today. If you have any thoughts or feedback, feel free to let us know - we're always listening!

Actions on Google at I/O: More ways to drive engagement and create rich, immersive experiences

Posted by Brad Abrams, Group Product Manager, Actions on Google

The Google Assistant is becoming even more conversational and visual – helping people get things done, save time and be more present. And developers like you have been a big part of this story, making the Assistant more useful across more than 500 million devices. Starbucks, Disney, Zyrtec, Singapore Airlines and many others are engaging with users through the Actions they've built. In total, the Google Assistant is ready to help with over 1 million Actions, built by Google and all of you.

Ever since we launched Actions on Google, our mission has been to give you the tools you need to create engaging Actions, making them a part of people's everyday lives. Just over the past six months we've made significant upgrades to our platform to bring us closer to that vision. We made improvements to help your Actions get discovered, opened Actions on Google to more languages, took a few steps toward making your Actions more creative and visually appealing, launched a new conversation design site, and last week announced a new program to invest in startups that push the Assistant ecosystem forward.


Today, I want to share how we're making it even easier for app and web developers to get started with the Google Assistant.

Welcoming Android and web developers

We've seen a lot of great Android developers build Actions that complement their mobile apps. You can already create a personal, connected experience across your Android app and the Actions you build for the Assistant. Now we're making it possible to extend your Android app experiences to the Assistant in even more ways.

Think of your Actions for the Google Assistant as a companion experience to your app that users can access at home or on the go, across phones, smart speakers, TVs, cars, watches, headphones, and, soon, Smart Displays. If you want to personalize some of the experiences from your Android app, account linking lets your users have a consistent experience whether they're in your app or interacting with your Action.

Seamless digital content subscriptions from Google Play

We added support for seamless digital subscriptions so your users can enjoy the content and digital goods they bought in the Google Play Store right in your Assistant Action. For example, since I'm a premium subscriber in the Economist's app, I can now enjoy their premium content on any Assistant-enabled device.

And while you can already help users complete transactions for physical goods, soon you will be able to offer digital goods and subscriptions directly from your Actions.

Fully customizable visuals for display surfaces

The Assistant blends conversation with rich visual interactions for phones, Smart Displays and TVs, and your Actions already work on these visual surfaces with no extra work on your part.

Starting today, you can take this a step further and better customize the appearance of your Actions for visual surfaces by, among other things, controlling the background image, defining the typeface, and setting color themes used in your Action. Just head to the Actions console, make your changes and test them in the simulator today. These changes will be available on phones, TVs, and Smart Displays when they launch.

Here's an example screenshot from a demo Action:

And below, you can see how Volley was able to create a full screen immersive experience for their game "King for a Day." The ability to create customizable edge-to-edge visuals will launch for developers in the next few months.

Introducing App Actions

In the Android keynote today, we announced a new feature called App Actions. App Actions are a new way to raise the visibility of your Android app to users as they start their tasks. We look forward to creating another channel for reaching more users who can engage with your App Actions in the Google Assistant.

App Actions will be available for all developers to try soon; please sign up here if you'd like to be notified.

Find new users and keep them coming back

After you've built an Action for the Assistant, you want to get lots of people engaged with your experience. You can already prompt your users to sign up for Action Notifications on their phones, and soon, we'll be expanding support so users can get notifications on smart speakers and Smart Displays. Today we're also announcing three updates aimed at helping more users discover your Actions and keeping them engaged on a daily basis.

Map your Actions to users' queries with built-in intents

Over the past 20 years, Google has helped connect people with the information, services and content they're looking for by organizing, ranking, and showing the most relevant experiences for users. With built-in intents, we're bringing this expertise to the Google Assistant.

When someone says "Hey Google, let's play a maps quiz" they expect the Assistant to suggest relevant games that might pertain to geography. For that to happen, we need to understand the user's fundamental intent. This can be pretty difficult; just think of the thousands of ways a user could ask for a game.


To handle this complexity, we're beginning to map all the ways that people can ask for things into a taxonomy of built-in intents. Today, we're making the first set of these intents available to you so you can give the Assistant a deeper understanding of what your Action can do. As a result, the Assistant will be able to better understand and recommend Actions to meet a user's intent. We'll be rolling out hundreds of built-in intents in the coming months.

Today you can implement built-in intents in your Action and test them in the simulator. You'll be able to use them in production soon.

Promote your Actions from anywhere a link works
We're now making it easier to drive traffic to your Actions with Action Links. These are hyperlinks you can use anywhere—your website, emails, blog, even social media channels like Facebook and Twitter—that deep link directly into your Action.

Now, when a developer like Headspace has something new to share, they can spread the word and drive engagement directly into their Action from across the web. Users can click on the link and jump into their Action's experience on phones and Smart Displays, and if they click the Action Link while on desktop, they can choose which Assistant-enabled device they'd like to use – from smart speakers to TVs. Go see an example on Headspace's website, or give their Action Link a try here.


If you've already built an Action and want to spread the word, starting today you can visit the Actions console to find your Action Links and get going.

Become a part of your users' daily routines

To consistently re-engage with users, you need to become a part of their daily habits. Google Assistant users can already use routines to execute multiple Actions with a single command, perfect for those times when they wake up in the morning, head out of the house, get ready for bed, or tackle any of the other tasks they perform throughout the day.

Now, with Routine Suggestions, after someone engages with your Action, you can prompt them to add your Action to their routines with just a couple of taps.

So when I leave the house for work each morning, I can have my Assistant order my Americano from Starbucks and play that premium content from the Economist.

You can enable your Action for Routine Suggestions in the console today, and it will be working in production soon.


And more...

Before you run off and start sharing Action Links with all of your followers on social media, check out some of the other announcements we're making here at I/O:

  • Better testing: Testing with real users is the best way to ensure your Action has high quality. Starting today, you can deploy your Actions—or updates to your Actions—to a limited set of users in pre-launch alpha and beta environments.
  • Voice transactions on smart speakers: Starting today, users in the US will be able to purchase goods via voice-activated speakers like Google Home, and this is coming to the UK, Australia, Canada, France, Germany, and Japan in the next few weeks.
  • A redesigned Actions console: The new onboarding experience allows you to choose from several categories to tailor your workflow, with a new UI to guide you through the stages of the developer workflow, making it faster and easier to build your Actions.
  • Improvements to the directory: Users can leave written reviews about your Actions while signed in, providing you praise and valuable feedback to fine-tune your Actions over time. We also introduced new dynamic sections—"Popular," "You Might Like" and "Editorial Picks"—in the Explore tab to create new ways for your Actions to be discovered by users.
  • The Google Assistant SDK for devices: We're announcing support for 14 new locales, as well as for card visualization and media (news and podcasts). To see some of these features in action, check out our new poster maker experiment with Deeplocal, or stop by to see it at the Google Assistant I/O Sandbox.
  • Account Linking via Voice: We're launching a developer preview of Google Sign-In for the Assistant. Users will soon be able to connect to, or create an account with, your Actions using just their voice, so there's no need to set up a separate account linking system for your users.
  • 500,000 developers on Dialogflow: The team hit a big milestone with over half a million developers building conversational experiences! Their new releases help you onboard faster, debug smarter, enrich natural language understanding quality, and build for new Google Assistant surfaces.

Extend your experiences to the Google Assistant
We're delighted to see that many of you are starting to test the waters in this emerging era of conversational computing. If you're already building mobile or web apps but haven't tried building conversational Actions for the Google Assistant just yet, now is the perfect time to get started. Start thinking of the companion experiences that could be a fit for the Google Assistant. We have easy-to-follow guides and a community program with rewards and Google Cloud credits to get you up and running in no time. We can't wait to try out your Actions soon!

Introducing the Google Photos partner program

Posted by Jan-Felix Schmakeit, Google Photos Developer Lead

People create and consume photos and videos in many different ways, and we think it should be easier to do more with the photos you've taken, across all the apps and devices you use.

That's why we're introducing a new Google Photos partner program that gives you the tools and APIs to build photo and video experiences in your products that are smarter, faster and more helpful.

Building with the Google Photos Library API

With the Google Photos Library API, your users can seamlessly access their photos whenever they need them.

Whether you're a mobile, web, or backend developer, you can use this REST API to utilize the best of Google Photos and help people connect, upload, and share from inside your app.

Your user is always in the driver's seat. Here are a few things you can help them do:

  • Easily find photos, based on
    • what's in the photo
    • when it was taken
    • attributes like description and media format
  • Upload directly to their photo library
  • Organize albums and add titles and locations
  • Use shared albums to easily transfer and collaborate

With the Library API, you don't have to worry about maintaining your own storage and infrastructure, as photos and videos remain safely backed up in Google Photos.

Putting machine intelligence to work in your app is simple too. You can use smart filters, like content categories, to narrow down or exclude certain types of photos and videos and make it easier for your users to find the ones they're looking for.

We've also aimed to take the hassle out of building a smooth user experience. Features like thumbnailing and cross-platform deep-links mean you can offload common tasks and focus on what makes your product unique.
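As a rough sketch of what a Library API call can look like (using Python's requests library; the access token, content category, and thumbnail size below are illustrative and assume an OAuth 2.0 user token with a Photos Library scope):

import requests

access_token = 'ya29.EXAMPLE'  # placeholder OAuth 2.0 user token

# Search the user's library for photos in a content category and print
# thumbnail URLs (the '=w256-h256' suffix requests a resized version).
resp = requests.post(
    'https://photoslibrary.googleapis.com/v1/mediaItems:search',
    headers={'Authorization': 'Bearer ' + access_token},
    json={
        'pageSize': 25,
        'filters': {
            'contentFilter': {'includedContentCategories': ['LANDSCAPES']}
        },
    },
)
for item in resp.json().get('mediaItems', []):
    print(item['filename'], item['baseUrl'] + '=w256-h256')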

Getting started

Today, we're launching a developer preview of the Google Photos Library API. You can start building and testing it in your own projects right now.

Get started by visiting our developer documentation where you can also express your interest in joining the Google Photos partner program. Some of our early partners, including HP, Legacy Republic, NixPlay, Xero and TimeHop are already building better experiences using the API.

If you are following Google I/O, you can also join us for our session to learn more.

We're excited for the road ahead and look forward to working with you to develop new apps that work with Google Photos.

Ready for Production Apps: Flutter Beta 3

Posted by the Flutter Team at Google

This week at Google I/O, we're announcing the third beta release of Flutter, our mobile app SDK for creating high-quality, native user experiences on iOS and Android. We're also showcasing new tooling partners, highlighting how several high-profile customers are using Flutter, and announcing official support from the Material team.

We believe mobile development needs an upgrade. All too often, developers are forced to compromise between quality and productivity: either building the same application twice on both iOS and Android, or settling for a cross-platform solution that makes it hard to deliver the native experience that customers demand. This is why we built Flutter: to offer a new path for mobile development, focused foremost on native performance, advanced visuals, and dramatically improving developer velocity and productivity.

Just twelve months ago at Google I/O 2017, we announced Flutter and delivered an early alpha of the toolkit. Over the last year, we've invested tens of thousands of engineering hours preparing Flutter for production use. We've rewritten major parts of the engine for performance, added support for developing on Windows, published tooling for Android Studio and Visual Studio Code, integrated Dart 2 and added support for more Firebase APIs, added support for inline video, ads and charts, internationalization and accessibility, addressed thousands of bugs and published hundreds of pages of documentation. It's been a busy year and we're thrilled to share the latest beta release with you!

Flutter offers:

  1. High-velocity development with features like stateful hot reload, which helps you quickly and easily experiment with your application without having to rebuild from scratch.
  2. Expressive and flexible designs with a layered, extensible architecture of rich, composable, customizable UI widget sets and animation libraries that enables designers' dreams to come to life.
  3. High-quality experiences across devices and platforms with our portable, GPU-accelerated renderer and ahead-of-time compilation to lightning-fast machine code.

Empowering Developers and Designers

As evidence of the power that Flutter can offer applications, 2Dimensions are this week releasing a preview of a new tool for creating powerful interactive animations with Flutter. Here's an example of the output of their software:

What you are seeing here is Flutter rendering 2D skeletal mesh animations on the phone in real time. This level of graphical horsepower is possible thanks to Flutter's use of the hardware-accelerated Skia engine, which draws every pixel to the screen, paired with the blazingly fast, ahead-of-time compiled Dart language. But it gets better: note how the demo slider widget is translucently overlaid on the animation. Flutter seamlessly combines user interface widgets with 60fps animated graphics generated in real time, with the same code running on iOS and Android.

Here's what Luigi Rosso, co-founder of 2Dimensions, says about Flutter:

"I love the friction-free iteration with Flutter. Hot Reload sets me in a feedback loop that keeps me focused and in tune with my work. One of my biggest productivity inhibitors are tools that run slower than the developer. Flutter finally resets that bar."

One common challenge for mobile application creators is the transition from early design sketches to an interactive prototype that can be piloted or tested with customers. This week at Google I/O, Infragistics, one of the largest providers of developer tooling and components, are announcing their commitment to Flutter and demonstrating how they've set out to close the designer/developer gap even further with supportive tooling. Indigo Design to Code Studio enables designers to add interactivity to a Sketch design, and generate a pixel-perfect Flutter application.

Customer Adoption

We launched Flutter Beta 1 just ten weeks ago at Mobile World Congress, and it is exciting to see the momentum since then, both on GitHub and in the number of published Flutter applications. Even though we're still building out Flutter, we're pleasantly surprised to see strong early adoption of the SDK, with some high-profile customer examples already published. One of the most popular is the companion app to the award-winning Hamilton Broadway musical, built by Posse Digital, with millions of monthly users and an average rating of 4.6 on the Play Store.

This week, Alibaba is announcing their adoption of Flutter for Xianyu, one of their flagship applications with over twenty million monthly active users. Alibaba praises Flutter for its consistency across platforms, the ease of generating UI code from designer redlines, and the ease with which their native developers have learned Flutter. They are currently rolling out this updated version to their customers.

Another company now using Flutter is Groupon, who is prototyping and building new code for their merchant application. Here's what they say about using it:

"I love the fact that Flutter integrates with our existing app and our team has to write code just once to provide a native experience for both our apps. This significantly reduces our time to market and helps us deliver more features to our customers." Varun Menghani, Head of Merchant Product Management, Groupon

In the short time since the Beta 1 launch, we've seen hundreds of Flutter apps published to the app stores, across a wide variety of application categories. Here are a few examples of the diversity of apps being created with Flutter:

  • Abbey Road Studios are previewing Topline, a new version of their music production app.
  • AppTree provides a low-code enterprise app platform for brands like McDonalds, Stanford, Wayfair & Fermilab.
  • Birch Finance lets you manage and optimize your existing credit cards.
  • Coach Yourself offers mindfulness and cognitive-behavioral training.
  • OfflinePal collects nearby activities in one place, from concerts and theaters, to mountain hiking and tourist attractions.

Closer to home, Google continues to use Flutter extensively. One new example announced at I/O comes from Google Ads, who are previewing their new Flutter-based AdWords app that allows businesses to track and optimize their online advertising campaigns. Sridhar Ramaswamy, SVP for Ads and Commerce, says:

"Flutter provides a modern reactive framework that enabled us to unify the codebase and teams for our Android and iOS applications. It's allowed the team to be much more productive, while still delivering a native application experience to both platforms. Stateful hot reload has been a game changer for productivity."

New in Flutter Beta 3

Flutter Beta 3, shipping today at I/O, continues us on the glidepath towards our eventual 1.0 release with new features that complete core scenarios. Dart 2, our reboot of the Dart language with a focus on client development, is now fully enabled with a terser syntax for building Flutter UIs. Beta 3 is world-ready with localization support including right-to-left languages, and also provides significantly improved support for building highly-accessible applications. New tooling provides a powerful widget inspector that makes it easier to see the visual tree for your UI and preview how widgets will look during development. We have emerging support for integrating ads through Firebase. And Visual Studio Code is now fully supported as a first-class development tool, with a dedicated Flutter extension.

The Material Design team has worked with us extensively since the start. We're happy to announce that as of today, Flutter is a first-class toolkit for Material, which means the Material and Flutter teams will partner to deliver even more support for Material Design. Of course, you can continue to use Flutter to build apps with a wide range of design aesthetics to express your brand.

More information about the new features in Flutter Beta 3 can be found at the Flutter blog on Medium. If you already have Flutter installed, just one command -- flutter upgrade -- gets you on the latest build. Otherwise, you can follow our getting started guide to install Flutter on macOS, Windows or Linux.

Roadmap to Release

Flutter has long been used in production at Google and by the public, even though we haven't yet released "1.0." We're approaching our 1.0 quality bar, and in the coming months you'll see us focus on some specific areas:

  1. Performance and size. We'll work on improving both the speed and consistency of Flutter's performance, and offer additional tools and documentation for diagnosing potential issues. We'll also reduce the minimum size of a Flutter application;
  2. Compatibility. We are continuing to grow our support for a broad range of device types, including older 32-bit devices and expanding our set of out-of-the-box iOS widgets. We're also working to make it easier to add Flutter to your existing Android or iOS codebase.
  3. Ecosystem. In partnership with the broader community, we continue to build out an ecosystem of packages that make it easy to integrate with the broad set of platform APIs and SDKs.

Like every software project, the trade-offs are between time, quality and features. We are targeting a 1.0 release within the next year, but we will continue to adjust the schedule as necessary. As we're an open source project, our open issues are public and work scheduled for upcoming milestones can be viewed on our Github repo at any time. We welcome your help along this journey to make mobile development amazing.

Whether you're at Google I/O in person or watching remotely, we have plenty of technical content to help you get up and running. In particular, we have numerous sessions on Flutter and Material Design, as well as a new series of Flutter codelabs and a Udacity course that is now open for registration.

Since last year, we've been on a great journey together with a community of early adopters. We get an electric feeling when we see the range of apps, experiments, plug-ins, and supporting tools that developers are starting to produce using Flutter, and we're only just getting started. Now is a great time to join us. Connect with us through the website at https://flutter.io, via Twitter at @flutterio, and in our Google group and Gitter chat room. We're excited to see what you build!

Open Sourcing Seurat: bringing high-fidelity scenes to mobile VR

Posted by Manfred Ernst, Software Engineer

Great VR experiences make you feel like you're really somewhere else. To create deeply immersive experiences, there are a lot of factors that need to come together: amazing graphics, spatialized audio, and the ability to move around and feel like the world is responding to you.

Last year at I/O, we announced Seurat as a powerful tool to help developers and creators bring high-fidelity graphics to standalone VR headsets with full positional tracking, like the Lenovo Mirage Solo with Daydream. Seurat is a scene simplification technology designed to process very complex 3D scenes into a representation that renders efficiently on mobile hardware. Here's how ILMxLAB was able to use Seurat to bring an incredibly detailed 'Rogue One: A Star Wars Story' scene to a standalone VR experience.

Today, we're open sourcing Seurat to the developer community. You can now use Seurat to bring visually stunning scenes to your own VR applications and have the flexibility to customize the tool for your own workflows.

Behind the scenes - How Seurat works

Seurat works by taking advantage of the fact that VR scenes are typically viewed from within a limited viewing region, and leverages this to optimize the geometry and textures in your scene. It takes RGBD images (color and depth) as input and generates a textured mesh, targeting a configurable number of triangles, texture size, and fill rate, to simplify scenes beyond what traditional methods can achieve.

To demonstrate what Seurat can do, here's a snippet from Blade Runner: Revelations, which launched today with the Lenovo Mirage Solo.

Blade Runner: Revelations by Alcon Interactive and Seismic Games

The Blade Runner universe is known for its stunning worlds, and in Revelations, you get to unravel a mystery around fugitive Replicants in the futuristic but gritty streets. To create the look and feel for Revelations, Seismic used Seurat to bring a scene of 46.6 million triangles down to only 307,000, improving performance by more than 100x with almost no loss in visual quality:

Original scene:

Seurat-processed scene:

If you're interested in learning more about Seurat or trying it out yourself, visit the Seurat GitHub page to access the documentation and source code. We're looking forward to seeing what you build!

Install the Google I/O 2018 App and reserve seats for Sessions

Posted by Kerry Murrill, Google Developers Marketing

I/O is just a couple of days away! As we get closer, we hope you've had the chance to explore the schedule to make the most of the three festival days. In addition to customizing your schedule on google.com/io/schedule, you can now browse through our 150+ Sessions, and dozens of Office Hours, App Reviews, and Codelabs via the Google I/O 2018 mobile app or Action for the Assistant.

Apps: Android, iOS, Web (add to your mobile homescreen), Action for the Assistant

Here is a breakdown of all the things you can do with the mobile app this year:

Screenshots: the Schedule on iOS, Session details on Android, the Map on iOS, and the Action on the Assistant.

SCHEDULE

Browse, filter, and find Sessions, Office Hours, Codelabs, App Reviews and the recently added Meetups across 18 product areas.

Be sure to reserve seats for your favorite Sessions either in the app or at google.com/io/schedule. You can reserve as many Sessions as you'd like per day, but only one reservation per time slot is allowed. Reservations will be open until 1 hour before the start time for each Session. If a Session is full, you can join the waitlist and we'll automatically change your reservation status if any spots open up (you can now check your waitlist position on the I/O website). A portion of seats will still be available first-come, first-served for those who aren't able to reserve a seat in advance.

Most Sessions will be livestreamed and recordings will be available soon after. Want to celebrate I/O with your community? Find an I/O Extended viewing party near you.

In addition to attending Sessions, and participating in Office Hours and App Reviews, you'll have the opportunity to talk directly with Google engineers throughout the Sandbox space, which will feature multiple product demos and activations, and during Codelabs where you can complete self-paced tutorials.

Remember to save some energy for the evening! On Day 1, attendees are invited to the After Hours Block Party from 7-10PM. It will include dinner, drinks, and lots of fun, interactive experiences throughout the Sandbox space: a magic show, a diner, throwback treats, an Android themed Bouncy World, MoDA 2.0, the I/O Totem stage and lots of music throughout! On Day 2, don't miss out on the After Hours Concert from 8-10PM, with food and drinks available throughout. The concert will be livestreamed so you can join from afar, too. Stay tuned to find out who's performing this year!

MY I/O

This is where you'll find all your saved #io18 events. To make things easy for you, these will always be synced from your account across mobile, desktop, and Assistant, so you can switch back and forth as needed. We know May 8-10 will be quite busy; we'll send you reminders right before your saved and/or reserved Sessions are about to start.

MAP

Guide yourself throughout Shoreline with the interactive map. Find your way to your next Session or see what's inside the Sandbox domes.

INFO & TRANSPORTATION

Find more information about onsite WiFi, content formats, plus travel tips to get to Shoreline, including the shuttle schedule.

Keeping up with the tradition, the mobile app and Action for the Assistant will be open sourced after I/O. Until then, we hope the mobile app and Action will help you navigate the schedule and grounds for a great experience.

T-4 days… See you soon!

Introducing Google Maps Platform

Posted by Google Maps Platform Team

It's been thirteen years since we opened up Google Maps to your creativity and passion. Since then, it's been exciting to see how you've transformed your industries and improved people's lives. You've changed the way we ride to work, discover the best schools for our children, and search for a new place to live. We can't wait to see what you do next. That's why today we're introducing a series of updates designed to make it easier for you to start taking advantage of new location-based features and products.

We're excited to announce Google Maps Platform—the next generation of our Google Maps business—encompassing streamlined API products and new industry solutions to help drive innovation.

In March, we announced our first industry solution for game studios to create real-world games using Google Maps data. Today, we also offer solutions tailored for ridesharing and asset tracking companies. Ridesharing companies can embed the Google Maps navigation experience directly into their apps to optimize the driver and customer experience. Our asset tracking offering helps businesses improve efficiencies by locating vehicles and assets in real-time, visualizing where assets have traveled, and routing vehicles with complex trips. We expect to bring new solutions to market in the future, in areas where we're positioned to offer insights and expertise.

Our core APIs work together to provide the building blocks you need to create location-based apps and experiences. One of our goals is to evolve our core APIs to make them simpler, easier to use and scalable as you grow. That's why we've introduced a number of updates to help you do so.

Streamlined products to create new location-based experiences

We're simplifying our 18 individual APIs into three core products: Maps, Routes, and Places. This makes it easier for you to find, explore, and add new features to your apps and sites. And these new updates will work with your existing code—no changes required.

One pricing plan, free support, and a single console

We've heard that you want simple, easy-to-understand pricing that gives you access to all our core APIs. That's one of the reasons we merged our Standard and Premium plans to form one pay-as-you-go pricing plan for our core products. With this new plan, developers will receive the first $200 of monthly usage for free. We estimate that most of you will have monthly usage that will keep you within this free tier. With this new pricing plan you'll pay only for the services you use each month, with no annual or up-front commitments, termination fees, or usage limits. And we're rolling out free customer support for all. In addition, our products are now integrated with the Google Cloud Platform Console to make it easier for you to track your usage, manage your projects, and discover new innovative Cloud products.

Scale easily as you grow

Beginning June 11, you'll need a valid API key and a Google Cloud Platform billing account to access our core products. Once you enable billing, you will gain access to your $200 of free monthly usage to use for our Maps, Routes, and Places products. As your business grows or usage spikes, our plan will scale with you. And, with Google Maps' global infrastructure, you can scale without thinking about capacity, reliability, or performance. We'll continue to partner with Google programs that bring our products to nonprofits, startups, crisis response, and news media organizations. We've put new processes in place to help us scale these programs to hundreds of thousands of organizations and more countries around the world.
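To illustrate how little is involved once a key exists, here's a minimal sketch of calling one of the core web service APIs (the Geocoding API) from Python; the key and address below are placeholders:

import requests

API_KEY = 'YOUR_API_KEY'  # created in the Google Cloud Platform Console

# Geocode a street address and print its latitude/longitude.
resp = requests.get(
    'https://maps.googleapis.com/maps/api/geocode/json',
    params={'address': '1600 Amphitheatre Parkway, Mountain View, CA',
            'key': API_KEY},
)
location = resp.json()['results'][0]['geometry']['location']
print(location['lat'], location['lng'])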

We're excited about all the new location-based experiences you'll build, and we want to be there to support you along the way. If you're currently using our core APIs, please take a look at our Guide for Existing Users to further understand these changes and help you easily transition to the new plan. And if you're just getting started, you can start your first project here. We're here to help.

New conversation design resources for Actions on Google developers

Posted by April Pufahl, Conversation Designer

Creating Actions for the Google Assistant requires a breadth of design expertise spanning voice user interface design, interaction design, visual design, motion design, and UX writing, which we've refined into a single discipline: conversation design.

Today, we're launching a conversation design site that shares this expertise with you, so you can design Actions using the same principles that guide our teams at Google. Our goals are to help you:

  • Craft conversations that are natural and intuitive for users
  • Scale your conversations across all devices to help users wherever they are

If you're new to conversation design, you'll learn the basics of the conversation design process and how to determine whether conversation is right for your Action. You'll also get practical tips on how to:

  • Gather requirements
  • Create system and user personas
  • Write sample dialogs
  • Draw high level flows
  • Test and iterate
  • Design for the ways a conversation can deviate from the most common paths by adding handling for errors and other scenarios
  • Make sure your feature works as both a voice-only and a multimodal interaction

Finally, we've broken down the conversational and visual components that are used to compose your Actions' responses to the user.

By following our conversation design principles, you'll adapt to the communication system users learned first and know best, and in the process, build better Actions.

Follow us on Twitter @ActionsOnGoogle and join our G+ community https://g.co/actionsdev to keep up to date with more news from our team.