Tag Archives: Google I/O

Search at Google I/O 2019

Google I/O is our yearly developer conference where we have the pleasure of announcing some exciting new Search-related features and capabilities. A good place to start is Google Search: State of the Union, which explains how to take advantage of the latest capabilities in Google Search:

We also gave more details on how JavaScript and Google Search work together and what you can do to make sure your JavaScript site performs well in Search.

Try out new features today

Here are some of the new features, codelabs, and documentation that you can try out today:
The Google I/O sign at Shoreline Amphitheatre at Mountain View, CA

Be among the first to test new features

Your help is invaluable to making sure our products work for everyone. We shared some new features that we're still testing and would love your feedback and participation.
A large crowd at Google I/O

Learn more about what's coming soon

I/O is a place where we get to showcase new Search features, so we're excited to give you a heads up on what's next on the horizon:
Two people posing for a photo at Google I/O, forming a heart with their arms

We hope these cool announcements help & inspire you to create even better websites that work well in Search. Should you have any questions, feel free to post in our webmaster help forums, contact us on Twitter, or reach out to us at any of the next events we're at.

Flutter and Chrome OS: Better Together

Posted by the Flutter and Chrome OS teams

Chrome OS is the fast, simple, and secure operating system that powers Chromebooks, including the Google Pixelbook and millions of devices used by consumers and students every day. The latest Flutter release adds support for building beautiful, tailored Chrome OS applications, including rich support for keyboard and mouse, and tooling to ensure that your app runs well on a Chromebook. Furthermore, Chrome OS is a great developer workstation for building general-purpose Flutter apps, thanks to its support for developing and running Flutter apps locally on the same device.

Flutter is a great way to build Chrome OS apps

Since its inception, Flutter has shared many of the same principles as Chrome OS: productive, fast, and beautiful experiences. Flutter allows developers to build beautiful, fast UIs, while also providing a high degree of developer productivity, and a completely open-source engine, framework and tools. In short, it’s the ideal modern toolkit for building multi-platform apps, including apps for Chrome OS.

Flutter initially focused on providing a UI toolkit for building apps for mobile devices, which typically feature touch input and small screens. However, we’ve been building keyboard and mouse support into Flutter since before our 1.0 release last December. And today, we’re pleased to announce that Flutter for Chrome OS is now stronger with scroll wheel support, hover management, and better keyboard event support. In addition, Flutter has always been great at allowing you to build apps that run at any size (large screen or small), with seamless resizing, as shown here in the Chrome OS Best Practices Sample:

The Chrome OS best practices sample in action

The Chrome OS Hello World sample is an app built with Flutter that is optimized for Chrome OS. This includes a responsive UI to showcase how to reposition items and have layouts that respond well to changes in size from mobile to desktop.

Because Chrome OS runs Android apps, targeting Android is how you build Chrome OS apps with Flutter today. However, while building Chrome OS apps on Android has always been possible, as described in these guidelines, it's often difficult to know whether your Android app is going to run well on Chrome OS. To help with that problem, today we are adding a new set of lint rules to the Flutter tooling to catch violations of the most important of the Chrome OS best practice guidelines:

The Flutter Chrome OS lint rules in action

Once you put these Chrome OS lint rules in place, you'll quickly see any problems in your Android app that would hamper it when running on Chrome OS. To learn how to take advantage of these rules, see the linting docs for Flutter Chrome OS.

But all of that is just the beginning -- the Flutter tools allow you to develop and test your apps directly on Chrome OS as well.

Chrome OS is a great developer platform to build Flutter apps

No matter what platform you're targeting, Flutter has support for rich IDEs and programming tools like Android Studio and Visual Studio Code. Over the last year, Chrome OS has been building support for running the Linux version of these tools with the beta of Linux on Chrome OS (aka Crostini). And, because Chrome OS also supports Android natively, you can configure the Flutter tooling to run your Android apps directly without an emulator involved.

The Flutter development tools running on Chrome OS

All of the great productivity of Flutter is available, including Stateful Hot Reload, seamless resizing, keyboard and mouse support, and so on. Recent improvements in Crostini, such as high DPI support, Crostini file system integration, easier adb, and so on, have made this experience even better! Of course, you don’t have to test against the Android container running on Chrome OS; you can also test against Android devices attached to your Chrome OS box. In short, Chrome OS is the ideal environment in which to develop and test your Flutter apps, especially when you’re targeting Chrome OS itself.

Customers love Flutter on Chrome OS

With its unique combination of simplicity, security, and capability, Chrome OS is an increasingly popular platform for enterprise applications. These apps often work with large quantities of data, whether it's charts and graphs for visualization or lists and forms for data entry. The support in Flutter for high quality graphics, large screen layout, and input features (like text selection, tab order and mousewheel) makes it an ideal way to port mobile applications for the enterprise. One purveyor of such apps is AppTree, who use Flutter and Chrome OS to solve problems for their enterprise customers.

“Creating a Chrome OS version of our app took very little effort. In 10 minutes we tweaked a few values and now our users have access to our app on a whole new class of devices. This is a huge deal for our enterprise customers who have been wanting access to our app on Desktop devices.”
--Matthew Smith, CTO, AppTree Software

By using Flutter to target Chrome OS, AppTree was able to start with their existing Flutter mobile app and easily adapt it to take advantage of the capabilities of Chrome OS.

Try Flutter on Chrome OS today!

If you’d like to target Chrome OS with Flutter, you can do so today simply by installing the latest version of Flutter. If you’d like to run the Flutter development tools on Chrome OS, you can follow these instructions to get started fast. To see a real-world app built with Flutter that has been optimized for Chrome OS, check out the Developer Quest sample that the Flutter DevRel team launched at the 2019 Google I/O conference. And finally, don’t forget to try out the Flutter Chrome OS linting rules to make sure that your Chrome OS apps are following the most important practices.

Flutter and Chrome OS go great together. What are you going to build?

Actions on Google at I/O 2019: New tools for web, mobile, and smart home developers

Posted by Chris Turkstra, Director, Actions on Google

People are using the Assistant every day to get things done more easily, creating lots of opportunities for developers on this quickly growing platform. And we’ve heard from many of you who want easier ways to connect your content across the Assistant.

At I/O, we’re announcing new solutions for Actions on Google that were built specifically with you in mind. Whether you build for web, mobile, or smart home, these new tools will help make your content and services available to people who want to use their voice to get things done.

Enhance your presence in Search and the Assistant

Help people with their “how to” questions

Every day, people turn to the internet to ask “how to” questions, like how to tie a tie, how to fix a faucet, or how to install a dog door. At I/O, we’re introducing support for How-to markup that lets you power richer and more helpful results in Search and the Assistant.

Adding How-to markup to your pages will enable the page to appear as a rich result on mobile Search and on Google Assistant Smart Displays. This is an incredibly lightweight way for web developers and creators to connect with millions of people, giving them helpful step-by-step instructions with video, images and text. You can start seeing How-to markup results on Search today, and your content will become available on the Smart Displays in the coming months.

Here’s an example where DIY Network added markup to their existing content on the web to provide a more helpful, interactive result on both Google Search and the Assistant:

Mobile Search result and How-to markup showing how to install a dog door
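If you already publish step-by-step content, adding the markup mostly means describing steps you already have. As a rough illustration (the URLs, times, and step text below are placeholders, and only a subset of the supported properties is shown), How-to markup in JSON-LD looks along these lines:

<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "HowTo",
  "name": "How to install a dog door",
  "totalTime": "PT2H",
  "step": [
    {
      "@type": "HowToStep",
      "name": "Mark the opening",
      "text": "Trace the dog door template onto the lower panel of the door.",
      "image": "https://example.com/images/step1.jpg"
    },
    {
      "@type": "HowToStep",
      "name": "Cut the opening",
      "text": "Drill a starter hole, then cut along the traced outline.",
      "image": "https://example.com/images/step2.jpg"
    }
  ]
}
</script>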

For content creators that don’t maintain a website, we created a How-to Video Template where video creators can upload a simple spreadsheet with titles, text and timestamps for their YouTube video, and we’ll handle the rest. This is a simple way to transform your existing how-to videos into interactive, step-by-step tutorials across Google Assistant Smart Displays and Android phones.

Check out how REI is getting extra mileage out of their YouTube video:

Laptop to Home Hub displaying How To Template for the REI compass

How-to Video Templates are in developer preview so you can start building today, and your content will become available on Android phones and Smart Displays in the coming months.

Easier engagement with your apps

Help people quickly get things done with App Actions

If you’re an app developer, people are turning to your apps every day to get things done. And we see people turn to the Assistant every day for a natural way to ask for help via voice. This offers an opportunity to use intents to create voice-based entry points from the Assistant to the right spot in your app.

Last year, we previewed App Actions, a simple mechanism for Android developers that uses intents from the Assistant to deep link to exactly the right spot in your app. At I/O, we are announcing the release of built-in intents for four new App Action categories: Health & Fitness, Finance and Banking, Ridesharing, and Food Ordering. Using these intents, you can integrate with the Assistant in no time.

If I wanted to track my run with Nike Run Club, I could just say “Hey Google, start my run in Nike Run Club” and the app will automatically start tracking my run. Or, let’s say I just finished dinner with my friend Chad and we're splitting the check. I can say "Hey Google, send $15 to Chad on PayPal" and the Assistant takes me right into PayPal, I log in, and all of my information is filled in – all I need to do is hit send.

Google Pixel showing App Actions Nike Run Club

Each of these integrations was completed in less than a day with the addition of an Actions.xml file that handles the mapping of intents between your app and the Actions platform. You can start building with these new intents today and deploy to Assistant users on Android in the coming months. This is a huge opportunity to offer your fans an effortless way to engage more frequently with your apps.

Build for devices in the home

Take advantage of Smart Displays’ interactive screens

Last year, we saw the introduction of the Smart Display as a new device category. The interactive visual surface opens up many new possibilities for developers.

Today, we’re introducing a developer preview of Interactive Canvas which lets you create full-screen experiences that combine the power of voice, visuals and touch. Canvas works across Smart Displays and Android phones, and it uses open web technologies you’re likely already familiar with, like HTML, CSS and JavaScript.

Here’s an example of what you can build when you can leverage the full screen of a Smart Display:

Full screen of a Smart Display

Interactive Canvas is available for building games starting today, and we’ll be adding more categories soon. Visit the Actions Console to be one of the first to try it out.
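On the web side, an Interactive Canvas page registers callbacks with the Interactive Canvas library and can relay touch input back into the conversation. Here's a minimal sketch, assuming the library (which exposes the global interactiveCanvas) is loaded on the page and that the element ids used below exist; both are illustrative assumptions:

const callbacks = {
  onUpdate(data) {
    // `data` carries whatever state your fulfillment attaches to each response.
    console.log('State update from fulfillment:', data);
    document.getElementById('status').textContent = JSON.stringify(data);
  },
};

// Register the callbacks once the page has loaded.
interactiveCanvas.ready(callbacks);

// Relay a touch event back into the conversation as if the user had said it.
document.getElementById('play-button').addEventListener('click', () => {
  interactiveCanvas.sendTextQuery('Start the game');
});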

Enable smart home devices to communicate locally

There are now more than 30,000 connected devices that work with the Assistant across 3,500 brands, and today, we’re excited to announce a new suite of local technologies that are specifically designed to create an even better smart home.

We’re introducing a preview of the Local Home SDK, which enables you to run your smart home code locally on Google Home Speakers and Nest Displays and use their radios to communicate locally with your smart devices. This reduces cloud hops and brings a new level of speed and reliability to the smart home. We’ve been working with some amazing partners including Philips, Wemo, TP-Link, and LIFX on testing this SDK and we’re excited to open it up for all developers next month.

Flowchart of Local Home SDK

Make setup more seamless

And, through the Local Home SDK, we’re making device setup more seamless for users, building on the streamlined setup experience we launched in partnership with GE smart lights this past October. So far, people have loved the ability to set up their lights in less than a minute in the Google Home app. We’re now scaling this to more partners, so go here if you’re interested.

Make your devices smart with Assistant Connect

Also, at CES earlier this year we previewed Google Assistant Connect which leverages the Local Home SDK. Assistant Connect enables smart home and appliance developers to easily add Assistant functionality into their devices at low cost. It does this by offloading a lot of work onto the Assistant to complete Actions, display content and respond to commands. We've been hard at work developing the platform along with the first products built on it by Anker, Leviton and Tile. We can't wait to show you more about Assistant Connect later this year.

New device types and traits

For those of you creating Actions for the smart home, we’re also releasing 16 new device types and three new device traits including LockUnlock, ArmDisarm, and Timer. Head over to our developer documentation for the full list of 38 device types and 18 device traits, and check out our sample project on GitHub to start building.

Get started with our new tools for all types of developers

Whether you’re looking to extend the reach of your content, drive more usage in your apps, or build custom Assistant-powered experiences, you now have more tools to do so.

If you want to learn more about how you can start building with these tools, check out our website to get started and our schedule so you can tune in to all of our developer talks that we’ll be hosting throughout the week.

We can’t wait to build together with you!

Google I/O 2019 – What sessions should SEOs and webmasters watch?

Google I/O 2019 is starting tomorrow and will run for 3 days, until Thursday. Google I/O is our yearly developers festival, where product announcements are made, new APIs and frameworks are introduced, and Product Managers present the latest from Google to an audience of 7,000+ developers who fly to California.

However, you don't have to physically attend the event to take advantage of this once-a-year opportunity: many sessions and talks are live streamed on YouTube for anyone to watch. Browse the full schedule of events, including a list of talks that we think will be interesting for webmasters to watch (all talks are in English). All the links shared below will bring you to pages with more details about each talk, and links to watch the sessions will display on the day of each event. All times are Pacific Time (California time).



This list is only a small part of the agenda that we think is useful to webmasters and SEOs. There are many more sessions that you could find interesting! To learn about those other talks, check out the full list of “web” sessions, design sessions, Cloud sessions, machine learning sessions, and more. Use the filtering function to toggle the sessions on and off.

We hope you can make the time to watch the talks online, and participate in the excitement of I/O! The videos will also be available on YouTube after the event, in case you can't tune in live.

Posted by Vincent Courson, Search Outreach Specialist

Check out the Google Assistant talks at I/O 2019

Posted by Mary Chen, Strategy Lead, Actions on Google

This year at Google I/O, the Actions on Google team is sharing new ways developers of all types can use the Assistant to help users get things done. Whether you’re making Android apps, websites, web content, Actions, or IoT devices, you’ll see how the Assistant can help you engage with users in natural and conversational ways.

Tune in to our announcements during the developer keynote, and then dive deeper with our technical talks. We listed the talks out below by area of interest. Make sure to bookmark them and reserve your seat if you’re attending live, or check back for livestream details if you’re joining us online.


For anyone new to building for the Google Assistant


For Android app developers


For webmasters, web developers, and content creators


For smart home developers


For anyone building an Action from scratch


For insight and beyond


In addition to these sessions, stay tuned for interactive demos and codelabs that you can try at I/O and at home. Follow @ActionsOnGoogle for updates and highlights before, during, and after the festivities.

See you soon!

Google Search at I/O 2018

With the eleventh annual Google I/O wrapped up, it’s a great time to reflect on some of the highlights.

What we did at I/O


The event was a wonderful way to meet many great people from various communities across the globe, exchange ideas, and gather feedback. Besides many great web sessions, codelabs, and office hours, we shared a few things with the community in two sessions specific to Search:




The sessions included the launch of JavaScript error reporting in the Mobile Friendly Test tool, dynamic rendering (we will discuss this in more detail in a future post), and an explanation of how CMSes can use the Indexing and Search Console APIs to provide users with insights. For example, Wix lets their users submit their homepage to the index and see it in Search results instantly, and Squarespace created a Google Search keywords report to help webmasters understand what prospective users search for.
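For a sense of what such an integration involves on the Indexing API side, here's a rough sketch of notifying Google that a URL has been updated via the urlNotifications:publish endpoint. Obtaining the OAuth access token from a service account is omitted, the URL is a placeholder, and the API's eligibility requirements still apply:

const https = require('https');

function publishUrlNotification(pageUrl, accessToken) {
  const body = JSON.stringify({ url: pageUrl, type: 'URL_UPDATED' });
  const req = https.request(
    'https://indexing.googleapis.com/v3/urlNotifications:publish',
    {
      method: 'POST',
      headers: {
        'Content-Type': 'application/json',
        'Authorization': `Bearer ${accessToken}`,
      },
    },
    res => console.log('Indexing API responded with status', res.statusCode)
  );
  req.on('error', err => console.error('Request failed:', err));
  req.end(body);
}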

During the event, we also presented the new Search Console in the Sandbox area for people to try and were happy to get a lot of positive feedback, from people being excited about the AMP Status report to others exploring how to improve their content for Search.

Hands-on codelabs, case studies and more


We presented the Structured Data Codelab that walks you through adding and testing structured data. We were really happy to see that it ended up being one of the top 20 codelabs by completions at I/O. If you want to learn more about the benefits of using Structured Data, check out our case studies.



During the in-person office hours we saw a lot of interest around HTTPS, mobile-first indexing, AMP, and many other topics. The in-person Office Hours were a wonderful addition to our monthly Webmaster Office Hours hangout. The questions and comments will help us adjust our documentation and tools by making them clearer and easier to use for everyone.

Highlights and key takeaways


We also repeated a few key points that web developers should have an eye on when building websites, such as:


  • Indexing and rendering don’t happen at the same time. We may defer the rendering to a later point in time.
  • Make sure the content you want in Search has metadata, correct HTTP statuses, and the intended canonical tag.
  • Hash-based routing (URLs with "#") should be deprecated in favor of the JavaScript History API in Single Page Apps (see the sketch after this list).
  • Links should have an href attribute pointing to a URL, so Googlebot can follow the links properly.
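As a minimal sketch of what History API routing looks like in practice (the route handling and the data-internal attribute below are illustrative, not a prescribed pattern):

function navigate(path) {
  history.pushState({ path }, '', path);  // update the URL without a reload
  renderRoute(path);                      // your app's own view-rendering logic
}

// Re-render when the user presses back/forward.
window.addEventListener('popstate', event => {
  renderRoute((event.state && event.state.path) || location.pathname);
});

// Intercept in-app link clicks so they use pushState instead of full page loads.
document.addEventListener('click', event => {
  const link = event.target.closest('a[data-internal]');
  if (link) {
    event.preventDefault();
    navigate(link.getAttribute('href'));
  }
});

function renderRoute(path) {
  // Placeholder: swap the visible view based on `path`.
  document.querySelector('main').textContent = `Rendered route: ${path}`;
}

Because each link still carries a real href, Googlebot can follow it, while the History API keeps in-app navigation fast for users.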

Make sure to watch this talk for more on indexing, dynamic rendering and troubleshooting your site. If you want to learn more about what to do as a CMS developer or theme author, or about Structured Data, watch this talk.

We were excited to meet some of you at I/O as well as the global I/O extended events and share the latest developments in Search. To stay in touch, join the Webmaster Forum or follow us on Twitter, Google+, and YouTube.

 

Developing bots for Hangouts Chat

Posted by Wesley Chun (@wescpy), Developer Advocate, G Suite

We recently introduced Hangouts Chat to general availability. This next-generation messaging platform gives G Suite users a new place to communicate and to collaborate in teams. It features archive & search, tighter G Suite integration, and the ability to create separate, threaded chat rooms. The key new feature for developers is a bot framework and API. Whether it's to automate common tasks, query for information, or perform other heavy-lifting, bots can really transform the way we work.

In addition to plain text replies, Hangouts Chat can also display bot responses with richer user interfaces (UIs) called cards which can render header information, structured data, images, links, buttons, etc. Furthermore, users can interact with these components, potentially updating the displayed information. In this latest episode of the G Suite Dev Show, developers learn how to create a bot that features an updating interactive card.

As you can see in the video, the most important thing when bots receive a message is to determine the event type and take the appropriate action. For example, a bot will perform any desired "paperwork" when it is added to or removed from a room or direct message (DM), generically referred to as a "space" in the vernacular.

Receiving an ordinary message sent by users is the most likely scenario; most bots do "their thing" here in serving the request. The last event type occurs when a user clicks on an interactive card. Similar to receiving a standard message, a bot performs its requisite work, including possibly updating the card itself. Below is some pseudocode summarizing these four event types and showing what a bot would likely do depending on the event type:

function processEvent(req, rsp) {
  var event = req.body;  // event type received
  var message;           // JSON response message

  if (event.type == 'REMOVED_FROM_SPACE') {
    // no response as bot removed from room
    return;

  } else if (event.type == 'ADDED_TO_SPACE') {
    // bot added to room; send welcome message
    message = {text: 'Thanks for adding me!'};

  } else if (event.type == 'MESSAGE') {
    // message received during normal operation
    message = responseForMsg(event.message.text);

  } else if (event.type == 'CARD_CLICKED') {
    // user-click on card UI
    var action = event.action;
    message = responseForClick(
        action.actionMethodName, action.parameters);
  }

  rsp.send(message);
}

The bot pseudocode as well as the bot featured in the video respond synchronously. Bots performing more time-consuming operations, or those issuing out-of-band notifications, can send messages to spaces in an asynchronous way. This includes messages such as job-completed notifications, alerts if a server goes down, and pings to the Sales team when a new lead is added to the CRM (Customer Relationship Management) system.
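One lightweight way to send such asynchronous notifications is an incoming webhook registered for a chat room. As a small sketch (the webhook URL below is a placeholder you'd copy from the room's webhook configuration), a Node.js script can post a message like this:

const https = require('https');

// Placeholder: copy the real URL from the room's incoming webhook settings.
const webhookUrl = 'https://chat.googleapis.com/v1/spaces/SPACE_ID/messages?key=KEY&token=TOKEN';

function notifySpace(text) {
  const body = JSON.stringify({ text });
  const req = https.request(webhookUrl, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json; charset=UTF-8' },
  }, res => console.log('Hangouts Chat responded with status', res.statusCode));
  req.on('error', err => console.error('Webhook call failed:', err));
  req.end(body);
}

notifySpace('Nightly build finished successfully.');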

Hangouts Chat bots aren't limited to JavaScript or Python, or to Google Apps Script or Google App Engine. While using JavaScript running on Apps Script is one of the quickest and simplest ways to get a bot online within your organization, it can easily be ported to Node.js for a wider variety of hosting options. Similarly, App Engine allows for more scalability and supports additional languages (Java, PHP, Go, and more) beyond Python. The bot can also be ported to Flask for more hosting options. One key takeaway is the flexibility of the platform: developers can use any language, any stack, or any cloud to create and host their bot implementations. Bots only need to be able to accept HTTP POST requests coming from the Hangouts Chat service to function.
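To make that last point concrete, here's a bare-bones sketch of the same event handling hosted as a plain Node.js/Express server; the replies are simplified, and any HTTPS endpoint that accepts the POST and returns a JSON message would do:

const express = require('express');
const app = express();
app.use(express.json());

app.post('/', (req, res) => {
  const event = req.body;
  let message = {};
  if (event.type === 'ADDED_TO_SPACE') {
    message = { text: 'Thanks for adding me!' };
  } else if (event.type === 'MESSAGE') {
    message = { text: `You said: ${event.message.text}` };
  }
  res.json(message);  // an empty object means "no reply"
});

app.listen(process.env.PORT || 8080);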

At Google I/O 2018 last week, the Hangouts Chat team leads and I delivered a longer, higher-level overview of the bot framework. This comprehensive tour of the framework includes numerous live demos of sample bots in a variety of languages and platforms. Check out our ~40-minute session below.

To help you get started, check out the bot framework launch post. Also take a look at this post for a deeper dive into the Python App Engine version of the vote bot featured in the video. To learn more about developing bots for Hangouts Chat, review the concepts guides as well as the "how to" for creating bots. You can build bots for your organization, your customers, or for the world. We look forward to all the exciting bots you're going to build!

Say Hello to Android Things 1.0

Posted by Dave Smith, Developer Advocate for IoT

Android Things is Google's managed OS that enables you to build and maintain Internet of Things devices at scale. We provide a robust platform that does the heavy lifting with certified hardware, rich developer APIs, and secure managed software updates using Google's back-end infrastructure, so you can focus on building your product.

After a developer preview with over 100,000 SDK downloads, we're releasing Android Things 1.0 to developers today with long-term support for production devices. Developer feedback and engagement have been critical in our journey towards 1.0, and we are grateful to the over 10,000 developers who have provided us feedback through the issue tracker, at workshop events, and through our Google+ community.

Powerful production hardware

Today, we are announcing support for new System-on-Modules (SoMs) based on the NXP i.MX8M, Qualcomm SDA212, Qualcomm SDA624, and MediaTek MT8516 hardware platforms. These modules are certified for production use with guaranteed long-term support for three years, making it easier to bring prototypes to market. Development hardware and reference designs for these SoMs will be available in the coming months.

New SoMs from NXP, Qualcomm, and MediaTek

The Raspberry Pi 3 Model B and NXP i.MX7D devices will continue to be supported as developer hardware for you to prototype and test your product ideas. Support for the NXP i.MX6UL devices will not continue. See the updated supported platforms page for more details on the differences between production and prototype hardware.

Secure software updates

One of the core tenets of Android Things is powering devices that remain secure over time. Providing timely software updates over-the-air (OTA) is a fundamental part of that. Stability fixes and security patches are supported on production hardware platforms, and automatic updates are enabled for all devices by default. For each long-term support version, Google will offer free stability fixes and security patches for three years, with additional options for extended support. Even after the official support window ends, you will still be able to continue to push app updates to your devices. See the program policies for more details on software update support.

Use of the Android Things Console for software updates is limited to 100 active devices for non-commercial use. Developers who intend to ship a commercial product running Android Things must sign a distribution agreement with Google to remove the device limit. Review the updated terms in the Android Things SDK License Agreement and Console Terms of Service.

Hardware configuration

The Android Things Console includes a new interface to configure hardware peripherals, enabling build-time control of the Peripheral I/O connections available and device properties such as GPIO resistors and I2C bus speed. This feature will continue to be expanded in future releases to encompass more peripheral hardware configurations.

Production ready

Over the past several months, we've worked closely with partners to bring products built on Android Things to market. These include Smart Speakers from LG and iHome and Smart Displays from Lenovo, LG, and JBL, which showcase powerful capabilities like Google Assistant and Google Cast. These products are hitting shelves between now and the end of summer.

Startups and agencies are also using Android Things to prototype innovative ideas for a diverse set of use-cases. Here are some examples we are really excited about:

  • Byteflies: Docking station that securely transmits wearable health data to the cloud
  • Mirego: Network of large photo displays driven by public photo booths in downtown Montreal

If you're building a new product powered by Android Things, we want to work with you too! We are introducing a special limited program to partner with the Android Things team for technical guidance and support building your product. Space is limited and we can't accept everyone. If your company is interested in learning more, please let us know here.

Additional resources

Take a look at the full release notes for Android Things 1.0, and head over to the Android Things Console to begin validating your devices for production with the 1.0 system image. Visit the developer site to learn more about the platform and explore androidthings.withgoogle.com to get started with kits, sample code, and community projects. Finally, join Google's IoT Developers Community on Google+ to let us know what you're building with Android Things!

Google releases source for Google I/O 2017 for Android

Posted by Shailen Tuli

Today we're releasing the source code for the official Google I/O 2017 for Android app.

This year's app substantially modifies existing functionality and adds several new features. It also expands the tech stack to use Firebase. In this post, we'll highlight several notable changes to the app as well as their design considerations.

The most prominent new feature for 2017 is the event reservation system, designed to help save in-person attendees' time and provide a streamlined conference experience. Registered attendees could reserve sessions and join waitlists prior to and during the conference; a reservation provided expedited entry to sessions without having to wait in long lines. Reservation data was synced with attendees' conference badges, allowing event staff to verify reservations using NFC-enabled phones. Not only was the reservation feature incredibly popular, but the reservation data helped event staff change the size of session rooms both before and during I/O to adjust for actual demand for seats.

The reservation feature was implemented using Firebase Realtime Database (RTDB) and Cloud Functions for Firebase. RTDB provided easy sync across user devices — we just had to implement a listener in our code to receive database updates. RTDB also provided out-of-the-box offline support, allowing conference data to be available even in the face of intermittent network connectivity while traveling. A Cloud Function processed reservation requests in the background for the user, using transactions to ensure correctness of state (preventing mischievous users from grabbing too many seats!) and communicating with the event badging system.
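As a rough sketch of that pattern (the database paths, field names, and status values below are invented for illustration and don't match the real IOSched backend), a Cloud Function can guard the remaining seat count with an RTDB transaction:

const functions = require('firebase-functions');
const admin = require('firebase-admin');
admin.initializeApp();

exports.processReservation = functions.database
  .ref('/queue/{userId}/{sessionId}')
  .onCreate((snapshot, context) => {
    const { userId, sessionId } = context.params;
    const seatsRef = admin.database().ref(`/sessions/${sessionId}/seatsAvailable`);

    // The transaction retries on contention, so two users can never both
    // take the last remaining seat.
    return seatsRef
      .transaction(seats => {
        if (seats === null) return seats;  // no cached value yet: let the SDK retry with server data
        if (seats <= 0) return;            // abort: the session is full
        return seats - 1;
      })
      .then(result => {
        const granted = result.committed && result.snapshot.exists();
        return admin.database()
          .ref(`/reservations/${userId}/${sessionId}`)
          .set(granted ? 'RESERVED' : 'WAITLISTED');
      });
  });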

As in previous years, we used a ContentProvider as an abstraction layer over all app data, which meant we had to figure out how to integrate RTDB data with the ContentProvider. We needed to negotiate between having two local caches for data: 1) the extant local SQLite database accessed via the ContentProvider, and 2) the local cache created by RTDB to facilitate offline access. We decided to integrate all app data under the ContentProvider: whenever reservation data for the user changed in RTDB, we updated the ContentProvider, making it the single source of truth for app data at all times. This meant that we needed to keep open connections to RTDB only on a single screen, the Session Detail Activity, where users might be actively managing their reservations. Reservation data displayed in other parts of the app was backed by the ContentProvider. In offline mode, or in case of a flaky or delayed connection to RTDB, we could just get the last known state of the user's reservations from the ContentProvider.

We also had to figure out good patterns for integrating RTDB into the overall sync logic of IOSched, especially since RTDB comes with a very different sync model than the ping-and-fetch approach we were using in the app. We decided to continue using Cloud Endpoints to synchronize user data across devices and with the web and iOS clients (the data itself was stored in Datastore). While RTDB provides out-of-the-box data syncing, we wanted to make sure that a user's reservation data was current across all devices, even when the app was not in the foreground. We used a Cloud Function to integrate RTDB reservation data into the sync flow: once reservation data for a user changed in RTDB, the function updated the endpoint, which triggered a Firebase Cloud Messaging downstream message to all the user's devices, which then scheduled data syncs.

This year's app also featured a Feed to apprise users about hour-by-hour developments at I/O (most of the app's users were remote, and the Feed was a window into the conference for them). The Feed was also powered by RTDB, with data pushed to the server using a simple CMS. We used a Cloud Function to monitor RTDB feed data; when feed data was updated on the server, the Function sent a Cloud Messaging downstream message to clients, which visually surfaced the presence of new feed items to the user.
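The Feed pipeline followed the same shape. A hedged sketch of such a function (the path and topic name are illustrative, and the real app targeted individual devices rather than a topic) might look like:

const functions = require('firebase-functions');
const admin = require('firebase-admin');
admin.initializeApp();

exports.notifyFeedUpdate = functions.database
  .ref('/feed/{itemId}')
  .onWrite((change, context) => {
    // Send a data-only message; clients decide how to surface the new item.
    return admin.messaging().send({
      topic: 'io-feed',
      data: { action: 'FEED_UPDATED', itemId: context.params.itemId },
    });
  });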

In 2015 and 2016, we had adopted an MVP architecture for IOSched, and we continued using that this year. This architecture provides us with good separation of concerns, facilitates testing, and in general makes our code cleaner and easier to maintain. For the Feed feature, we decided to experiment with a more lightweight MVP implementation inspired by Android Architecture Blueprints, which provided the necessary modularity while being very easy to conceptualize. The goal here was both pedagogical and practical: we wanted to showcase an alternate MVP pattern for developers; we also wanted to showcase an architecture that was an appropriate fit for our needs for this feature.

For the first time, IOSched made heavy use of Firebase Remote Config. In the past, we had found ourselves unable to inform users when non-session data - wifi information, shuttle schedule, discount codes for ridesharing, etc. - changed just before or during the conference. Forcing an app update was not feasible; we just wanted in-app default values to be updatable. Using remote config easily solved this problem for us.

In the end, we ended up with a three-tier system of informing users about changes:

  1. Conference data and user data changes were communicated via Cloud Messaging and data syncs (ping and fetch model).
  2. Feed data changes were controlled via RTDB.
  3. Changes to in-app constants were controlled via Remote Config.

Future plans

Even though we're releasing the 2017 code, we still have work ahead of us for the coming months. We'll be updating the code to follow modern patterns for background processing (and making our app "O" compliant), and in the future we'll be adopting Android's Architecture Components to simplify the overall design of the app. Developers can follow changes to the code on GitHub.

Semantic Time support now available on the Awareness APIs

Posted by Ritesh Nayak M, Product Manager

Last year at I/O we launched the Awareness API, a simple yet powerful API that let developers use signals such as Location, Weather, Time and User Activity to build contextually relevant app experiences.

Available via Google Play services, the Awareness API offers two ways to take advantage of context signals within your app. The Snapshot API lets your app request information about the user's current context, while the Fence API lets your app react to changes in the user's context when they match a certain set of conditions. For example, "tell me whenever the user is walking and their headphones are plugged in".

Until now, you could specify a time fence on the Awareness APIs but were restricted to using an absolute/canonical representation of time. Based on developer feedback, we realized that the flexibility of the API for building time fences did not support the higher-level abstractions people use when they think and talk about time. "This weekend", "on the next holiday", "after sunset", are all very common and colloquial ways of expressing time. That's why we're adding semantic time support to these APIs starting today.

For example, if you were building a fitness app and wanted to prompt users every morning to start their routine, or a reading app that wants to turn on night mode after dusk, you previously had to query a third-party API for sunrise/sunset information at the user's location and then write up an Awareness fence with those canonical time values. With our latest update, you can use our TIME_INSTANT_SUNRISE and TIME_INSTANT_SUNSET constants and let the platform manage all the complexity for you.

Let's look at an example. Suppose you're building a fitness app which prompts users on Tuesdays and Thursdays around sunrise to begin their morning workout. You can set up this triggering using the following lines of code.

// A sun-state-based fence that is TRUE only on Tuesdays and Thursdays around sunrise
AwarenessFence.and(
    TimeFence.aroundTimeInstant(TimeFence.TIME_INSTANT_SUNRISE,
            -10 * ONE_MINUTE_MILLIS, 5 * ONE_MINUTE_MILLIS),
    AwarenessFence.or(
        TimeFence.inIntervalOfDay(TimeFence.DAY_OF_WEEK_TUESDAY,
                0, ONE_DAY_MILLIS),
        TimeFence.inIntervalOfDay(TimeFence.DAY_OF_WEEK_THURSDAY,
                0, ONE_DAY_MILLIS)));

One of our favorite semantic time features is public holidays. Every country, and often regions within it, has different holidays. Suppose you're building a local hiking & adventure app that wants to show users activities they can indulge in on a holiday that falls on a Friday or a Monday. You can use a combination of day-of-week and holiday flags to identify this state for all your users around the world. You can do this with just a few lines of code and have it work in any part of the world.

// A local-time fence that is TRUE only on public holidays in the
// device locale that fall on Fridays or Mondays.
AwarenessFence.and(
    TimeFence.inTimeInterval(TimeFence.TIME_INTERVAL_HOLIDAY),
    AwarenessFence.or(
        TimeFence.inIntervalOfDay(TimeFence.DAY_OF_WEEK_FRIDAY,
                9 * ONE_HOUR_MILLIS, 11 * ONE_HOUR_MILLIS),
        TimeFence.inIntervalOfDay(TimeFence.DAY_OF_WEEK_MONDAY,
                9 * ONE_HOUR_MILLIS, 11 * ONE_HOUR_MILLIS)));

In both example cases, Awareness does the heavy lifting of localizing time and holidays based on the device locale settings.

We're excited to see what problems you'll solve using this powerful API. Please join our mailing list to get updates about this and other Context APIs at Google.