Category Archives: Google Developers Blog

News and insights on Google platforms, tools and events

Hello, .dev!

Posted by Ben Fried, VP, CIO, & Chief Domains Enthusiast

Developers, designers, writers and architects: you built the web. You make it possible for the billions of people online today to do what they do. Have you ever tried to register your preferred domain name, only to find out it's not available? Today, Google Registry is announcing .dev, a brand new top-level domain (TLD) that's dedicated to developers and technology. We hope .dev will be a new home for you to build your communities, learn the latest tech and showcase your projects—all with a perfect domain name.

Check out what some companies, both big and small, are doing on .dev:

  • Want to build a website? Both GitHub.dev and grow.dev have you covered.
  • Trying to create more inclusive products? Visit accessibility.dev for digital accessibility solutions.
  • Learn about Slack's helpful tools, libraries, and SDKs at slack.dev.
  • Connect with Women Who Code at women.dev.
  • Who doesn't want to do more with their time? Jetbrains.dev offers software solutions that make developers more productive.
  • Want to brush up on your skills (or learn new ones)? Check out Codecademy.dev.
  • Learn how to build apps on the Salesforce platform at crm.dev.
  • Interested in learning how to increase the agility and productivity of your data team? Visit dataops.dev.
  • Want to build & deploy serverless apps on a global cloud network? You can do that with Cloudflare at workers.dev.
  • Get a sneak peek of what's running under the hood of the Niantic Real World Platform at ar.dev.

Like our recent launches of .app and .page, this new domain will be secure by default because HTTPS is required to connect to all .dev websites. This protects people who visit your site from ad malware and tracking injection by internet service providers, and from spying when they use open WiFi networks. With every .dev website that's launched, you help move the web to an HTTPS-everywhere future.
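Under the hood, the entire .dev zone is on the HSTS preload list, so browsers rewrite any plain-HTTP request to a .dev host to HTTPS before it ever leaves the machine. Here's a minimal Python sketch of that rewrite logic, using an illustrative preload set rather than the real, browser-shipped list:

```python
from urllib.parse import urlsplit, urlunsplit

# Illustrative subset of HSTS-preloaded TLDs; the real list ships inside the browser.
PRELOADED_TLDS = {"dev", "app", "page"}

def upgrade_to_https(url: str) -> str:
    """Rewrite http:// to https:// when the host's TLD is HSTS-preloaded."""
    parts = urlsplit(url)
    tld = parts.hostname.rsplit(".", 1)[-1] if parts.hostname else ""
    if parts.scheme == "http" and tld in PRELOADED_TLDS:
        return urlunsplit(("https",) + tuple(parts)[1:])
    return url

print(upgrade_to_https("http://web.dev/learn"))  # https://web.dev/learn
print(upgrade_to_https("http://example.com/"))   # unchanged: .com is not in this set
```

Because the upgrade happens client-side, even a first visit to a brand-new .dev site never sends an unencrypted request.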

Starting today at 8:00 a.m. PT and through February 28, .dev domains are available to register as part of our Early Access Program, where you can secure your desired domains for an additional fee. The fee decreases according to a daily schedule. Beginning on February 28, .dev domains will be available at a base annual price through your registrar of choice. To find out pricing from our participating partners, visit get.dev.

Google has already started using .dev for some of our own projects, like web.dev and opensource.dev. Visit get.dev to see what companies like Mozilla, Netflix, Glitch, Stripe, JetBrains and more are doing on .dev and get your own domain through one of our registrar partners. We look forward to seeing what you create on .dev!

New UI tools and a richer creative canvas come to ARCore

Posted by Evan Hardesty Parker, Software Engineer

ARCore and Sceneform give developers simple yet powerful tools for creating augmented reality (AR) experiences. In our last update (version 1.6) we focused on making virtual objects appear more realistic within a scene. In version 1.7, we're focusing on creative elements like AR selfies and animation as well as helping you improve the core user experience in your apps.

Creating AR Selfies

Example of 3D face mesh application

ARCore's new Augmented Faces API (available on the front-facing camera) offers a high-quality, 468-point 3D mesh that lets users attach fun effects to their faces. From animated masks, glasses, and virtual hats to skin retouching, the mesh provides coordinates and region-specific anchors that make it possible to add these delightful effects.

You can get started in Unity or Sceneform by creating an ARCore session with the "front-facing camera" and Augmented Faces "mesh" mode enabled. Note that other AR features such as plane detection aren't currently available when using the front-facing camera. AugmentedFace extends Trackable, so faces are detected and updated just like planes, Augmented Images, and other trackables.

// Create an ARCore session that supports Augmented Faces, for use in Sceneform.
public Session createAugmentedFacesSession(Activity activity) throws UnavailableException {
  // Use the front-facing (selfie) camera.
  Session session = new Session(activity, EnumSet.of(Session.Feature.FRONT_CAMERA));
  // Enable Augmented Faces with the 3D mesh mode.
  Config config = session.getConfig();
  config.setAugmentedFaceMode(Config.AugmentedFaceMode.MESH3D);
  session.configure(config);
  return session;
}

Animating characters in your Sceneform AR apps

Another way version 1.7 expands the AR creative canvas is by letting your objects dance, jump, spin and move around with support for animations in Sceneform. To start an animation, initialize a ModelAnimator (an extension of the existing Android animation support) with animation data from your ModelRenderable.

void startDancing(ModelRenderable andyRenderable) {
  // Look up the animation data baked into the renderable by name.
  AnimationData data = andyRenderable.getAnimationData("andy_dancing");
  // Keep a reference to the animator so it isn't garbage collected mid-animation.
  animator = new ModelAnimator(data, andyRenderable);
  animator.start();
}

Solving common AR UX challenges in Unity with new UI components

In ARCore version 1.7 we also focused on helping you improve your user experience with a simplified workflow. We've integrated "ARCore Elements" -- a set of common AR UI components that have been validated with user testing -- into the ARCore SDK for Unity. You can use ARCore Elements to insert AR interactive patterns in your apps without having to reinvent the wheel. ARCore Elements also makes it easier to follow Google's recommended AR UX guidelines.

ARCore Elements includes two AR UI components that are especially useful:

  • Plane Finding - streamlining the key steps involved in detecting a surface
  • Object Manipulation - using intuitive gestures to rotate, elevate, move, and resize virtual objects

We plan to add more to ARCore Elements over time. You can download the ARCore Elements app available in the Google Play Store to learn more.

Improving the User Experience with Shared Camera Access

ARCore version 1.7 also includes UX enhancements for the smartphone camera -- specifically, the experience of switching in and out of AR mode. Shared Camera access in the ARCore SDK for Java lets users pause an AR experience, access the camera, and jump back in. This can be particularly helpful if users want to take a picture of the action in your app.

More details are available in the Shared Camera developer documentation and Java sample.

Learn more and get started

For AR experiences to capture users' imaginations they need to be both immersive and easily accessible. With tools for adding AR selfies, animation, and UI enhancements, ARCore version 1.7 can help with both these objectives.

You can learn more about these new updates on our ARCore developer website.

Working together to improve user security

Posted by Adam Dawes

We're always looking for ways to improve user security both on Google and on your applications. That's why we've long invested in Google Sign In, so that users can extend all the security protections of their Google Account to your app.

Historically, there has been a critical shortcoming in all single sign-on solutions: in the rare case that a user's Google Account falls prey to an attacker, the attacker can also gain and maintain access to your app via Google Sign In. That's why we're super excited to open a new feature of Google Sign In, Cross Account Protection (CAP), to developers.

CAP is a simple protocol that enables two apps to send and receive security notifications about a common user. It supports a standardized set of events, including: an account being hijacked, an account being disabled, Google terminating all of a user's sessions, and Google locking an account to force a password change. We also send a signal if we detect that an account may be causing abuse on your system.

CAP is built on several newly created internet standards, Risk and Incident Sharing and Coordination (RISC) and Security Events, which we developed with the community at the OpenID Foundation and the IETF. This means you should only have to build one implementation to receive signals from multiple identity providers.

Google is now ready to send security events to your app for any user who has previously logged in using Google Sign In. If you've already integrated Google Sign In into your service, you can start receiving signals in just three easy steps:

  1. Enable the RISC API and create a Service Account on the project(s) where you set up Google Sign In. If you have clients set up in different projects for your web, Android, and iOS apps, you'll have to repeat this for each project.
  2. Build a RISC Receiver. This means opening a REST API on your service where Google will be able to POST security event tokens. When you receive these events, you'll need to validate that they come from Google and then act on them. This may mean terminating the user's existing sessions, disabling the account, offering an alternate login mechanism, or looking for other suspicious activity on the user's account.
  3. Use the Service Account to configure Google's pubsub with the location of your API. You should then start receiving signals, and you can start testing and then roll out this important new protection.
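The security event tokens Google POSTs in step 2 are JSON Web Tokens whose payload carries an `events` map keyed by event-type URI, per the Security Event Token format. Here's a hedged Python sketch of the receiver side (signature verification against Google's published keys is mandatory in practice and deliberately omitted here; the event-type suffixes follow the RISC naming scheme but your dispatch logic is your own):

```python
import base64
import json

def decode_set_payload(token: str) -> dict:
    """Decode the payload of a Security Event Token (a JWT).

    Illustration only: a real receiver MUST verify the token's signature
    against Google's published JWKS before trusting anything in it.
    """
    payload_b64 = token.split(".")[1]
    padded = payload_b64 + "=" * (-len(payload_b64) % 4)  # restore base64 padding
    return json.loads(base64.urlsafe_b64decode(padded))

def handle_security_event(token: str) -> None:
    payload = decode_set_payload(token)
    # Each event in the token is keyed by its event-type URI.
    for event_type, details in payload.get("events", {}).items():
        subject = details.get("subject", {})  # identifies which user is affected
        if event_type.endswith("/sessions-revoked"):
            ...  # terminate this user's sessions on your side
        elif event_type.endswith("/account-disabled"):
            ...  # disable the account or require re-verification
```

The key design point is that your receiver acts on event *types*, not free-form messages, so one implementation can consume signals from any identity provider that speaks the same standard.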

If you already use Google Sign In, please get started by checking out our developer docs. If you don't use Google Sign In, CAP is another great reason to do so to improve the security of your users. Developers using Firebase Authentication or Google Cloud Identity for Customers & Partners have CAP configured automatically - there's nothing you need to do. You can also post questions on Stack Overflow with the #SecEvents tag.

NoSQL for the serverless age: Announcing Cloud Firestore general availability and updates

Posted by Amit Ganesh, VP Engineering & Dan McGrath, Product Manager

As modern application development moves away from managing infrastructure and toward a serverless future, we're pleased to announce the general availability of Cloud Firestore, our serverless, NoSQL document database. We're also making it available in 10 new locations to complement the existing three, announcing a significant price reduction for regional instances, and enabling integration with Stackdriver for monitoring.

Cloud Firestore is a fully managed, cloud-native database that makes it simple to store, sync, and query data for web, mobile, and IoT applications. It focuses on providing a great developer experience and simplifying app development with live synchronization, offline support, and ACID transactions across hundreds of documents and collections. Cloud Firestore is integrated with both Google Cloud Platform (GCP) and Firebase, Google's mobile development platform. You can learn more about how Cloud Firestore works with Firebase here. With Cloud Firestore, you can build applications that move swiftly into production, thanks to flexible database security rules, real-time capabilities, and a completely hands-off auto-scaling infrastructure.

Cloud Firestore does more than just core database tasks. It's designed to be a complete data backend that handles security and authorization, infrastructure, edge data storage, and synchronization. Identity and Access Management (IAM) and Firebase Auth are built in to help make sure your application and its data remain secure. Tight integration with Cloud Functions, Cloud Storage, and Firebase's SDK accelerates and simplifies building end-to-end serverless applications. You can also easily export data into BigQuery for powerful analysis, post-processing of data, and machine learning.

Building with Cloud Firestore means your app can seamlessly transition from online to offline and back at the edge of connectivity. This helps lead to simpler code and fewer errors. You can serve rich user experiences and push data updates to more than a million concurrent clients, all without having to set up and maintain infrastructure. Cloud Firestore's strong consistency guarantee helps to minimize application code complexity and reduces bugs. A client-side application can even talk directly to the database, because enterprise-grade security is built right in. Unlike most other NoSQL databases, Cloud Firestore supports modifying up to 500 collections and documents in a single transaction while still automatically scaling to exactly match your workload.
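That 500-write ceiling matters for bulk loads: anything larger must be split across multiple transactions or batched writes. A small illustrative Python helper for that chunking (the write payloads here are plain dicts standing in for whatever your client library actually sends):

```python
from typing import Iterable, Iterator, List

MAX_WRITES_PER_TXN = 500  # Cloud Firestore's per-transaction write limit

def chunk_writes(writes: Iterable[dict],
                 limit: int = MAX_WRITES_PER_TXN) -> Iterator[List[dict]]:
    """Split a stream of pending writes into batches the server will accept."""
    batch: List[dict] = []
    for write in writes:
        batch.append(write)
        if len(batch) == limit:
            yield batch
            batch = []
    if batch:  # flush the final, possibly partial batch
        yield batch

# 1,200 pending writes -> three commits of 500, 500, and 200.
batches = list(chunk_writes({"doc": i} for i in range(1200)))
print([len(b) for b in batches])  # [500, 500, 200]
```

Each chunk would then be committed as its own transaction or batched write; note that atomicity only holds within a chunk, not across them.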

What's new with Cloud Firestore

  • New regional instance pricing. This new pricing takes effect on March 3, 2019 for most regional instances, and is as low as 50% of multi-region instance prices.
    • Data in regional instances is replicated across multiple zones within a region. This is optimized for lower cost and lower write latency. We recommend multi-region instances when you want to maximize the availability and durability of your database.
  • SLA now available. You can now take advantage of Cloud Firestore's SLA: 99.999% availability for multi-region instances and 99.99% availability for regional instances.
  • New locations available. There are 10 new locations for Cloud Firestore:
    • Multi-region
      • Europe (eur3)
    • North America (Regional)
      • Los Angeles (us-west2)
      • Montréal (northamerica-northeast1)
      • Northern Virginia (us-east4)
    • South America (Regional)
      • São Paulo (southamerica-east1)
    • Europe (Regional)
      • London (europe-west2)
    • Asia (Regional)
      • Mumbai (asia-south1)
      • Hong Kong (asia-east2)
      • Tokyo (asia-northeast1)
    • Australia (Regional)
      • Sydney (australia-southeast1)

Cloud Firestore is now available in 13 regions.

  • Stackdriver integration (in beta). You can now monitor Cloud Firestore read, write and delete operations in near-real time with Stackdriver.
  • More features coming soon. We're working on adding some of the most requested features to Cloud Firestore from our developer community, such as querying for documents across collections and incrementing database values without needing a transaction.

As the next generation of Cloud Datastore, Cloud Firestore is compatible with all Cloud Datastore APIs and client libraries. Existing Cloud Datastore users will be live-upgraded to Cloud Firestore automatically later in 2019. You can learn more about this upgrade here.

Adding flexibility and scalability across industries

Cloud Firestore is already changing the way companies build apps in media, IoT, mobility, digital agencies, real estate, and many others. The unifying themes among these workloads include: the need for mobility even when connectivity lapses, scalability for many users, and the ability to move quickly from prototype to production. Here are a few of the stories we've heard from Cloud Firestore users.

When opportunity strikes...

In the highly competitive world of shared, on-demand personal mobility via cars, bikes, and scooters, the ability to deliver a differentiated user experience, iterate rapidly, and scale are critical. The prize is huge. Skip provides a scooter-sharing system where shipping fast can have a big impact. Mike Wadhera, CTO and Co-founder, says, "Cloud Firestore has enabled our engineering and product teams to ship at the clock-speed of a startup while leveraging Google-scale infrastructure. We're delighted to see continued investment in Firebase and the broader GCP platform."

Another Cloud Firestore user, digital consultancy The Nerdery, has to deliver high-quality results in a short period of time, often needing to integrate with existing third-party data sources. They can't build up and tear down complicated, expensive infrastructure for every client app they create. "Cloud Firestore was a great fit for the web and mobile applications we built because it required a solution to keep 40,000-plus users apprised of real-time data updates," says Jansen Price, Principal Software Architect. "The reliability and speed of Cloud Firestore coupled with its real-time capabilities allowed us to deliver a great product for the Google Cloud Next conferences."

Reliable information delivery

Incident response company Now IMS uses real-time data to keep citizens safe in crowded places, where cell service can get spotty when demand is high. "As an incident management company, real-time and offline capabilities are paramount to our customers," says John Rodkey, Co-founder. "Cloud Firestore, along with the Firebase Javascript SDK, provides us with these capabilities out of the box. This new 100% serverless architecture on Google Cloud enables us to focus on rapid application development to meet our customers' needs instead of worrying about infrastructure or server management like with our previous cloud."

Regardless of the app, users want the latest information right away, without having to click refresh. The QuintoAndar mobile application connects tenants and landlords in Brazil for easier apartment rentals. "Being able to deliver constantly changing information to our customers allows us to provide a truly engaging experience. Cloud Firestore enables us to do this without additional infrastructure and allows us to focus on the core challenges of our business," says Guilherme Salerno, Engineering Manager at QuintoAndar.

Real-time, responsive apps, happy users

Famed broadsheet and media company The Telegraph uses Cloud Firestore so registered users can easily discover and engage with relevant content. The Telegraph wanted to make the user experience better without having to become infrastructure experts in serving and managing data to millions of concurrent connections. "Cloud Firestore allowed us to build a real-time personalized news feed, keeping readers informed with synchronized content state across all of their devices," says Alex Mansfield-Scaddan, Solution Architect. "It allowed The Telegraph engineering teams to focus on improving engagement with our customers, rather than becoming real-time database and infrastructure experts."

On the other side of the Atlantic, The New York Times used Cloud Firestore to build a feature in The Times' mobile app to send push notifications updated in real time for the 2018 Winter Olympics. In previous approaches to this feature, scaling had been a challenge. The team needed to track each reader's history of interactions in order to provide tailored content for particular events or sports. Cloud Firestore allowed them to query data dynamically, then send the real-time updates to readers. The team was able to send more targeted content faster.

Delivering powerful edge storage for IoT devices

Athlete testing technology company Hawkin Dynamics was an early, pre-beta adopter of Cloud Firestore. Their pressure pads are used by many professional sports teams to measure and track athlete performance. In the fast-paced, high-stakes world of professional sports, athletes can't wait around for devices to connect or results to calculate. They demand instant answers even if the WiFi is temporarily down. Hawkin Dynamics uses Cloud Firestore to bring real-time data to athletes through their app dashboard, shown below.

"Our core mission at Hawkin Dynamics is to help coaches make informed decisions regarding their athletes through the use of actionable data. With real-time updates, our users can get the data they need to adjust an athlete's training on a moment-by-moment basis," says Chris Wales, CTO. "By utilizing the powerful querying ability of Cloud Firestore, we can provide them the insights they need to evaluate the overall efficacy of their programs. The close integrations with Cloud Functions and the other Firebase products have allowed us to constantly improve on our product and stay responsive to our customers' needs. In an industry that is rapidly changing, the flexibility afforded to us by Cloud Firestore in extending our applications has allowed us to stay ahead of the game."

Getting started with Cloud Firestore

We've heard from many of you that Cloud Firestore is helping solve some of your most timely development challenges by simplifying real-time data and data synchronization, eliminating server-side code, and providing flexible yet secure database authentication rules. This reflects the state of the cloud app market, where developers are exploring lots of options to help them build better and faster while also providing modern user experiences. This glance at Stack Overflow questions gives a good picture of some of these trends, where Cloud Firestore is a hot topic among cloud databases.

Source: StackExchange

We've seen close to a million Cloud Firestore databases created since its beta launch. The platform is designed to serve databases ranging in size from kilobytes to multiple petabytes of data. Even a single application running on Cloud Firestore is delivering more than 1 million real-time updates per second to users. These apps are just the beginning. To learn more about serverless application development, take a look through the archive of the recent application development digital conference.

We'd love to hear from you, and we can't wait to see what you build next. Try Cloud Firestore today for your apps.

Google opens new innovation space in San Francisco for the developer community

Posted by Jeremy Neuner, Head of Launchpad San Francisco

Google's Developer Relations team is opening a new innovation space at 543 Howard St. in San Francisco. By working with more than a million developers and startups, we've found that something unique happens when we interact with our communities face-to-face. Talks, meetups, workshops, sprints, bootcamps, and social events not only provide opportunities for Googlers to authentically connect with users but also build trust and credibility as we form connections on a more personal level.

The space will be the US home of Launchpad, Google's startup acceleration engine. Founded in 2016, the Launchpad Accelerator has seen 13 cohorts graduate across 5 continents, reaching 241 startups. In 2019, the program will bring together top Google talent with startups from around the world who are working on AI-enabled solutions to problems in financial technology, healthcare, and social good.

In addition to its focus on startups, the Google innovation space will offer programming designed specifically for developers and designers throughout the year. For example, in tandem with the rapid growth of Google Cloud Platform, we will host hands-on sessions on Kubernetes, big data and AI architectures with Google engineers and industry experts.

Finally, we want the space to serve as a hub for industry-wide Developer Relations' diversity and inclusion efforts. And we will partner with groups such as Manos Accelerator and dev/Mission to bring the latest technologies to underserved groups.

We designed the space with a single credo in mind, "We must continually be jumping off cliffs and developing our wings on the way down." The flexible design of the space ensures our community has a place to learn, experiment, and grow.

For more information about our new innovation space, click here.


Scratch 3.0’s new programming blocks, built on Blockly

Posted by Erik Pasternak, Blockly Team Manager

Coding is a powerful tool for creating, expressing, and understanding ideas. That's why our goal is to make coding available to kids around the world. It's also why, in late 2015, we decided to collaborate with the MIT Media Lab on the redesign of the programming blocks for their newest version of Scratch.

Left: Scratch 2.0's code rendering. Right: Scratch 3.0's new code rendering.

Scratch is a block-based programming language used by millions of kids worldwide to create and share animations, stories, and games. We've always been inspired by Scratch, and CS First, our CS education program for students, provides lessons for educators to teach coding using Scratch.

But Scratch 2.0 was built on Flash, and by 2015, it became clear that the code needed a JavaScript rewrite. This would be an enormous task, so having good code libraries would be key.

And this is where the Blockly team at Google came in. Blockly is a library that makes it easy for developers to add block programming to their apps. By 2015, many of the web's visual coding activities were built on Blockly, through groups like Code.org, App Inventor, and MakeCode. Today, Blockly is used by thousands of developers to build apps that teach kids how to code.

One of our Product Managers, Champika (who earned her master's degree in Scratch's lab at MIT) believed Blockly could be a great fit for Scratch 3.0. She brought together the Scratch and Google Blockly teams for informal discussions. It was clear the teams had shared goals and values and could learn a lot from one another. Blockly brought a flexible, powerful library to the table, and the Scratch team brought decades of experience designing for kids.

Champika and the Blockly team together at I/O Youth, 2016.

Those early meetings kicked off three years of fun (and hard work) that led to the new blocks you see in Scratch 3.0. The two teams regularly traveled across the country to work together in person, trade puns, and pore over designs. Scratch's feedback and design drove lots of new features in Blockly, and Blockly made those features available to all developers.

On January 2nd, Scratch 3.0 launched with all of the code open source and publicly developed. At Google, we created two coding activities that showcase this code base. The first was Code a Snowflake, which was used by millions of kids as part of Google's Santa Tracker. The second was a Google Doodle that celebrated 50 years of kids coding and gave millions of people their first experience with block programming. As an added bonus, we worked with Scratch to include an extension for Google Translate in Scratch 3.0.

With Scratch 3.0, even more people are programming with blocks built on Blockly. We're excited to see what else you, our developers, will build on Blockly.

Google+ APIs shutting down March 7, 2019

As part of the sunset of Google+ for consumers, we will be shutting down all Google+ APIs on March 7, 2019. This will be a progressive shutdown beginning in late January, so we are advising all developers reliant on the Google+ APIs that calls to those APIs may start to intermittently fail as early as January 28, 2019.

On or around December 20, 2018, affected developers should also receive an email listing recently used Google+ API methods in their projects. Whether or not an email is received, we strongly encourage developers to search for and remove any dependencies on Google+ APIs from their applications.
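A quick way to start that search is to scan your source tree for strings that betray a Google+ dependency. Here's a hedged Python sketch (the patterns below are illustrative examples, not an exhaustive list of Google+ endpoints; extend them for your own codebase):

```python
import os
import re

# Strings that commonly indicate a Google+ API dependency; extend as needed.
PLUS_PATTERNS = re.compile(
    r"plus\.google\.com|googleapis\.com/plus|plus\.people|plus\.activities"
)

def find_plus_usages(root: str) -> list:
    """Return (path, line_number, line) tuples for lines mentioning Google+ APIs."""
    hits = []
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                with open(path, encoding="utf-8", errors="ignore") as f:
                    for lineno, line in enumerate(f, start=1):
                        if PLUS_PATTERNS.search(line):
                            hits.append((path, lineno, line.strip()))
            except OSError:
                continue  # skip unreadable files rather than abort the scan
    return hits
```

Anything the scan turns up should be removed or migrated before the progressive shutdown begins, since affected calls may fail intermittently rather than all at once.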

This shutdown covers the most commonly used Google+ APIs.

As part of these changes:

  • The Google+ Sign-in feature has been fully deprecated and will also be shut down on March 7, 2019. Developers should migrate to the more comprehensive Google Sign-in authentication system.
  • Over the Air Installs is now deprecated and has been shut down.

Google+ integrations for web or mobile apps are also being shut down. Please see this additional notice.

While we're sunsetting Google+ for consumers, we're investing in Google+ for enterprise organizations. They can expect a new look and new features -- more information is available in our blog post.

Tasty: A Recipe for Success on the Google Home Hub

Posted by Julia Chen Davidson, Head of Partner Marketing, Google Home

We recently launched the Google Home Hub, the first ever Made by Google smart speaker with a screen, and we knew that a lot of you would want to put these helpful devices in the kitchen—perhaps the most productive room in the house. With the Google Assistant built into the Home Hub, you can use your voice—or your hands—to multitask during meal time. You can manage your shopping list, map out your family calendar, create reminders for the week, and even help your kids out with their homework.

To make the Google Assistant on the Home Hub even more helpful in the kitchen, we partnered with BuzzFeed's Tasty, the largest social food network in the world, to bring 2,000 of their step-by-step tutorials to the Assistant, adding to the tens of thousands of recipes already available. With Tasty on the Home Hub, you can search for recipes based on the ingredients you have in the pantry, your dietary restrictions, cuisine preferences and more. And once you find the right recipe, Tasty will walk you through each recipe with instructional videos and step-by-step guidance.

Tasty's Action shows off how brands can combine voice with visuals to create next-generation experiences for our smart homes. We asked Sami Simon, Product Manager for BuzzFeed Media Brands, a few questions about building for the Google Assistant and we hope you'll find some inspiration for how you can combine voice and touch for the new category of devices in our homes.

What additive value do you see for your users by building an Action for the Google Assistant that's different from an app or YouTube video series, for example?

We all know that feeling when you have your hands in a bowl of ground meat and you realize you have to tap the app to go to the next step or unpause the YouTube video you were watching (I can attest to random food smudges all over my phone and computer for this very reason!).


With our Action, people can use the Google Assistant to get a helping hand while cooking, navigating a Tasty recipe just by using their voice. Without having to break the flow of rolling out dough or chopping an onion, we can now guide people on what to expect next in their cooking process. What's more, with the Google Home Hub, which has the added bonus of a display screen, home chefs can also quickly glance at the video instructions for extra guidance.

The Google Home Hub gives users all of Google, in their home, at a glance. What advantages do you see for Tasty in being a part of voice-enabled devices in the home?

The Assistant on the Google Home Hub enhances the Tasty experience in the kitchen, making it easier than ever for home chefs to cook Tasty recipes, either by utilizing voice commands or the screen display. Tasty is already the centerpiece of the kitchen, and with the Google Home Hub integration, we have the opportunity to provide additional value to our audience. For instance, we've introduced features like Clean Out My Fridge where users share their available ingredients and Tasty recommends what to cook. We're so excited that we can seamlessly provide inspiration and coaching to all home chefs and make cooking even more accessible.

How do you think these new devices will shape the future of digital assistance? How did you think through when to use voice and visual components in your Action?

In our day-to-day lives, we don't necessarily think critically about the best way to receive information in a given instance, but this project challenged us to create the optimal cooking experience. Ultimately we designed the Action to be voice-first to harness the power of the Assistant.

We then layered in the supplemental visuals to make the cooking experience even easier and make searching our recipe catalogue more fun. For instance, if you're busy stir frying, all the pertinent information would be read aloud to you, and if you wanted to quickly check what this might look like, we also provide the visual as additional guidance.

Can you elaborate on 1-3 key findings that your team discovered while testing the Action for the Home Hub?

Tasty's lens on cooking is to provide a fun and accessible experience in the kitchen, which we wanted to have come across with the Action. We developed a personality profile for Tasty with the mission of connecting with chefs of all levels, which served as a guide for making decisions about the Action. For instance, once we defined the voice of Tasty, we knew how to keep the dialogue conversational in order to better resonate with our audience.


Additionally, while most people have had some experience with digital assistants, their knowledge of how assistants work and ways that they use them vary wildly from person to person. When we did user testing, we realized that unlike designing UX for a website, there weren't as many common design patterns we could rely on. Keeping this in mind helped us to continuously ensure that our user paths were as clear as possible and that we always provided users support if they got lost or confused.

What are you most excited about for the future of digital assistance and branded experiences there? Where do you foresee this ecosystem going?

I'm really excited for people to discover more use cases we haven't even dreamed of yet. We've thoroughly explored practical applications of the Assistant, so I'm eager to see how we can develop more creative Actions and evolve how we think about digital assistants. As the Assistant gets smarter and better at predicting people's behavior, I'm looking forward to seeing the growth of helpful and innovative Actions, and to applying those ideas to Tasty's mission of making cooking even more accessible.

What's next for Tasty and your Action? What additional opportunities do you foresee for your brand in digital assistance or conversational interfaces?

We are proud of how our Action leverages the Google Assistant to enhance the cooking experience for our audience, and excited for how we can evolve the feature set in the future. The Tasty brand has evolved its videos beyond our popular top-down recipe format. It would be an awesome opportunity to expand our Action to incorporate the full breadth of the Tasty brand, such as our creative long-form programming or extended cooking tutorials, so we can continue helping people feel more comfortable in the kitchen.

To check out Tasty's Action yourself, just say "Hey Google, ask Tasty what I should make for dinner" on your Home Hub or Smart Display. And to learn more about the solutions we have for businesses, take a look at our Assistant Business site to get started building for the Google Assistant.

If you don't have the resources to build in-house, you can also work with our talented partners that have already built Actions for all types of use cases. To make it even easier to find the perfect partner, we recently launched a new website that shows these agencies on a map with more details about how to get in touch. And if you're an agency already building Actions, we'd love to hear from you. Just reach out here and we'll see if we can offer some help along the way!

Building the Shape System for Material Design

Posted by Yarden Eitan, Software Engineer

I am Yarden, an iOS engineer for Material Design—Google's open-source system for designing and building excellent user interfaces. I help build and maintain our iOS components, but I'm also the engineering lead for Material's shape system.

Shape: It's kind of a big deal

You can't have a UI without shape. Cards, buttons, sheets, text fields—and just about everything else you see on a screen—are often displayed within some kind of "surface" or "container." For most of computing's history, that's meant rectangles. Lots of rectangles.

But the Material team knew there was potential in giving designers and developers the ability to systematically apply unique shapes across all of our Material Design UI components. Rounded corners! Angular cuts! For designers, this means being able to create beautiful interfaces that are even better at directing attention, expressing brand, and supporting interactions. For developers, having consistent shape support across all major platforms means we can easily apply and customize shape across apps.

My role as engineering lead was truly exciting—I got to collaborate with our design leads to scope the project and find the best way to create this complex new system. Compared to systems for typography and color (which have clear structures and precedents, like the web's H1-H6 type hierarchy or the idea of primary/secondary colors), shape is the Wild West: relatively unexplored terrain whose rules and best practices are still being defined. To meet this challenge, I got to work with all the different Material Design engineering platforms to identify possible blockers, scope the effort, and build it!

When building out the system, we had two high level goals:

  • Adding shape support for our components—giving developers the ability to customize the shape of buttons, cards, chips, sheets, etc.
  • Defining and developing a good way to theme our components using shape—so developers could set their product's shape story once and have it cascade through their app, instead of needing to customize each component individually.

From an engineering perspective, adding shape support held the bulk of the work and complexities, whereas theming had more design-driven challenges. In this post, I'll mostly focus on the engineering work and how we added shape support to our components.

Here's a rundown of what I'll cover here:

  • Scoping out the shape support functionality
  • Building shape support consistently across platforms is hard
  • Implementing shape support on iOS
    • Shape core implementation
    • Adding shape support for components
  • Applying a custom shape to your components
  • Final words

Scoping out the shape support functionality

Our first task was to scope out two questions: 1) What is shape support? and 2) What functionality should it provide? Initially our goals were somewhat ambitious. The original proposal suggested an API to customize components by edges and corners, with full flexibility on how these edges and corners look. We even thought about receiving a custom .png file with a path and converting it to a shaped component in each respective platform.

We soon found that having no restrictions would make it extremely hard to define such a system. More flexibility doesn't necessarily mean a better result. For example, it'd be quite a feat to define a flexible and easy API that lets you make a snake-shaped FAB and train-shaped cards. But those elements would almost certainly contradict the clear and straightforward approach championed by Material Design guidance.

This truck-shaped FAB is a definite "don't" in Material Design guidance.

We had to weigh the expense of time and resources against the added value for each functionality we could provide.

To solve these open questions, we decided to conduct a weeklong workshop with team members from design, engineering, and tooling. It proved extremely effective: even with a lot of inputs, we were able to narrow down which features were feasible and most impactful for our users. Our final proposal was for the initial system to support three types of shapes: square, rounded, and cut. These shapes can be achieved through an API that customizes a component's corners.

Building shape support consistently across platforms (it's hard)

Anyone who's built for multiple platforms knows that consistency is key. But during our workshop, we realized how difficult it would be to provide the exact same functionality for all our platforms: Android, Flutter, iOS, and the web. Our biggest blocker? Getting cut corners to work on the web.

Unlike sharp or rounded corners, cut corners do not have a built-in native solution on the web.

Our web team looked at a range of solutions—we even considered the idea of adding background-colored squares over each corner to mask it and make it appear cut. Of course, the drawbacks there are obvious: Shadows are masked and the squares themselves need to act as chameleons when the background isn't static or has more than one color.

We then investigated the Houdini (paint worklet) API along with a polyfill, which initially seemed like a viable solution. However, adding this support would require additional effort:

  • Our UI components use shadows to display elevation and the new canvas shadows look different than the native CSS box-shadow, which would require us to reimplement shadows throughout our system.
  • Our UI components also display a visual ripple effect when tapped, to show interactivity. For us to continue using the ripple in a paint worklet, we would need to reimplement it, as there is no cross-browser masking solution that doesn't incur significant performance hits.

Even if we'd decided to add more engineering effort and go down the Houdini path, the question of value vs cost still remained, especially with Houdini still being "not ready" across the web ecosystem.

Based on our research and the cost of the effort, we ultimately decided to move forward without supporting cut corners for web UIs (at least for now). The good news: we had spec'd out the requirements and could start building!

Implementing shape support on iOS

After settling on the feature set, it was up to the engineers on each platform to start building. I helped build out shape support for iOS. Here's how we did it:

Core implementation

In iOS, the basic building block of a user interface is the UIView class. Each UIView is backed by a CALayer instance that manages and displays its visual content. By modifying the CALayer's properties, you can change many aspects of its visual appearance, such as color, border, shadow, and geometry.

When we refer to a CALayer's geometry, we always talk about it in the form of a rectangle.

Its frame is built from an (x, y) pair for position and a (width, height) pair for size. The main API for manipulating the layer's rectangular shape is its cornerRadius property, which receives a radius value and rounds all four corners by that value. The notion of a rectangular backing and an easy API for rounded corners exists pretty much across the board for Android, Flutter, and the web. But things like cut corners and custom edges are usually not as straightforward. To offer these features, we built a shape library that provides a generator for creating CALayers with specific, well-defined shape attributes.

Thankfully, Apple provides the CAShapeLayer class, which subclasses CALayer and adds a path property. Assigning a custom CGPath to this property allows us to create any shape we want.

With the path capabilities in mind, we then built a class that leverages the CGPath APIs and provides properties that our users will care about when shaping their components. Here is the API:

/**
An MDCShapeGenerating for creating shaped rectangular CGPaths.

By default MDCRectangleShapeGenerator creates rectangular CGPaths.
Set the corner and edge treatments to shape parts of the generated path.
*/
@interface MDCRectangleShapeGenerator : NSObject <MDCShapeGenerating>

/**
The corner treatments to apply to each corner.
*/
@property(nonatomic, strong) MDCCornerTreatment *topLeftCorner;
@property(nonatomic, strong) MDCCornerTreatment *topRightCorner;
@property(nonatomic, strong) MDCCornerTreatment *bottomLeftCorner;
@property(nonatomic, strong) MDCCornerTreatment *bottomRightCorner;

/**
The offsets to apply to each corner.
*/
@property(nonatomic, assign) CGPoint topLeftCornerOffset;
@property(nonatomic, assign) CGPoint topRightCornerOffset;
@property(nonatomic, assign) CGPoint bottomLeftCornerOffset;
@property(nonatomic, assign) CGPoint bottomRightCornerOffset;

/**
The edge treatments to apply to each edge.
*/
@property(nonatomic, strong) MDCEdgeTreatment *topEdge;
@property(nonatomic, strong) MDCEdgeTreatment *rightEdge;
@property(nonatomic, strong) MDCEdgeTreatment *bottomEdge;
@property(nonatomic, strong) MDCEdgeTreatment *leftEdge;

/**
Convenience to set all corners to the same MDCCornerTreatment instance.
*/
- (void)setCorners:(MDCCornerTreatment *)cornerShape;

/**
Convenience to set all edge treatments to the same MDCEdgeTreatment instance.
*/
- (void)setEdges:(MDCEdgeTreatment *)edgeShape;

@end

By providing such an API, a user can generate a path for just a corner or an edge, and the MDCRectangleShapeGenerator class above will create a shape with those properties in mind. For this first version of the shape system, we used only the corner properties.

As you can see, the corners themselves are made of the class MDCCornerTreatment, which encapsulates three pieces of important information:

  • The value of the corner (each specific corner type receives a value).
  • Whether the value provided is a percentage of the height of the surface or an absolute value.
  • A method that returns a path generator based on the given value and corner type. This will provide MDCRectangleShapeGenerator a way to receive the right path for the corner, which it can then append to the overall path of the shape.
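The percentage-versus-absolute distinction above can be sketched in a few lines. This is a hedged, platform-neutral illustration in plain Python; the function name and the fractional convention (0.5 for 50%) are illustrative, not the MDC API:

```python
def resolve_corner_value(value, is_percentage, surface_height):
    """Resolve a corner treatment's value to absolute UI points.

    A percentage value (expressed here as a fraction, e.g. 0.5 for 50%)
    is interpreted relative to the surface height; an absolute value is
    used as-is. Illustrative only; not the MDC implementation.
    """
    if is_percentage:
        return value * surface_height
    return value

print(resolve_corner_value(0.5, True, 100))  # 50.0: half the surface height
print(resolve_corner_value(8, False, 100))   # 8: absolute points, height ignored
```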

To make things even simpler, we didn't want our users to have to build a custom corner by calculating the corner path themselves, so we provided three convenience subclasses of MDCCornerTreatment that generate rounded, curved, and cut corners.

As an example, our cut corner treatment receives a value called a "cut", which defines the angle and size of the cut as a number of UI points measured from the corner vertex, an equal distance along the X axis and the Y axis. If the shape is a 100x100 square and all four corners are set with MDCCutCornerTreatment and a cut value of 50, the cuts meet at the midpoints of the square's edges and the final result is a diamond.

Here's how the cut corner treatment implements the path generator:

- (MDCPathGenerator *)pathGeneratorForCornerWithAngle:(CGFloat)angle
                                               andCut:(CGFloat)cut {
  MDCPathGenerator *path =
      [MDCPathGenerator pathGeneratorWithStartPoint:CGPointMake(0, cut)];
  [path addLineToPoint:CGPointMake(MDCSin(angle) * cut, MDCCos(angle) * cut)];
  return path;
}

The cut corner's path only cares about the two points (one on each edge of the corner) that dictate the cut: (0, cut) and (sin(angle) * cut, cos(angle) * cut). In our case, because we are dealing only with rectangles, whose corners are 90 degrees, the latter point is equivalent to (cut, 0), since sin(90°) = 1 and cos(90°) = 0.
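To make the corner math concrete, here's a hedged, platform-neutral sketch (plain Python rather than the Objective-C above) that computes the two cut endpoints in corner-local coordinates. The function is illustrative, not the MDC implementation:

```python
import math

def cut_corner_endpoints(cut, angle_deg=90):
    """Endpoints of the straight cut for one corner, in corner-local
    coordinates: the corner vertex is the origin and the two adjacent
    edges run along the axes. Illustrative; not the MDC implementation."""
    angle = math.radians(angle_deg)
    start = (0.0, float(cut))                             # on one edge
    end = (math.sin(angle) * cut, math.cos(angle) * cut)  # on the other edge
    return start, end

# For the 100x100 square with cut = 50: each corner's cut runs from
# (0, 50) to (50, 0) in that corner's local frame (up to float noise),
# so the four cuts meet at the edge midpoints and form the diamond.
start, end = cut_corner_endpoints(50)
```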

Here's how the rounded corner treatment implements the path generator:

- (MDCPathGenerator *)pathGeneratorForCornerWithAngle:(CGFloat)angle
                                            andRadius:(CGFloat)radius {
  MDCPathGenerator *path =
      [MDCPathGenerator pathGeneratorWithStartPoint:CGPointMake(0, radius)];
  [path addArcWithTangentPoint:CGPointZero
                       toPoint:CGPointMake(MDCSin(angle) * radius, MDCCos(angle) * radius)
                        radius:radius];
  return path;
}

From the starting point (0, radius), we draw a circular arc to the point (sin(angle) * radius, cos(angle) * radius), which, as in the cut example, translates to (radius, 0). The radius value is the radius of that arc.
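The same corner-local frame makes the rounded treatment easy to check: the quarter-arc from (0, radius) to (radius, 0) is centered at (radius, radius). A hedged Python sketch of that geometry (not the MDC code):

```python
import math

def rounded_corner_point(radius, t):
    """Point on the quarter-circle arc rounding a 90-degree corner, in
    corner-local coordinates (corner vertex at the origin, edges along
    the axes). t sweeps from 0 at (0, radius) to 1 at (radius, 0); the
    arc's center sits at (radius, radius). Illustrative sketch only."""
    theta = math.pi + t * (math.pi / 2)  # sweep the angle from 180 to 270 degrees
    return (radius + radius * math.cos(theta),
            radius + radius * math.sin(theta))

p0 = rounded_corner_point(4.0, 0.0)   # arc start, approximately (0, 4)
p1 = rounded_corner_point(4.0, 1.0)   # arc end, approximately (4, 0)
mid = rounded_corner_point(4.0, 0.5)  # midpoint bulges toward the corner vertex
```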

Adding shape support for components

After providing an MDCRectangleShapeGenerator with the convenient APIs for setting the corners and edges, we then needed to add a property for each of our components to receive the shape generator and apply the shape to the component.

Each supported component now has a shapeGenerator property in its API, which can receive an MDCRectangleShapeGenerator or any other shape generator that implements MDCShapeGenerating's pathForSize method: given the width and height of the component, it returns a CGPath for the shape. We also needed to make sure the generated path is applied to the underlying CALayer of the component's UIView so it gets displayed.
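The contract is small enough to sketch. Here's a hedged Python analogue of the generator-plus-component wiring; names like path_for_size and ShapedComponent mirror, but are not, the real MDCShapeGenerating API:

```python
class DiamondShapeGenerator:
    """Any object that can produce an outline for a given size can act
    as a shape generator; this one returns a diamond. Illustrative."""

    def path_for_size(self, width, height):
        return [(width / 2, 0), (width, height / 2),
                (width / 2, height), (0, height / 2)]

class ShapedComponent:
    """Sketch of a component with a pluggable shape generator."""

    def __init__(self):
        self.shape_generator = None

    def outline(self, width, height):
        # Delegate to the generator if one is set...
        if self.shape_generator is not None:
            return self.shape_generator.path_for_size(width, height)
        # ...otherwise fall back to the default rectangular bounds.
        return [(0, 0), (width, 0), (width, height), (0, height)]

button = ShapedComponent()
button.shape_generator = DiamondShapeGenerator()
print(button.outline(100, 100))  # [(50.0, 0), (100, 50.0), (50.0, 100), (0, 50.0)]
```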

When applying the shape generator's path to a component, we had to keep a couple of things in mind:

Adding proper shadow, border, and background color support

Because shadows, borders, and background colors are part of the default UIView API and don't necessarily take custom CALayer paths into account (they follow the default rectangular bounds), we needed to provide additional support. So we implemented MDCShapedShadowLayer to be the view's main CALayer. This class takes the shape generator's path and passes it along as the layer's shadow path, so the shadow follows the custom shape. It also provides different APIs for setting the background color and border color/width by setting those values explicitly on the CALayer that holds the custom path, rather than invoking the top-level UIView APIs. As an example, when setting the background color to black (instead of invoking UIView's backgroundColor) we invoke CALayer's fillColor.

Being conscious when setting layer properties such as shadowPath and cornerRadius

Because the shape's layer is set up differently than the view's default layer, we need to be conscious of places where we set our layer's properties in our existing component code. As an example, setting the cornerRadius of a component—which is the default way to set rounded corners using Apple's API—will actually not be applicable if you also set a custom shape.

Supporting touch events

Touch events, too, are received based only on the original rectangular bounds of the view. With a custom shape, there can be places inside the rectangular bounds where the layer isn't drawn, and places outside the bounds where it is. So we needed touch handling that corresponds to where the shape is (and isn't) drawn, and acts accordingly.

To achieve this, we override our UIView's hitTest method, which is responsible for returning the view that should receive the touch. In our case, we implemented it to return the custom shape's view only if the touch event is contained inside the generated shape path:

- (UIView *)hitTest:(CGPoint)point withEvent:(UIEvent *)event {
  if (self.layer.shapeGenerator) {
    if (CGPathContainsPoint(self.layer.shapeLayer.path, nil, point, true)) {
      return self;
    } else {
      return nil;
    }
  }
  return [super hitTest:point withEvent:event];
}
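The idea generalizes beyond iOS: hit testing reduces to a containment test against the shape's outline. Here's a hedged sketch using a plain ray-casting test in Python, standing in for CGPathContainsPoint; it is illustrative, not the MDC implementation:

```python
def point_in_polygon(pt, poly):
    """Ray-casting containment test: count how many polygon edges a
    horizontal ray from pt crosses; an odd count means pt is inside."""
    x, y = pt
    inside = False
    for i in range(len(poly)):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % len(poly)]
        if (y1 > y) != (y2 > y):  # this edge straddles the ray's y
            if x < (x2 - x1) * (y - y1) / (y2 - y1) + x1:
                inside = not inside
    return inside

# A 100x100 surface with all four corners cut by 20 points.
shape = [(20, 0), (80, 0), (100, 20), (100, 80),
         (80, 100), (20, 100), (0, 80), (0, 20)]
print(point_in_polygon((50, 50), shape))  # True: well inside the shape
print(point_in_polygon((2, 2), shape))    # False: inside the bounds, but in a cut-off corner
```

A point like (2, 2) sits inside the view's rectangular bounds but outside the drawn shape, so a shape-aware hit test correctly rejects it.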

Ink Ripple Support

As with the other properties, our ink ripple (which gives the user a ripple effect as touch feedback) is also built on top of the default rectangular bounds. For ink, there are two things we update: 1) the maxRippleRadius and 2) the masking to bounds. The maxRippleRadius must be updated when the shape is either smaller or bigger than the bounds; in those cases we can't rely on the bounds, because for smaller shapes the ink ripples too fast and for bigger shapes the ripple won't cover the entire shape. The ink layer's masksToBounds also needs to be set to NO so the ink can spread outside the bounds when the custom shape is bigger than the default bounds.

- (void)updateInkForShape {
  CGRect boundingBox = CGPathGetBoundingBox(self.layer.shapeLayer.path);
  self.inkView.maxRippleRadius =
      (CGFloat)(MDCHypot(CGRectGetHeight(boundingBox), CGRectGetWidth(boundingBox)) / 2 + 10.f);
  self.inkView.layer.masksToBounds = NO;
}
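The radius math itself is simple: half the bounding box's diagonal (so the ripple reaches every vertex of the shape) plus a little padding. A hedged Python analogue of the computation above, with illustrative names:

```python
import math

def max_ripple_radius(width, height, padding=10.0):
    """Half the bounding-box diagonal plus padding, mirroring the
    MDCHypot(...) / 2 + 10 expression above (names illustrative)."""
    return math.hypot(width, height) / 2 + padding

# For a 100x100 shape the ripple radius is about 80.7 points,
# comfortably covering the farthest vertex (about 70.7 from the center).
r = max_ripple_radius(100, 100)
```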

Applying a custom shape to your components

With the implementation complete, here are per-platform examples of how to apply cut corners to a Material button component:

Android:

Kotlin

(button.background as? MaterialShapeDrawable)?.let {
    it.shapeAppearanceModel.apply {
        cornerFamily = CutCornerTreatment(cornerSize)
    }
}

XML:

<com.google.android.material.button.MaterialButton
android:layout_width="wrap_content"
android:layout_height="wrap_content"
app:shapeAppearanceOverlay="@style/MyShapeAppearanceOverlay"/>

<style name="MyShapeAppearanceOverlay">
<item name="cornerFamily">cut</item>
<item name="cornerSize">4dp</item>
</style>

Flutter:

FlatButton(
  onPressed: () {},
  child: Text('Button'),
  shape: BeveledRectangleBorder(
    // Despite referencing circles and radii, this means "make all corners 4.0".
    borderRadius: BorderRadius.all(Radius.circular(4.0)),
  ),
)

iOS:

MDCButton *button = [[MDCButton alloc] init];
MDCRectangleShapeGenerator *rectShape = [[MDCRectangleShapeGenerator alloc] init];
[rectShape setCorners:[[MDCCutCornerTreatment alloc] initWithCut:4]];
button.shapeGenerator = rectShape;

Web (rounded corners):

.my-button {
@include mdc-button-shape-radius(4px);
}

Final words

I'm really excited to have tackled this problem and have it be part of the Material Design system. I'm particularly happy to have worked so collaboratively with design. As an engineer, I tend to tackle problems more or less from similar angles, and also think about problems very similarly to other engineers. But when solving problems together with designers, it feels like the challenge is actually looked at from all the right angles (pun intended), and the solution often turns out to be better and more thoughtful.

We're in good shape to continue growing the Material shape system and offering even more support for things like edge treatments and more complicated shapes. One day (when Houdini is ready) we'll even be able to support cut corners on the web.

Please check our code out on GitHub across the different platforms: Android, Flutter, iOS, Web. And check out our newly updated Material Design guidance on shape.

Creating More Realistic AR experiences with updates to ARCore & Sceneform

Posted by Ashish Shah, Product Manager, Google AR & VR

The magic of augmented reality is in the way it blends the digital and the physical worlds. For AR experiences to feel truly immersive, digital objects need to look realistic, as if they were actually there with you, in your space. This is something we continue to prioritize as we update ARCore and Sceneform, our 3D rendering library for Java developers.

Today, with the release of ARCore 1.6, we're bringing further improvements to help you build more realistic and compelling experiences, including better plane boundary tracking and several lighting improvements in Sceneform.

With 250M devices now supporting ARCore, developers can bring these experiences to an even larger and growing user base.

More Realistic Lighting in Sceneform

Previous versions of Sceneform defaulted to a yellow ambient light. Version 1.6 defaults to neutral white. This aligns more closely with the way light appears in the real world, making digital objects look more natural. You can see the differences below.

Left: Sceneform 1.5. Right: Sceneform 1.6.

This change will also make objects rendered with Sceneform look as if they're affected more naturally by color and lighting in the surrounding environment. For example, if you're viewing an AR object at sunset, it would appear to be illuminated by the red and orange hues, just like real objects in the scene.

In addition, we've updated Sceneform's built-in environmental image to provide a more neutral scene for your app. This will be most noticeable when viewing reflections in smooth metallic surfaces.

Adding screen capture and recording to the mix

To help you further improve quality and engagement in your AR apps, we're adding screen capture and recording to Sceneform. This is something a number of developers have requested to help with demo recording and prototyping. It can also be used as an external facing feature, allowing your users to share screenshots and videos on social media more easily, which can help get the word out about your app.

You can access this functionality through the surface mirroring API for the SceneView class. The API allows you to display the Sceneform view on a device's screen at the same time it's being rendered to another surface (such as the input surface for the Android MediaRecorder).

Learn more and get started

The new updates to Sceneform and ARCore are available today. With these new versions also comes support for new devices, such as the Samsung Galaxy A3 and the Huawei P20 Lite, that will join the list of ARCore-enabled devices. More information is available on the ARCore developer website.