Posted by Jon Harmer, Product Manager, Google Workspace
With AMP Fest kicking off today, AMP is naturally on our minds. One of the ways AMP shines is through email. With AMP for Email, brands can turn triggered emails from just another notification into an easy way for users to always have real-time, relevant context.
Expanding the AMP Ecosystem
We’re excited to be partnering with Verizon Media and Salesforce Marketing Cloud to build for a future in which every message and touchpoint is an opportunity to make a delightful impression with rich, web-like experiences.
“The motivation to join the AMP for email project was simple: Allowing brands to send richer and more engaging emails to our users. This in turn creates a much better user experience. This also enables features and functionality right within the email environment which are on par with other native web or app experiences. It’s a perfect fit with our mission ... to create the best consumer email experience.” said Nirmal Thangaraj, Engineer on Verizon Media Mail, which powers AOL and Yahoo! Mail.
Making things even easier for email senders, Salesforce announced at AMP Fest that, starting early next year, senders will be able to send AMP emails from Marketing Cloud. With Salesforce Marketing Cloud enabling AMP emails, senders can add one or two actionable steps to their emails and store that information back in Salesforce Marketing Cloud.
AMP for Productivity
Another area where AMP can really make an impact is in the office. With the influx of applications in the workplace, companies are using new SaaS applications to simplify individual processes, but this comes with a downside: it complicates a worker's day by requiring employees to jump from app to app to get work done. With context-aware content that's dynamically populated and updated in real time, AMP helps make email a place where work gets done.
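To make this concrete, here's a minimal sketch of a dynamic AMP email. It uses the amp-list component to fetch fresh JSON each time the message is opened; the endpoint and field names are illustrative placeholders, not a real service:

```html
<!doctype html>
<html amp4email>
<head>
  <meta charset="utf-8">
  <script async src="https://cdn.ampproject.org/v0.js"></script>
  <script async custom-element="amp-list" src="https://cdn.ampproject.org/v0/amp-list-0.1.js"></script>
  <script async custom-template="amp-mustache" src="https://cdn.ampproject.org/v0/amp-mustache-0.2.js"></script>
  <style amp4email-boilerplate>body{visibility:hidden}</style>
</head>
<body>
  <!-- Fetched on every open, so the content is always current. The endpoint
       is hypothetical and must return JSON shaped like {"items": [...]}. -->
  <amp-list src="https://example.com/pending-approvals.json"
            layout="fixed-height" height="200">
    <template type="amp-mustache">
      <p>{{title}}: due {{dueDate}}</p>
    </template>
  </amp-list>
</body>
</html>
```

Because the data is fetched at open time rather than baked in at send time, the same message can show an up-to-date task list today and an empty one tomorrow.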
Let’s take a look at a couple of partners who have been building AMP emails, and how they’ve implemented AMP as part of their email strategy.
Guru
Guru sends tens of thousands of notification emails each day, and while helpful, those emails had limits to their effectiveness. Here’s Jason Maynard, Guru’s VP of Product, on AMP:
“Static emails are helpful for giving a user awareness of a necessary task, but they also require that user to navigate away from their inbox to our web app in order to review knowledge cards and take specific actions. Their workflow is interrupted. Thus, we decided to leverage AMP in hopes of alleviating this user friction with a goal of fostering engagement within an email thread and reducing context switching.”
And the process and results were also in Guru’s favor: “AMP’s predefined components, documented examples, and testing playgrounds were all development resources that enabled us to deploy AMP payloads very quickly. The new implementation has resulted in users now being able to interact with these notifications to a much greater extent. Users can now expand and read knowledge cards within their email thread. They can also complete actions such as card verifications and reply comments. Emails are now much more stateful and relevant to users.”
After deploying AMP, Guru saw a noticeable uptick in email-driven actions resulting in a 2.5x increase in the number of card comment actions and a 75% increase in card verification. These are thousands of new actions that helped teams manage their knowledge base, all without leaving their inbox.
VOGSY
VOGSY, the Professional Services Automation Cloud App for Workspace, sends approval and notification emails that have multiple conversion paths. Historically, these actions would take a day to complete. With AMP, they've seen an 80% improvement in completion speed. Reaching this success was a smooth and pleasant journey.
“Our developers and our users love AMP technology. Developers truly enjoy building engaging emails with personalized content that is securely and dynamically updated every time you open the email. User adoption is 100%. Completing a workflow can be done without leaving your inbox. That is a huge improvement in user experience. Because of its fast adoption, we expect to send more than 2 million AMP emails in the first year,” said Leo Koster, Founder of VOGSY.
Copper
Copper is a CRM designed for people whose business relies on relationship-building; it functions seamlessly in the background while employees spend time on what matters: customers. Email is obviously a big part of how organizations communicate, plan, and collaborate. And up to now, email has mostly been used as a gateway to other applications where users can take action or complete their task.
“This is why the idea of dynamic emails intrigued us... Supercharging the receivers’ experience to provide up to date information that you can interact with from your inbox. Instead of receiving static email notifications each time you are tagged, we leveraged AMP for email to give users a single, dynamic email where they can see relevant information about the opportunity. They can then respond to comments from their teammates—bringing our users the most seamless experience possible wherever they like to work,” said Sefunmi Osinaike, Product Manager at Copper.
And best of all, the process was simple: “Our developers described the documentation as enjoyable because it helped us add rich components without the overhead of figuring out how to make them work in email with basic HTML. The ease of use of lists, inputs and tooltips accelerated the rate at which we prototyped our feature and saved us a lot of time. We also got a ton of support on Stack Overflow, with responses in less than 24 hours.”
For Copper, AMP has taken the experiences that always existed in the product and moved them closer to employees’ day-to-day workflows by letting users take those actions directly from email.
Stripo
As an email design platform, Stripo.email has seen over 1,000 different companies create AMP email campaigns with carousels, feedback forms, and Net Promoter Score forms in one month alone. Stripo implemented AMP so that users could fill out forms without having to leave their inbox, a strategy that drove a 5x lift in effectiveness over traditional questionnaires.
We’re excited about AMP and all of the great use cases partners are implementing to modernize the capabilities of email. To learn more about AMP for Email, click here and be sure to check out AMP Fest.
Posted by Baris Gultekin and Payam Shodjai, Directors of Product Management
Top brands turn to Google Assistant every day to help their users get things done on their phones and on Smart Displays, such as playing games, finding recipes or checking investments, just by using their voice. In fact, over the last year, the number of Actions completed by third-party developers has more than doubled.
We want to support our developer ecosystem as they continue building the best experiences for smart displays and Android phones. That’s why today at Google Assistant Developer Day, we introduced:
New App Actions built-in intents, so Android developers can easily integrate Google Assistant with their apps;
New discovery features, such as suggestions and shortcuts, to help users easily discover and engage with Android apps;
New developer tools and features, such as a testing API, improved voices and frameworks for game development, to help developers build high-quality native experiences for Smart Displays; and
New discovery and monetization improvements, to help users discover and engage with developers’ experiences on Assistant.
Now, all Android Developers can bring Google Assistant to their apps
Now, every Android app developer can make it easier for their users to find what they're looking for by fast-forwarding them into the app’s key functionality, using just their voice. With App Actions, top app developers such as Yahoo Mail, Fandango, and ColorNote are creating natural and engaging experiences for users by mapping their users' intents to specific functionality within their apps. Instead of having to navigate through each app to get tasks done, users can simply say “Hey Google” plus the outcome they want, such as “find Motivation Mix on Spotify.”
Here are a few updates we’re introducing today to App Actions.
Quickly open and search within apps with common intents
Every day, people ask Google Assistant to open their favorite apps. Today, we are building on this functionality to open specific pages within apps and to search within apps. Starting today, you can use the GET_THING intent to search within apps and the OPEN_APP_FEATURE intent to open specific pages in apps, offering more ways to easily connect users to your app through Assistant.
Many top brands such as eBay and Kroger are already using these intents. If you have the eBay app on your Android phone, try saying “Hey Google, find baseball cards on eBay” to try the GET_THING intent.
If you have the Kroger app on your Android phone, try saying “Hey Google, open Kroger pay” to try the OPEN_APP_FEATURE intent.
It's easy to add these common intents to your Android app. You can simply declare support for these capabilities in your actions.xml file to get started. For searching, you provide a deep link that lets Assistant pass a search term into your app. For opening pages, you provide a deep link with the corresponding name for Assistant to match users' requests.
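As a rough sketch of what that declaration can look like (the URLs and parameter names here are illustrative, not eBay's or Kroger's actual configuration, and the exact fulfillment shape for feature matching may differ in your app):

```xml
<?xml version="1.0" encoding="utf-8"?>
<actions>
  <!-- "Hey Google, find <something> on Example App" -->
  <action intentName="actions.intent.GET_THING">
    <fulfillment urlTemplate="https://example.com/search{?query}">
      <!-- Assistant passes the recognized item name into the deep link. -->
      <parameter-mapping
          intentParameter="thing.name"
          urlParameter="query" />
    </fulfillment>
  </action>

  <!-- "Hey Google, open <feature> on Example App" -->
  <action intentName="actions.intent.OPEN_APP_FEATURE">
    <!-- In practice you map the recognized feature names to deep links;
         a single fixed page keeps this sketch small. -->
    <fulfillment urlTemplate="https://example.com/pay" />
  </action>
</actions>
```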
Vertical-specific built-in intents
For a deeper integration, we offer vertical-specific built-in intents (BIIs) that let Google take care of all the Natural Language Understanding (NLU) so you don’t have to. We first piloted App Actions in some of the most popular app verticals such as Finance, Ridesharing, Food Ordering, and Fitness. Today, we are announcing that we have grown our catalog to cover more than 60 intents across 10 verticals, adding new categories like Social, Games, Travel & Local, Productivity, Shopping and Communications.
For example, Twitter and Wayfair have already implemented these vertical built-in intents. So, if you have the Twitter app on your Android phone, try saying “Hey Google, post a Tweet” to see a Social vertical BII in action.
If you have the Wayfair app on your Android phone, try saying “Hey Google, buy accent chairs on Wayfair” to see a Shopping vertical BII in action.
Check out how you can get started with these built-in intents or explore creating custom intents today.
Custom Intents to highlight unique app experiences
Every app is unique, with its own features and capabilities, which may not match the list of available App Actions built-in intents. For cases where there isn't a built-in intent for your app functionality, you can instead create a custom intent. Like BIIs, custom intents follow the actions.xml schema and act as connection points between Assistant and your defined fulfillments.
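A hedged sketch of what such a declaration might look like in actions.xml (the intent name, query-patterns resource, and URL are all hypothetical):

```xml
<?xml version="1.0" encoding="utf-8"?>
<actions>
  <!-- Custom intents use the custom.actions.intent.* prefix; the example
       query phrases live in an Android string-array resource
       (e.g., res/values/arrays.xml). -->
  <action
      intentName="custom.actions.intent.RESERVE_SLOT"
      queryPatterns="@array/ReserveSlotQueries">
    <fulfillment urlTemplate="https://example.com/reserve" />
  </action>
</actions>
```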
Snapchat and Walmart use custom intents to extend their app’s functionality to Google Assistant. For example, if you have the Snapchat app on your Android phone, just say, “Hey Google, send a Snap using the cartoon face lens” to try their Custom Intent.
Or, if you have the Walmart app on your Android phone, just say, “Hey Google, reserve a time slot with Walmart” to schedule your next grocery pickup.
With more common, built-in, and custom intents available, every Android developer can now enable their app to fulfill Assistant queries tailored to exactly what their app offers. Developers can also use familiar tools such as Android Studio, and with just a few days of work, they can easily integrate their Android apps with Google Assistant.
Suggestions and Shortcuts for improving user discoverability
We are excited about these new improvements to App Actions, but we also understand that it's equally important for people to be able to discover your App Actions. We’re designing new touchpoints to help users easily learn about Android apps that support App Actions. For example, we’ll show suggestions recommending relevant App Actions even when the user doesn't mention the app name explicitly. If you say broadly “Hey Google, show me Taylor Swift”, we’ll highlight a suggestion chip that guides you to open the search result in Twitter. Google Assistant will also suggest apps proactively, depending on individual app usage patterns.
Android users will also be able to customize their experience with app shortcuts, creating their own way to automate their most common tasks by setting up quick phrases for the app functions they frequently use. For example, you can create a MyFitnessPal shortcut to easily track your calories throughout the day, and customize the phrase to say what you want, such as “Hey Google, check my calories.”
By simply saying “Hey Google, shortcuts”, users can set up and explore suggested shortcuts in the settings screen. We’ll also make proactive suggestions for shortcuts throughout the Assistant mobile experience, tailored to how you use your phone.
Build high quality conversational Actions for Smart Displays
Back in June, we launched new developer tools such as Actions Builder and Actions SDK, making it easier to design and build conversational Actions on Assistant, like games, for Smart Displays. Many partners, such as Cool Games and Sony, have already been building with these. We’re excited to share new updates that enable developers to build more high-quality native Assistant experiences, with new game development frameworks and better testing tools, and that make user discovery of those experiences better than ever.
New developer tools and features
Improved voices
We’ve heard your feedback that you need better voices to match the quality of the experiences you’re delivering on the Assistant. We’ve released two new English voices that take advantage of an improved prosody model to make Assistant sound more natural. Give it a listen.
These voices are now available and you can leverage them in your existing Actions by simply making the change in the Actions Console.
Interactive Canvas expansion
But what can you build with these new voices? Last year, we introduced Interactive Canvas, an API that lets you build custom experiences for the Assistant that can be controlled via both touch and voice using simple technologies like HTML, CSS, and JavaScript.
We’re expanding Interactive Canvas to Actions in the education and storytelling verticals, in addition to games. Whether you’re building an Action that teaches someone to cook, explains the phases of the moon, helps a family member with grammar, or takes you through an interactive adventure, you’ll have access to the full visual power of Interactive Canvas.
Improved testing to deliver high quality experiences
The Actions Testing API is a new programmatic way to test your critical user journeys and ensure there aren’t any broken conversation paths. Using this framework, you can run end-to-end tests in an isolated preview environment, run regression tests, and add continuous testing to your arsenal. This API will be released to general availability soon.
New Dialogflow migration tool
For those of you who built experiences using Dialogflow, we want you to enjoy the benefits of the new platform without having to build from scratch. That’s why we’re offering a migration tool inside the Actions Console that automates much of the work to move projects to the improved platform.
New site for game developers
Game developers, we built a new resource hub just for you. Boost your game design expertise with full source code to games, design best practices, interviews with game developers, tools, and everything you need to create voice-enabled games for Smart Displays.
Discovery
With more incredible experiences being built, we know it can be challenging to help users discover them and drive engagement. To make it easier for people to discover and engage with your experiences, we have invested in a slew of new discovery features:
New Built-in intents and the Learning Hub
We’ll soon be opening two new sets of built-in intents (BIIs) for public registration: Education and Storytelling. Registering your Actions for these intents allows users to discover them in a simple, natural way through general requests to Google Assistant. These new BIIs cover a range of intents in the Education and Storytelling domains and join Games as principal areas of investment for the developer ecosystem.
People will then be able to say "Hey Google, teach me something new" and be presented with a Learning Hub where they can browse different education experiences. For stories, users can simply say "Hey Google, tell me a story". Developers will soon be able to register for both new BIIs to get their experiences listed in these browsable catalogs.
Household authentication tokens and improved transactions
One of the exciting things about the Smart Display is that it’s an inherently communal device. So if you’re offering an experience that is meant to be enjoyed collaboratively, you need a way to share state between household members and between multiple devices. Let’s say you’re working on a puzzle and your roommate wants to help with a few pieces on the Smart Display. We’re introducing household authentication tokens so all users in a home can now share these types of experiences. This feature will be available soon via the Actions console.
Finally, we're making improvements to the transaction flow on Smart Displays. We want to make it easier for you to add seamless voice-based and display-based monetization capabilities to your experience. We've started by supporting voice-match as an option for payment authorization. And early next year, we'll also launch an on-display CVC entry.
Simplifying account linking and authentication
Once you build personalized and premium experiences, you need to make it as easy as possible to connect with existing accounts. To help streamline this process, we’re opening two betas: Link with Google and App Flip, for improved account linking flows to allow simple, streamlined authentication via apps.
Link with Google lets anyone who is already logged in to your Android or iOS app complete the linking flow with just a few clicks, without needing to re-enter credentials.
App Flip helps you build a better mobile account linking experience and decrease drop-off rates. App Flip allows your users to seamlessly link their accounts to Google without having to re-enter their credentials.
Assistant links
In addition to launching new channels of discovery for developer Actions, we also want to give you more control over how you and your users reach your Actions. Action links, a way to deep-link to your conversational Action, have been used with great success by partners like Sushiro, Caixa, and Giallo Zafferano. Now we are reintroducing this feature as Assistant links, which enable partners such as TD Ameritrade to deliver rich Google Assistant experiences on their websites as well as deep links to their Google Assistant integrations from anywhere on the web.
We are very excited about all these announcements, both across App Actions and native Assistant development. Whether you are exploring new ways to engage your users using voice via App Actions, or looking to build something new to engage users at home via Smart Displays, we hope you will leverage these new tools and features and share your feedback with us.
Posted by Eric Lai, Product Manager, Augmented Reality
Augmented reality (AR) can help you explore the world around you in new, seemingly magical ways. Whether you want to venture through the Earth’s unique habitats, explore historic cultures or even just find the shortest path to your destination, there’s no shortage of ways that AR can help you interact with the world.
That’s why we’re constantly improving ARCore — so developers can build amazing AR experiences that help us reimagine what’s possible.
In 2018, we introduced the Cloud Anchors API in ARCore, which lets people across devices view and share the same AR content in real-world spaces. Since then, we’ve been working on new ways for developers to use Cloud Anchors to make AR content persist and more easily discoverable.
Create long-lasting AR experiences
Last year, we previewed persistent Cloud Anchors, which lets people return to shared AR experiences again and again. With ARCore 1.20, this feature is now widely available to Android, iOS, and Unity mobile developers.
Developers all over the world are already using this technology to help people learn, share and engage with the world around them in new ways.
MARK, which we highlighted last year, is a social platform that lets people leave AR messages in real-world locations for friends, family and their community to discover. MARK is now available globally and will be launching the MARK Hope Campaign in the US to help people raise funds for their favorite charities and have their donations matched for a limited time.
REWILD Our Planet is an AR nature series produced by Melbourne-based studio PHORIA. The experience is based on the Netflix original documentary series Our Planet. REWILD uses ultra-high-definition video alongside AR content to let you venture into Earth’s unique habitats and interact with endangered wildlife. It originally launched in museums, but can now be enjoyed on your smartphone in your living room. As episodes of the show are released, persistent Cloud Anchors let you return to the same spot in your own home to see how nature is changing.
Changdeok ARirang is an AR tour guide app that combines the power of SK Telecom’s 5G with persistent Cloud Anchors. Visitors at Changdeokgung Palace in South Korea are guided by the legendary Haechi to relevant locations, where they can experience high-fidelity historical and cultural AR content. Changdeok ARirang at Home was also launched so that the same experience can be accessed from the comfort of your couch.
In Sweden, SJ Labs, the innovation arm of Swedish Railways, together with Bontouch, their tech innovation partner, uses persistent Cloud Anchors to help passengers find their way at Central Station in Stockholm, making it easier and faster for them to make their train departures.
Coming soon, Lowe’s Persistent View will let you design your home in AR with the help of an expert. You’ll be able to add furniture and appliances to different areas of your home to see how they’d look, and return to the experience as many times as needed before making a purchase.
Lowe’s Persistent View powered by Streem
If you’re interested in building AR experiences that last over time, you can learn more about persistent Cloud Anchors in our docs.
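For a feel of the flow, here's a hedged Java sketch for Android, assuming you already have an ARCore Session, a HitResult from a user tap, and your own storage for the returned anchor ID; error handling and per-frame state polling are omitted:

```java
import com.google.ar.core.Anchor;
import com.google.ar.core.Config;
import com.google.ar.core.HitResult;
import com.google.ar.core.Session;

/** Sketch: host a persistent Cloud Anchor, then resolve it on another device. */
class CloudAnchorSketch {
  void enableCloudAnchors(Session session) {
    Config config = new Config(session);
    config.setCloudAnchorMode(Config.CloudAnchorMode.ENABLED);
    session.configure(config);
  }

  Anchor host(Session session, HitResult hitResult) {
    // Keep the anchor alive for up to 365 days (ARCore 1.20+).
    Anchor cloudAnchor =
        session.hostCloudAnchorWithTtl(hitResult.createAnchor(), /* ttlDays= */ 365);
    // Poll cloudAnchor.getCloudAnchorState() each frame; once it reports
    // SUCCESS, persist cloudAnchor.getCloudAnchorId() on your backend.
    return cloudAnchor;
  }

  Anchor resolve(Session session, String savedCloudAnchorId) {
    // Any device can re-create the same anchor from the stored ID.
    return session.resolveCloudAnchor(savedCloudAnchorId);
  }
}
```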
Call for collaborators: test a new way to find AR content
As developers use Cloud Anchors to attach more AR experiences to the world, we also want to make it easier for people to discover them. That’s why we’re working on earth Cloud Anchors, a new feature that uses AR and global localization—the underlying technology that powers Live View features on Google Maps—to easily guide users to AR content. If you’re interested in early access to test this feature, you can apply here.
If you haven’t heard yet, we’re excited to announce the launch of developers.google.com/learn, a new one-stop destination for developers to acquire the knowledge and skills needed to develop software with Google's technology. Learn brings the learning content you already love from Google together into one easy-to-access place.
The home page of developers.google.com/learn
Previously, our educational content was separated by product area and platform. For example, you’d likely find Firebase Codelabs on firebase.google.com, and their video series on YouTube. We know you love these educational offerings, but they could be somewhat difficult to find unless you were already in the know.
To address this issue, we built Learn to act as a portal, linking all these amazing educational activities together. In addition, we came up with some handy new ways to organize the content, so you can easily find what you’re looking for the first time, every time.
Codelabs
For newbies: Codelabs walk you through the process of building a small application, or adding a new feature to an existing application. They cover a wide range of topics such as Android Wear, Google Compute Engine, Project Tango, and Google APIs on iOS.
If you’re already familiar with Codelabs, rest assured that not too much has changed. Codelabs still provide guided, hands-on coding experience for new and aspiring developers at no charge, and you can still access all of them through codelabs.developers.google.com.
What has changed is that now there’s a new way to experience Codelabs: through our Pathways.
Pathways
The home for Google Learning Pathways
Pathways are a new way to learn skills using all of the educational activities Google has developed for that skill. They organize selected videos, articles, blog posts, and Codelabs, together in one sequential learning experience so you can develop knowledge and skills at your own pace.
Let’s use Flutter as an example. Did you love The Boring Flutter Development Show, but your style of learning is a little more hands-on? Look no further than the Build apps with Flutter pathway, featuring explanatory videos from the Flutter team and step-by-step Codelabs designed to help you build your first Flutter app.
The Flutter pathway
All Pathways finish with an assessment, which you can pass to earn a badge.
Topics
Topics allow you to explore collections of related codelabs, pathways, news, and videos.
Are you a chatbot developer, or aspire to be one? You can find all the latest news and educational content regarding chatbots in one easy to find place.
The home for news and more about Chatbots
Developer Profiles
Here’s where the fun begins! You can show off all the new stuff you’ve learned on your Google Developer Profile.
To use the social features, first, create your unique Developer Profile on google.dev.
Create a Developer Profile on google.dev
Your first badge will be the Created Developer Profile badge.
Create a Developer Profile badge
Next, try one of the pathways we currently host. After completing the activities you’ll take a quiz, and if you pass, you’ll be awarded the badge for that pathway. You can share all of your earned badges on social media, and make your other developer friends jealous!
Posted by Kanstantsin Sokal, Software Engineer, MediaPipe team
Earlier this year, the MediaPipe Team released the Face Mesh solution, which estimates the approximate 3D face shape via 468 landmarks in real-time on mobile devices. In this blog, we introduce a new face transform estimation module that establishes a researcher- and developer-friendly semantic API useful for determining the 3D face pose and attaching virtual objects (like glasses, hats or masks) to a face.
The new module establishes a metric 3D space and uses the landmark screen positions to estimate common 3D face primitives, including a face pose transformation matrix and a triangular face mesh. Under the hood, a lightweight statistical analysis method called Procrustes Analysis is employed to drive robust, performant and portable logic. The analysis runs on the CPU and has a minimal speed/memory footprint on top of the original Face Mesh solution.
Figure 1: An example of virtual mask and glasses effects, based on the MediaPipe Face Mesh solution.
Introduction
The MediaPipe Face Landmark Model performs a single-camera face landmark detection in the screen coordinate space: the X- and Y- coordinates are normalized screen coordinates, while the Z coordinate is relative and is scaled as the X coordinate under the weak perspective projection camera model. While this format is well-suited for some applications, it does not directly enable crucial features like aligning a virtual 3D object with a detected face.
The newly introduced module moves away from the screen coordinate space towards a metric 3D space and provides the necessary primitives to handle a detected face as a regular 3D object. By design, you'll be able to use a perspective camera to project the final 3D scene back into the screen coordinate space with a guarantee that the face landmark positions are not changed.
Metric 3D Space
The Metric 3D space established within the new module is a right-handed orthonormal metric 3D coordinate space. Within the space, there is a virtual perspective camera located at the space origin and pointed in the negative direction of the Z-axis. It is assumed that the input camera frames are observed by exactly this virtual camera, and therefore its parameters are later used to convert the screen landmark coordinates back into the Metric 3D space. The virtual camera parameters can be set freely; however, for better results it is advised to set them as close to the real physical camera parameters as possible.
Figure 2: A visualization of multiple key elements in the metric 3D space. Created in Cinema 4D
Canonical Face Model
The Canonical Face Model is a static 3D model of a human face, which follows the 3D face landmark topology of the MediaPipe Face Landmark Model. The model serves two important functions:
Defines metric units: the scale of the canonical face model defines the metric units of the Metric 3D space. The default canonical face model uses a centimeter as its metric unit;
Bridges static and runtime spaces: the face pose transformation matrix is, in fact, a linear map from the canonical face model into the runtime face landmark set estimated on each frame. This way, virtual 3D assets modeled around the canonical face model can be aligned with a tracked face by applying the face pose transformation matrix to them (a tiny sketch of this follows the list).
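Aligning an asset vertex with the 4x4 pose matrix is just a homogeneous matrix-vector multiply. This plain-Java sketch assumes a column-major layout, which is common in graphics code but should be checked against what your pipeline actually emits:

```java
/**
 * Sketch: map one vertex of an asset modeled around the canonical face
 * into the runtime metric 3D space via the face pose transformation matrix.
 */
static float[] applyPoseTransform(float[] m /* 16 floats, column-major */,
                                  float x, float y, float z) {
  return new float[] {
      m[0] * x + m[4] * y + m[8]  * z + m[12],
      m[1] * x + m[5] * y + m[9]  * z + m[13],
      m[2] * x + m[6] * y + m[10] * z + m[14],
  };
}
```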
Face Transform Estimation
The face transform estimation pipeline is a key component, responsible for estimating face transform data within the Metric 3D space. On each frame, the following steps are executed in the given order:
Face landmark screen coordinates are converted into the Metric 3D space coordinates;
The face pose transformation matrix is estimated as a rigid linear mapping from the canonical face metric landmark set into the runtime face metric landmark set, in a way that minimizes the difference between the two (a standard formulation is sketched after this list);
A face mesh is created using the runtime face metric landmarks as the vertex positions (XYZ), while both the vertex texture coordinates (UV) and the triangular topology are inherited from the canonical face model.
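In equation form, step 2 is a classic orthogonal Procrustes problem. A standard unweighted, rigid formulation is sketched below, with $c_i$ the canonical face metric landmarks, $r_i$ the runtime face metric landmarks, and $N = 468$; MediaPipe's actual solver may weight landmarks differently:

$$
(R^{*}, t^{*}) \;=\; \operatorname*{arg\,min}_{R \in SO(3),\; t \in \mathbb{R}^{3}} \; \sum_{i=1}^{N} \left\| (R\,c_i + t) - r_i \right\|^{2}
$$

The face pose transformation matrix is then simply the 4x4 matrix packing $R^{*}$ and $t^{*}$ together.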
Effect Renderer
The Effect Renderer is a component that serves as a working example of a face effect renderer. It targets the OpenGL ES 2.0 API to enable real-time performance on mobile devices and supports the following rendering modes:
3D object rendering mode: a virtual object is aligned with a detected face to emulate an object attached to the face (example: glasses);
Face mesh rendering mode: a texture is stretched on top of the face mesh surface to emulate a face painting technique.
In both rendering modes, the face mesh is first rendered as an occluder straight into the depth buffer. This step helps to create a more believable effect by hiding invisible elements behind the face surface.
Figure 3: An example of face effects rendered by the Face Effect Renderer.
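Here is a hedged Java sketch of that occluder trick with OpenGL ES 2.0 on Android; drawFaceMesh and drawEffect are hypothetical stand-ins for your own draw calls:

```java
import android.opengl.GLES20;

/** Sketch: render the face mesh as an invisible occluder before the effect. */
class OccluderSketch {
  void renderFrame() {
    // Pass 1: write the face mesh into the depth buffer only; with color
    // writes disabled, it occludes without being visible.
    GLES20.glEnable(GLES20.GL_DEPTH_TEST);
    GLES20.glColorMask(false, false, false, false);
    drawFaceMesh();
    GLES20.glColorMask(true, true, true, true);

    // Pass 2: draw the virtual effect; fragments that fall behind the face
    // surface now fail the depth test and stay hidden.
    drawEffect();
  }

  void drawFaceMesh() { /* your face mesh draw call */ }
  void drawEffect() { /* your effect draw call */ }
}
```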
Using Face Transform Module
The face transform estimation module is available as a part of the MediaPipe Face Mesh solution. It comes with face effect application examples, available as graphs and mobile apps on Android or iOS. If you wish to go beyond examples, the module contains generic calculators and subgraphs - those can be flexibly applied to solve specific use cases in any MediaPipe graph. For more information, please visit our documentation.
Follow MediaPipe
We look forward to publishing more blog posts related to new MediaPipe pipeline examples and features. Please follow the MediaPipe label on the Google Developers Blog and the Google Developers Twitter account (@googledevs).
Acknowledgements
We would like to thank Chuo-Ling Chang, Ming Guang Yong, Jiuqiang Tang, Gregory Karpiak, Siarhei Kazakou, Matsvei Zhdanovich and Matthias Grundman for contributing to this blog post.
Posted by Jennifer Kohl, Program Manager, Developer Community Programs
On October 16-18, thousands of developers from all over the world are coming together for DevFest 2020, the largest virtual weekend of community-led learning on Google technologies.
As people around the world continue to adapt to spending more time at home, developers yearn for community now more than ever. In years past, DevFest was a series of in-person events over a season. For 2020, the community is coming together in a whole new way – virtually – over one weekend to keep developers connected when they may want it the most.
The speakers
The magic of DevFest comes from the people who organize and speak at the events: developers with various backgrounds and skill levels, all with their own unique perspectives. In different parts of the world, you can find a DevFest session in many local languages. DevFest speakers are made up of various types of technologists, including kid developers, self-taught programmers from rural areas, and CEOs and CTOs of startups. DevFest also features a wide range of speakers from Google, Women Techmakers, Google Developer Experts, and more. Together, these friendly faces, with many different perspectives, create a unique and rich developer conference.
The sessions and their mission
Hosted by Google Developer Groups, this year’s sessions include technical talks and workshops from the community, and a keynote from Google Developers. Through these events, developers will learn how Google technologies help them develop, learn, and build together.
At our core, Google Developers believes community-led developer events like these are an integral part of the advancement of technology in the world.
For this reason, Google Developers supports the community-led efforts of Google Developer Groups and their annual tentpole event, DevFest. Google provides esteemed speakers from the company and custom technical content produced by developers at Google. The impact of DevFest is really driven by the grassroots, passionate GDG community organizers who volunteer their time. Google Developers is proud to support them.
The attendees
During DevFest 2019, 138,000+ developers participated across 500+ DevFests in 100 countries. While 2020 is a very different year for events around the world, GDG chapters are galvanizing their communities to come together virtually for this global moment. The excitement for DevFest continues as more people seek new opportunities to meet and collaborate with like-minded, community-oriented developers in our local towns and regions.
Join the conversation on social media with #DevFest.
Posted by Gabriel Rubinsky, Senior Product Manager
Today, we’re excited to announce that the Device Access Console is available.
The Device Access program lets individuals and qualified partners securely access and control Nest products with their apps and solutions.
At the heart of the Device Access program is the Smart Device Management API. Since we announced the program, Alarm.com, Control4, DISH, OhmConnect, NRG Energy, and Vivint Smart Home have successfully completed the Early Access Program (EAP) with Nest thermostat, camera, or doorbell traits. In the coming months, we expect additional devices to be supported and more smart home partners to launch their new integrations as well.
Enhanced privacy and security
The Device Access program is built on a foundation of privacy and security. The program requires partners to submit qualified use cases and complete a security assessment before they are allowed to use the Smart Device Management API for commercial purposes. This process gives our users confidence that commercial partners offering integrated Nest solutions have data protections and safeguards in place that meet our privacy and security standards.
Nest device access and control
The Device Access program currently allows qualified partners to integrate directly with Nest devices: they can control thermostats, access and view camera feeds, and receive doorbell notifications with images. All qualified partner solutions and services will require end-user consent before they can access, control, and manage Nest devices as part of their service offerings, either through a partner client app or a service platform. Ultimately, this gives users more choice in how to control their home and their own generated data.
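For a sense of what this looks like in practice, here is a hedged sketch of a thermostat command via the Smart Device Management API; the project ID, device ID, and token are placeholders, and your integration must be authorized for the relevant thermostat traits:

```
POST https://smartdevicemanagement.googleapis.com/v1/enterprises/PROJECT_ID/devices/DEVICE_ID:executeCommand
Authorization: Bearer ACCESS_TOKEN
Content-Type: application/json

{
  "command": "sdm.devices.commands.ThermostatTemperatureSetpoint.SetHeat",
  "params": { "heatCelsius": 21.0 }
}
```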
If you’re a developer or a Nest user interested in the Device Access program or access to the sandbox development environment,* you can find more information on our Device Access site.
Device Access for Commercial Developers
The Device Access program allows trusted partners to offer access, management, and control of Nest devices within the partner’s app, solution, and ecosystem. It allows developers to test all API traits in the sandbox environment, before moving forward with commercial integration. Learn more
Device Access for Individuals
For individual smart home developer enthusiasts, you can register to access the sandbox development environment, allowing you to directly control your own Nest devices through your private integrations and automations. Learn more
We’re doing the work to make Nest devices more secure and protect user privacy long into the future. This means expanding privacy and data security programs, and delivering flexibility for our customers to use thousands of products from partners to create a connected, helpful home.
* Registration consists of the acceptance of the Google API and Nest Device Access Sandbox Terms of Service, along with a one-time, non-refundable nominal fee per account
Posted by David Ko, Engineering Director; Jeff Lim, Software Engineer; Pankaj Gupta, Director of Engineering; Will Horn, Software Engineer
Three years ago, when we launched Google Pay India (then called Tez), our vision was to create a simple and secure payment app for everyone in India. We started with the premise of making payments simple and built a user interface that made making payments as easy as starting a conversation. The simplicity of the design resonated with users instantly and over time, we have added functionality to help users do more than just make payments. Today users can pay their bills, recharge their phones, get loans instantly through banks, buy train tickets and much more all within the app. Last year, we also launched the Spot Platform in India, which allows merchants to create branded experiences within the Google Pay app so they can connect with their customers in a more engaging way.
As we looked at scaling our learnings from India to other parts of the world, we wanted a fast and efficient development environment that was modern and engaging, with the flexibility needed to keep the UI clean. More importantly, we wanted one that enabled us to write once and deploy to both iOS and Android, reaching a wide variety of users.
It was clear that we would need to build it ourselves, and ensure that it worked across a wide variety of payment rails, infrastructure, and operating systems. But with the momentum we had for Google Pay in India, and the fast-evolving product features, we had limited engineering resources to put behind this effort.
After evaluating various options, it was easy to pick Flutter as the obvious choice. The three things that made it click for us were:
We could write once in Dart and deploy on both iOS and Android, which led to a uniform, best-in-class experience across both platforms;
The just-in-time compiler with hot reload during development enabled rapid iteration on the UI, which tremendously increased developer efficiency; and
Ahead-of-time compilation ensured high-performance deployment.
Now the task was to get it done. We started with a small team of three software engineers from both Android and iOS. Those days were focused and intense. To start, we created a vertical slice of the app: home page, chat, and payments (with the critical native plugins for payments in India). The team first tried a hybrid approach, then decided on a clean rewrite when that proved unscalable.
We ran a few small sprints for other engineers on the team to give them an opportunity to rewrite something in Flutter and provide feedback. Everyone loved Flutter — you could see the thrill on people’s faces as they talked about how fast it was to build a user interface. One of the most exciting things was that the team could get instant feedback while developing. We could also leverage the high quality widgets that Flutter provided to make development easier.
After carefully weighing the risks and our case for migration, we decided to go all in with Flutter. It was a monumental rewrite of a moving target: the existing app continued to evolve while we were rewriting features. After many months of hard work, the Google Pay Flutter implementation is now available in open beta in India and Singapore. Our users in India and Singapore can visit the Google Play Store page for Google Pay to opt into the beta program and experience the latest app built on Flutter. Next, we are looking forward to launching Google Pay on Flutter to everyone across the world on iOS and Android.
We hope this gives you a fair idea of how to approach and launch a complete rewrite of an active app that is used by millions of users and businesses of all sizes. It would not have been possible for us to deliver this without Flutter’s continued advances on the platform. Huge thanks to the Flutter team, as today, we are standing on their shoulders!
When fully migrated, Google Pay will be one of the largest production deployments on the Flutter platform. We look forward to sharing more learnings from our transition to Flutter in the future.