Open Sourcing Resonance Audio

Posted by Eric Mauskopf, Product Manager

Spatial audio adds to your sense of presence when you're in VR or AR, making it feel and sound like you're surrounded by a virtual or augmented world. And regardless of the display hardware you're using, spatial audio makes it possible to hear sounds coming from all around you.

Resonance Audio, our spatial audio SDK launched last year, enables developers to create more realistic VR and AR experiences on mobile and desktop. We've seen a number of exciting experiences emerge across a variety of platforms using our SDK. Recent examples include apps like Pixar's Coco VR for Gear VR, Disney's Star Wars™: Jedi Challenges AR app for Android and iOS, and Runaway's Flutter VR for Daydream, which all used Resonance Audio technology.

To accelerate adoption of immersive audio technology and strengthen the developer community around it, we’re opening Resonance Audio to a community-driven development model. By creating an open source spatial audio project optimized for mobile and desktop computing, any platform or software development tool provider can easily integrate with Resonance Audio. More cross-platform and tooling support means more distribution opportunities for content creators, without the worry of investing in costly porting projects.

What's Included in the Open Source Project

As part of our open source project, we're providing a reference implementation of YouTube's Ambisonic-based spatial audio decoder, compatible with the same Ambisonics format (Ambix ACN/SN3D) used by others in the industry. Using our reference implementation, developers can easily render Ambisonic content in their VR media and other applications, while benefiting from Ambisonics' open source, royalty-free model. The project also includes encoding, sound field manipulation, and decoding techniques, as well as head-related transfer functions (HRTFs) that we've used to achieve rich spatial audio that scales across a wide spectrum of device types and platforms. Lastly, we're making our entire library of highly optimized DSP classes and functions open to all. This includes resamplers, convolvers, filters, delay lines, and other DSP capabilities. Additionally, developers can now use Resonance Audio's brand new Spectral Reverb, an efficient, high-quality, constant-complexity reverb effect, in their own projects.
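To make the Ambisonics format concrete: in first-order AmbiX (ACN channel order W, Y, Z, X with SN3D normalization), a basic way to render the sound field is to sample it toward a virtual speaker direction. The sketch below is illustrative math only, not the SDK's API, and it omits the per-order weighting a production decoder applies:

```javascript
// Illustrative first-order Ambisonic "sampling" decode; this is NOT
// Resonance Audio's API. AmbiX uses ACN channel order (W, Y, Z, X)
// with SN3D normalization. `dir` is a unit vector pointing at a
// virtual speaker.
function decodeTowardDirection(w, y, z, x, dir) {
  // W is the omnidirectional component; Y, Z, X are figure-of-eight
  // components along the y, z, and x axes. Real decoders add weights
  // (e.g. max-rE) that this sketch omits.
  return 0.5 * (w + y * dir.y + z * dir.z + x * dir.x);
}

// A source hard-panned to the front (+x) is loudest toward the front
// and silent toward the back.
var front = decodeTowardDirection(1, 0, 0, 1, { x: 1, y: 0, z: 0 });
var back = decodeTowardDirection(1, 0, 0, 1, { x: -1, y: 0, z: 0 });
```

A full decoder sums contributions like this over a speaker layout (or, for headphones, convolves each virtual speaker with an HRTF).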

We've open sourced Resonance Audio as a standalone library and associated engine plugins, VST plugin, tutorials, and examples with the Apache 2.0 license. Most importantly, this means Resonance Audio is yours, so you're free to use Resonance Audio in your projects, no matter where you work. And if you see something you'd like to improve, submit a GitHub pull request to be reviewed by the Resonance Audio project committers. While the engine plugins for Unity, Unreal, FMOD, and Wwise will remain open source, going forward they will be maintained by project committers from our partners, Unity, Epic, Firelight Technologies, and Audiokinetic, respectively.

If you're interested in learning more about Resonance Audio, check out the documentation on our developer site. If you want to get more involved, visit our GitHub to access the source code, build the project, download the latest release, or even start contributing. We're looking forward to building the future of immersive audio with all of you.

Student applications open for Google Summer of Code 2018

Originally posted by Josh Simmons from the Google Open Source Team on the Google Open Source Blog.

Ready, set, go! Today we begin accepting applications from university students who want to participate in Google Summer of Code (GSoC) 2018. Are you a university student? Want to use your software development skills for good? Read on.

Now entering its 14th year, GSoC gives students from around the globe an opportunity to learn the ins and outs of open source software development while working from home. Students receive a stipend for successful contribution to allow them to focus on their project for the duration of the program. A passionate community of mentors help students navigate technical challenges and monitor their progress along the way.

Past participants say the real-world experience that GSoC provides sharpened their technical skills, boosted their confidence, expanded their professional network and enhanced their resume.

Interested students can submit proposals on the program site between now and Tuesday, March 27, 2018 at 16:00 UTC.

While many students began preparing in February when we announced the 212 participating open source organizations, it's not too late to start! The first step is to browse the list of organizations and look for project ideas that appeal to you. Next, reach out to the organization to introduce yourself and determine if your skills and interests are a good fit. Since spots are limited, we recommend writing a strong proposal and submitting a draft early so you can get feedback from the organization and increase the odds of being selected.

You can learn more about how to prepare in the video below and in the Student Guide.

You can find more information on our website, including a full timeline of important dates. We also highly recommend perusing the FAQ and Program Rules, as well as joining the discussion mailing list.

Remember to submit your proposals early as you only have until Tuesday, March 27 at 16:00 UTC. Good luck to all who apply!

New creative ways to build with Actions on Google

Posted by Brad Abrams, Group Product Manager, & Chris Ramsdale, Product Manager

Though it's been just a few short weeks since we released a new set of features for Actions on Google, we're kicking off our presence at South by Southwest (SXSW) with a few more updates for you.

SXSW brings together creatives interested in fusing marketing and technology, and what better way to start the festival than with new features that enable you to be more creative, and to build new types of Actions that help your users get more things done.

Support for media playback and better content carousels

This past year, we've heard from many developers who want to offer great media experiences as part of their Actions. While you can already make your podcasts discoverable to Assistant users, our new media response API allows you to develop deeper, more-engaging audio-focused conversational Actions that include, for example, clips from TV shows, interactive stories, meditation, relaxing sounds, and news briefings.

Your users can control this audio playback on voice-activated speakers like Google Home, Android phones, and more devices coming soon. On Android phones, they can even use the controls on their phone's notification area and lock screen.

Some developers who are already using our new media response API include The Daily Show, Calm, and CNBC.

To get started using our media response API, head over to our documentation to learn more.
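As a rough sketch of what the API involves: a media response is just structured JSON returned from your fulfillment. The object below shows the general shape; the field names follow the Actions on Google rich-response format as we understand it, but verify them against the documentation, and the titles and URL are placeholders:

```javascript
// Sketch of a media response payload. Field names mirror the Actions on
// Google rich response format but should be checked against the docs;
// all values here are placeholders.
var mediaResponse = {
  mediaResponse: {
    mediaType: 'AUDIO',
    mediaObjects: [{
      name: 'Daily briefing',               // title shown in the player UI
      description: 'Top stories for today', // optional subtitle
      contentUrl: 'https://example.com/briefing.mp3' // audio must be served over HTTPS
    }]
  }
};
```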

And if your content is more visual than audio-based, we're also introducing a browse carousel for your Actions that allows you to show browsable content -- e.g., products, recipes, places -- with a visual experience that users can simply scroll through, left to right. See an example of how this would look to your users, below, then learn more about our browse carousel here.
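Structurally, a browse carousel is a list of items, each with a title and a URL to open when tapped. The sketch below is a hypothetical payload shape with placeholder titles and URLs; check the browse carousel documentation for the exact fields:

```javascript
// Hypothetical browse carousel payload. Item fields mirror the Actions on
// Google rich response format; titles and URLs are placeholders.
var browseCarousel = {
  carouselBrowse: {
    items: [{
      title: 'Margherita pizza',
      description: 'Classic tomato and mozzarella',
      openUrlAction: { url: 'https://example.com/recipes/margherita' }
    }, {
      title: 'Four-cheese pizza',
      description: 'For the indecisive',
      openUrlAction: { url: 'https://example.com/recipes/four-cheese' }
    }]
  }
};
```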

Daily updates and push notifications on phones, now available to your users

While having a great user experience is important, we also want to ensure you have the right tools to re-engage your users so they keep coming back to the experience you've built. To that end, a few months ago, we introduced daily updates and push notifications as a developer preview.

Starting today, your users will have access to this feature. Esquire is already using it to send daily "wisdom tips", Forbes sends a quote of the day, and SpeedyBit sends daily updates of cryptocurrency prices to keep them in the know on market fluctuations.

As soon as you submit your Action for review with daily updates or push notifications enabled, and it's approved, your users will be able to opt into this re-engagement channel. Learn more in our docs.

Build connected experiences on Google Assistant for the paying users of your Android app

Actions on Google now allows you to access digital purchases (including paid app purchases, in-app purchases, and in-app subscriptions) that your users make from your Android app. By doing so, you can recognize when you're interacting with a user who's paid for a premium experience on your Android app, and similarly serve that experience in your Action, across devices.

And the best part? This is all done behind the scenes, so the user doesn't need to take any additional steps, like signing in, for you to provide this experience. Economist Espresso, for example, now knows when a user has already paid for a subscription with Google Play, and then offers an upgraded experience to the same user through their Action.

A new way to extend an embedded Google Assistant

In December of last year we announced the addition of Built-in Device Actions to the Google Assistant SDK for devices. This feature allows developers to extend any Google Assistant that is embedded in their device using traits and grammars that are maintained by Google and are largely focused on home automation, for example "turn on", "turn off", and "turn the temperature down".

Today we're announcing the addition of Custom Device Actions, a more flexible kind of Device Action that lets developers specify any grammar and command to be executed by their device. Once you build these Custom Device Actions, users will be able to activate specific capabilities through the Google Assistant. This leads to more natural ways for users to interact with their Assistant-enabled devices, including the ability to utilize more specific device capabilities.

Before:

"Ok Google, turn on the oven"

"Ok, turning on the oven"

After:

"Ok Google, set the oven to convection and preheat to 350 degrees"

"Ok, setting the oven to convection and preheating to 350 degrees"

To give you a sense of how this might work in the real world, check out a prototype, Talk to the Light from the talented Red Paper Heart team, that shows a zany use of this functionality. Then, check out our documentation to learn more about how you can start building these for your devices. We've provided a technical case study from Red Paper Heart and their code repository in case you want to see how they built it.

In addition to Custom Device Actions, we've also integrated device registration into the Actions on Google console, allowing developers to get up and running more quickly. To get started, check out the latest documentation and console.

A few creative explorations to inspire you

We also teamed up with a few cutting-edge teams to explore the creative potential of the Actions on Google platform. Following the Voice experiments the Google Creative Lab released a few months ago, these teams released four new experiments.

The code for all of these Actions is open source and is accompanied by in-depth technical case studies in which each team shares what they learned while developing their Actions.

Case studies of Actions, built with Dialogflow

Ready to build? Take a look at our three new case studies with KLM Royal Dutch Airlines, Domino's, and Ticketmaster. Learn about their development journey with Dialogflow and how the Actions they built help them stay ahead of the conversational technology curve, be where their customers are, and assist throughout the entire user journey.

We hope these updates get your creative juices flowing and inspire you to build even more Actions and embed the Google Assistant on more devices. Don't forget that once you publish your first Action you can join our community program* and receive your exclusive Google Assistant t-shirt and up to $200 of monthly Google Cloud credit. Thanks for being a part of our community, and as always, if you have ideas or requests that you'd like to share with our team, don't hesitate to join the conversation.


*Some countries are not eligible to participate in the developer community program; please review the terms and conditions

Congratulating the latest Open Source Peer Bonus winners

Originally posted by Maria Webb from the Google Open Source Team on the Google Open Source Blog.

To kick off the new year, we're pleased to announce the first round of Open Source Peer Bonus winners. First started by the Google Open Source team seven years ago, this program encourages Google employees to express their gratitude to open source contributors.

As part of the program, Googlers nominate open source contributors outside of the company for their contributions to open source projects, including those used by Google. Nominees are reviewed by a team of volunteers and the winners receive our heartfelt thanks with a token of our appreciation.

So far more than 600 contributors from dozens of countries have received Open Source Peer Bonuses for volunteering their time and talent to over 400 open source projects. You can find some of the previous winners in these blog posts.

We'd like to recognize the latest round of winners and the projects they worked on. Listed below are the individuals who gave us permission to thank them publicly:

  • Adrien Devresse (Abseil C++)
  • Weston Ruter (AMP Plugin for WordPress)
  • Thierry Muller (AMP Plugin for WordPress)
  • Adam Silverstein (AMP Project)
  • Levi Durfee (AMP Project)
  • Fabian Wiles (Angular)
  • Paul King (Apache Groovy)
  • Eric Eide (C-Reduce)
  • John Regehr (C-Reduce)
  • Yang Chen (C-Reduce)
  • Ajith Kumar Velutheri (Chromium)
  • Orta Therox (CocoaPods)
  • Idwer Vollering (coreboot)
  • Paul Ganssle (dateutil)
  • Zach Leatherman (Eleventy)
  • Daniel Stone (freedesktop.org)
  • Nathaniel Welch (Fog for Google)
  • Sergiu Deitsch (glog)
  • Jonathan Bluett-Duncan (Guava)
  • Karol Szczepański (Heracles.ts)
  • Paulus Schoutsen (Home Assistant)
  • Shannon Coen (Istio)
  • Max Beatty (jsPerf)
  • Friedel Ziegelmayer (Karma)
  • Davanum Srinivas (Kubernetes)
  • Jennifer Rondeau (Kubernetes)
  • Jessica Yao (Kubernetes)
  • Qiming Teng (Kubernetes)
  • Zachary Corleissen (Kubernetes)
  • Reinhard Nägele (Kubernetes Charts)
  • Erez Shinan (Lark)
  • Alex Gaynor (Mercurial)
  • Anna Henningsen (Node.js)
  • Michaël Zasso (Node.js)
  • Michael Dalessio (Nokogiri)
  • Gina Häußge (OctoPrint)
  • Michael Stramel (Polymer)
  • La Vesha Parker (Progressive HackNight)
  • Ian Stapleton Cordasco (Python Code Quality Authority)
  • Fabian Henneke (Secure Shell)
  • Rob Landley (Toybox)
  • Peter Wong (V8)
  • Timothy Gu (Web platform & Node.js)
  • Ola Hugosson (WebM)
  • Dominic Symes (WebM & AOMedia)

To each and every one of you: thank you for your contributions to the open source community and congratulations!

Beginnings, Reinventions, and Leaps

Grow with Google Developer Scholars Advancing their Lives and Careers

This is a cross-post with our partner Udacity

The Grow with Google Developer Scholarship—a US-focused program offering learning opportunities to tens of thousands of aspiring developers—has given rise to a wealth of powerful stories from amazing individuals who are using their scholarships to pursue their goals and achieve their dreams. Some are creating new beginnings in new places. Others are reinventing their paths and transforming their futures. Still others are advancing their careers and growing their businesses.

Rei Blanco, Paul Koutroulakis, and Mary Weidner exemplify what the scholarship program is all about.

A New Beginning in Lansing, Michigan

Rei Blanco immigrated to Lansing, Michigan from Cuba seven years ago. He began learning English, and found opportunities to practice his skills in jobs ranging from housekeeping to customer support. Today, as a Grow with Google Developer Scholarship recipient, he is learning a whole new language—JavaScript—as well as HTML and CSS. Rei earned himself a spot as a student in the Front-End Web Developer challenge course, and is now fully immersed, and loving every part of his journey to becoming a developer.

"When I get home, I immediately go to the basement and start coding!"

Rei studies several hours every night. He credits his partner for the non-stop encouragement she gives him. He embraces a daily workout routine that keeps him focused and energized. He also praises the student community for helping him to advance successfully through the program.

"The live help channel in our Slack workspace is great. Once you get stuck, you get immediate help or you can help out others."

As his skills grow, so does his confidence. A year ago, when he first began taking online coding courses, he felt out of place attending a local developer meetup. These days, he's a busy member of a student group working on outside projects, and has plans to attend many more in-person events. Rei is taking his developer career step-by-step—he's bolstering his chances of earning freelance work by steadily adding new projects to his portfolio, and has his sights set on a full-time job in front-end web development.

A Reinvention in Columbia, South Carolina

Paul Koutroulakis was a 20-year restaurant industry success story. For 10 of those years, he even owned his own establishment. But like it was for so many others, 2008 was a terrible year. Sales dropped, and the burden became too great. Paul lost his restaurant, and ultimately, his home.

Despite the hardships, Paul retained the spirit that had made him a success in the first place, and he was determined to persevere. But he also saw the writing on the wall, and knew he needed to make a change.

"This was a wakeup call with my resume. I didn't want to be an old man managing a restaurant."

From his research, he learned that demand for web developers was growing rapidly, and he recognized the opportunity he was looking for. From that moment forward, Paul focused his energy on becoming a developer.

He worked daytime hours at a logistics company, and started taking computer programming classes at night at a local technical college. Paul earned his associate's degree, but he wasn't done. He felt the pressure to go the extra mile, and made the commitment to do so by competing for, and ultimately earning, a Grow with Google Developer Scholarship.

"I need to make myself more marketable. I would like to show that age doesn't matter and that anyone can make a great contribution to a company or field if they are passionate about learning."

Today, Paul is focused on building a project portfolio, and wants to land a job as an entry-level web developer. His long-term goal is to enter the field of cybersecurity. Despite the hard work and long hours, he's excited by the skills he's learning, and by the transformation he's undergone. Best of all, he knows it's all worth it.

"Even if it's a late night of studying, it's better than coming home at one or two in the morning after a long shift at the restaurant."

A Leap Forward in Pittsburgh, Pennsylvania

Mary Weidner's degree was in finance, and after graduating, she went right into the field, spending several years in a series of finance-related roles. Simultaneously, she was nurturing an interest in coding, even going so far as to take a few free online courses. Everything changed for her when a friend asked her to join him as co-founder for a fitness app he was developing. She was intrigued, and agreed to take the leap. As one-half of a two-person team, she found herself immediately supporting all aspects of the fledgling operation, from launching the database, to filming videos.

Mary's hobbyist-level interest in coding transformed into a primary focus, as she realized early on that building her tech skills would significantly enhance her ability to grow the business. But there was more than just operational necessity at work—Mary recognized she was facing an additional set of challenges.

"Not only do I want to learn how to code in order to help my company, I also want to be more respected in the industry. Being a woman and a non-technical co-founder is not the easiest place to be in tech."

As a Grow with Google Developer Scholarship recipient, Mary is now engaged in an intensive learning program, and her skills are accelerating accordingly.

Strongr Fastr officially launched in January 2018, and has already been downloaded by thousands of users, boasting a user rating of 4.7 stars. It's an impressive start, but neither Mary nor her partner is resting on their laurels. They're motivated to grow and improve, and are focused on "finding traction channels that work, and trying to find that scalable groove."

Despite her head-down determination and focus, Mary's approach to learning is a spirited one, and she's enjoying every minute of her big leap forward.

"I'm loving it. It's really cool to have apps on my phone that I've made, even if they're the most simple thing. It's very empowering and just ... cool!"

Growing Careers and Skills Across the US

Grow with Google is a new initiative to help people get the skills they need to find a job. Udacity is excited to partner with Google on this powerful effort, and to offer the Developer Scholarship program.

Grow with Google Developer scholars come from different backgrounds, live in different cities, and are pursuing different goals in the midst of different circumstances, but they are united by their efforts to advance their lives and careers through hard work, and a commitment to self-empowerment through learning. We're honored to support their efforts, and to share the stories of scholars like Rei, Paul, and Mary.

Making progress (bars) with Slides Add-ons

Originally posted on the G Suite Developers Blog by Wesley Chun (@wescpy), Developer Advocate and Grant Timmerman, Developer Programs Engineer, G Suite

We recently introduced Google Slides Add-ons so developers can add functionality from their apps to ours. Here are examples of Slides Add-ons that some of our partners have already built—remember, you can also add functionality to other apps outside of Slides, like Docs, Sheets, Gmail and more.

When it comes to Slides, if your users are delivering a presentation or watching one, sometimes it's good to know how far along you are in the deck. Wouldn't it be great if Slides featured progress bars?

In the latest episode of the G Suite Dev Show, G Suite engineer Grant Timmerman and I show you how to do exactly that—implement simple progress bars using a Slides Add-on.

Using Google Apps Script, we craft this add-on which lets users turn on or hide progress bars in their presentations. The progress bars are represented as appropriately-sized rectangles at the bottom of slide pages. Here's a snippet of code for createBars(), which adds the rectangle for each slide.

var BAR_ID = 'PROGRESS_BAR_ID';
var BAR_HEIGHT = 10; // px
var presentation = SlidesApp.getActivePresentation();

function createBars() {
  var slides = presentation.getSlides();
  deleteBars();
  for (var i = 0; i < slides.length; ++i) {
    var ratioComplete = (i / (slides.length - 1));
    var x = 0;
    var y = presentation.getPageHeight() - BAR_HEIGHT;
    var barWidth = presentation.getPageWidth() * ratioComplete;
    if (barWidth > 0) {
      var bar = slides[i].insertShape(SlidesApp.ShapeType.RECTANGLE,
          x, y, barWidth, BAR_HEIGHT);
      bar.getBorder().setTransparent();
      bar.setLinkUrl(BAR_ID);
    }
  }
}
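The only real geometry in createBars() is the width calculation: the first slide gets a zero-width (omitted) bar and the last slide gets a full-width bar. That rule can be checked as plain JavaScript; the function name below is ours, not part of the quickstart:

```javascript
// The bar-width rule from createBars(), extracted as a pure function.
// slideIndex is zero-based; the last slide yields the full page width.
// (Helper name is ours, not from the Slides Add-on Quickstart.)
function progressBarWidth(slideIndex, slideCount, pageWidth) {
  var ratioComplete = slideIndex / (slideCount - 1);
  return pageWidth * ratioComplete;
}
```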

To learn more about this sample and see all of the code, check out the Google Slides Add-on Quickstart. This is just one example of what you can build using Apps Script and add-ons; here's another example where you can create a slide presentation from a collection of images using a Slides Add-on.

If you want to learn more about Apps Script, check out the video library or view more examples of programmatically accessing Google Slides here. To learn about using Apps Script to create other add-ons, check out this page in the docs.

Machine Learning Crash Course

Posted by Barry Rosenberg, Google Engineering Education Team

Today, we're happy to share our Machine Learning Crash Course (MLCC) with the world. MLCC is one of the most popular courses created for Google engineers. Our engineering education team has delivered this course to more than 18,000 Googlers, and now you can take it too! The course develops intuition around fundamental machine learning concepts.

What does the course cover?

MLCC covers many machine learning fundamentals, starting with loss and gradient descent, then building through classification models and neural nets. The programming exercises introduce TensorFlow. You'll watch brief videos from Google machine learning experts, read short text lessons, and play with educational gadgets devised by instructional designers and engineers.
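To give a flavor of the course's starting point, here is a minimal gradient descent loop fitting a one-parameter model y = w * x under squared loss. This is a toy sketch of the concept, written in plain JavaScript rather than the course's TensorFlow exercises:

```javascript
// Toy gradient descent on mean squared loss for the model y = w * x.
// Illustrates the loss/gradient loop MLCC begins with; not course code.
function fitSlope(xs, ys, learningRate, steps) {
  var w = 0; // initial guess for the slope
  for (var step = 0; step < steps; ++step) {
    var grad = 0;
    for (var i = 0; i < xs.length; ++i) {
      // d/dw of (w*x - y)^2 is 2 * (w*x - y) * x
      grad += 2 * (w * xs[i] - ys[i]) * xs[i];
    }
    // Step against the mean gradient to reduce the loss.
    w -= learningRate * (grad / xs.length);
  }
  return w;
}
```

On data generated by y = 2x, the loop converges to a slope near 2.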

How much does it cost?

MLCC is free.

I don't get it. Why are you offering MLCC to everyone?

We believe that the potential of machine learning is so vast that every technical person should learn machine learning fundamentals. We're offering the course in English, Spanish, Korean, Mandarin, and French.

Does the real world make an appearance in the course?

Yes, MLCC ends with short lessons on designing real-world machine learning systems. MLCC also contains sections enabling you to learn from the mistakes that our experts have made.

Do I have enough mathematical background to understand MLCC?

Understanding a little algebra and a little elementary statistics (mean and standard deviation) is helpful. If you understand calculus, you'll get a bit more out of the course, but calculus is not a requirement. MLCC contains a helpful section to refresh your memory on the background math.

Is this a programming course?

MLCC contains some Python programming exercises. However, those exercises comprise only a small percentage of the course, which non-programmers may safely skip.

I'm new to Python. Will the programming exercises be too hard for me?

Many of the Google engineers who took MLCC didn't know any Python but still completed the exercises. That's because you'll write only a few lines of code during the programming exercises. Instead of writing code from scratch, you'll primarily manipulate the values of existing variables. That said, the code will be easier to understand if you can program in Python.

But how will I learn machine learning concepts without programming?

MLCC relies on a variety of media and hands-on interactive tools to build intuition in fundamental machine learning concepts. You need a technical mind, but you don't need programming skills.

How can I show off my machine learning skills?

As your knowledge of machine learning grows, you can test your skills by helping others. We're also kicking off a Kaggle competition to help DonorsChoose.org. DonorsChoose.org is an organization that empowers public school teachers from across the country to request materials and experiences they need to help their students grow. Teachers submit hundreds of thousands of project proposals each year; 500,000 proposals are expected in 2018.

Currently, DonorsChoose.org relies on a large number of volunteers to screen the proposals. The Kaggle competition hopes to help DonorsChoose.org use ML to accelerate the screening process, which will enable volunteers to make better use of their time. In addition, this work should help increase the consistency of decisions about projects.

Is MLCC Google's only machine learning educational project?

MLCC is merely one of many ways to learn about machine learning. To explore the universe of machine learning educational opportunities from Google, see our new Learn with Google AI program at g.co/learnwithgoogleai. To start on MLCC, see g.co/machinelearningcrashcourse.

Develop bot integrations with the Hangouts Chat platform and API

Posted by Mike Sorvillo, Product Manager, Hangouts Chat and Wesley Chun (@wescpy), Developer Advocate, G Suite

You might have seen that we announced new features in G Suite to help teams transform how they work, including Hangouts Chat, a new messaging platform for enterprise collaboration on web and mobile. Perhaps more interesting is that starting today you'll be able to craft your own bot integrations using the Hangouts Chat developer platform and API.

Now, you can create bots to streamline work—automate manual tasks or give your users new ways to connect with your application, all with commands issued from chat rooms or direct messages (DMs). Here are some ideas you might consider:

  • Create a bot that can complete simple tasks or query for information
  • Create a bot that can post asynchronous notifications in any room or DM
  • Use interactive UI cards to bring your message responses to life
  • Use Google Apps Script to create custom bots for your colleagues or organization

For example, a bot can take a location from a user, look it up using the Google Maps API, and display the resulting map right within the same message thread in Hangouts Chat. This output is generated by the Apps Script bot integration, which returns the JSON payload shown on this page in the documentation.

When messages are sent to an Apps Script bot, the onMessage() function is called and passed an event object. The code below extracts the bot name as well as the location requested by the user. The location is then passed to Google Maps to create the static map as well as an openLink URL that takes the user directly to Google Maps if either the map or "Open in Google Maps" link is clicked.

function onMessage(e) {
  var bot = e.message.annotations[0].userMention.user.displayName;
  var loc = encodeURI(e.message.text.substring(bot.length+2));
  var mapClick = {
    "openLink": {
      "url": "https://google.com/maps/search/?api=1&query=" + loc
    }
  };

  return {
    // see JSON payload in the documentation link above
  };
}

Finally, this function returns everything Hangouts Chat needs to render a UI card, assuming the appropriate links, data, and Google Maps API key were added to the response JSON payload. It may be surprising, but this is the entire bot, and it follows a common formula: get the user request, collate the results, and respond to the user.

When results are returned immediately like this, it's known as a synchronous bot. Using the API isn't necessary because you're just responding to the HTTP request. If your bot requires additional processing time or must execute a workflow out-of-band, return immediately then post an asynchronous response when the background jobs have completed with data to return. Learn more about bot implementation, its workflow, as well as synchronous vs. asynchronous responses.
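The string handling in the bot above is easy to exercise outside Apps Script. The helper below (our name, not part of the Hangouts Chat API) mirrors how onMessage() turns the message text into a Maps search URL:

```javascript
// Mirrors the URL construction in onMessage(): strip the leading
// "@BotName " mention (the +2 covers the "@" and the trailing space),
// then URI-encode the remaining text as the search query.
// (Helper name is ours, not part of the Hangouts Chat API.)
function mapsSearchUrl(messageText, botName) {
  var loc = encodeURI(messageText.substring(botName.length + 2));
  return 'https://google.com/maps/search/?api=1&query=' + loc;
}
```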

Developers are not constrained to using Apps Script, although it is perhaps one of the easiest ways to create and deploy bots. Overall, you can write and host bots on a variety of platforms.

No longer are chat rooms just for conversations. With feature-rich, intelligent bots, users can automate tasks, get critical information or do other heavy-lifting with a simple message. We're excited at the possibilities that await both developers and G Suite users on the new Hangouts Chat platform and API.

Announcing Flutter beta 1: Build beautiful native apps

Originally posted on Flutter's Medium by Seth Ladd

Today, as part of Mobile World Congress 2018, we are excited to announce the first beta release of Flutter. Flutter is Google's new mobile UI framework that helps developers craft high-quality native interfaces for both iOS and Android. Get started today at flutter.io to build beautiful native apps in record time.

Flutter targets the sweet spot of mobile development: performance and platform integrations of native mobile, with high-velocity development and multi-platform reach of portable UI toolkits.

Designed for both new and experienced mobile developers, Flutter can help you build beautiful and successful apps in record time with benefits such as:

  • High-velocity development with features like stateful Hot Reload, a new reactive framework, rich widget set, and integrated tooling.
  • Expressive and flexible designs with composable widget sets, rich animation libraries, and a layered, extensible architecture.
  • High-quality experiences across devices and platforms with our portable, GPU-accelerated renderer and high-performance, native ARM code runtime, and platform interop.

Since our alpha release last year, we have delivered, with help from our community, features such as screen reader support and other accessibility features, right-to-left text, localization and internationalization, iPhone X and iOS 11 support, inline video, additional image format support, running Flutter code in the background, and much more.

Our tools also improved significantly, with support for Android Studio, Visual Studio Code, new refactorings to help you manage your widget code, platform interop to expose the power of mobile platforms to Flutter code, improved stateful hot reloads, and a new widget inspector to help you browse the widget tree.

Thanks to the many new features across the framework and tools, teams across Google (such as AdWords) and around the world have been successful with Flutter. Flutter has been used in production apps with millions of installs, apps built with Flutter have been featured in the App Store and Play Store (for example, Hamilton: The Musical), and startups and agencies have been successful with Flutter.

For example, Codemate, a development agency in Finland, credits Flutter's high-velocity development cycle and customizable UI toolkit for its ability to quickly build a beautiful app for Hookle. "We now confidently recommend Flutter to help our clients perform better and deliver more value to their users across mobile," said Toni Piirainen, CEO of Codemate.

Apps built with Flutter deliver quality, performance, and customized designs across platforms.

Flutter's beta also works with a pre-release of Dart 2, with improved support for declaring UI in code with minimal language ceremony. For example, Dart 2 infers new and const to remove boilerplate when building UI. Here is an example:


// Before Dart 2
Widget build(BuildContext context) {
  return new Container(
    height: 56.0,
    padding: const EdgeInsets.symmetric(horizontal: 8.0),
    decoration: new BoxDecoration(color: Colors.blue[500]),
    child: new Row(
      ...
    ),
  );
}

// After Dart 2
Widget build(BuildContext context) =>
    Container(
      height: 56.0,
      padding: EdgeInsets.symmetric(horizontal: 8.0),
      decoration: BoxDecoration(color: Colors.blue[500]),
      child: Row(
        ...
      ),
    );

widget.dart on GitHub

We're thrilled to see Flutter's ecosystem thriving. There are now over 1000 packages that work with Flutter (for example: SQLite, Firebase, Facebook Connect, shared preferences, GraphQL, and lots more), over 1700 people in our chat, and we're delighted to see our community launch new sites such as Flutter Institute, Start Flutter, and Flutter Rocks. Plus, you can now subscribe to the new Flutter Weekly newsletter, edited and published by our community.

As we look forward to our 1.0 release, we are focused on stabilization and scenario completion. Our roadmap, largely influenced by our community, currently tracks features such as making it easier to embed Flutter into an existing app, inline WebView, improved routing and navigation APIs, additional Firebase support, inline maps, a smaller core engine, and more. We expect to release new betas approximately every four weeks, and we highly encourage you to vote on issues important to you and your app via our issue tracker.

Now is the perfect time to try Flutter. You can go from zero to your first running Flutter app quickly with our Getting Started guide. If you already have Flutter installed, you can switch to the beta channel using these instructions.

We want to extend our sincere thanks for your support, feedback, and many contributions. We look forward to continuing this journey with everyone, and we can't wait to see what you build!

Hello, developers in China! This post is also available in Chinese.

Actions on Google now supports 16 languages, Android app integration, and better geo capabilities

Posted by Brad Abrams, Product Manager

While Actions on the Google Assistant are available to users on more than 400 million devices, we're focused on expanding the availability of the developer platform even further. At Mobile World Congress, we're sharing some good news for our international developer community.

Starting today, you can build Actions for the Google Assistant in seven new languages:

  • Hindi
  • Thai
  • Indonesian
  • Danish
  • Norwegian
  • Swedish
  • Dutch

These new additions join English, French, German, Japanese, Korean, Spanish, Brazilian Portuguese, Italian and Russian, bringing our total count of supported languages to 16! You can develop for all of them using Dialogflow and its natural language processing capabilities, or directly with the Actions SDK. And we're not stopping here: expect more languages to be added later this year.

If you localize your apps in these new languages, your Actions won't just be among the first available in the new locales; you'll also earn rewards while you do it. And if you're new to Actions on Google, check out our community program* to learn how you can snag an exclusive Google Assistant t-shirt and up to $200 of monthly Google Cloud credit by publishing your first Action. We've already seen partners take advantage of previously launched languages, like Bring!, which is now available in both English and German.

New updates to make it easier to build for global audiences

Besides supporting new languages, we're also making it easier to build your Action for global audiences. First, we recently added support for building with templates—creating an Action by filling in a Google Sheet without a single line of code—for French, German, and Japanese. For example, TF1 built Téléfoot, using templates in French to create an engaging World Cup-themed trivia game with famous commentators included as sound effects.

Additionally, we've made it a little easier for you to localize your Actions into different languages by enabling you to export your directory listing information as a file. With the file in hand, you can translate offline and upload the translations to your console, making localization quicker and more organized.

But before you run off and start building Actions in new languages, take a quick tour of some of the useful developer features rolling out this week…

Link to your Android app to help users get things done from their mobile devices

By the end of the year the Assistant will reach 95 percent of all eligible Android phones worldwide, and Actions are a great way for you to reach those users to help them get things done easily over voice. Sometimes, however, users may benefit from the versatility of your Android app for particularly complex or highly interactive tasks.

So today, we're introducing a new feature that lets you deep link from your Actions in the Google Assistant to a specific intent in your Android app. Here's an example of SpotHero linking from their Action to their Android app after a user purchased a parking reservation. The Android app allows the user to see more details about the reservation or redeem their spot.

As you integrate these links in your Action, you'll make it easier for your users to find what they're looking for and to move seamlessly to your Android app to complete their user journey. This new feature will roll out over the coming weeks, but you can check out our developer documentation for more information on how to get started.
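For the precise wire format, rely on the developer documentation; as a rough, non-authoritative sketch, a webhook response can surface a link out to an Android app by naming its package, along these lines (the helper function, its parameters, and the example package name are all illustrative assumptions):

```python
def android_link_response(text, destination_name, url, package_name):
    """Sketch of a conversation-webhook response fragment that offers a
    link out to an Android app. Field names follow our reading of the
    webhook JSON format; treat them as illustrative, not authoritative."""
    return {
        "richResponse": {
            "items": [{"simpleResponse": {"textToSpeech": text}}],
            "linkOutSuggestion": {
                "destinationName": destination_name,
                "openUrlAction": {
                    "url": url,
                    # Naming the package lets Android route the link
                    # into the app rather than the browser.
                    "androidApp": {"packageName": package_name},
                },
            },
        }
    }
```

A response like this pairs the spoken confirmation with a suggestion chip that carries the user into the app to finish the journey.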

A faster, easier way to help with location queries

We're also introducing askForPlace, a new conversation helper that integrates the Google Places API so the Google Assistant can understand location-based user queries mid-conversation.

Using the new helper, the Assistant leverages Google Maps' location and points of interest (POI) expertise to provide fast, accurate places for all your users' location queries. Once the location details have been sorted out with the user, the Assistant returns the conversation back to your Action so the user can finish the interaction.

So whether your business specializes in delivering a beautiful bouquet of flowers or a piping hot pepperoni pizza, you no longer need to spend time designing models for gathering users' location requests; instead, you can focus on your Action's core experience.

Let's take a look at an example of how Uber uses the askForPlace helper to help their users book a ride:

We joined halfway through the interaction above, but it's worth pointing out that once the Uber action asked the user "Where would you like to go?" the developer triggered the askForPlace helper to handle location disambiguation. The user is still speaking with Uber, but the Assistant handled all user inputs on the back end until a drop-off location was resolved. From there, Uber was able to wrap up the interaction and dispatch a driver.
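Under the hood, conversation helpers like this are expressed as a system intent in the webhook response. The sketch below shows roughly what handing off to the place helper might look like as raw JSON; the function is hypothetical and the field names follow our reading of the conversation webhook format, so treat the details as illustrative rather than definitive:

```python
def ask_for_place(request_prompt, permission_context):
    """Sketch of a webhook response that hands the conversation to the
    Assistant's built-in place helper. The Assistant resolves the
    location with the user, then returns control to the Action."""
    return {
        "expectUserResponse": True,
        "expectedInputs": [{
            "possibleIntents": [{
                # The system intent that invokes the place helper.
                "intent": "actions.intent.PLACE",
                "inputValueData": {
                    "@type": "type.googleapis.com/google.actions.v2.PlaceValueSpec",
                    "dialogSpec": {
                        "extension": {
                            "@type": "type.googleapis.com/google.actions.v2.PlaceValueSpec.PlaceDialogSpec",
                            # What the Assistant asks the user.
                            "requestPrompt": request_prompt,
                            # Why the Action needs the location.
                            "permissionContext": permission_context,
                        }
                    }
                },
            }]
        }],
    }
```

Once the Assistant resolves a place from the user's answer, your fulfillment receives the chosen location in the next request and the conversation continues in your Action.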


Head over to the askForPlace docs to learn how to create a better user experience for your customers.

Fewer introductions for returning users

And to wrap up our new feature announcements, today we're introducing an improved experience for returning users, with no work required on your end. Specifically, if users consistently come back to your Action, we'll cut back on the introductory lead-in to get them into the conversation as quickly as possible.

Today's updates are part of our commitment to improving the platform for developers, and making the Google Assistant and Actions on Google more widely available around the globe. If you have ideas or requests that you'd like to share with our team, don't hesitate to join the conversation.

*Some countries are not eligible to participate in the developer community program; please review the terms and conditions.