
How to integrate your web app with Google Ads

TL;DR: You can now have a web application integrated with Google Ads in just a few minutes!

Google Ads
Google Ads is an online advertising platform where advertisers can create and manage their Google marketing campaigns. The Google Ads API is the modern programmatic interface to Google Ads and the next generation of the AdWords API. It enables developers to interact directly with the Google Ads platform, vastly increasing the efficiency of managing large or complex Google Ads accounts and campaigns.

A typical use case is when a company wants to offer Google ads natively on their platform to their users. For example, customers who have an online store with Shopify can promote their business using Google ads, with just a few clicks and without needing to go to the Google Ads platform. They’re able to do it directly on Shopify’s platform—the Google Ads API makes this possible.

Demo App
Francisco Blasco, Strategic Technical Solutions Manager at Google, designed and built an open source web application that is integrated with Google Ads and Business Profile (aka Google My Business).

Anyone can use the app, called Fran Ads, to save significant time on product development. Just follow the simple installation steps in the README files (frontend README file and backend README file) on the GitHub repo! The app uses React for the frontend and Django for the backend, two of the most popular web frameworks.

App's Logo


Check out a product demo here! You can have this app running on your local machine in a few minutes. To learn how, check out the video tutorial.

Blasco acts as an external Product Manager for Google’s strategic partners, driving the entire product development lifecycle. He created this project to help Google’s partners and businesses seeking to offer Google Ads to their users.

The goal is to accelerate the Google Ads integration process and decrease associated development costs. Some companies are using Fran Ads to see what an integration looks like, while others are using the technical guide to learn how to start using the Google Ads API.

In general, companies can use Fran Ads as an SDK to begin working with elements of the Google Ads API, and as a guide for integrating with Google. This project minimizes the number of times the wheel needs to be reinvented, accelerating innovation and facilitating adoption. Developers can clone the code repositories, follow the steps, and have a web app integrated with Google Ads in just a few minutes. They can adapt and build on top of this project, or use just the functions they need for the features they want to develop.



App Architecture

You will also learn how to create credentials to consume Google APIs; specifically, the README files show how to create a project on Google Cloud Platform (GCP), and how to set it up correctly so a web app can consume the Google Ads and Business Profile APIs.

Also, you will learn how refresh tokens work for Google APIs, and how to manage them for your web application.
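To make the flow concrete, here is a minimal sketch of minting a short-lived access token from a stored refresh token. Fran Ads uses a Django backend, but the flow is the same in any language; this sketch uses the UserCredentials class from google-auth-library-java, and the client ID, client secret, and refresh token values are placeholders you would supply from your own GCP project and OAuth flow.

    import com.google.auth.oauth2.AccessToken;
    import com.google.auth.oauth2.UserCredentials;

    public class RefreshTokenExample {
        public static void main(String[] args) throws Exception {
            // The refresh token is obtained once, during the user's OAuth
            // consent flow, and persisted server-side (placeholders below).
            UserCredentials credentials = UserCredentials.newBuilder()
                    .setClientId("YOUR_CLIENT_ID")         // from your GCP project
                    .setClientSecret("YOUR_CLIENT_SECRET") // from your GCP project
                    .setRefreshToken("USER_REFRESH_TOKEN") // stored for this user
                    .build();

            // Exchange the long-lived refresh token for a short-lived access token.
            AccessToken token = credentials.refreshAccessToken();
            System.out.println("Access token expires at: " + token.getExpirationTime());
        }
    }

The key point is that the long-lived refresh token never leaves your backend; every Google API call is made with a short-lived access token minted from it.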

Francisco wrote a detailed technical guide explaining how to build every feature of the app. Some of the most important features are:
        1. Create a new Google Ads account
        2. Link an existing Google Ads account
        3. OAuth authentication & authorization
        4. Refresh token management
        5. List of Google Ads accounts associated with Google account
        6. Reporting on performance for all campaign types
        7. Create Smart Campaign (automated ads on Google and across the web)
        8. Edit Smart Campaign settings

As you can see from the list above, the app can create Smart Campaigns: a simplified, automated campaign type designed for new advertisers and SMBs.

Google has made its suggestion services available through the Google Ads API. Fran Ads uses those services to recommend keyword themes, headlines and descriptions for the ad, and a budget. These recommendations are specific to each advertiser and depend on several factors, such as type of business, location, and keyword themes.
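As a rough sketch of what calling one of these suggestion services looks like, the snippet below requests budget options from the Smart Campaign suggest service over REST. The endpoint and payload shown are assumptions based on Google Ads API v9 (the version the technical guide targets); consult the guide and the API reference for the exact request format.

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class BudgetSuggestionSketch {
        public static void main(String[] args) throws Exception {
            String customerId = "1234567890";         // hypothetical Google Ads account
            String developerToken = "YOUR_DEV_TOKEN"; // from your API Center
            String accessToken = "YOUR_ACCESS_TOKEN"; // minted from the refresh token

            // Minimal illustrative payload; see the API reference for all fields.
            String body = "{\"suggestionInfo\": {\"finalUrl\": \"https://example.com\"}}";

            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create("https://googleads.googleapis.com/v9/customers/"
                            + customerId + ":suggestSmartCampaignBudgetOptions"))
                    .header("Authorization", "Bearer " + accessToken)
                    .header("developer-token", developerToken)
                    .header("Content-Type", "application/json")
                    .POST(HttpRequest.BodyPublishers.ofString(body))
                    .build();

            // The response carries low, recommended, and high daily budget
            // options, each with estimated click metrics.
            HttpResponse<String> response = HttpClient.newHttpClient()
                    .send(request, HttpResponse.BodyHandlers.ofString());
            System.out.println(response.body());
        }
    }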



An example of three Google recommendations for an advertiser.


The image above shows the final step of creating a Smart Campaign on Fran Ads. In this step, users have to set a daily budget for the campaign. Not only will you receive budget recommendations, but also an estimate of how many ad clicks you can expect per month. This is a great feature for users who are new to digital marketing and aren’t aware of their spending needs.

You can also see an alert message that the budget can be changed anytime, so users can pause spending on the campaign. This is important because many new users, especially SMBs, have doubts about spending on something new. Therefore, it is important to communicate to them that the decision they are making at that moment is not set in stone.

When you start using Fran Ads, you will see guidance that helps users complete the tasks they set out to do.


Guidance on how to complete tasks based on Google’s best practices.


Furthermore, the app is designed based on Google’s best practices. For example, when users are creating a Smart Campaign, in step three (see the above image) they need to select keyword themes (groups of keywords). If you choose “bakery” as the keyword theme, your ad is eligible to show when people search for “bakery near me”, “local bakery”, and “cake shop”.

Google’s best practices suggest that advertisers use between seven and ten keyword themes per campaign. Therefore, Fran Ads is designed for users to select up to seven keyword themes (refer to the image of step three when creating a Smart Campaign on Fran Ads). However, you can raise the limit to ten if you like.

The technical guide also provides:

        1. Production-ready code for both the frontend and backend
        2. Engineering flow diagrams
        3. Best practices
        4. High-fidelity mockups
        5. App architecture and structure diagrams
        6. Workarounds to current bugs on Google Ads API v9
        7. Guidance on how to handle key tasks necessary for integrating your platform with Google Ads
        8. Help with the design strategy for the UX and design elements of the UI

Important resources

The list below summarizes the key resources that will help you integrate with Google Ads more easily, quickly, and reliably.
        1. Frontend repo: all the code for the frontend of Fran Ads.
        2. Backend repo: all the code for the backend of Fran Ads.
        3. Technical guide: three sections: ‘Before Starting’, ‘Configurations & Installation’, and ‘Build web app’. Section 3 explains how to build all the features of the app.
        4. Product demo: 15-minute demo of Fran Ads showing many core features.
        5. Video tutorial: 17-minute tutorial on how to set up and run Fran Ads.


By Francisco Blasco – Launch, Channel Partners

Google Cardboard XR Plugin for Unity

Late in 2019, we decided to open source Google Cardboard. Since then, our developer community has been able to create a plethora of experiences on both iOS and Android, reaching millions of users around the world. While this release has been considered a success by our developer community, we also promised that we would release a plugin for Unity. Our users have long preferred developing Cardboard experiences in Unity, so we made it a priority to develop a Unity SDK. Today, we have fulfilled that promise: the Google Cardboard open source plugin for Unity is now available via the Unity Asset Store.

What's Included in the Cardboard Unity SDK

Today, we’re releasing the Cardboard Unity SDK to our users so that they can continue creating smartphone XR experiences using Unity. Unity is one of the most popular 3D and XR development platforms in the world, and our release of this SDK will give our content creators a smoother workflow with Unity when developing for Cardboard.

In addition to the Unity SDK, we are also providing a sample application for iOS/Android, which will be a great aid for developers trying to debug their own creations. This release not only fulfills a promise we made to our Cardboard community, but also shows our support, as we move away from smartphone VR and leave it in the more-than-capable hands of our development community.



If you’re interested in learning how to develop with the Cardboard open source project, please see our developer documentation or visit the Google VR GitHub repo to access source code, build the project, and download the latest release.

By Jonathan Goodlow, Product Manager, AR & VR

Announcing ARCore 1.0 and new updates to Google Lens

Anuj Gosalia, Director of Engineering, AR

With ARCore and Google Lens, we're working to make smartphone cameras smarter. ARCore enables developers to build apps that can understand your environment and place objects and information in it. Google Lens uses your camera to help make sense of what you see, whether that's automatically creating contact information from a business card before you lose it, or soon being able to identify the breed of a cute dog you saw in the park. At Mobile World Congress, we're launching ARCore 1.0 along with new support for developers, and we're releasing updates for Lens and rolling it out to more people.

ARCore, Google's augmented reality SDK for Android, is out of preview and launching as version 1.0. Developers can now publish AR apps to the Play Store, and it's a great time to start building. ARCore works on 100 million Android smartphones, and advanced AR capabilities are available on all of these devices. It works on 13 different models right now (Google's Pixel, Pixel XL, Pixel 2 and Pixel 2 XL; Samsung's Galaxy S8, S8+, Note8, S7 and S7 edge; LGE's V30 and V30+ (Android O only); ASUS's Zenfone AR; and OnePlus's OnePlus 5). And beyond those available today, we're partnering with many manufacturers to enable their upcoming devices this year, including Samsung, Huawei, LGE, Motorola, ASUS, Xiaomi, HMD/Nokia, ZTE, Sony Mobile, and Vivo.

Making ARCore work on more devices is only part of the equation. We're bringing developers additional improvements and support to make their AR development process faster and easier. ARCore 1.0 features improved environmental understanding that enables users to place virtual assets on textured surfaces like posters, furniture, toy boxes, books, cans and more. Android Studio Beta now supports ARCore in the Emulator, so you can quickly test your app in a virtual environment right from your desktop.

Everyone should get to experience augmented reality, so we're working to bring it to people everywhere, including China. We'll be supporting ARCore in China on partner devices sold there—starting with Huawei, Xiaomi and Samsung—to enable them to distribute AR apps through their app stores.

We've partnered with a few great developers to showcase how they're planning to use AR in their apps. Snapchat has created an immersive experience that invites you into a "portal"—in this case, FC Barcelona's legendary Camp Nou stadium. Visualize different room interiors inside your home with Sotheby's International Realty. See Porsche's Mission E Concept vehicle right in your driveway, and explore how it works. With OTTO AR, choose pieces from an exclusive set of furniture and place them, true to scale, in a room. Ghostbusters World, based on the film franchise, is coming soon. In China, place furniture and over 100,000 other pieces with Easyhome Homestyler, see items and place them in your home when you shop on JD.com, or play games from NetEase, Wargaming and Game Insight.

With Google Lens, your phone's camera can help you understand the world around you, and, we're expanding availability of the Google Lens preview. With Lens in Google Photos, when you take a picture, you can get more information about what's in your photo. In the coming weeks, Lens will be available to all Google Photos English-language users who have the latest version of the app on Android and iOS. Also over the coming weeks, English-language users on compatible flagship devices will get the camera-based Lens experience within the Google Assistant. We'll add support for more devices over time.

And while it's still a preview, we've continued to make improvements to Google Lens. Since launch, we've added text selection features, the ability to create contacts and events from a photo in one tap, and—in the coming weeks—improved support for recognizing common animals and plants, like different dog breeds and flowers.

Smarter cameras will enable our smartphones to do more. With ARCore 1.0, developers can start building delightful and helpful AR experiences for them right now. And Lens, powered by AI and computer vision, makes it easier to search and take action on what you see. As these technologies continue to grow, we'll see more ways that they can help people have fun and get more done on their phones.

Android Wear SDK and Emulator Update

Posted by Hoi Lam, Lead Developer Advocate, Android Wear
Today we launched the latest version of the Android Wear SDK (2.2.0) with several watch face related enhancements. These include the addition of an unread notification indicator for all watch faces, which is planned to be part of the upcoming consumer release of Android Wear. With the Wear SDK 2.2.0, you can customize the notification indicator or display your own. This feature is available to the developer community early, via the SDK and emulator, so you can verify that the indicator fits the design of your watch face. In addition, we are adding enhancements to the ComplicationDrawable class and publishing the final version of the Wear emulator based on Android Oreo.

Introducing the unread notification indicator


Notifications are a vital part of the Wear experience. As a result, starting from the next consumer release of Wear (version 2.9.0), a dot-shaped indicator will be displayed by default at the bottom of the watch face if there are new, unread notifications. Watch face developers can preview the indicator with their watch faces by using the latest version of the emulator. Developers can customize the indicator's accent color via WatchFaceStyle.setAccentColor. The default color is white, as shown in the example below, but developers can set the ring around the dot to an accent color of their choice, to match the rest of the watch face.
If the new indicator does not fit with the design of your watch face, you can switch it off using WatchFaceStyle.setHideNotificationIndicator and choose another option for displaying notifications, including: 1) displaying the number of unread notifications in the system tray using WatchFaceStyle.setShowUnreadCountIndicator, or 2) getting the number of unread notifications using WatchFaceStyle.getUnreadCount and displaying it in a way that fits your watch face's unique style.
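Putting those pieces together, a watch face might configure the indicator as in the sketch below, which uses the WatchFaceStyle.Builder methods described above inside a hypothetical CanvasWatchFaceService (check the Wear 2.2.0 reference for exact signatures):

    import android.graphics.Canvas;
    import android.graphics.Color;
    import android.graphics.Rect;
    import android.support.wearable.watchface.CanvasWatchFaceService;
    import android.support.wearable.watchface.WatchFaceStyle;
    import android.view.SurfaceHolder;

    public class MyWatchFaceService extends CanvasWatchFaceService {
        @Override
        public Engine onCreateEngine() {
            return new Engine();
        }

        private class Engine extends CanvasWatchFaceService.Engine {
            @Override
            public void onCreate(SurfaceHolder holder) {
                super.onCreate(holder);
                setWatchFaceStyle(new WatchFaceStyle.Builder(MyWatchFaceService.this)
                        // Tint the ring around the unread-notification dot.
                        .setAccentColor(Color.CYAN)
                        // Or hide the dot and surface the count another way:
                        // .setHideNotificationIndicator(true)
                        // .setShowUnreadCountIndicator(true)
                        .build());
            }

            @Override
            public void onDraw(Canvas canvas, Rect bounds) {
                canvas.drawColor(Color.BLACK); // watch face drawing goes here
            }
        }
    }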

Enhancement to ComplicationDrawable


We launched the ComplicationDrawable class at last year's Google I/O, and we are continuing to improve it. In this latest SDK release, we added two enhancements:
  • Permission Handling - If the watch face lacks the correct permission to display the content of a complication, a complication of type TYPE_NO_PERMISSION is issued. ComplicationDrawable now handles this automatically and will launch a permission request in onTap. If you previously implemented your own code to start the permission screen, please check that the permission screen is not triggered twice and, if necessary, remove the unneeded code.
  • Drawable Callback - If a complication contains an image or an icon, it can take a small amount of time to load after the other initial data arrives. We therefore previously recommended that you redraw the screen every second, but this is unnecessary for watch faces that only update once per minute, for example. As a result, we have added support for Drawable.Callback to ComplicationDrawable. Developers who update the screen less frequently than once per second should adopt this new callback to redraw the watch face when images have loaded (see the sketch below).
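The sketch assumes a complicationDrawable field and an invalidate() method on your watch face engine, as in a typical CanvasWatchFaceService; with the callback registered, you redraw only when a complication's image has actually finished loading.

    import android.graphics.drawable.Drawable;
    import android.os.Handler;
    import android.os.Looper;

    // Inside your watch face engine, after creating the ComplicationDrawable:
    final Handler handler = new Handler(Looper.getMainLooper());
    complicationDrawable.setCallback(new Drawable.Callback() {
        @Override
        public void invalidateDrawable(Drawable who) {
            invalidate(); // redraw now that the image has loaded
        }

        @Override
        public void scheduleDrawable(Drawable who, Runnable what, long when) {
            handler.postAtTime(what, when);
        }

        @Override
        public void unscheduleDrawable(Drawable who, Runnable what) {
            handler.removeCallbacks(what);
        }
    });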
For more, please see the Android Wear Release Notes which includes other information regarding the emulator.

More improvements to come


Many of you have noticed a steady release of enhancements to Android Wear over the last few months since the launch of Wear 2.0. We are developing many more for the months ahead and look forward to sharing more when the features are ready.



Resonance Audio: Multi-platform spatial audio at scale

Posted by Eric Mauskopf, Product Manager

As humans, we rely on sound to guide us through our environment, help us communicate with others and connect us with what's happening around us. Whether walking along a busy city street or attending a packed music concert, we're able to hear hundreds of sounds coming from different directions. So when it comes to AR, VR, games and even 360 video, you need rich sound to create an engaging immersive experience that makes you feel like you're really there. Today, we're releasing a new spatial audio software development kit (SDK) called Resonance Audio. It's based on technology from Google's VR Audio SDK, and it works at scale across mobile and desktop platforms.

Experience spatial audio in our Audio Factory VR app for Daydream and SteamVR

Performance that scales on mobile and desktop

Bringing rich, dynamic audio environments into your VR, AR, gaming, or video experiences without affecting performance can be challenging. There are often few CPU resources allocated for audio, especially on mobile, which can limit the number of simultaneous high-fidelity 3D sound sources for complex environments. The SDK uses highly optimized digital signal processing algorithms based on higher order Ambisonics to spatialize hundreds of simultaneous 3D sound sources, without compromising audio quality, even on mobile. We're also introducing a new feature in Unity for precomputing highly realistic reverb effects that accurately match the acoustic properties of the environment, reducing CPU usage significantly during playback.
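On Android, the room acoustics described above can also be driven from Java. Below is a minimal sketch, assuming the GvrAudioEngine class from the Google VR SDK (the technology Resonance Audio is based on); the material arguments take SDK-defined surface constants, shown here as placeholder values:

    import com.google.vr.sdk.audio.GvrAudioEngine;

    // Inside an Activity: create the engine, then describe the room so the
    // computed reverb matches the virtual environment.
    GvrAudioEngine audioEngine =
            new GvrAudioEngine(this, GvrAudioEngine.RenderingMode.BINAURAL_HIGH_QUALITY);

    // Placeholder surface material ids; use the SDK's material constants.
    final int wallMaterial = 0;
    final int ceilingMaterial = 0;
    final int floorMaterial = 0;

    // A large room yields longer, denser reverb than a small one.
    audioEngine.setRoomProperties(
            30.0f, 15.0f, 30.0f, // room width, height, depth in meters
            wallMaterial, ceilingMaterial, floorMaterial);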

Using geometry-based reverb by assigning acoustic materials to a cathedral in Unity

Multi-platform support for developers and sound designers

We know how important it is that audio solutions integrate seamlessly with your preferred audio middleware and sound design tools. With Resonance Audio, we've released cross-platform SDKs for the most popular game engines, audio engines, and digital audio workstations (DAW) to streamline workflows, so you can focus on creating more immersive audio. The SDKs run on Android, iOS, Windows, MacOS and Linux platforms and provide integrations for Unity, Unreal Engine, FMOD, Wwise and DAWs. We also provide native APIs for C/C++, Java, Objective-C and the web. This multi-platform support enables developers to implement sound designs once, and easily deploy their project with consistent sounding results across the top mobile and desktop platforms. Sound designers can save time by using our new DAW plugin for accurately monitoring spatial audio that's destined for YouTube videos or apps developed with Resonance Audio SDKs. Web developers get the open source Resonance Audio Web SDK that works in the top web browsers by using the Web Audio API.

DAW plugin for sound designers to monitor audio destined for YouTube 360 videos or apps developed with the SDK

Cutting-edge features to model complex sound environments

By providing powerful tools for accurately modeling complex sound environments, Resonance Audio goes beyond basic 3D spatialization. The SDK enables developers to control the direction acoustic waves propagate from sound sources. For example, when standing behind a guitar player, it can sound quieter than when standing in front. And when facing the direction of the guitar, it can sound louder than when your back is turned.

Controlling sound wave directivity for an acoustic guitar using the SDK

Another SDK feature is automatically rendering near-field effects when sound sources get close to a listener's head, providing an accurate perception of distance, even when sources are close to the ear. The SDK also enables sound source spread, by specifying the width of the source, allowing sound to be simulated from a tiny point in space up to a wall of sound. We've also released an Ambisonic recording tool to spatially capture your sound design directly within Unity, save it to a file, and use it anywhere Ambisonic soundfield playback is supported, from game engines to YouTube videos.

If you're interested in creating rich, immersive soundscapes using cutting-edge spatial audio technology, check out the Resonance Audio documentation on our developer site, let us know what you think through GitHub, and show us what you build with #ResonanceAudio on social media; we'll be resharing our favorites.

Open sourcing the Firebase SDKs

Today, at Google I/O 2017, we are pleased to announce that we are taking our first steps towards open sourcing our client libraries. By making our SDKs open, we’re aiming to show our commitment to greater transparency and to building a stronger developer community. To help further that goal, we’ll be using GitHub as a core part of our own toolchain to enable all of you to contribute as well. As you find issues in our code, from inconsistent style to bugs, you can file issues through the standard GitHub issue tracker. You can also find our project in the Google Open Source directory. We’re really looking forward to your pull requests!

What’s open?

We’re starting by open sourcing several products in our iOS, JavaScript, Java, Node.js and Python SDKs. We'll be looking at open sourcing our Android SDK as well. The SDKs are being licensed under Apache 2.0, the same flexible license as existing Firebase open source projects like FirebaseUI.

Let's take a look at each repo:

Firebase iOS SDK 4.0

https://github.com/firebase/firebase-ios-sdk

With the launch of the Firebase iOS 4.0 SDKs we have made several improvements to the developer experience, such as more idiomatic API names for our Swift users. By open sourcing our iOS SDKs we hope to provide an additional avenue for you to give us feedback on such features. For this first release we are open sourcing our Realtime Database, Auth, Cloud Storage and Cloud Messaging (FCM) SDKs, but going forward we intend to release more.

Because we aren't yet able to open source some of the Firebase components, the full product build process isn't available. While you can use this repo to build a FirebaseDev pod, our libraries distributed through CocoaPods will continue to be static frameworks for the time being. We are continually looking for ways to improve the developer experience, however you integrate.

Our GitHub README provides more details on how you build, test and contribute to our iOS SDKs.

Firebase JavaScript SDK 4.0

https://github.com/firebase/firebase-js-sdk

We are excited to announce that we are open sourcing our Realtime Database, Cloud Storage and Cloud Messaging (FCM) SDKs for JavaScript. We’ll have a couple of improvements hot on the heels of this initial release, including open sourcing Firebase Authentication. We are also in the process of releasing the source maps for our components, which we expect will really improve the debuggability of your app.

Our GitHub repo includes instructions on how you can build, test and contribute.

Firebase Admin SDKs

Node.js: https://github.com/firebase/firebase-admin-node
Java: https://github.com/firebase/firebase-admin-java
Python: https://github.com/firebase/firebase-admin-python

We are happy to announce that all three of our Admin SDKs for accessing Firebase from privileged environments are now fully open source, including our recently-launched Python SDK. While we continue to explore supporting more languages, we encourage you to use our source as inspiration to enable Firebase for your environment (and if you do, we'd love to hear about it!).
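For instance, initializing the Java Admin SDK on a server and writing to the Realtime Database looks roughly like the sketch below (recent versions of firebase-admin; the database URL and paths are placeholders for your own project):

    import com.google.auth.oauth2.GoogleCredentials;
    import com.google.firebase.FirebaseApp;
    import com.google.firebase.FirebaseOptions;
    import com.google.firebase.database.DatabaseReference;
    import com.google.firebase.database.FirebaseDatabase;

    public class AdminQuickstart {
        public static void main(String[] args) throws Exception {
            // On a privileged server, Application Default Credentials pick up
            // your service account key.
            FirebaseOptions options = new FirebaseOptions.Builder()
                    .setCredentials(GoogleCredentials.getApplicationDefault())
                    .setDatabaseUrl("https://your-project.firebaseio.com")
                    .build();
            FirebaseApp.initializeApp(options);

            // The Admin SDK has full access to the Realtime Database.
            DatabaseReference users =
                    FirebaseDatabase.getInstance().getReference("users");
            users.child("alice").child("lastSeen")
                    .setValueAsync(System.currentTimeMillis());
        }
    }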

We're really excited to see what you do with the updated SDKs. As always, reach out to us with feedback or questions in the Firebase-Talk Google Group, on Stack Overflow, via the Firebase Support team, and now on GitHub for SDK issues and pull requests! And to read about the other improvements to Firebase that launched at Google I/O, head over to the Firebase blog.

By Salman Qadri, Firebase Product Manager

Introducing the Google Assistant SDK

Posted by Chris Ramsdale, Product Manager

When we first announced the Google Assistant, we talked about helping users get things done no matter what device they're using. We started with Google Allo, Google Home and Pixel phones, and expanded the Assistant ecosystem to include Android Wear and Android phones running Marshmallow and Nougat over the last few months. We also announced that Android Auto and Android TV will get support soon.

Today, we're taking another step towards building out that ecosystem by introducing the developer preview of the Google Assistant SDK. With this SDK you can now start building your own hardware prototypes that include the Google Assistant, like a self-built robot or a voice-enabled smart mirror. This allows you to interact with the Google Assistant from any platform.

The Google Assistant SDK includes a gRPC API, a Python open source client that handles authentication and access to the API, samples, and documentation. The SDK allows you to capture a spoken query, for example "what's on my calendar", pass it up to the Google Assistant service, and receive an audio response. And while it's ideal for prototyping on Raspberry Pi devices, it also supports many other platforms.

To get started, visit the Google Assistant SDK website for developers, download the SDK, and start building. In addition, Wayne Piekarski from our Developer Relations team has a video introducing the Google Assistant SDK, below.


And for some more inspiration, try our samples or check out an example implementation by Deeplocal, an innovation studio out of Pittsburgh that took the Google Assistant SDK for a spin and built a fun mocktails mixer. You can even build one for yourself: go here to learn more and read their documentation on GitHub. Or check out the video below on how they built their demo from scratch.


This is a developer preview and we have a number of features in development, including hotword support, companion app integration and more. If you're interested in building a commercial product with the Google Assistant, we encourage you to reach out and contact us. We've created a new developer community on Google+ at g.co/assistantsdkdev for developers to keep up to date and discuss ideas. There is also a Stack Overflow tag [google-assistant-sdk] for questions, and a mailing list to keep up to date on SDK news. We look forward to seeing what you create with the Google Assistant SDK!

Google VR SDK graduates out of beta

Posted by Nathan Martz, Product Manager, Google VR

At Google I/O, we announced Daydream—Google's platform for high quality, mobile virtual reality—and released early developer resources to get the community started with building for Daydream. Since then, the team has been hard at work, listening to feedback and evolving these resources into a suite of powerful developer tools.

Today, we are proud to announce that the Google VR SDK 1.0 with support for Daydream has graduated out of beta, and is now available on the Daydream developer site. Our updated SDK simplifies common VR development tasks so you can focus on building immersive, interactive mobile VR applications for Daydream-ready phones and headsets, and supports integrated asynchronous reprojection, high fidelity spatialized audio, and interactions using the Daydream controller.

To make it even easier to start developing with the Google VR SDK 1.0, we’ve partnered with Unity and Unreal so you can use the game engines and tools you’re already familiar with. We’ve also updated the site with full documentation, reference sample apps, and tutorials.

Native Unity integration

This release marks the debut of native Daydream integration in Unity, which enables Daydream developers to take full advantage of all of Unity’s optimizations in VR rendering. It also adds support for features like head tracking, deep linking, and easy Android manifest configuration. Many Daydream launch apps are already working with the newest integration features, and you can now download the new Unity binary here and the Daydream plugin here.

Native UE4 integration

We’ve made significant improvements to our UE4 native integration that will help developers build better production-quality Daydream apps. The latest version introduces Daydream controller support in the editor, a neck model, new rendering optimizations, and much more. UE4 developers can download the source here.

Get started today

While the first Daydream-ready phones and headset are coming this fall, you can start developing high-quality Daydream apps right now with the Google VR SDK 1.0 and the DIY developer kit.

We’re also opening applications to our Daydream Access Program (DAP) so we can work closely with even more developers building great content for Daydream. Submit your Daydream app proposal to apply to be part of our DAP.

When you create content for the Daydream platform, you know your apps will work seamlessly across every Daydream-ready phone and headset. Daydream is just getting started, and we’re looking forward to working together to help you build new immersive, interactive VR experiences. Stay tuned for more information about Daydream-ready phones and the Daydream headset and controller coming soon.

New Google Cast SDK released for Android and iOS

Posted by Adam Champy, Product Manager for Google Cast SDK

Google Cast makes it easy for developers to extend their mobile experience to the most beautiful screens and speakers in the home.

At Google I/O, we announced our new Google Cast SDK. This new SDK focuses on making development for Cast quicker, more reliable, and easier to maintain. We’ve introduced full state management that helps you implement the right abstraction between your app and Google Cast. We’ve also delivered a full Cast user experience, matching the Google Cast design checklist.
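On Android, for example, that state management surfaces through CastContext. The sketch below listens for cast state changes; it assumes you have declared an OptionsProvider in your manifest, as the Cast documentation describes:

    import com.google.android.gms.cast.framework.CastContext;
    import com.google.android.gms.cast.framework.CastState;
    import com.google.android.gms.cast.framework.CastStateListener;

    // Inside an Activity: the SDK tracks discovery, connection, and session
    // state for you; this listener simply reacts to the current state.
    CastContext castContext = CastContext.getSharedInstance(this);
    castContext.addCastStateListener(new CastStateListener() {
        @Override
        public void onCastStateChanged(int newState) {
            if (newState == CastState.CONNECTED) {
                // A cast session is active: route media to the receiver.
            } else if (newState == CastState.NO_DEVICES_AVAILABLE) {
                // No receivers found: hide the cast button.
            }
        }
    });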

Today we are releasing this SDK for Android and iOS Senders, including an introductory video, full documentation, and reference sample apps and codelab tutorials for both platforms. Initial developer feedback is that first-time implementations can save significant development time compared with our previous SDKs.


A few things we’ve announced will be coming in the next few months, including a customizable Expanded Controller and adding customization to the Mini Controller, to help accelerate development even further.

Drop by our Cast developer site to learn about the new SDK and APIs, and join our developer community on Google+ at g.co/googlecastdev to discuss this with other developers.

Spatial audio comes to the Cardboard SDK

Posted by Nathan Martz, Product Manager, Google Cardboard

Human beings experience sound in all directions—like when a fire truck zooms by, or when an airplane is overhead. Starting today, the Cardboard SDKs for Unity and Android support spatial audio, so you can create equally immersive audio experiences in your virtual reality (VR) apps. All your users need is their smartphone, a regular pair of headphones, and a Google Cardboard viewer.

Sound the way you hear it

Many apps create simple versions of spatial audio by playing sounds from the left and right in separate speakers. But with today’s SDK updates, your app can produce sound the same way humans actually hear it. For example:

  • The SDK combines the physiology of a listener’s head with the positions of virtual sound sources to determine what users hear. For example: sounds that come from the right will reach a user’s left ear with a slight delay, and with fewer high frequency elements (which are normally dampened by the skull).
  • The SDK lets you specify the size and material of your virtual environment, both of which contribute to the quality of a given sound. So you can make a conversation in a tight spaceship sound very different than one in a large, underground (and still virtual) cave.

Optimized for today’s smartphones

We built today’s updates with performance in mind, so adding spatial audio to your app has minimal impact on the primary CPU (where your app does most of its work). We achieve these results in a couple of ways:

  • The SDK is optimized for mobile CPUs (e.g. SIMD instructions) and actually computes the audio in real-time on a separate thread, so most of the processing takes place outside of the primary CPU.
  • The SDK allows you to control the fidelity of each sound. As a result, you can allocate more processing power to critical sounds, while de-emphasizing others.

Simple, native integrations

It’s really easy to get started with the SDK’s new audio features. Unity developers will find a comprehensive set of components for creating soundscapes on Android, iOS, Windows and OS X. And native Android developers will now have a simple Java API for simulating virtual sounds and environments.
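Here is a minimal sketch of that Java API in action, assuming the CardboardAudioEngine class from the Cardboard SDK and a hypothetical machine.wav bundled in your app's assets:

    import com.google.vrtoolkit.cardboard.audio.CardboardAudioEngine;

    // Inside an Activity: create the engine, then load and place a sound.
    final CardboardAudioEngine audioEngine =
            new CardboardAudioEngine(this, CardboardAudioEngine.RenderingQuality.HIGH);

    new Thread(new Runnable() {
        @Override
        public void run() {
            // Decoding audio is expensive, so preload off the UI thread.
            audioEngine.preloadSoundFile("machine.wav");
            int soundId = audioEngine.createSoundObject("machine.wav");
            // Place the source one meter in front of the listener.
            audioEngine.setSoundObjectPosition(soundId, 0f, 0f, -1f);
            audioEngine.playSound(soundId, true /* looped */);
        }
    }).start();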


Experience spatial audio in our sample app for developers

Check out our Android sample app (for developer reference only), browse the documentation on the Cardboard developers site, and start experimenting with spatial audio today. We’re excited to see (and hear) the new experiences you’ll create!