Tag Archives: AR

Updates to the Android XR SDK: Introducing Developer Preview 2

Posted by Matthew McCullough – VP of Product Management, Android Developer

Since launching the Android XR SDK Developer Preview alongside Samsung, Qualcomm, and Unity last year, we’ve been blown away by all of the excitement we’ve been hearing from the broader Android community. Whether it's through coding live-streams or local Google Developer Group talks, it's been an outstanding experience participating in the community to build the future of XR together, and we're just getting started.

Today we’re excited to share an update to the Android XR SDK: Developer Preview 2, packed with new features and improvements to help you develop helpful and delightful immersive experiences with familiar Android APIs, tools and open standards created for XR.

At Google I/O, we have two technical sessions related to Android XR. The first, Building differentiated apps for Android XR with 3D content, covers many features present in Jetpack SceneCore and ARCore for Jetpack XR. The second, The future is now, with Compose and AI on Android XR, covers creating XR-differentiated UI and our vision for the intersection of XR and cutting-edge AI capabilities.

Android XR sessions at Google I/O 2025: Building differentiated apps for Android XR with 3D content and The future is now, with Compose and AI on Android XR

What’s new in Developer Preview 2

Since the release of Developer Preview 1, we’ve been focused on making the APIs easier to use and adding new immersive Android XR features. Your feedback has helped us shape the development of the tools, SDKs, and the platform itself.

With the Jetpack XR SDK, you can now play back 180° and 360° videos, which can be stereoscopic by encoding with the MV-HEVC specification or by encoding view-frames adjacently. The MV-HEVC standard is optimized and designed for stereoscopic video, allowing your app to efficiently play back immersive videos at great quality. Apps built with Jetpack Compose for XR can use the SpatialExternalSurface composable to render media, including stereoscopic videos.

Using Jetpack Compose for XR, you can now also define layouts that adapt to different XR display configurations. For example, use a SubspaceModifier to specify the size of a Subspace as a percentage of the device’s recommended viewing size, so a panel effortlessly fills the space it's positioned in.

Material Design for XR now supports more component overrides for TopAppBar, AlertDialog, and ListDetailPaneScaffold, helping your large-screen enabled apps that use Material Design effortlessly adapt to the new world of XR.

An app adapts to XR using Material Design for XR with the new component overrides

In ARCore for Jetpack XR, you can now track hands after requesting the appropriate permissions. Hands are a collection of 26 posed hand joints that can be used to detect hand gestures and bring a whole new level of interaction to your Android XR apps:

Hands bring a natural input method to your Android XR experience.

For more guidance on developing apps for Android XR, check out our Android XR Fundamentals codelab, the updates to our Hello Android XR sample project, and a new version of JetStream with Android XR support.

The Android XR Emulator has also received stability updates and support for AMD GPUs, and it is now fully integrated within the Android Studio UI.

The Android XR Emulator is now integrated in Android Studio

Developers using Unity have already successfully created and ported existing games and apps to Android XR. Today, you can upgrade to the Pre-Release version 2 of the Unity OpenXR: Android XR package! This update adds many performance improvements such as support for Dynamic Refresh Rate, which optimizes your app’s performance and power consumption. Shaders made with Shader Graph now support SpaceWarp, making it easier to use SpaceWarp to reduce compute load on the device. Hand meshes are now exposed with occlusion, which enables realistic hand visualization.

Check out Unity’s improved Mixed Reality template for Android XR, which now includes support for occlusion and persistent anchors.

We recently launched Android XR Samples for Unity, which demonstrate capabilities on the Android XR platform such as hand tracking, plane tracking, face tracking, and passthrough.

Google’s open-source Unity samples demonstrate platform features and show how they’re implemented

Firebase AI Logic for Unity is now in public preview! This makes it easy for you to integrate gen AI into your apps, enabling the creation of AI-powered experiences with Gemini and Android XR. Firebase AI Logic fully supports Gemini's capabilities, including multimodal input and output, and bi-directional streaming for immersive conversational interfaces. Built with production readiness in mind, Firebase AI Logic is integrated with core Firebase services like App Check, Remote Config, and Cloud Storage for enhanced security, configurability, and data management. Learn more about this on the Firebase blog or go straight to the Gemini API using Vertex AI in Firebase SDK documentation to get started.

Continuing to build the future together

Our commitment to open standards continues with the glTF Interactivity specification, developed in collaboration with the Khronos Group, which will be supported in glTF models rendered by Jetpack XR later this year. Models using the glTF Interactivity specification are self-contained interactive assets that can have many pre-programmed behaviors, like rotating objects on a button press or changing the color of a material over time.

Android XR will be available first on Samsung’s Project Moohan, launching later this year. Soon after, our partners at XREAL will release the next Android XR device. Codenamed Project Aura, it’s a portable and tethered device that gives users access to their favorite Android apps, including those that have been built for XR. It will launch as a developer edition, specifically for you to begin creating and experimenting. The best news? With the familiar tools you use to build Android apps today, you can build for these devices too.

XREAL’s Project Aura

The Google Play Store is also getting ready for Android XR. When the Android XR Play Store launches later this year, it will list supported 2D Android apps. If you are working on an XR-differentiated app, you can get it ready for the big launch and be one of the first differentiated apps on the Android XR Play Store.

And we know many of you are excited for the future of Android XR on glasses. We are shaping the developer experience now and will share more details on how you can participate later this year.

To get started creating and developing for Android XR, check out developer.android.com/develop/xr where you will find all of the tools, libraries, and resources you need to work with the Android XR SDK. In particular, try out our samples and codelabs.

We welcome your feedback, suggestions, and ideas as you’re helping shape Android XR. Your passion, expertise, and bold ideas are vital as we continue to develop Android XR together. We look forward to seeing your XR-differentiated apps when Android XR devices launch later this year!

Explore this announcement and all Google I/O 2025 updates on io.google starting May 22.


Introducing Android XR SDK Developer Preview

Posted by Matthew McCullough – VP of Product Management, Android Developer

Today, we're launching the developer preview of the Android XR SDK, a comprehensive development kit for Android XR, the newest platform in the Android family built for extended reality (XR) headsets (and glasses in the future!). You’ll have endless opportunities to create and develop experiences that blend digital and physical worlds, using familiar Android APIs, tools and open standards created for XR. All of this means: if you build for Android, you're already building for XR! Read on to get started with development for headsets.

With the Android XR SDK you can:

    • Break free of traditional screens by spatializing your app with rich 3D elements, spatial panels, and spatial audio that bring a natural sense of depth, scale, and tangible realism
    • Transport your users to a fantastical virtual space, or engage with them in their own homes or workplaces
    • Take advantage of natural, multimodal interaction capabilities such as hands and eyes
"We believe Android XR is a game-changer for storytelling. It allows us to merge narrative depth with advanced interactive features, creating an immersive world where audiences can engage with characters and stories like never before." 
- Jed Weintrob, Partner at 30 Ninjas

Your apps on Android XR

The Android XR SDK is built on the existing foundations of Android app development. We're also bringing the Play Store to Android XR, where most Android apps will automatically be made available without any additional development effort. Users will be able to discover and use your existing apps in a whole new dimension. To differentiate your existing Compose app, you can opt in to automatically spatialize Material Design (M3) components and Compose adaptive layouts in XR.

Apps optimized for large screens take advantage of sizing capabilities in Android XR

The Android XR SDK has something for every developer:

    • Building with Kotlin and Android Studio? You'll feel right at home with the Jetpack XR SDK, a suite of familiar libraries and tools to simplify development and accelerate productivity.
    • Using Unity’s real-time 3D engine? The Android XR Extensions for Unity provides the packages you need to build or port powerful, immersive experiences.
    • Developing on the web? Use WebXR to add immersive experiences supported on Chrome.
    • Working with native languages like C/C++? Android XR supports the OpenXR 1.1 standard.

Creating with Jetpack XR SDK

The Jetpack XR SDK includes new Jetpack libraries purpose-built for XR. The highlights include:

    • Jetpack Compose for XR - enables you to declaratively create spatial UI layouts and spatialize your existing 2D UI built with Compose or Views
    • ARCore for Jetpack XR - brings powerful perception capabilities for your app to understand the real world
“With Android XR, we can bring Calm directly into your world, capturing the senses and allowing you to experience it in a deeper and more transformative way. By collaborating closely with the Android XR team on this cutting-edge technology, we’ve reimagined how to create a sense of depth and space, resulting in a level of immersion that instantly helps you feel more present, focused, and relaxed.” 
- Dan Szeto, Vice President at Calm Studios

Kickstart your Jetpack XR SDK journey with the Hello XR Sample, a straightforward introduction to the essential features of Jetpack Compose for XR.

Learn more about developing with the Jetpack XR SDK.

The JetNews sample app is an Android large-screen app adapted for Android XR

We're also introducing new tools and capabilities to the latest preview of Android Studio Meerkat to boost productivity and simplify your creation process for Android XR.

    • Use the new Android XR Emulator to create a virtualized XR device for deploying and testing apps built with the Jetpack XR SDK. The emulator includes XR-specific controls for using a keyboard and mouse to navigate an emulated virtual space.
    • Use the Android XR template to get a jump-start on creating an app with Jetpack Compose for XR.
    • Use the updated Layout Inspector to inspect and debug spatialized UI components created with Jetpack Compose for XR.

Learn more about the XR enabled tools in Android Studio and the Android XR Emulator.

The Android XR Emulator in Android Studio has new controls to explore 3D space within the emulator

Creating with Unity

We've partnered with Unity to natively integrate their real-time 3D engine with Android XR starting with Unity 6. Unity is introducing the Unity OpenXR: Android XR package for bringing your multi-platform XR experiences to Android XR.

Unity is adding Android XR support to these popular XR packages:

We're also rolling out the Android XR Extensions for Unity with samples and innovative features such as mouse interaction profile, environment blend mode, personalized hand mesh, object tracking, and more.

"Having already brought Demeo to most commercially available platforms, it's safe to say we were impressed with the process of adapting the game to run on Android XR." 
– Johan Gastrin, CTO at Resolution Games

Check out our getting started guide for Unity and Unity’s blog post to learn more.

Vacation Simulator has been updated to Unity 6 and supports Android XR

Creating for the Web

Chrome on Android XR supports the WebXR standard. If you're building for the web, you can enhance existing sites with 3D content or build new immersive experiences. You can also use full-featured frameworks like three.js, A-Frame, or PlayCanvas to create virtual worlds, or you can use a simpler API like model-viewer so your users can visualize products in an e-commerce site. And because WebXR is an open standard, the same experiences you build for mobile AR devices or dedicated VR hardware seamlessly work on Android XR.
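As a rough, illustrative sketch (not taken from the Chrome documentation), feature-detecting WebXR and starting an immersive AR session from a page could look like this; 'immersive-ar' and 'hit-test' are standard WebXR names, but check current browser support before relying on them:

// Minimal WebXR feature detection and session request (sketch only).
// Assumes a button with id="enter-xr" exists on the page.
const xr = (navigator as any).xr;

async function enterXR() {
  if (!xr || !(await xr.isSessionSupported('immersive-ar'))) {
    console.log('Immersive AR sessions are not supported here.');
    return;
  }
  // requestSession must be called from a user gesture, such as this click handler.
  const session = await xr.requestSession('immersive-ar', {
    optionalFeatures: ['hit-test'],
  });
  session.addEventListener('end', () => console.log('XR session ended'));
  // Hand the session to your renderer, e.g. three.js: renderer.xr.setSession(session).
}

document.getElementById('enter-xr')?.addEventListener('click', enterXR);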

Learn more about developing with WebXR.

Chrome on Android XR supports WebXR features, including depth maps that allow virtual objects to interact with real-world surfaces

Built on Open Standards

We’re continuing the Android tradition of building with open standards. At the heart of the Android perception stack is OpenXR - a high-performance, cross-platform API focused on portability. Android XR is compliant with OpenXR 1.1, and we’re also expanding the OpenXR standard with leading-edge vendor extensions to introduce powerful world-sensing capabilities such as:

    • AI-powered hand mesh, designed to adapt to the shape and size of hands to better represent the diversity of your users
    • Sophisticated light estimation, for lighting your digital content to match real-world lighting conditions
    • New trackables that let you bring real world objects like laptops, phones, keyboards, and mice into a virtual environment

The Android XR SDK also supports open standard formats such as glTF 2.0 for 3D models and OpenEXR for high-dynamic-range environments.

Building the future together

We couldn't be more proud or excited to announce the Developer Preview of the Android XR SDK. We’re releasing this developer preview because we want to build the future of XR together with you. We welcome your feedback and can’t wait to work with you and build your ideas and suggestions into the platform. Your passion, expertise, and bold ideas are absolutely essential as we continue to build Android XR.

We look forward to interacting with your apps, reimagined to take advantage of the unique spatial capabilities of Android XR, using familiar tools like Android Studio and Jetpack Compose. We’re eager to visit the amazing 3D worlds you build using powerful tools and open standards like Unity and OpenXR. Most of all, we can’t wait to go on this journey with all of you that make up the amazing community of Android and Unity developers.

To get started creating and developing for Android XR, check out developer.android.com/develop/xr where you will find all of the tools, libraries and resources you need to create with the Android XR SDK! If you are interested in getting access to prerelease hardware and collaborating with the Android XR team, express your interest to participate in an Android XR Developer Bootcamp in 2025 by filling out this form.

7 dos and don’ts of using ML on the web with MediaPipe

Posted by Jen Person, Developer Relations Engineer

If you're a web developer looking to bring the power of machine learning (ML) to your web apps, then check out MediaPipe Solutions! With MediaPipe Solutions, you can deploy custom tasks to solve common ML problems in just a few lines of code. View the guides in the docs and try out the web demos on Codepen to see how simple it is to get started. While MediaPipe Solutions handles a lot of the complexity of ML on the web, there are still a few things to keep in mind that go beyond the usual JavaScript best practices. I've compiled them here in this list of seven dos and don'ts. Do read on to get some good tips!


❌ DON'T bundle your model in your app

As a web developer, you're accustomed to making your apps as lightweight as possible to ensure the best user experience. When you have larger items to load, you already know that you want to download them in a thoughtful way that allows the user to interact with the content quickly rather than having to wait for a long download. Strategies like quantization have made ML models smaller and accessible to edge devices, but they're still large enough that you don't want to bundle them in your web app. Store your models in the cloud storage solution of your choice. Then, when you initialize your task, the model and WebAssembly binary will be downloaded and initialized. After the first page load, use local storage or IndexedDB to cache the model and binary so future page loads run even faster. You can see an example of this in this touchless ATM sample app on GitHub.
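For example, one way to follow this advice (a sketch, not code from the sample app, and the model URL is a placeholder) is to fetch the model from your cloud storage bucket and keep a copy in the browser's Cache Storage so repeat visits skip the network:

// Fetch a model from cloud storage, caching it locally after the first load.
// MODEL_URL is a placeholder for wherever you host the model file.
const MODEL_URL = 'https://storage.example.com/models/gesture_recognizer.task';

async function getModelData(): Promise<ArrayBuffer> {
  const cache = await caches.open('ml-models');
  let response = await cache.match(MODEL_URL);
  if (!response) {
    response = await fetch(MODEL_URL);
    // Store a copy so future page loads read from the cache instead of the network.
    await cache.put(MODEL_URL, response.clone());
  }
  return response.arrayBuffer();
}

The tasks' baseOptions accept either a hosted model path or raw model bytes, so check the API docs for the field that best fits how you cache.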


✅ DO initialize your task early

Task initialization can take a bit of time depending on model size, connection speed, and device type. Therefore, it's a good idea to initialize the solution before user interaction. In the majority of the code samples on Codepen, initialization takes place on page load. Keep in mind that these samples are meant to be as simple as possible so you can understand the code and apply it to your own use case. Initializing your model on page load might not make sense for you. Just focus on finding the right place to spin up the task so that processing is hidden from the user.
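One pattern (a sketch, not one of the Codepen samples; the asset paths are placeholders) is to kick off initialization as soon as the page loads, hold onto the promise, and await it at the point of first use so the loading overlaps with everything else the page is doing:

import { FilesetResolver, PoseLandmarker } from '@mediapipe/tasks-vision';

// Start loading the WASM runtime and model immediately; don't await at the top level.
const poseLandmarkerPromise = (async () => {
  const vision = await FilesetResolver.forVisionTasks('/wasm'); // path to the WASM assets you host
  return PoseLandmarker.createFromOptions(vision, {
    baseOptions: { modelAssetPath: '/models/pose_landmarker.task' },
  });
})();

// By the time the user needs detection, the task is usually already initialized.
async function detectOnImage(image: HTMLImageElement) {
  const poseLandmarker = await poseLandmarkerPromise;
  return poseLandmarker.detect(image);
}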

After initialization, you should warm up the task by passing a placeholder image through the model. This example shows a function for running a 1x1 pixel canvas through the Pose Landmarker task:

function dummyDetection(poseLandmarker: PoseLandmarker) {
  const width = 1;
  const height = 1;
  const canvas = document.createElement('canvas');
  canvas.width = width;
  canvas.height = height;
  const ctx = canvas.getContext('2d');
  ctx.fillStyle = 'rgba(0, 0, 0, 1)';
  ctx.fillRect(0, 0, width, height);
  poseLandmarker.detect(canvas);
}


✅ DO clean up resources

One of my favorite parts of JavaScript is automatic garbage collection. In fact, I can't remember the last time memory management crossed my mind. Hopefully you've cached a little information about memory in your own memory, as you'll need just a bit of it to make the most of your MediaPipe task. MediaPipe Solutions for web uses WebAssembly (WASM) to run C++ code in-browser. You don't need to know C++, but it helps to know that C++ makes you take out your own garbage. If you don't free up unused memory, you will find that your web page uses more and more memory over time. It can have performance issues or even crash.

When you're done with your solution, free up resources using the .close() method.

For example, I can create a gesture recognizer using the following code:

const createGestureRecognizer = async () => {
  const vision = await FilesetResolver.forVisionTasks(
    "https://cdn.jsdelivr.net/npm/@mediapipe/[email protected]/wasm"
  );
  gestureRecognizer = await GestureRecognizer.createFromOptions(vision, {
    baseOptions: {
      modelAssetPath: "https://storage.googleapis.com/mediapipe-models/gesture_recognizer/gesture_recognizer/float16/1/gesture_recognizer.task",
      delegate: "GPU"
    },
  });
};
createGestureRecognizer();

Once I'm done recognizing gestures, I dispose of the gesture recognizer using the close() method:

gestureRecognizer.close();

Each task has a close method, so be sure to use it where relevant! Some tasks have close() methods for the returned results, so refer to the API docs for details.


✅ DO try out tasks in MediaPipe Studio

When deciding on or customizing your solution, it's a good idea to try it out in MediaPipe Studio before writing your own code. MediaPipe Studio is a web-based application for evaluating and customizing on-device ML models and pipelines for your applications. The app lets you quickly test MediaPipe solutions in your browser with your own data, and your own customized ML models. Each solution demo also lets you experiment with model settings for the total number of results, minimum confidence threshold for reporting results, and more. You'll find this especially useful when customizing solutions so you can see how your model performs without needing to create a test web page.

Screenshot of Image Classification page in MediaPipe Studio


✅ DO test on different devices

It's always important to test your web apps on various devices and browsers to ensure they work as expected, but I think it's worth adding a reminder here to test early and often on a variety of platforms. You can use MediaPipe Studio to test devices as well so you know right away that a solution will work on your users' devices.


❌ DON'T default to the biggest model

Each task lists one or more recommended models. For example, the Object Detection task lists three different models, each with benefits and drawbacks based on speed, size and accuracy. It can be tempting to think that the most important thing is to choose the model with the very highest accuracy, but if you do so, you will be sacrificing speed and increasing the size of your model. Depending on your use case, your users might benefit from a faster result rather than a more accurate one. The best way to compare model options is in MediaPipe Studio. I realize that this is starting to sound like an advertisement for MediaPipe Studio, but it really does come in handy here!

photo of a whale breaching against a background of clouds in a deep, vibrant blue sky

✅ DO reach out!

Do you have any dos or don'ts of ML on the web that you think I missed? Do you have questions about how to get started? Or do you have a cool project you want to share? Reach out to me on LinkedIn and tell me all about it!

Google and Binomial partner to open source high quality Basis Universal

Today, Google and Binomial are excited to announce the high quality update to the original Basis Universal release.

Basis Universal allows you to have state of the art web performance with your images, keeping images compressed even on the GPU. Older systems like JPEG and PNG may look small in storage size, but once they hit the GPU they are processed as uncompressed data! The original Basis Universal codec created images that were 6-8 times smaller than JPEG on the GPU while maintaining a similar storage size.

Today we release a high quality Basis Universal codec that utilizes the highest quality formats modern GPUs support, finally bringing the web up to modern GPU texture standards—with cross platform support. The textures are larger in storage size and GPU compressed size, but are still 3-4 times smaller than sending a JPEG or PNG file to be processed on the GPU, and can transcode to a lower quality format for older GPUs.
Visual comparison of Basis Universal High Quality (original image by Erol Ahmed from Unsplash.com)

Best of all, we are actively working on standardizing Basis Universal with the Khronos Group.

Since our original release in Summer 2019 we’ve seen widespread adoption of Basis Universal in engines like three.js, Babylon.js, Godot, and more, changing what is possible for people to create on the web. Now that a high quality option is available, we expect to see even more adoption and groundbreaking applications created with it.
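In three.js, for example, Basis Universal data packaged in a KTX 2.0 container can be loaded with KTX2Loader, which detects what the current GPU supports and transcodes accordingly. A rough sketch (transcoder path and texture URL are placeholders, and import paths vary by three.js version):

import * as THREE from 'three';
import { KTX2Loader } from 'three/examples/jsm/loaders/KTX2Loader.js';

const renderer = new THREE.WebGLRenderer();

// Point the loader at the Basis transcoder files you host, and let it pick
// the best GPU texture format the current device supports.
const ktx2Loader = new KTX2Loader()
  .setTranscoderPath('/basis/') // placeholder path to basis_transcoder.js/.wasm
  .detectSupport(renderer);

ktx2Loader.load('textures/example.ktx2', (texture) => {
  const material = new THREE.MeshBasicMaterial({ map: texture });
  // Use the material on a mesh; the texture stays GPU-compressed in memory.
});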

Please feel free to join our community on GitHub and check out the full demo there as well. You can also follow standardization efforts via Khronos Group events and forums.

By Stephanie Hurlburt, Binomial and Jamieson Brettle, Chrome Media

New experimental features for Daydream

Posted by Jonathan Huang, Senior Product Manager, Google AR/VR

Since we first launched Daydream, developers have responded by creating virtual reality (VR) experiences that are entertaining, educational and useful. Today, we're announcing a new set of experimental features for developers to use on the Lenovo Mirage Solo—our standalone Daydream headset—to continue to push the platform forward. Here's what's coming:

Experimental 6DoF Controllers

First, we're adding APIs to support positional controller tracking with six degrees of freedom—or 6DoF—to the Mirage Solo. With 6DoF tracking, you can move your hands more naturally in VR, just like you would in the physical world. To date, this type of experience has been limited to expensive PC-based VR with external tracking.

We've also created experimental 6DoF controllers that use a unique optical tracking system to help developers start building with 6DoF features on the Mirage Solo. Instead of using expensive external cameras and sensors that have to be carefully calibrated, our system uses machine learning and off-the-shelf parts to accurately estimate the 3D position and orientation of the controllers. We're excited about this approach because it can reduce the need for expensive hardware and make 6DoF experiences more accessible to more people.

We've already put these experimental controllers in the hands of a few developers and we're excited for more developers to start testing them soon.

Experimental 6DoF controllers

See-Through Mode

We're also introducing what we call see-through mode, which gives you the ability to see what's right in front of you in the physical world while you're wearing your VR headset.

See-through mode takes advantage of our WorldSense technology, which was built to provide accurate, low latency tracking. And, because the tracking cameras on the Mirage Solo are positioned at approximately eye-distance apart, you also get precise depth perception. The result is a see-through mode good enough to let you play ping pong with your headset on.

Playing ping pong with see-through mode on the Mirage Solo.

The combination of see-through mode and the Mirage Solo's tracking technology also opens up the door for developers to blend the digital and physical worlds in new ways by building Augmented Reality (AR) prototypes. Imagine, for example, an interior designer being able to plan a new layout for a room by adding virtual chairs, tables and decorations on top of the actual space.

Experimental app using objects from Poly, see-through mode and 6DoF Controllers to design a space in our office.

Smartphone Android Apps in VR

Finally, we're introducing the capability to open any smartphone Android app on your Daydream device, so you can use your favorite games, tools and more in VR. For example, you can play the popular indie game Mini Metro on a virtual big screen, so you have more space to view and plan your own intricate public transit system.

Playing Mini Metro on a virtual big screen in VR.

With support for Android Apps in VR, developers will be able to add Daydream VR support to their existing 2D applications without having to start from scratch. The Chrome team re-used the existing 2D interfaces for Chrome Browser Sync, settings and more to provide a feature-rich browsing experience in Daydream.

The Chrome app on Daydream uses the 2D settings within VR.

Try These Features

We've really loved building with these tools and can't wait to see what you do with them. See-through mode and Android Apps in VR will be available for all developers to try soon.

If you're a developer in the U.S., click here to learn more and apply now for an experimental 6DoF controller developer kit.

Creating AR Experiences for I/O: Our Process

Posted by Karin Levi, Product Marketing, ARCore

A few weeks ago at Google I/O we released a major update to ARCore, Google's AR development platform. We added new APIs like Cloud Anchors, which enable multi-user, collaborative AR experiences, and Augmented Images, which let you bring 2D images to life with 3D content. All of these updates are going to change the way we use AR today and enable developers to create richer, more immersive AR apps.

With these new capabilities, we decided to put our platform to the test, so we built real experiences to showcase how they all come to life. All of the demos were presented in the I/O AR & VR sandbox area, and we open sourced them so you can see how simple it is to build these experiences. We're pretty happy with how they turned out and would love to share some learnings and insights from behind the scenes.

Light Board - Multiplayer game

Light Board is an AR multiplayer tabletop game where two players on floating game boards launch colored projectiles at each other.

While building Light Board it was important for us to keep in mind who the end users are. We wanted it to be a simple/fun game for developers to try out while visiting the I/O sandbox. The developers would only have a couple minutes to play while passing through, so it needed to allow players (even non-gamers) to pick it up and play with very little setup.

The artwork for Light Board was a major focus. Our mission for the look of the game was to align with the design and decor of I/O 2018. This way, our app would feel like an extension of everything the attendees saw around them. As a result, our design philosophy had 3 goals; bright accent colors, simple graphic shapes and natural physical materials.

Left: Design for AR/VR Sandbox at I/O 2018. Right: Key art for Light Board game boards

The artwork was created in Maya and Cinema 4D. We created physically based materials for our models using Substance Painter. Just as continuous iteration is crucial for engineering, it is also important when creating art assets. With that in mind, we kept careful track of our content pipeline, even for this relatively simple project. This allowed us to quickly try out different looks and board styles before settling on our final design.

On the engineering front we selected the Unity game engine as our dev environment. Unity gives us a couple of important advantages. First, it is easy to get great looking 3D graphics up and running right away. Second, the engine component is already complete, so we could immediately start iterating on gameplay code. As with the artwork, this allowed us to test gameplay options before we made a final decision. Additionally, Unity gave us support for both Android and iOS with only a little extra work.

To handle the multiplayer aspect we used Firebase Realtime Database. We were concerned with network performance at the event, and felt that the persistent nature of a database would make it more tolerant of poor networks. As it turned out, it worked very well and we got the ability to quit and rejoin games for free!

We had a lot of fun building Light Board and we hope people can use it as an example of how easy it can be to not only build AR apps, but to use really cool features like Cloud Anchors. Please check out our open source repo and give Light Board a try!

Just a line - Draw with your friends

In March, we released Just a Line, an Android app that lets you draw in the air with your phone. It's a simple experiment meant to showcase the power of ARCore. At Google I/O, we added Cloud Anchors to the app so that two people can draw at once in the same space, even if one of them is using Android and the other iOS.

Both apps were built natively: the Android version was written in Android Studio, and the iOS version was built in Xcode. ARCore's Cloud Anchors enable Just a Line to pair two phones, allowing users to draw simultaneously in a shared space. Pairing works across Android and iOS devices, and drawings are synchronized live through a Firebase Realtime Database. You can find the open-source code for iOS here and for Android here.

Illusive Images - Art exhibition comes to life

"Illusive Images" demo is an augmented gallery consisting of 3 artworks, each exploring a different augmented image use case and user experience. As one walks from side to side, around the object, or gazes in a specific direction, 2D artworks are married with 3D, inviting the viewer to enter into the space of the artwork spanning well beyond the physical frame.

Due to the visual design nature of our augmented images, we experimented a lot with creating databases with varying degrees of features. In order to get the best results, we iterated quickly by resizing the canvas for the artwork. We also moved and stretched the brightness and contrast levels. These variations helped to achieve the most optimal image without compromising design intent.

The app was built in Unity with ARCore, with the majority of assets created in Cinema 4D. Mograph animations were imported into Unity as an fbx, and driven entirely by the position of the user in relation to the artwork. An example project can be found here.

To make your development experience easier, we open sourced all the demos our team built. We hope you find this useful! You can also visit our website to learn more and start building AR experiences today.

Open Sourcing Resonance Audio

Posted by Eric Mauskopf, Product Manager

Spatial audio adds to your sense of presence when you're in VR or AR, making it feel and sound like you're surrounded by a virtual or augmented world. And regardless of the display hardware you're using, spatial audio makes it possible to hear sounds coming from all around you.

Resonance Audio, our spatial audio SDK launched last year, enables developers to create more realistic VR and AR experiences on mobile and desktop. We've seen a number of exciting experiences emerge across a variety of platforms using our SDK. Recent examples include apps like Pixar's Coco VR for Gear VR, Disney's Star Wars™: Jedi Challenges AR app for Android and iOS, and Runaway's Flutter VR for Daydream, which all used Resonance Audio technology.

To accelerate adoption of immersive audio technology and strengthen the developer community around it, we’re opening Resonance Audio to a community-driven development model. By creating an open source spatial audio project optimized for mobile and desktop computing, any platform or software development tool provider can easily integrate with Resonance Audio. More cross-platform and tooling support means more distribution opportunities for content creators, without the worry of investing in costly porting projects.

What's Included in the Open Source Project

As part of our open source project, we're providing a reference implementation of YouTube's Ambisonic-based spatial audio decoder, compatible with the same Ambisonics format (Ambix ACN/SN3D) used by others in the industry. Using our reference implementation, developers can easily render Ambisonic content in their VR media and other applications, while benefiting from Ambisonics' open source, royalty-free model. The project also includes encoding, sound field manipulation and decoding techniques, as well as head related transfer functions (HRTFs) that we've used to achieve rich spatial audio that scales across a wide spectrum of device types and platforms. Lastly, we're making our entire library of highly optimized DSP classes and functions open to all. This includes resamplers, convolvers, filters, delay lines and other DSP capabilities. Additionally, developers can now use Resonance Audio's brand new Spectral Reverb, an efficient, high quality, constant complexity reverb effect, in their own projects.
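Resonance Audio also has a Web SDK built on the Web Audio API. As a rough, illustrative sketch of spatializing a single sound with it (names follow the Web SDK's documentation, but treat the details as assumptions and verify against the current docs):

// Sketch: spatialize one audio element with the Resonance Audio Web SDK.
declare const ResonanceAudio: any; // provided by the Resonance Audio Web SDK script

const audioContext = new AudioContext();
const resonance = new ResonanceAudio(audioContext);
resonance.output.connect(audioContext.destination);

// Create a source and place it one meter to the listener's right.
const source = resonance.createSource();
source.setPosition(1, 0, 0);

// Route an <audio> element through the spatializer.
const audioElement = new Audio('speech.wav'); // placeholder asset
const elementSource = audioContext.createMediaElementSource(audioElement);
elementSource.connect(source.input);
audioElement.play();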

We've open sourced Resonance Audio as a standalone library and associated engine plugins, VST plugin, tutorials, and examples with the Apache 2.0 license. Most importantly, this means Resonance Audio is yours, so you're free to use Resonance Audio in your projects, no matter where you work. And if you see something you'd like to improve, submit a GitHub pull request to be reviewed by the Resonance Audio project committers. While the engine plugins for Unity, Unreal, FMOD, and Wwise will remain open source, going forward they will be maintained by project committers from our partners, Unity, Epic, Firelight Technologies, and Audiokinetic, respectively.

If you're interested in learning more about Resonance Audio, check out the documentation on our developer site. If you want to get more involved, visit our GitHub to access the source code, build the project, download the latest release, or even start contributing. We're looking forward to building the future of immersive audio with all of you.


Announcing ARCore 1.0 and new updates to Google Lens

Anuj Gosalia, Director of Engineering, AR

With ARCore and Google Lens, we're working to make smartphone cameras smarter. ARCore enables developers to build apps that can understand your environment and place objects and information in it. Google Lens uses your camera to help make sense of what you see, whether that's automatically creating contact information from a business card before you lose it, or soon being able to identify the breed of a cute dog you saw in the park. At Mobile World Congress, we're launching ARCore 1.0 along with new support for developers, and we're releasing updates for Lens and rolling it out to more people.

ARCore, Google's augmented reality SDK for Android, is out of preview and launching as version 1.0. Developers can now publish AR apps to the Play Store, and it's a great time to start building. ARCore works on 100 million Android smartphones, and advanced AR capabilities are available on all of these devices. It works on 13 different models right now (Google's Pixel, Pixel XL, Pixel 2 and Pixel 2 XL; Samsung's Galaxy S8, S8+, Note8, S7 and S7 edge; LGE's V30 and V30+ (Android O only); ASUS's Zenfone AR; and OnePlus's OnePlus 5). And beyond those available today, we're partnering with many manufacturers to enable their upcoming devices this year, including Samsung, Huawei, LGE, Motorola, ASUS, Xiaomi, HMD/Nokia, ZTE, Sony Mobile, and Vivo.

Making ARCore work on more devices is only part of the equation. We're bringing developers additional improvements and support to make their AR development process faster and easier. ARCore 1.0 features improved environmental understanding that enables users to place virtual assets on textured surfaces like posters, furniture, toy boxes, books, cans and more. Android Studio Beta now supports ARCore in the Emulator, so you can quickly test your app in a virtual environment right from your desktop.

Everyone should get to experience augmented reality, so we're working to bring it to people everywhere, including China. We'll be supporting ARCore in China on partner devices sold there— starting with Huawei, Xiaomi and Samsung—to enable them to distribute AR apps through their app stores.

We've partnered with a few great developers to showcase how they're planning to use AR in their apps. Snapchat has created an immersive experience that invites you into a "portal"—in this case, FC Barcelona's legendary Camp Nou stadium. Visualize different room interiors inside your home with Sotheby's International Realty. See Porsche's Mission E Concept vehicle right in your driveway, and explore how it works. With OTTO AR, choose pieces from an exclusive set of furniture and place them, true to scale, in a room. Ghostbusters World, based on the film franchise, is coming soon. In China, place furniture and over 100,000 other pieces with Easyhome Homestyler, see items and place them in your home when you shop on JD.com, or play games from NetEase, Wargaming and Game Insight.

With Google Lens, your phone's camera can help you understand the world around you, and, we're expanding availability of the Google Lens preview. With Lens in Google Photos, when you take a picture, you can get more information about what's in your photo. In the coming weeks, Lens will be available to all Google Photos English-language users who have the latest version of the app on Android and iOS. Also over the coming weeks, English-language users on compatible flagship devices will get the camera-based Lens experience within the Google Assistant. We'll add support for more devices over time.

And while it's still a preview, we've continued to make improvements to Google Lens. Since launch, we've added text selection features, the ability to create contacts and events from a photo in one tap, and—in the coming weeks—improved support for recognizing common animals and plants, like different dog breeds and flowers.

Smarter cameras will enable our smartphones to do more. With ARCore 1.0, developers can start building delightful and helpful AR experiences for them right now. And Lens, powered by AI and computer vision, makes it easier to search and take action on what you see. As these technologies continue to grow, we'll see more ways that they can help people have fun and get more done on their phones.

Getting Started with the Poly API

Posted by Bruno Oliveira, Software Engineer

As developers, we all know that having the right assets is crucial to the success of a 3D application, especially with AR and VR apps. Since we launched Poly a few weeks ago, many developers have been downloading and using Poly models in their apps and games. To make this process easier and more powerful, today we launched the Poly API, which allows applications to dynamically search and download 3D assets at both edit and run time.

The API is REST-based, so it's inherently cross-platform. To help you make the API calls and convert the results into objects that you can display in your app, we provide several toolkits and samples for some common game engines and platforms. Even if your engine or platform isn't included in this list, remember that the API is based on HTTP, which means you can call it from virtually any device that's connected to the Internet.

Here are some of the things the API allows you to do:

  • List assets, with many possible filters:
    • keyword
    • category ("Animals", "Technology", "Transportation", etc.)
    • asset type (Blocks, Tilt Brush, etc)
    • complexity (low, medium, high complexity)
    • curated (only curated assets or all assets)
  • Get a particular asset by ID
  • Get the user's own assets
  • Get the user's liked assets
  • Download assets. Formats vary by asset type (OBJ, GLTF1, GLTF2).
  • Download material files and textures for assets.
  • Get asset metadata (author, title, description, license, creation time, etc)
  • Fetch thumbnails for assets
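Because the API is plain HTTP, you can also exercise it without any toolkit. Here is a hedged sketch of listing assets with fetch; the endpoint and parameter names follow the Poly API reference, but treat them as assumptions and confirm them on the developer site, and YOUR_API_KEY is a placeholder:

// List Poly assets matching a keyword, then log the download formats available.
const API_KEY = 'YOUR_API_KEY'; // placeholder: create a key in the Google Cloud console

async function listAssets(keywords: string) {
  const url = `https://poly.googleapis.com/v1/assets?keywords=${encodeURIComponent(keywords)}&format=GLTF2&key=${API_KEY}`;
  const response = await fetch(url);
  const data = await response.json();
  for (const asset of data.assets ?? []) {
    // Each asset lists one or more formats (OBJ, GLTF, GLTF2) with file URLs to download.
    console.log(asset.displayName, asset.formats.map((f: any) => f.formatType));
  }
}

listAssets('piano');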

Poly Toolkit for Unity Developers

If you are using Unity, we offer Poly Toolkit for Unity, a plugin that includes all the necessary functionality to automatically wrap the API calls and download and convert assets, exposing it through a simple C# API. For example, you can fetch and import an asset into your scene at runtime with a single line of code:

PolyApi.GetAsset(ASSET_ID, result => {
    PolyApi.Import(result.Value, PolyImportOptions.Default());
});

Poly Toolkit optionally also handles authentication for you, so that you can list the signed in user's own private assets, or the assets that the user has liked on the Poly website.

In addition, Poly Toolkit for Unity also comes with an editor window, where you can search for and import assets from Poly into your Unity scene directly from the editor.

Poly Toolkit for Unreal Developers

If you are using Unreal, we also offer Poly Toolkit for Unreal, which wraps the API and performs automatic download and conversion of OBJs and Blocks models from Poly. It allows you to query for assets and filter results, download assets and import assets as ready-to-use Unreal actors that you can use in your game.

Credit: Piano by Bruno Oliveira

How to use Poly API with Android, Web or iOS app

Not using a game engine? No problem! If you are developing for Android, check out our Android sample code, which includes a basic sample with no external dependencies, and also a sample that shows how to use the Poly API in conjunction with ARCore. The samples include:

  • Asynchronous HTTP connections to the Poly API.
  • Asynchronous downloading of asset files.
  • Conversion of OBJ and MTL files to OpenGL-compatible VBOs and IBOs.
  • Examples of basic shaders.
  • Integration with ARCore (dynamically downloads an object from Poly and lets the user place it in the scene).

Credit: Cactus wren by Poly by Google

If you are an iOS developer, we have two samples for you as well: one using SceneKit and one using ARKit, showing how to build an iOS app that downloads and imports models from Poly. This includes all the logic necessary to open an HTTP connection, make the API requests, parse the results, build the 3D objects from the data and place them on the scene.

For web developers, we also offer a complete WebGL sample using Three.js, showing how to get and display a particular asset, or perform searches. There is also a sample showing how to import and display Tilt Brush sketches.

Credit: Forest by Alex "SAFFY" Safayan
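As a companion to the WebGL sample, here is a minimal sketch of displaying a downloaded glTF asset with three.js' GLTFLoader. The URL is a placeholder (in practice it would come from an asset's formats in the API response), and a real app also needs to resolve the asset's supporting resource files:

import * as THREE from 'three';
import { GLTFLoader } from 'three/examples/jsm/loaders/GLTFLoader.js';

const scene = new THREE.Scene();
const loader = new GLTFLoader();

// Placeholder: in practice this URL comes from the Poly API response for a GLTF2 format.
const gltfUrl = 'https://example.com/asset/model.gltf';
loader.load(gltfUrl, (gltf) => {
  scene.add(gltf.scene);
});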

No matter what engine or platform you are using, we hope that the Poly API will help bring high quality assets to your app and help you increase engagement with your users! You can find more information about the Poly API and our toolkits and samples on our developers site.