
Creating More Realistic AR experiences with updates to ARCore & Sceneform

Posted by Ashish Shah, Product Manager, Google AR & VR

The magic of augmented reality is in the way it blends the digital and the physical worlds. For AR experiences to feel truly immersive, digital objects need to look realistic -- as if they were actually there with you, in your space. This is something we continue to prioritize as we update ARCore and Sceneform, our 3D rendering library for Java developers.

Today, with the release of ARCore 1.6, we're bringing further improvements to help you build more realistic and compelling experiences, including better plane boundary tracking and several lighting improvements in Sceneform.

With 250M devices now supporting ARCore, developers can bring these experiences to an even larger and growing user base.

More Realistic Lighting in Sceneform

Previous versions of Sceneform defaulted the ambient light to a yellow tint. Version 1.6 defaults to neutral, white light. This aligns more closely with the way light appears in the real world, making digital objects look more natural. You can see the difference below.

Left image: Sceneform 1.5. Right image: Sceneform 1.6.

This change will also make objects rendered with Sceneform look as if they're affected more naturally by color and lighting in the surrounding environment. For example, if you're viewing an AR object at sunset, it would appear to be illuminated by the red and orange hues, just like real objects in the scene.

In addition, we've updated Sceneform's built-in environmental image to provide a more neutral scene for your app. This will be most noticeable when viewing reflections in smooth metallic surfaces.

Adding screen capture and recording to the mix

To help you further improve quality and engagement in your AR apps, we're adding screen capture and recording to Sceneform. A number of developers have requested this to help with demo recording and prototyping. It can also be used as an external-facing feature, letting your users share screenshots and videos on social media more easily, which can help get the word out about your app.

You can access this functionality through the surface mirroring API for the SceneView class. The API allows you to display the Sceneform view on a device's screen at the same time it's being rendered to another surface (such as the input surface for the Android MediaRecorder).
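
As a rough sketch of how this fits together, the mirrored surface can be fed to an Android MediaRecorder configured with a Surface video source. The method names below (startMirroringToSurface / stopMirroringToSurface) follow Sceneform's video recording sample; treat the exact signatures as an assumption to verify against the release you're using.

```java
import android.media.MediaRecorder;
import android.view.Surface;
import com.google.ar.sceneform.SceneView;
import java.io.IOException;

// Sketch only: record the Sceneform view while it keeps rendering on screen.
public class SceneRecorder {
    private final MediaRecorder recorder = new MediaRecorder();
    private Surface recorderSurface;

    public void start(SceneView sceneView, String outputPath, int width, int height)
            throws IOException {
        recorder.setVideoSource(MediaRecorder.VideoSource.SURFACE);
        recorder.setOutputFormat(MediaRecorder.OutputFormat.MPEG_4);
        recorder.setVideoEncoder(MediaRecorder.VideoEncoder.H264);
        recorder.setVideoSize(width, height);
        recorder.setVideoFrameRate(30);
        recorder.setOutputFile(outputPath);
        recorder.prepare();
        recorder.start();

        // Mirror the rendered scene into the recorder's input surface.
        recorderSurface = recorder.getSurface();
        sceneView.startMirroringToSurface(recorderSurface, 0, 0, width, height);
    }

    public void stop(SceneView sceneView) {
        sceneView.stopMirroringToSurface(recorderSurface);
        recorder.stop();
        recorder.release();
    }
}
```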

Learn more and get started

The new updates to Sceneform and ARCore are available today. These new versions also add support for new devices, such as the Samsung Galaxy A3 and the Huawei P20 Lite, which join the list of ARCore-enabled devices. More information is available on the ARCore developer website.

The Instant Motion Tracking Behind Motion Stills AR



Last summer, we launched Motion Stills on Android, which delivered a great video capture and viewing experience on a wide range of Android phones. Then, we refined our Motion Stills technology further to enable the new motion photos feature in Pixel 2.

Today, we are excited to announce the new Augmented Reality (AR) mode in Motion Stills for Android. With the new AR mode, a user simply touches the viewfinder to place fun, virtual 3D objects on static or moving horizontal surfaces (e.g. tables, floors, or hands), allowing them to seamlessly interact with a dynamic real-world environment. You can also record and share the clips as GIFs and videos.
Motion Stills with instant motion tracking in action
AR mode is powered by instant motion tracking, a six-degree-of-freedom tracking system built upon the technology that powers Motion Text in Motion Stills on iOS and the privacy blur on YouTube to accurately track static and moving objects. We refined and enhanced this technology to enable fun AR experiences that can run on any Android device with a gyroscope.

When you touch the viewfinder, Motion Stills AR “sticks” a 3D virtual object to that location, making it look as if it’s part of the real-world scene. By assuming that the tracked surface is parallel to the ground plane, and using the device’s accelerometer sensor to provide the initial orientation of the phone with respect to the ground plane, one can track the six degrees of freedom of the camera (3 for translation and 3 for rotation). This allows us to accurately transform and render the virtual object within the scene.
When the phone is approximately steady, the accelerometer measures the acceleration due to Earth's gravity. For horizontal planes, the gravity vector is parallel to the normal of the tracked plane and can accurately provide the initial orientation of the phone.
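To make that step concrete, here is a small illustrative sketch (not the production code) of recovering the phone's tilt from a roughly static accelerometer reading, using standard Android sensor axes; the roll/pitch sign conventions are an assumption of the sketch.
```java
// Illustrative only: when the device is roughly still, the accelerometer reading is
// dominated by gravity, so the phone's tilt relative to the ground plane can be read
// off directly. Android sensor axes: x right, y toward the top edge, z out of the
// screen. Roll/pitch sign conventions here are an assumption of this sketch.
final class TiltFromGravity {
    /** Returns {roll, pitch} in radians from a single accelerometer sample. */
    static double[] rollPitch(double ax, double ay, double az) {
        double norm = Math.sqrt(ax * ax + ay * ay + az * az);
        double gx = ax / norm, gy = ay / norm, gz = az / norm;

        double roll = Math.atan2(gx, gz);                             // rotation about y
        double pitch = Math.atan2(-gy, Math.sqrt(gx * gx + gz * gz)); // rotation about x
        return new double[] {roll, pitch};
    }
}
```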
Instant Motion Tracking
The core idea behind instant motion tracking is to decouple the camera’s translation and rotation estimation, treating them instead as independent optimization problems. First, we determine the 3D camera translation solely from the visual signal of the camera. To do this, we observe the target region's apparent 2D translation and relative scale across frames. A simple pinhole camera model relates both translation and scale of a box in the image plane with the final 3D translation of the camera.
The translation and the change in size (relative scale) of the box in the image plane can be used to determine the 3D translation between two camera positions C1 and C2. However, as our camera model doesn't assume the focal length of the camera lens, we do not know the true distance/depth of the tracked plane.
To account for this, we added scale estimation to our existing tracker (the one used in Motion Text), as well as region tracking outside the field of view of the camera. When the camera gets closer to the tracked surface, the virtual content scales accurately, consistent with the perception of real-world objects. When you pan outside the field of view of the target region and back, the virtual object reappears in approximately the same spot.
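As a worked illustration of that pinhole relation (not the tracker's actual implementation), the tracked box's 2D motion and relative scale between two frames determine the camera translation up to an unknown global scale; the nominal focal length used below is an assumption of the sketch.
```java
// Illustrative pinhole-model sketch. For a target of (unknown) physical size W at
// camera-frame position (X, Y, Z), a pinhole camera with focal length f gives
//   u = f * X / Z,   v = f * Y / Z,   s = f * W / Z,
// where (u, v) is the box center relative to the image center and s its apparent size.
// Choosing W = 1 as the arbitrary unit yields the translation up to that scale.
final class TranslationFromBox {
    /** Camera translation (up to global scale) between two observations of the box. */
    static double[] relativeTranslation(
            double f,                          // nominal focal length in pixels (assumed)
            double u1, double v1, double s1,   // frame 1: box center and apparent size
            double u2, double v2, double s2) { // frame 2: box center and apparent size
        // Target position in each camera frame with W = 1: X = u/s, Y = v/s, Z = f/s.
        double x1 = u1 / s1, y1 = v1 / s1, z1 = f / s1;
        double x2 = u2 / s2, y2 = v2 / s2, z2 = f / s2;

        // The camera's translation is the negative of the target's apparent motion.
        return new double[] {-(x2 - x1), -(y2 - y1), -(z2 - z1)};
    }
}
```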
Independent translation (from visual signal only as shown by red box) and rotation tracking (from gyro; not shown)
Finally, we obtain the device's 3D rotation (roll, pitch and yaw) using the phone's built-in gyroscope. The estimated 3D translation combined with the 3D rotation lets us render the virtual content correctly in the viewfinder. And because we treat rotation and translation separately, our instant motion tracking approach is calibration-free and works on any Android device with a gyroscope.
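A minimal sketch of that final composition step, assuming a column-major OpenGL-style matrix layout (the helper and parameter names are invented for illustration):
```java
// Illustrative only: combine the gyro-derived 3x3 rotation with the estimated
// translation into a 4x4 camera pose matrix (column-major, OpenGL convention).
// Inverting it (e.g. with android.opengl.Matrix.invertM) gives a view matrix
// for rendering the virtual object.
final class PoseCompose {
    static float[] poseMatrix(float[] rotation3x3, float[] translation) {
        float[] m = new float[16];
        for (int c = 0; c < 3; c++) {
            for (int r = 0; r < 3; r++) {
                m[c * 4 + r] = rotation3x3[r * 3 + c]; // rotation block (row-major input)
            }
        }
        m[12] = translation[0]; // last column holds the camera position
        m[13] = translation[1];
        m[14] = translation[2];
        m[15] = 1f;
        return m;
    }
}
```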
Augmented chicken family with Motion Stills AR mode
We are excited to bring this new mode to Motion Stills for Android, and we hope you’ll enjoy it. Please download the new release of Motion Stills and keep sending us feedback with #motionstills on your favorite social media.

Acknowledgements
For rendering, we are thankful we were able to leverage Google’s Lullaby engine using animated Poly models. A thank you to our team members who worked on the tech and this launch with us: John Nack, Suril Shah, Igor Kibalchich, Siarhei Kazakou, and Matthias Grundmann.

ARCore: Augmented reality at Android scale

Posted by Dave Burke, VP, Android Engineering

With more than two billion active devices, Android is the largest mobile platform in the world. And for the past nine years, we've worked to create a rich set of tools, frameworks and APIs that deliver developers' creations to people everywhere. Today, we're releasing a preview of a new software development kit (SDK) called ARCore. It brings augmented reality capabilities to existing and future Android phones. Developers can start experimenting with it right now.

We've been developing the fundamental technologies that power mobile AR over the last three years with Tango, and ARCore is built on that work. But it works without any additional hardware, which means it can scale across the Android ecosystem. ARCore will run on millions of devices, starting today with the Pixel and Samsung's S8, running 7.0 Nougat and above. We're targeting 100 million devices by the end of the preview. We're working with manufacturers like Samsung, Huawei, LG, ASUS and others to make this possible with a consistent bar for quality and high performance.

ARCore works with Java/OpenGL, Unity and Unreal and focuses on three things:

  • Motion tracking: Using the phone's camera to observe feature points in the room, together with IMU sensor data, ARCore determines both the position and orientation (pose) of the phone as it moves. Virtual objects remain accurately placed.
  • Environmental understanding: It's common for AR objects to be placed on a floor or a table. ARCore can detect horizontal surfaces using the same feature points it uses for motion tracking.
  • Light estimation: ARCore observes the ambient light in the environment and makes it possible for developers to light virtual objects in ways that match their surroundings, making their appearance even more realistic.
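
These three capabilities surface directly in the ARCore session API. Below is a rough per-frame sketch in Java; the class and method names follow the ARCore SDK, but exact signatures have shifted between releases, so treat this as an approximation rather than the definitive API.

```java
import com.google.ar.core.Frame;
import com.google.ar.core.LightEstimate;
import com.google.ar.core.Plane;
import com.google.ar.core.Pose;
import com.google.ar.core.Session;
import com.google.ar.core.TrackingState;

// Approximate per-frame usage of the three capabilities listed above.
final class ArCoreFrameSketch {
    void onDrawFrame(Session session) throws Exception {
        Frame frame = session.update(); // advances tracking for this frame

        // Motion tracking: the camera's position and orientation (pose) in world space.
        Pose cameraPose = frame.getCamera().getPose();

        // Environmental understanding: horizontal planes detected from feature points.
        for (Plane plane : session.getAllTrackables(Plane.class)) {
            if (plane.getTrackingState() == TrackingState.TRACKING) {
                Pose planeCenter = plane.getCenterPose(); // a candidate placement point
            }
        }

        // Light estimation: match virtual lighting to the ambient scene brightness.
        LightEstimate light = frame.getLightEstimate();
        float ambientIntensity = light.getPixelIntensity();
    }
}
```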

Alongside ARCore, we've been investing in apps and services which will further support developers in creating great AR experiences. We built Blocks and Tilt Brush to make it easy for anyone to quickly create great 3D content for use in AR apps. As we mentioned at I/O, we're also working on Visual Positioning Service (VPS), a service which will enable world scale AR experiences well beyond a tabletop. And we think the Web will be a critical component of the future of AR, so we're also releasing prototype browsers for web developers so they can start experimenting with AR, too. These custom browsers allow developers to create AR-enhanced websites and run them on both Android/ARCore and iOS/ARKit.

ARCore is our next step in bringing AR to everyone, and we'll have more to share later this year. Let us know what you think through GitHub, and check out our new AR Experiments showcase where you can find some fun examples of what's possible. Show us what you build on social media with #ARCore; we'll be resharing some of our favorites.