Tag Archives: 3d

Google and Binomial partner to open source high-quality Basis Universal

Today, Google and Binomial are excited to announce the high quality update to the original Basis Universal release.

Basis Universal gives your images state-of-the-art web performance by keeping them compressed even on the GPU. Older image formats like JPEG and PNG may look small in storage, but once they hit the GPU they are expanded to uncompressed data! The original Basis Universal codec created images that were 6-8 times smaller than JPEG on the GPU while maintaining a similar storage size.

Today we release a high-quality Basis Universal codec that utilizes the highest-quality formats modern GPUs support, finally bringing the web up to modern GPU texture standards, with cross-platform support. The textures are larger in storage size and GPU compressed size, but are still 3-4 times smaller than sending a JPEG or PNG file to be processed on the GPU, and can transcode to a lower-quality format for older GPUs.
Original Image by Erol Ahmed from Unsplash.com
Visual comparison of Basis Universal High Quality

Best of all, we are actively working on standardizing Basis Universal with the Khronos Group.

Since our original release in Summer 2019 we’ve seen widespread adoption of Basis Universal in engines like three.js, Babylon.js, Godot, and more, changing what is possible for people to create on the web. Now that a high quality option is available, we expect to see even more adoption and groundbreaking applications created with it.

Please feel free to join our community on GitHub and check out the full demo there as well. You can also follow standardization efforts via Khronos Group events and forums.

By Stephanie Hurlburt, Binomial and Jamieson Brettle, Chrome Media

New UI tools and a richer creative canvas come to ARCore

Posted by Evan Hardesty Parker, Software Engineer

ARCore and Sceneform give developers simple yet powerful tools for creating augmented reality (AR) experiences. In our last update (version 1.6) we focused on making virtual objects appear more realistic within a scene. In version 1.7, we're focusing on creative elements like AR selfies and animation as well as helping you improve the core user experience in your apps.

Creating AR Selfies

Example of 3D face mesh application

ARCore's new Augmented Faces API (available on the front-facing camera) offers a high-quality, 468-point 3D mesh that lets users attach fun effects to their faces. From animated masks, glasses, and virtual hats to skin retouching, the mesh provides coordinates and region-specific anchors that make it possible to add these delightful effects.

You can get started in Unity or Sceneform by creating an ARCore session with the "front-facing camera" and Augmented Faces "mesh" mode enabled. Note that other AR features such as plane detection aren't currently available when using the front-facing camera. AugmentedFace extends Trackable, so faces are detected and updated just like planes, Augmented Images, and other trackables.

// Create an ARCore session that supports Augmented Faces for use in Sceneform.
public Session createAugmentedFacesSession(Activity activity) throws UnavailableException {
  // Use the front-facing (selfie) camera.
  Session session = new Session(activity, EnumSet.of(Session.Feature.FRONT_CAMERA));
  // Enable Augmented Faces.
  Config config = session.getConfig();
  config.setAugmentedFaceMode(Config.AugmentedFaceMode.MESH3D);
  session.configure(config);
  return session;
}
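
Once the session is configured, detected faces can be queried like any other trackable. Here is a minimal sketch of reading them back each frame, using the Session returned by createAugmentedFacesSession() above; it assumes you already have an update loop (for example, via Sceneform's ArSceneView), and uses the standard AugmentedFace pose accessors.

// Query the faces ARCore is currently tracking, just like planes or Augmented Images.
Collection<AugmentedFace> faces = session.getAllTrackables(AugmentedFace.class);
for (AugmentedFace face : faces) {
  if (face.getTrackingState() == TrackingState.TRACKING) {
    // Center of the face mesh; region poses (e.g. the nose tip) are also exposed.
    Pose centerPose = face.getCenterPose();
    Pose noseTipPose = face.getRegionPose(AugmentedFace.RegionType.NOSE_TIP);
    // Attach renderables or effects to these poses here.
  }
}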

Animating characters in your Sceneform AR apps

Another way version 1.7 expands the AR creative canvas is by letting your objects dance, jump, spin and move around with support for animations in Sceneform. To start an animation, initialize a ModelAnimator (an extension of the existing Android animation support) with animation data from your ModelRenderable.

void startDancing(ModelRenderable andyRenderable) {
  AnimationData data = andyRenderable.getAnimationData("andy_dancing");
  animator = new ModelAnimator(data, andyRenderable);
  animator.start();
}
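
If you are unsure which animations a renderable contains, you can enumerate its animation data and pick one by name. The snippet below is a hedged sketch: it assumes the indexed getAnimationData and getAnimationDataCount accessors on ModelRenderable that ship alongside ModelAnimator, and a TAG constant defined elsewhere in your class.

// List the animations baked into the renderable before choosing one to play.
for (int i = 0; i < andyRenderable.getAnimationDataCount(); i++) {
  AnimationData data = andyRenderable.getAnimationData(i);
  Log.i(TAG, "Animation " + data.getName() + " lasts " + data.getDurationMs() + " ms");
}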

Solving common AR UX challenges in Unity with new UI components

In ARCore version 1.7 we also focused on helping you improve your user experience with a simplified workflow. We've integrated "ARCore Elements" -- a set of common AR UI components that have been validated with user testing -- into the ARCore SDK for Unity. You can use ARCore Elements to insert AR interaction patterns in your apps without having to reinvent the wheel. ARCore Elements also makes it easier to follow Google's recommended AR UX guidelines.

ARCore Elements includes two AR UI components that are especially useful:

  • Plane Finding - streamlining the key steps involved in detecting a surface
  • Object Manipulation - using intuitive gestures to rotate, elevate, move, and resize virtual objects

We plan to add more to ARCore Elements over time. You can download the ARCore Elements app available in the Google Play Store to learn more.

Improving the User Experience with Shared Camera Access

ARCore version 1.7 also includes UX enhancements for the smartphone camera -- specifically, the experience of switching in and out of AR mode. Shared Camera access in the ARCore SDK for Java lets users pause an AR experience, access the camera, and jump back in. This can be particularly helpful if users want to take a picture of the action in your app.

More details are available in the Shared Camera developer documentation and Java sample.
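
For reference, here is a hedged sketch of opting into shared camera access when creating a session; error handling is omitted, and the Camera2 wiring that follows is only summarized in comments.

// Create a session with shared camera access so the app can use the camera
// for non-AR purposes and later hand it back to ARCore without reopening it.
Session session = new Session(activity, EnumSet.of(Session.Feature.SHARED_CAMERA));
SharedCamera sharedCamera = session.getSharedCamera();
String cameraId = session.getCameraConfig().getCameraId();
// Open cameraId with Camera2 using the callbacks wrapped by sharedCamera,
// then call session.resume() when ARCore should take over the camera feed again.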

Learn more and get started

For AR experiences to capture users' imaginations they need to be both immersive and easily accessible. With tools for adding AR selfies, animation, and UI enhancements, ARCore version 1.7 can help with both these objectives.

You can learn more about these new updates on our ARCore developer website.

Creating More Realistic AR experiences with updates to ARCore & Sceneform

Posted by Ashish Shah, Product Manager, Google AR & VR

The magic of augmented reality is in the way it blends the digital and the physical worlds. For AR experiences to feel truly immersive, digital objects need to look realistic -- as if they were actually there with you, in your space. This is something we continue to prioritize as we update ARCore and Sceneform, our 3D rendering library for Java developers.

Today, with the release of ARCore 1.6, we're bringing further improvements to help you build more realistic and compelling experiences, including better plane boundary tracking and several lighting improvements in Sceneform.

With 250M devices now supporting ARCore, developers can bring these experiences to an even larger and growing user base.

More Realistic Lighting in Sceneform

Previous versions of Sceneform defaulted to a yellow-tinted ambient light. Version 1.6 defaults to a neutral, white light. This aligns more closely with the way light appears in the real world, making digital objects look more natural. You can see the differences below.

Left image: Sceneform 1.5. Right image: Sceneform 1.6.

This change will also make objects rendered with Sceneform look as if they're affected more naturally by color and lighting in the surrounding environment. For example, if you're viewing an AR object at sunset, it would appear to be illuminated by the red and orange hues, just like real objects in the scene.

In addition, we've updated Sceneform's built-in environmental image to provide a more neutral scene for your app. This will be most noticeable when viewing reflections in smooth metallic surfaces.

Adding screen capture and recording to the mix

To help you further improve quality and engagement in your AR apps, we're adding screen capture and recording to Sceneform. This is something a number of developers have requested to help with demo recording and prototyping. It can also be used as an external facing feature, allowing your users to share screenshots and videos on social media more easily, which can help get the word out about your app.

You can access this functionality through the surface mirroring API for the SceneView class. The API allows you to display the Sceneform view on a device's screen at the same time it's being rendered to another surface (such as the input surface for the Android MediaRecorder).
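
As an illustration, here is a hedged sketch of mirroring a SceneView into the input surface of an Android MediaRecorder. The sceneView and outputFile names are app-defined placeholders, and the exact mirroring method signature may vary slightly between Sceneform releases; the recorder setup itself is standard Android.

// Configure a MediaRecorder that takes its video frames from a surface.
MediaRecorder recorder = new MediaRecorder();
recorder.setVideoSource(MediaRecorder.VideoSource.SURFACE);
recorder.setOutputFormat(MediaRecorder.OutputFormat.MPEG_4);
recorder.setVideoEncoder(MediaRecorder.VideoEncoder.H264);
recorder.setVideoSize(1280, 720);
recorder.setOutputFile(outputFile.getAbsolutePath());
recorder.prepare();
recorder.start();
// Mirror the rendered scene into the recorder's surface while it still draws on screen.
sceneView.startMirroringToSurface(recorder.getSurface(), 0, 0, 1280, 720);
// ...later, when recording is finished:
sceneView.stopMirroringToSurface(recorder.getSurface());
recorder.stop();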

Learn more and get started

The new updates to Sceneform and ARCore are available today. These new versions also add support for additional devices, such as the Samsung Galaxy A3 and the Huawei P20 Lite, which join the list of ARCore-enabled devices. More information is available on the ARCore developer website.

Open sourcing Seurat: bringing high-fidelity scenes to mobile VR

Crossposted from the Google Developers Blog

Great VR experiences make you feel like you’re really somewhere else. To create deeply immersive experiences, there are a lot of factors that need to come together: amazing graphics, spatialized audio, and the ability to move around and feel like the world is responding to you.

Last year at I/O, we announced Seurat as a powerful tool to help developers and creators bring high-fidelity graphics to standalone VR headsets with full positional tracking, like the Lenovo Mirage Solo with Daydream. Seurat is a scene simplification technology designed to process very complex 3D scenes into a representation that renders efficiently on mobile hardware. Here’s how ILMxLAB was able to use Seurat to bring an incredibly detailed ‘Rogue One: A Star Wars Story’ scene to a standalone VR experience.

Today, we’re open sourcing Seurat to the developer community. You can now use Seurat to bring visually stunning scenes to your own VR applications and have the flexibility to customize the tool for your own workflows.

Behind the scenes: how Seurat works

Seurat works by taking advantage of the fact that VR scenes are typically viewed from within a limited viewing region, and leverages this to optimize the geometry and textures in your scene. It takes RGBD images (color and depth) as input and generates a textured mesh, targeting a configurable number of triangles, texture size, and fill rate, to simplify scenes beyond what traditional methods can achieve.


To demonstrate what Seurat can do, here’s a snippet from Blade Runner: Revelations, which launched today with the Lenovo Mirage Solo.

Blade Runner: Revelations by Alcon Interactive and Seismic Games
The Blade Runner universe is known for its stunning worlds, and in Revelations, you get to unravel a mystery around fugitive Replicants in the futuristic but gritty streets. To create the look and feel for Revelations, Seismic used Seurat to bring a scene of 46.6 million triangles down to only 307,000, improving performance by more than 100x with almost no loss in visual quality:

Original scene:

Seurat-processed scene: 

If you’re interested in learning more about Seurat or trying it out yourself, visit the Seurat GitHub page to access the documentation and source code. We’re looking forward to seeing what you build!

By Manfred Ernst, Software Engineer

Diagnose and understand your app’s GPU behavior with GAPID

Posted by Andrew Woloszyn, Software Engineer

Developing for 3D is complicated. Whether you're using a native graphics API or enlisting the help of your favorite game engine, there are thousands of graphics commands that have to come together perfectly to produce beautiful 3D images on your phone, desktop, or VR headset.

GAPID (Graphics API Debugger) is a new tool that helps developers diagnose rendering and performance issues with their applications. With GAPID, you can capture a trace of your application and step through each graphics command one-by-one. This lets you visualize how your final image is built and isolate problematic calls, so you spend less time debugging through trial-and-error.

GAPID supports OpenGL ES on Android, and Vulkan on Android, Windows and Linux.

Debugging in action, one draw call at a time

GAPID not only enables you to diagnose issues with your rendering commands, but also acts as a tool to run quick experiments and see immediately how these changes would affect the presented frame.

Here are a few examples where GAPID can help you isolate and fix issues with your application:

What's the GPU doing?

Why isn't my text appearing?!

Working with a graphics API can be frustrating when you get an unexpected result, whether it's a blank screen, an upside-down triangle, or a missing mesh. As an offline debugger, GAPID lets you take a trace of these applications, and then inspect the calls afterwards. You can track down exactly which command produced the incorrect result by looking at the framebuffer, and inspect the state at that point to help you diagnose the issue.

What happens if I do X?

Using GAPID to edit shader code

Even when a program is working as expected, sometimes you want to experiment. GAPID allows you to modify API calls and shaders at will, so you can test things like:

  • What if I used a different texture on this object?
  • What if I changed the calculation of bloom in this shader?

With GAPID, you can now iterate on the look and feel of your app without having to recompile your application or rebuild your assets.

Whether you're building a stunning new desktop game with Vulkan or a beautifully immersive VR experience on Android, we hope that GAPID will save you both time and frustration and help you get the most out of your GPU. To get started with GAPID and see just how powerful it is, download it, take your favorite application, and capture a trace!

Getting Started with the Poly API

Posted by Bruno Oliveira, Software Engineer

As developers, we all know that having the right assets is crucial to the success of a 3D application, especially with AR and VR apps. Since we launched Poly a few weeks ago, many developers have been downloading and using Poly models in their apps and games. To make this process easier and more powerful, today we launched the Poly API, which allows applications to dynamically search and download 3D assets at both edit and run time.

The API is REST-based, so it's inherently cross-platform. To help you make the API calls and convert the results into objects that you can display in your app, we provide several toolkits and samples for some common game engines and platforms. Even if your engine or platform isn't included in this list, remember that the API is based on HTTP, which means you can call it from virtually any device that's connected to the Internet.

Here are some of the things the API allows you to do:

  • List assets, with many possible filters:
    • keyword
    • category ("Animals", "Technology", "Transportation", etc.)
    • asset type (Blocks, Tilt Brush, etc)
    • complexity (low, medium, high complexity)
    • curated (only curated assets or all assets)
  • Get a particular asset by ID
  • Get the user's own assets
  • Get the user's liked assets
  • Download assets. Formats vary by asset type (OBJ, GLTF1, GLTF2).
  • Download material files and textures for assets.
  • Get asset metadata (author, title, description, license, creation time, etc)
  • Fetch thumbnails for assets
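
Since the API is plain HTTPS returning JSON, the listing call above can be made from almost any environment. Below is a hedged Java sketch of listing curated OBJ assets by keyword; the endpoint and query parameters reflect the REST documentation as we read it, and YOUR_API_KEY is a placeholder for a real API key.

// List Poly assets matching a keyword. Run this off the main thread.
String listAssets(String keyword) throws IOException {
  URL url = new URL("https://poly.googleapis.com/v1/assets"
      + "?keywords=" + URLEncoder.encode(keyword, "UTF-8")
      + "&format=OBJ&curated=true&key=YOUR_API_KEY");
  HttpURLConnection connection = (HttpURLConnection) url.openConnection();
  try (BufferedReader reader = new BufferedReader(
      new InputStreamReader(connection.getInputStream(), StandardCharsets.UTF_8))) {
    StringBuilder json = new StringBuilder();
    String line;
    while ((line = reader.readLine()) != null) {
      json.append(line);
    }
    // The response is a JSON object with an "assets" array (name, displayName, formats, ...).
    return json.toString();
  } finally {
    connection.disconnect();
  }
}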

Poly Toolkit for Unity Developers

If you are using Unity, we offer Poly Toolkit for Unity, a plugin that includes all the necessary functionality to automatically wrap the API calls and download and convert assets, exposing it through a simple C# API. For example, you can fetch and import an asset into your scene at runtime with a single line of code:

PolyApi.GetAsset(ASSET_ID,
    result => { PolyApi.Import(result.Value, PolyImportOptions.Default()); });

Poly Toolkit also optionally handles authentication for you, so that you can list the signed-in user's own private assets, or the assets that the user has liked on the Poly website.

In addition, Poly Toolkit for Unity also comes with an editor window, where you can search for and import assets from Poly into your Unity scene directly from the editor.

Poly Toolkit for Unreal Developers

If you are using Unreal, we also offer Poly Toolkit for Unreal, which wraps the API and performs automatic download and conversion of OBJ and Blocks models from Poly. It allows you to query for assets, filter the results, download assets, and import them as ready-to-use Unreal actors in your game.

Credit: Piano by Bruno Oliveira

How to use the Poly API in Android, web, or iOS apps

Not using a game engine? No problem! If you are developing for Android, check out our Android sample code, which includes a basic sample with no external dependencies, and also a sample that shows how to use the Poly API in conjunction with ARCore. The samples include:

  • Asynchronous HTTP connections to the Poly API.
  • Asynchronous downloading of asset files.
  • Conversion of OBJ and MTL files to OpenGL-compatible VBOs and IBOs.
  • Examples of basic shaders.
  • Integration with ARCore (dynamically downloads an object from Poly and lets the user place it in the scene).

Credit: Cactus wren by Poly by Google
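
To give a feel for the download step listed above, here is a hedged sketch of picking the OBJ format out of one asset from the list response and collecting the files to fetch. The field names (formats, formatType, root, resources) are our reading of the documented asset resource, asset is a parsed org.json JSONObject for one entry of the "assets" array, and exception handling is omitted.

// Select the OBJ format of a listed asset and collect the files to download.
JSONArray formats = asset.getJSONArray("formats");
for (int i = 0; i < formats.length(); i++) {
  JSONObject format = formats.getJSONObject(i);
  if ("OBJ".equals(format.getString("formatType"))) {
    // The root file is the .obj; resources hold the .mtl and any textures.
    String objUrl = format.getJSONObject("root").getString("url");
    JSONArray resources = format.getJSONArray("resources");
    // Download objUrl and each resource URL asynchronously, then convert to VBOs/IBOs.
  }
}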

If you are an iOS developer, we have two samples for you as well: one using SceneKit and one using ARKit, showing how to build an iOS app that downloads and imports models from Poly. This includes all the logic necessary to open an HTTP connection, make the API requests, parse the results, build the 3D objects from the data, and place them in the scene.

For web developers, we also offer a complete WebGL sample using Three.js, showing how to get and display a particular asset, or perform searches. There is also a sample showing how to import and display Tilt Brush sketches.

Credit: Forest by Alex "SAFFY" Safayan

No matter what engine or platform you are using, we hope that the Poly API will help bring high quality assets to your app and help you increase engagement with your users! You can find more information about the Poly API and our toolkits and samples on our developers site.

Introducing Draco: compression for 3D graphics

3D graphics are a fundamental part of many applications, including gaming, design and data visualization. As graphics processors and creation tools continue to improve, larger and more complex 3D models will become commonplace and help fuel new applications in immersive virtual reality (VR) and augmented reality (AR). Because of this increased model complexity, storage and bandwidth requirements must keep pace with the explosion of 3D data.

The Chrome Media team has created Draco, an open source compression library to improve the storage and transmission of 3D graphics. Draco can be used to compress meshes and point-cloud data. It also supports compressing points, connectivity information, texture coordinates, color information, normals and any other generic attributes associated with geometry.

With Draco, applications using 3D graphics can be significantly smaller without compromising visual fidelity. For users this means apps can now be downloaded faster, 3D graphics in the browser can load quicker, and VR and AR scenes can now be transmitted with a fraction of the bandwidth, rendered quickly and look fantastic.


Sample Draco compression ratios and encode/decode performance*

Transmitting 3D graphics for web-based applications is significantly faster using Draco’s JavaScript decoder, which can be tied to a 3D web viewer. The following video shows how efficient transmitting and decoding 3D objects in the browser can be - even over poor network connections.



Video and audio compression have shaped the internet over the past 10 years with streaming video and music on demand. With the emergence of VR and AR on the web and on mobile, and the increasing proliferation of sensors like LIDAR, we will soon be swimming in a sea of geometric data. Compression technologies like Draco will play a critical role in ensuring these experiences are fast and accessible to anyone with an internet connection. More exciting developments are in store for Draco, including support for creating multiple levels of detail from a single model to further improve the speed of loading meshes.

We look forward to seeing what people do with Draco now that it's open source. Check out the code on GitHub and let us know what you think. Also available is a JavaScript decoder with examples on how to incorporate Draco into the three.js 3D viewer.

By Jamieson Brettle and Frank Galligan, Chrome Media Team

* Specifications: Tests were run with textures and positions quantized at 14-bit precision and normal vectors at 7-bit precision, on a single core of a 2013 MacBook Pro. JavaScript decoding used Chrome 54 on Mac OS X.