
Travel through time with Pepsi and WebVR

Ah, the Super Bowl—come for the action, stay for the commercials. This year, as part of its “Pepsi Generations” global campaign, Pepsi will extend its TV commercial into virtual reality.

Pepsi’s new commercial, "This is the Pepsi," takes viewers on a journey through some of the brand’s most iconic moments. In VR, Pepsi fans can remember those moments and feel what it was like to be there.

That’s why we teamed up with Pepsi to create “Pepsi Go Back,” a WebVR experience where fans travel through time and step into the commercials that became some of the brand’s biggest pop culture milestones.

Hop into the driver’s seat of Jeff Gordon’s car and hold on tight as you race against the “Back to the Future” DeLorean. 


Then, zip to 1992, and explore the Halfway House Cafe, where Cindy Crawford dazzled fans in one of the most famous commercials of all time.


In both environments, you can look around, interact with different parts of the experience and unlock cool stuff.


WebVR enables anybody with a desktop or mobile device to experience immersive content, which made it the ideal technology to take Pepsi’s fans on this nostalgic journey. Check out "Pepsi Go Back" on your smartphone with a VR headset like Cardboard or Daydream View, on Chrome, or on any desktop browser that supports WebVR. And if you find yourself in Minneapolis for the game, stop by the Pepsi Generations Live pop-up event for a demo with Daydream.
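For the technically curious: a WebVR page can feature-detect headset support and fall back to a flat 3D view when it's absent, which is what makes "works on anything" possible. A minimal sketch, assuming the era's navigator.getVRDisplays API (since superseded by WebXR) and hypothetical showEnterVRButton/showFlat3DView helpers:

    // Sketch: detect WebVR support and fall back to a plain 3D page view.
    if (navigator.getVRDisplays) {
      navigator.getVRDisplays().then((displays) => {
        if (displays.length > 0) {
          // A headset (or Cardboard-style viewer) is available: offer an
          // "Enter VR" button that calls vrDisplay.requestPresent().
          showEnterVRButton(displays[0]); // hypothetical UI helper
        }
      });
    } else {
      showFlat3DView(); // hypothetical fallback: same scene, touch/mouse controls
    }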

Take your Blocks models to the next level

Since Blocks launched six months ago, it's been amazing to see all the incredible creations built by novices and professional modelers alike. We’ve witnessed everything from a retro roller skate, to an old-timey photograph, to our very own JUMP camera. We've also gotten tons of feedback about ways we could improve the experience. The latest release, available today on Steam and the Oculus Store, has lots of new features that make Blocks more powerful and even easier to use. Let's take a look.

Environment Options

Modeling in the desert got you seeing 3D mirages? Don’t fret, you’ll now have the option to pick from four modeling environments. We’ve added a night version of the current environment for those who found the desert a bit too bright after long creation sessions. You’ll also find plain white and black options. Make sure to look up while creating in the black environment for a night sky surprise. Plus, we’ll remember which environment you used in your last session and automatically default to that selection your next time around.


Improved Snapping

The ability to snap objects, edges and vertices together helps make your creations precise. However, we heard that the existing snapping behavior was often unpredictable or difficult to control. To ensure every snap you make does what you expect it to do, we’ve vastly improved our snapping algorithm and introduced a brand new user experience to guide you.

When trying to snap two objects together, half-press the alternate trigger to see a helpful guide line. The line previews the spot to which your object will snap if the trigger is fully pressed. Use this guide to aim your snap at the exact face you intend before pressing the trigger all the way down.


You can also more easily snap meshes together. Let’s say, for example, that you’d like to snap a torus around a cylinder. Half-press the alternate trigger while placing the torus to get helpful guidelines for placing one mesh around the other. Fully press the trigger to snap the torus in place.


Labs, with Your Most-Requested Features

At the very bottom of your Blocks menu you'll now see a beaker icon that lets you access prototype versions of your most-requested features. Here’s a breakdown:


Non-coplanar face mode: Many of you have noticed that Blocks will create coplanar faces when reshaping meshes. This is helpful in many cases, but in others it creates extraneous triangles that make further operations like perfect subdivision difficult. Now you can enable non-coplanar faces to avoid creation of extra triangles.


Loop subdivide: Subdivision can be a really powerful tool. It’s even more powerful if you can cut a loop around an entire mesh. With loop subdivide enabled, simply long press on the trigger while subdividing to see a perfect subdivision loop form around your object.


Edge, Face and Vertex Deletion: Many have asked for the ability to delete a single edge, face or vertex. With this feature enabled, use the eraser tool to do just that. We'll “collapse” the mesh based on the edge, face or vertex you delete.


Worldspace grids: Another option that helps with precision is enabling worldspace grids. This feature will show grids along every side of your worldspace. The grid units are equivalent to the actual worldspace grid units, so you can precisely measure and place objects along the grids.


Volume insertion ruler: Modeling very precisely in Blocks can be difficult without a sense of relative scale. This experimental feature allows you to enable a ruler when you are inserting a mesh. As you insert the object, you'll see relative measurements in meters appear on each axis so you can precisely and accurately measure every object relative to the others.


Expanded mesh wireframe: When reshaping a mesh, you see a helpful wireframe around the section of the mesh you are reshaping. Many have asked for the ability to turn that wireframe on for the entire mesh, and this feature does exactly that.


Stepwise selection undo: Multi-selecting a lot of objects can be frustrating if you select the wrong object in the middle of your selection process. We wanted to make this easier, so we’ve experimented with allowing you to undo and redo steps in your multi-selection. You can use the undo and redo buttons on your non-dominant controller to undo or redo the selection of objects in order. Make sure to keep your trigger held down while undoing or redoing to ensure you can keep multi-selecting after correcting your mistake!


It’s important to note that since these features are experimental, there may be minor bugs or issues when using them.

We can’t wait to see what you build with the latest version of Blocks. You can download the update from Steam and the Oculus Store for free today.

Header image: Blocks models by Damon Pidhajecki, Jacques Fourie, Jerad Bitner, and Michael Fuchs


Pioneer new lessons in your classroom with Google Expeditions

Editor's note: This week our Google for Education team will be joining thousands of educators at Bett in London. At our booth, C230, you can learn more about Google Expeditions in person. Follow along on The Keyword and Twitter for the latest news and updates.

Since 2015, educators have been using Expeditions to bring lessons to life with the power of virtual reality. As part of our wider Grow with Google efforts, we’re bringing even more immersive learning experiences to classrooms through the Google Expeditions AR Pioneer Program. With augmented reality, students can explore the eye of a tornado or set foot in historic landmarks by interacting with digital objects right in front of them.

Student views the asteroid belt in AR using Expeditions.

Through our travels with the Google Expeditions Pioneer Program, we’ve worked alongside teachers and students to improve the overall Expeditions experience. One of the top requests we’ve heard from teachers and students is the ability to create their own Expeditions. Today, we are excited to announce a beta program that allows schools and educators to do just that. Classrooms will be able to create immersive tours of the world around them: their classrooms, their schools, their communities. We'll provide participating schools with all the tools and hardware required to capture 360 images and curate unique Expeditions. For more information about the program, sign up here.

"This feature transforms the classroom from a content consumption space to an immersive content creation space, with the student taking the lead," says Paul Zimmerman, Technology Innovator, Blaine County, Idaho.

We are eager to hear feedback from teachers and students about how they use these new tools. In the past year, we’ve used feedback directly from our users to make Expeditions even more engaging and effective. We’ve added personalization features like annotations to allow a teacher to highlight their own observations in a panorama. We’ve also enabled students and all lifelong learners (we’re looking at you, parents and guardians) to visit and discover new places through self-guided mode.

We can’t wait to see what you create, and remember to keep the feedback and suggestions coming through the app or here. Thank you for helping us make Google Expeditions even better.



Augmented reality on the web, for everyone

In the next few months, there will be hundreds of millions of Android and iOS devices that are able to provide augmented reality experiences, meaning you'll be able to look at the world through your phone and place digital objects wherever you look. To help bring this to as many phones as possible, we've been exploring how to make a web browser that is AR-capable, so almost anyone with a modern smartphone could access this new technology. In this post, we’ll take a look at a recent prototype we built, called Article. It shows how AR content could work across the web, from today’s standard mobile and desktop browsers to future AR-enabled browsing environments and devices. Techies, take note: the last section of the post focuses on technical details, so stick around if you want to dig deeper.

How the prototype works

Article is a 3D model viewer that works in all browsers. On desktop, users can check out a 3D model—in this case a space suit—by dragging to rotate, or scrolling to zoom. On mobile the experience is similar: users touch and drag to rotate the model, or drag with two fingers to zoom in.
The desktop model viewing experience

To help convey that the model is 3D and interactive—and not just a static image—the model rotates slightly in response to the user scrolling.


With augmented reality, the model comes alive. The unique power of AR is to blend digital content with the real world. So we can, for example, surf the web, find a model, place it in our room to see just how large it truly is, and physically walk around it.

When Article is loaded on an AR-capable device and browser, an AR button appears in the bottom right. Tapping on it activates the device camera, and renders a reticle on the ground in front of the user. When the user taps the screen, the model sprouts from the reticle, fixed to the ground and rendered at its physical size. The user can walk around the object and get a sense of scale and immediacy that images and video alone cannot convey.

Article’s AR interface as viewed on an AR-capable tablet
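For developers reading along: Article itself ran on experimental AR-enabled browsers, but the tap-to-place flow described above maps closely onto the since-standardized WebXR hit-test module. A rough sketch, where placeModelAt and the reticle drawing are hypothetical helpers on the rendering side:

    // Hedged sketch of reticle-plus-tap placement using WebXR hit testing.
    async function startARPlacement() {
      const session = await navigator.xr.requestSession('immersive-ar', {
        requiredFeatures: ['hit-test'],
      });
      const refSpace = await session.requestReferenceSpace('local');
      const viewerSpace = await session.requestReferenceSpace('viewer');
      const hitTestSource = await session.requestHitTestSource({ space: viewerSpace });

      let reticlePose = null;
      session.requestAnimationFrame(function onFrame(time, frame) {
        // Each frame, ask where a ray from the camera meets a detected surface.
        const hits = frame.getHitTestResults(hitTestSource);
        reticlePose = hits.length ? hits[0].getPose(refSpace) : null; // drives the on-floor reticle
        session.requestAnimationFrame(onFrame);
      });

      // A screen tap fires 'select': sprout the model at the reticle,
      // fixed to the ground and rendered at physical size.
      session.addEventListener('select', () => {
        if (reticlePose) placeModelAt(reticlePose.transform); // hypothetical helper
      });
    }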

To reposition the model, users can tap-and-drag, or drag with two fingers to rotate it. Subtle features such as shadows and even lighting help to blend the model with its surroundings.

Moving and rotating the model

Small touches make it easy to learn how to use AR. User testing has taught us that clear interface cues are key to helping users learn how AR works. For example, while the user waits momentarily for the system to identify a surface that the model can be placed upon, a circle appears on the floor, tilting with the movement of the device. This helps introduce the concept of an AR interface, with digital objects that intersect with the physical environment (also known as diegetic UI).

Diegetic activity indicators hint at the AR nature of the experience

Under the hood (and on to the technical stuff!)

We built our responsive model viewer with Three.js. Three.js makes the low-level power of WebGL more accessible to developers, and it has a large community of examples, documentation and Stack Overflow answers to help ease learning curves.
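To make that concrete, here is a minimal sketch of a Three.js model viewer with the interactions Article exposes: drag to rotate, scroll or pinch to zoom, plus the subtle scroll-linked rotation described earlier. This is not Article's actual source; the import paths, model file, and constants are assumptions:

    import * as THREE from 'three';
    import { OrbitControls } from 'three/examples/jsm/controls/OrbitControls.js';
    import { GLTFLoader } from 'three/examples/jsm/loaders/GLTFLoader.js';

    const renderer = new THREE.WebGLRenderer({ antialias: true });
    renderer.setSize(window.innerWidth, window.innerHeight);
    document.body.appendChild(renderer.domElement);

    const scene = new THREE.Scene();
    const camera = new THREE.PerspectiveCamera(
      45, window.innerWidth / window.innerHeight, 0.1, 100);
    camera.position.set(0, 1.5, 3);

    // One hemisphere light keeps the scene's light count low (see the list below).
    scene.add(new THREE.HemisphereLight(0xffffff, 0x444444, 1.0));

    // Drag to rotate, scroll/pinch to zoom.
    const controls = new OrbitControls(camera, renderer.domElement);

    let model = null;
    new GLTFLoader().load('spacesuit.glb', (gltf) => { // hypothetical model path
      model = gltf.scene;
      scene.add(model);
    });

    // Rotate the model slightly as the page scrolls, hinting that it is 3D.
    window.addEventListener('scroll', () => {
      if (model) model.rotation.y = window.scrollY * 0.002;
    });

    renderer.setAnimationLoop(() => {
      controls.update();
      renderer.render(scene, camera);
    });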

To ensure smooth interactions and animations, we finessed factors that contribute to performance:

  • Using a low polygon-count model;

  • Carefully controlling the number of lights in the scene;

  • Decreasing shadow resolution when on mobile devices (a short sketch of this tweak follows the list);

  • Rendering the emulator UI (discussed below) using shaders built on signed distance functions, which draw their effects at effectively infinite resolution in an efficient manner.
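As one concrete example, the mobile shadow tweak might look like the following, continuing the viewer sketch above (the user-agent check is a crude stand-in for whatever heuristic the real app uses):

    // Continuing the viewer sketch above: cheaper shadows on mobile GPUs.
    renderer.shadowMap.enabled = true;
    const isMobile = /Android|iPhone|iPad/i.test(navigator.userAgent); // illustration only
    const sun = new THREE.DirectionalLight(0xffffff, 1.0);
    sun.castShadow = true;
    sun.shadow.mapSize.set(isMobile ? 512 : 2048, isMobile ? 512 : 2048); // lower resolution on mobile
    scene.add(sun);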

To accelerate iteration times, we created a desktop AR emulator that lets us test UX changes in desktop Chrome. Before the emulator, each change—no matter how minor—had to be loaded onto a connected mobile device, taking upwards of 10 seconds for each build-push-reload cycle. With the emulator we can preview these tweaks on desktop almost instantly, and push to device only when needed.

The emulator is built on a desktop AR polyfill and Three.js. Uncommenting a single line in the index.js file (the one that includes the polyfill) instantiates a gray grid environment and adds keyboard and mouse controls as substitutes for physically moving in the real world. The emulator is included in the Article project repo.


The spacesuit model was sourced from Poly. Many Poly models are licensed under Creative Commons Attribution Generic (CC-BY), which lets users copy and/or remix them, so long as the creator is credited. Our astronaut was created by the Poly team.

Article’s 2D sections were built with off-the-shelf libraries and modern web tooling. For responsive layout, typography, and the overall theme, we used Bootstrap, which makes it easy for developers to create great-looking sites that adapt responsively across device screen sizes. As a nod to the aesthetics of Wikipedia and Medium, we went with Bootswatch’s Paper theme. For managing dependencies, classes, and build steps we used NPM, ES6, Babel and Webpack.

Looking ahead

There’s vast potential for AR on the web—it could be used in shopping, education, entertainment, and more. Article is just one in a series of prototypes, and there’s so much left to explore—from using light estimation to more seamlessly blend 3D objects with the real world, to adding diegetic UI annotations to specific positions on the model. Mobile AR on the web is incredibly fun right now because there’s a lot to be discovered. If you’d like to learn more about our experimental browsers and get started creating your own prototypes, please visit our devsite.

Memory machines: VR180 cameras, and capturing life as you see it

When I was growing up, my dad and even my grandfather always had camcorders stuck to their shoulders. They were our family documentarians, and were always the first to try a new gadget or gizmo if it would help us remember the places we went and the special times we shared. Decades later, I’m so grateful, and I treasure the memories they captured on Betamax and film.

My grandfather Henry in the backyard with his video camera.

We care about photos and videos because they connect us with important moments, special trips, and time together with the people who matter most to us. They’re abstract representations that help us remember—little visual gifts to our future selves. That being said, for most of the 20th century, photos and videos were the best you could do. They’re better than nothing, but so far from the real thing.

A photo of me at Disneyland at age 4, taken by my dad with a Nikon EM 35mm SLR.

But as the technology used to capture these moments has improved, the fidelity has also increased. From primitive pinhole cameras, to black and white film cameras, to color, to video, there’s been a continuous upward trajectory of resolution and quality. Today's high-end VR cameras are a big leap forward. Through immersive, stereoscopic footage, they do something more compelling than refreshing your memory—they make you feel like you're there. And the closer cameras get to capturing the moment just the way we experienced it, the closer we get to creating time machines for ourselves.

Though Google started by making VR cameras for filmmakers and professional creators a few years ago, our team has always aimed to help people capture their personal memories in VR. But in order to make this tech accessible to everyone, we had to rethink the camera itself. There are 360 cameras in the market today, but they present some challenges—they can be costly, confusing to use (where do you point it?), and the photographer always ends up in the frame. So, we focused on the pixels that matter (the ones in front of you!) with a new format we're calling VR180. And we started designing high-quality, pocket-sized cameras that anyone could use to capture VR180 experiences with just a click of a button. The first VR180 cameras will hit shelves throughout this year, just in time for you to start hitting “record” on your own memories in 2018.

I've been using the VR180 prototypes for a while now, in places like my living room or on trips to the beach. It’s easy to share the captures with my family and friends. They can look at them on their phones, or use a viewer like Cardboard or Daydream View to step into the moment as if they were there. It’s amazing that I can film my sons jumping on the trampoline, or having a quiet breakfast, or being back where I was many years ago, on a ride at a carnival—and not only share those moments with family far away, but also relive them myself, in a way that makes me feel like I’m right back in each moment.

One of my sons on a carnival ride, captured in VR180 with one of our camera prototypes.

That’s why these VR180 cameras are so special. They do your memories justice, by enabling you to capture life the way you see it—with two eyes. When I’ve shown my family these recordings, they look into the headset, and smile. They say things like, “This is amazing!” and, when they take the headset off: “I only wish we had these cameras sooner.”

I couldn’t agree more.

A new way to experience Daydream and capture memories in VR

Since we launched Cardboard, our goal has been to create virtual reality experiences that are accessible, useful, and relevant to as many people as possible. With Daydream, we’ve been building a platform for high-quality mobile VR: we’ve worked with lots of different partners to bring fifteen Daydream-ready phones to market for smartphone VR. And today marks another step, with Lenovo unveiling new details about the Mirage Solo, a Daydream standalone headset we first announced at Google I/O. With it, you’ll have a more immersive and streamlined way to experience the best of what Daydream has to offer without needing a smartphone.

We've also been investing in ways to help you capture your life's most important moments in VR. We've designed high-quality, yet simple and pocket-sized cameras that anyone can use with just the click of a button. Our partners Lenovo and YI are sharing more on these, and they'll be available beginning in the second quarter this year.

Experience Daydream in a new way

The Lenovo Mirage Solo builds on everything that’s great about smartphone-based VR—portability and ease of use—and it delivers an even more immersive virtual reality experience. You don’t need a smartphone to use it: you just pick it up, put it on, and you’re ready to go. The headset is more comfortable and natural because of a new technology we created at Google called WorldSense. Based on years of investment in simultaneous localization and mapping (SLAM), it enables PC-quality positional tracking on a mobile device without the need for any additional external sensors. WorldSense lets you duck, dodge and lean, and step backwards, forwards or side to side, unlocking new gameplay elements that bring the virtual world to life. WorldSense tracking and Mirage Solo's high performance graphics mean that the objects you see will stay fixed in place just like in the real world, no matter which way you tilt or move your head. The Lenovo Mirage Solo will also have a wide field of view for great immersion, and an advanced display optimized for virtual reality, so everything you see stays crystal clear. It’s the best way to access Daydream.

Lenovo Mirage Solo

We’re working closely with developers to bring new experiences to the platform that take advantage of all these new technologies, including a new game based on the iconic universe of Blade Runner called Blade Runner: Revelations. You’ll also have access to the entire Daydream catalog of over 250 apps, including Google apps like Street View, Photos, and Expeditions. With YouTube VR, you can watch the best VR video content, from powerful short pieces chronicling extraordinary role models to music, fashion, sports and epic journeys around the world. The Lenovo Mirage Solo also has built-in casting support, so you’re just a couple clicks away from sharing your virtual experiences onto a television for your friends and family to follow along. It will hit shelves beginning in the second quarter this year.

Capture your most important memories with VR180 cameras

Photos and videos matter to us because they help us remember the special moments in our lives. But what if you could do more than just remember a moment; what if you could relive it? That’s the idea behind the VR180 format, and we created VR180 cameras so that anyone could have an easy way to capture and then re-experience the past.


For the full effect, check out this video in a VR headset like Cardboard or Daydream View.

VR180 cameras are simple and designed for anyone to use, even if they’ve never tried VR before. There are other consumer VR cameras available today, but you have to think carefully about where you place these cameras when recording, and they capture flat 360 footage that doesn’t create a realistic sense of depth. In contrast, with VR180 cameras, you just point and shoot to take 3D photos and videos of the world in stunning 4K resolution. The resulting imagery is far more immersive than what you get with a traditional camera. You just feel like you’re there. You can re-experience the memories you capture in virtual reality with a headset like Cardboard or Daydream View. Or for a lightweight but more accessible experience, you can watch on your phone.

With options for unlimited private storage in Google Photos, you’ll have complete control over these irreplaceable memories, and you can also view them anytime in 2D on your mobile or desktop devices without a VR headset. If you want to share them, uploading to services like YouTube is easy.

Lenovo Mirage Camera

Several VR180 cameras will be available soon. Different models will sport different features—like live streaming, which lets you share special moments in real time. The Lenovo Mirage Camera and YI Technology’s YI Horizon VR180 Camera will hit shelves beginning in the second quarter, and a camera from LG will be coming later this year. For professional creators, the Z Cam K1 Pro recently launched, and Panasonic is building VR180 support for their just-announced GH5 cameras with a new add-on.

YI Horizon VR180 Camera

We’re continuing to invest in the virtual reality experiences that are compelling and relevant for everyone. Whether you access Daydream through a Daydream View and the Daydream-ready smartphone of your choice or the new, more immersive Lenovo Mirage Solo, you’ll get the best mobile VR apps and videos anywhere. And with a range of VR180 cameras to choose from, you’ll be able to capture your most important memories in a new way.

We also want to hear from you. Starting today, we're launching a VR180 contest: tell us about a special memory you’d like to capture, and we'll work with the winners to bring their ideas to life.

Say hello to our third round of Jump Start creators

Jump is Google’s platform for professional VR video capture. It combines high-quality VR cameras and automated stitching that simplifies VR video production and helps filmmakers create amazing content. We launched the Jump Start program so that creators of all backgrounds can get access to Jump cameras and bring their ideas for VR video projects to life.

We're wrapping up the year for the Jump Start program, and it’s been great to see the diversity of creators around the world using Jump cameras for a whole range of projects: everything from Lions in Los Angeles to a tour of the ancient Roman Forum to a sci-fi movie set on a futuristic Lunar Base. You can check out some recently published pieces on YouTube. We also just announced our third round of Jump Start participants. Let’s take a look at the cool stuff they’re working on.


Aidan Brezonick (Director), Justin Benzel (Author), Ivanna Kozak (Producer, Laïdak Films), Antoine Liétout (Producer, Laïdak Films), and Ivan Zuber (Producer, Laïdak Films)

Locations: LA, USA; Chicago, USA; Berlin, Germany; Paris, France

The team is working on a story set in the French countryside. It follows Henry, an aggrieved inventor struggling to overcome the laws of physics by reversing entropy. 


Alvaro Morales

Location: Washington, D.C., USA

Alvaro is the co-founder of the Family Reunions Project. He’s working on a collection of immersive experiences centered on undocumented immigrants.


Amaury La Burthe

Location: Toulouse, France

Amaury is the creative director of Novelab/Audiogaming. He’s working with Corinne Linder on a hybrid live-action and CGI project about modern-day circuses.


Becky Lane

Location: Ithaca, USA

As a filmmaker and sociologist, Becky is creating an interactive journey through the history of burlesque dance to discover its impact on U.S. culture and women’s sexual empowerment.

Carmen Guzmán

Location: Puerto Rico

Carmen Guzmán is a Puerto Rican filmmaker based in NYC. She’s exploring the impact Hurricane Maria had on Puerto Rico’s communication systems and culture.


DimensionGate (Ian Tuason)

Location: Toronto, Canada

Ian Tuason, founder of DimensionGate, has showcased his work at the Cannes Film Festival, and is shooting the pilot episode of a VR horror serial.


Dominic Nahr and Sam Wolson

Location: Zurich, Switzerland

Dominic and Sam's film will explore the aftermath of the Fukushima Daiichi nuclear disaster in Japan.


Fifer Garbesi

Location: Oakland, USA

Fifer’s project will traverse the many offshoots of our lingual creation myth in a delicate interactive dance between viewer and journey.


Harmonic Laboratory

Location: Eugene, USA

The interdisciplinary arts collective Harmonic Laboratory is documenting TESLA: Light, Sound, Color, an original 90-minute theatre performance on the elusive physicist and inventor, Nikola Tesla.


iNK Stories

Location: Brooklyn, USA

iNK Stories is a Story Innovation Studio. They’re working on the immersive experience Fire Escape and the large-scale VR installation, HERO (premiering at Sundance).

Lisa London

Location: San Francisco, USA

Lisa is producing "Keep Tahoe Blue,” a look at the successful environmental monitoring organization. It's a piece on community, volunteerism, and making a difference.


Lizzie Warren

Location: Brooklyn, USA

Lizzie co-founded AROO, a feminist VR collective. A documentary filmmaker, one of Lizzie’s current VR projects explores the human/animal relationships within a wolf sanctuary.


Majka Burhardt and Ross Henry

Locations (Respectively): Jackson, USA; Chagrin Falls, USA

Majka and Ross share a VR journey about the power of one mountain and the water that takes you from the summit of Mount Namuli, Mozambique, to the Indian Ocean.


Making360

Location: Venice, USA

More than 50 creators are coupling neurofeedback with stunning VR video to unlock creativity by training people to consciously control their state of mind in any environment.


MeeRa Kim & Michael Henderson (Arbor Entertainment)

Location: Los Angeles, USA

The Arbor team is working on several projects, including a 360 exploration of dance and music from the 1920s through the present day.


Noam Argov

Location: San Francisco, USA

Noam is a producer and National Geographic Explorer. Her team will use VR to get an inside look into the life of a Kyrgyz nomad as he pioneers a new adventure sport: horse-backcountry-skiing. 


Sarah Hill

Location: Columbia, USA

The StoryUP XR team is creating a brain-controlled VR experience where you conduct a handbell orchestra with your positive emotions.


Sherpas Cinema

Location: Whistler, Canada

The team is working on an experience that will take you on a guided heli-ski trip deep into the backcountry. High adrenaline, no crowds, and all the untouched powder you could ask for.

ARCore Developer Preview 2

Augmented reality is a powerful way to bring the physical and digital worlds together. AR places digital objects and useful information into the real world around us, which creates a huge opportunity to make our phones more intuitive, more helpful and a whole lot more fun.

We’ve been working on augmented reality since 2014, with our earliest investments in Project Tango. We’ve taken everything we learned from that to build ARCore, which launched in preview earlier this year. Whereas Tango required special hardware, ARCore is a fast, performant, Android-scale SDK that enables high-quality augmented reality across millions of qualified mobile devices.

Developers can experiment with ARCore now, and we’ve seen some amazing creations from the community. ARCore also powers AR Stickers on the Pixel camera, which launched earlier this week and lets you add interactive AR characters and playful emojis directly into photos and videos to bring your favorite stories to life.

Today, we’re releasing an update to our ARCore Developer Preview with several technical improvements to the SDK, including:

  • A new C API for use with the Android NDK that complements our existing Java, Unity, and Unreal SDKs;

  • Functionality that lets AR apps pause and resume AR sessions, for example to let a user return to an AR app after taking a phone call;

  • Improved accuracy and runtime efficiency across our anchor, plane finding, and point cloud APIs.

To learn more about the SDK updates, check out the Android, Unity, and Unreal Github pages.

As we focus on bringing augmented reality to the entire Android ecosystem with ARCore, we’re turning down support of Tango. Thank you to our incredible community of developers who made such progress with Tango over the last three years. We look forward to continuing the journey with you on ARCore.

If you’re a developer interested in AR, now's the time to start experimenting. In the coming months, we’ll launch ARCore v1.0, with support for over 100 million devices. And soon, many augmented reality experiences will be available in the Play Store. We can’t wait to see what you create.

Go beyond the gridiron in VR with “NFL Immersed” season two

Jump, Google’s platform for virtual reality video capture that combines high-quality VR cameras and automated stitching, simplifies VR video production and helps filmmakers of all backgrounds and skill levels create amazing content. For the past two years, we’ve worked with NFL Films, one of the most recognized teams of filmmakers in sports and the recipient of 112 Sports Emmys, to show what some of the best creators could do with Jump. Last year they debuted the first season of the virtual reality docuseries “Immersed,” and today the first three episodes of season two land on Daydream through YouTube VR and the NFL’s YouTube channel. This season will give fans an even more in-depth look at some of the NFL’s most unique personalities through three multi-episode arcs, each dedicated to a different player.

Shot with the latest Jump camera, the YI HALO, the first three episodes follow Chris Long, defensive end for the Philadelphia Eagles. Each episode gives fans a sneak peek into his life on and off the field, from his decision to donate his salary to charity to a look at how he prepares for game day. They’re available on Daydream through YouTube VR and the NFL’s YouTube channel today, with future episodes featuring Calais Campbell of the Jacksonville Jaguars and players from the 2018 Pro Bowl coming soon.

We caught up with NFL Films Senior Producer Jason Weber to hear more about season two, what it was like to use Jump and advice for other filmmakers creating VR video content for the first time:

What makes season two of “Immersed” different from the first season?

For season two of NFL “Immersed,” we wanted to try and dig a bit deeper into the stories of our players and give fans a real sense of what makes them who they are on and off the field, so we’re devoting three episodes to each subject.

VR is such a strong vehicle for empathy, and we wanted to focus the segments on players who are making a difference on and off the field. Chris Long is having a tremendous season with the Eagles as part of one of the best defenses in football, but his impact off the field is equally inspiring. Calais Campbell is a larger-than-life character whose influence is being felt on the resurgent Jaguars and throughout his new community in Jacksonville. And the Pro Bowl is a unique event where all of the best players come to have fun, and the relaxed setting gives us a chance to put cameras where they normally can’t go, giving viewers a true feeling of what it’s like to play with the NFL’s finest.


Last year was NFL Films’ first foray into shooting content in VR. What was it like filming and producing season one, and how did it compare to your experience with season two this year?

We learned a lot last season; in particular, the challenges of bringing multiple VR cameras to the sidelines on game day. As fast as the game looks on TV, it moves even faster when you’re right there on the field. Getting the footage we need, while also being ready to get out of the way when a ball or player is coming right at you, took some time to master.

What makes shooting for VR different from traditional video content? What considerations do you have to make when shooting in VR?

Camera position is one big difference in shooting VR versus traditional video content. When we shoot in traditional video formats our cinematographers are constantly moving to capture different angles and frames of our subjects and scenes. With VR—though we've noticed a slight shift toward more cuts and angles in edited content in the past year—letting a scene play longer from one angle and positioning the camera so that the action takes advantage of the 360-degree range of vision helps differentiate a VR production from a standard format counterpart.


What did you like about using the Yi Halo to shoot the second season of “Immersed”?

With the Halo, we were most excited about the Up camera. You might not think that a camera facing straight up would make that much of a difference in football, but there’s a lot happening in that space that would get lost without it. We can now place a camera in front of a quarterback and have him throw the ball over the Halo, giving a viewer a more realistic view of that scene. With field goals, placing the camera under the goal posts produces a very interesting visual that wouldn’t work if the top camera wasn’t able to capture the ball going through the uprights. One of the most goosebump-inducing moments at any NFL game is a pregame flyover, which we can now capture in its full glory thanks to the top camera.


What tips do you have for other filmmakers thinking of getting into making VR video content?

Take the time to consider why you want to use VR versus traditional formats to tell your story. I work in both formats and feel that if I’m just telling the same story in VR that I would in HD, then I’m not doing my job as a VR filmmaker. VR gives you the unique opportunity to tell a story in a 360-degree space. Use that space to your advantage in creating something memorable.

Grab your Daydream View and head to YouTube today to watch the first three episodes, and be sure to check back soon to see the rest of season two of “Immersed.”