Tag Archives: Google AR and VR

We’re bringing VR Creator Lab to Europe

YouTube’s Virtual Reality creators have shown us that VR is a powerful tool for storytelling, artistic expression, and teaching. We want more creators across the world to be able to share their stories in this emerging medium. To make it easier and less expensive to create in VR, we introduced the VR180 format, and we’re partnering with YouTube and VRScout to launch a global series of VR180 training academies, which we call VR Creator Lab.

Today we’re announcing that VR Creator Lab is coming to London. Participants will receive between $30,000 and $40,000 USD in funding towards their VR project, attend a three-day “boot camp” September 18-20, 2018, and receive three months of training from leading VR instructors and filmmakers.

Applications are open through 5pm British Summer Time on August 6, 2018. YouTube creators with a minimum of 10,000 subscribers and independent filmmakers are eligible. Participants must be 18+ years old. We’ll select the participants based on the quality of their pitches and the feasibility of completing their projects within the three-month timeframe.

The London Creator Lab will follow our first Lab, which we launched in Los Angeles last month.  We put together a brief video featuring a few of the participants:

Creators at VR Creator Lab Los Angeles share their experience


Even if you’re not selected to join us in London, you can check out our guide to getting started with VR180 cameras and editing tools. We believe that everyone benefits when creators share their creative vision with the world, so we’re always on the lookout for new ways to make it easier and more affordable to create in VR. We hope you’ll give VR180 filmmaking a try!

Bring abstract concepts to life with AR expeditions

Over the last three years, Google Expeditions has helped students go on virtual field trips to far-off places like Machu Picchu, the International Space Station and the Galapagos Islands. The more you look around those places in virtual reality (VR), the more you notice all the amazing things that are there. And while we’ve seen firsthand how powerful a tool VR is for going places, we think augmented reality (AR) is the best way to learn more about the things you find there. Imagine walking around a life-sized African elephant in your classroom or putting a museum's worth of ancient Greek statues on your table.


Last year at Google I/O we announced the Google Expeditions AR Pioneer Program, and over the last school year, one million students have used AR in their classrooms. With AR expeditions, teachers can bring digital 3D objects into their classrooms to help their students learn about everything from biology to Impressionist art.


Starting today, Expeditions AR tours are available to anyone via the Google Expeditions app on both Android and iOS. We’ve also updated the Expeditions app to help you discover new tours, find your saved tours, and more easily start a solo adventure. It’s never been easier to start a tour on your own, at home with your family or in the classroom.

“AR takes the abstract and makes it concrete to the students. We wouldn’t be able to see a heart right on the desk, what it looks like when beating, and the blood circulating,” says Darin Nakakihara of the Irvine Unified School District.

Google Expeditions makes it easy to guide yourself or an entire classroom through more than 100 AR and 800 VR tours created by Google Arts & Culture partners like the Smithsonian Freer|Sackler, Museo Dolores Olmedo, and Smarthistory, as well as pedagogical partners like Houghton Mifflin Harcourt, Hodder Education (a division of Hachette), Oxford University Press and Aquila Education.


Upgrade the Google Expeditions app now to try out AR expeditions with a compatible Android (ARCore) or iOS (ARKit) device. And starting today, interested schools can also purchase the first Expeditions AR/VR kits from Best Buy Education. Like VR, we believe AR can enhance the way we understand the world around us—it’s show-and-tell for a new generation.

Now students can create their own VR tours

Editor’s note: For Teacher Appreciation Week, we’re highlighting a few ways Google is supporting teachers—including Tour Creator, which we launched today to help schools create their own VR tours. Follow along on Twitter throughout the week to see more on how we’re celebrating Teacher Appreciation Week.

Since 2015, Google Expeditions has brought more than 3 million students to places like the Burj Khalifa, Antarctica, and Machu Picchu with virtual reality (VR) and augmented reality (AR). Both teachers and students have told us that they’d love to have a way to also share their own experiences in VR. As Jen Zurawski, an educator with Wisconsin’s West De Pere School District, put it: “With Expeditions, our students had access to a wide range of tours outside our geographical area, but we wanted to create tours here in our own community.”


That’s why we’re introducing Tour Creator, which enables students, teachers, and anyone with a story to tell to make a VR tour using imagery from Google Street View or their own 360 photos. The tool is designed to let you produce professional-level VR content without a steep learning curve. “The technology gets out of the way and enables students to focus on crafting fantastic visual stories,” explains Charlie Reisinger, a school Technology Director in Pennsylvania.


Once you’ve created your tour, you can publish it to Poly, Google’s library of 3D content. From Poly, it’s easy to view: just open the link in your browser or view it in Google Cardboard. You can also embed it on your school's website for more people to enjoy. Plus, later this year, we’ll add the ability to import these tours into the Expeditions application.


Tour Creator: Show people your world

Here’s how a school in Lancaster, PA is using Tour Creator to show why they love where they live.

"Being able to work with Tour Creator has been an awesome experience,” said Jennifer Newton, a school media coordinator in Georgia. “It has allowed our students from a small town in Georgia to tell our story to the world.”


To build your first tour, visit g.co/tourcreator. Get started by showing us what makes your community special and why you #LoveWhereYouLive!

Experience augmented reality together with new updates to ARCore

Three months ago, we launched ARCore, Google’s platform for building augmented reality (AR) experiences. There are already hundreds of apps on the Google Play Store that are built on ARCore and help you see the world in a whole new way. For example, with Human Anatomy you can visualize and learn about the intricacies of the nervous system in 3D. Magic Plan lets you create a floor plan for your next remodel just by walking around the house. And Jenga AR lets you stack blocks on your dining room table with no cleanup needed after your tower collapses.


As announced today at Google I/O, we’re rolling out a major update to ARCore to help developers build more collaborative and immersive augmented reality apps.

  • Shared AR experiences: Many things in life are better when you do them with other people. That’s true of AR too, which is why we’re introducing a capability called Cloud Anchors that will enable new types of collaborative AR experiences, like redecorating your home, playing games and painting a community mural—all together with your friends. You’ll be able to do this across Android and iOS; a rough code sketch follows below.


Just a Line will be updated with Cloud Anchors, and available on Android & iOS in the coming weeks
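
For developers curious what that flow could look like in code, here’s a rough sketch using the public ARCore Java SDK: one phone hosts an anchor and, once hosting succeeds, shares the resulting cloud anchor ID so a second phone can resolve the same anchor in its own session. The success polling and the channel used to pass the ID between devices are left out, so treat this as an illustration rather than a complete implementation.

    import com.google.ar.core.Anchor;
    import com.google.ar.core.Config;
    import com.google.ar.core.Session;

    public class CloudAnchorSketch {
      // Turn on the Cloud Anchors capability for an existing ARCore session.
      static void enableCloudAnchors(Session session) {
        Config config = new Config(session);
        config.setCloudAnchorMode(Config.CloudAnchorMode.ENABLED);
        session.configure(config);
      }

      // Device A: host a local anchor in the cloud so friends can find the same spot.
      static Anchor hostAnchor(Session session, Anchor localAnchor) {
        Anchor hosted = session.hostCloudAnchor(localAnchor);
        // In a real app, poll hosted.getCloudAnchorState() each frame until it reports
        // SUCCESS, then send hosted.getCloudAnchorId() to the other devices over your
        // own backend or messaging channel.
        return hosted;
      }

      // Device B: resolve the shared anchor from the ID it received.
      static Anchor resolveAnchor(Session session, String cloudAnchorId) {
        return session.resolveCloudAnchor(cloudAnchorId);
      }
    }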

  • AR all around you: ARCore now features Vertical Plane Detection, which means you can place AR objects on more surfaces, like textured walls. This opens up new experiences like viewing artwork above your mantelpiece before buying it. And thanks to a capability called Augmented Images, you’ll be able to bring images to life just by pointing your phone at them—like seeing what’s inside a box without opening it. A short configuration sketch follows below.

ARCore: Augmented Images
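
As a rough configuration sketch against the public ARCore Java SDK, here’s how an app might turn on vertical plane detection and register a reference image so ARCore can recognize it in the camera feed; the image name and bitmap below are placeholders.

    import android.graphics.Bitmap;
    import com.google.ar.core.AugmentedImageDatabase;
    import com.google.ar.core.Config;
    import com.google.ar.core.Session;

    public class ArCoreSurfacesAndImagesSketch {
      static void configure(Session session, Bitmap posterBitmap) {
        Config config = new Config(session);

        // Detect vertical surfaces such as walls, in addition to floors and tables.
        config.setPlaneFindingMode(Config.PlaneFindingMode.HORIZONTAL_AND_VERTICAL);

        // Register an image so ARCore reports it as an AugmentedImage trackable when
        // the camera sees it (e.g. the printed box you want to "open" virtually).
        AugmentedImageDatabase imageDatabase = new AugmentedImageDatabase(session);
        imageDatabase.addImage("product_poster", posterBitmap);  // placeholder name and bitmap
        config.setAugmentedImageDatabase(imageDatabase);

        session.configure(config);
      }
    }
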
  • Faster AR development: With Sceneform, Java developers can now build immersive, 3D apps without having to learn complicated APIs like OpenGL. They can use it to build AR apps from scratch as well as add AR features to existing ones. And it’s highly optimized for mobile. A short example follows below.


The New York Times used Sceneform for faster AR development
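
To give a feel for what that looks like in practice, here’s a minimal sketch using the public Sceneform and ARCore libraries that loads a 3D model and places it wherever the user taps a detected plane, with no OpenGL involved. The asset name is a placeholder and error handling is omitted.

    import android.net.Uri;
    import com.google.ar.core.Anchor;
    import com.google.ar.sceneform.AnchorNode;
    import com.google.ar.sceneform.rendering.ModelRenderable;
    import com.google.ar.sceneform.ux.ArFragment;
    import com.google.ar.sceneform.ux.TransformableNode;

    public class SceneformSketch {
      // Load a model asynchronously, then drop a copy of it on every tapped plane.
      static void placeModelOnTap(ArFragment arFragment) {
        ModelRenderable.builder()
            .setSource(arFragment.getContext(), Uri.parse("model.sfb"))  // placeholder asset
            .build()
            .thenAccept(renderable ->
                arFragment.setOnTapArPlaneListener((hitResult, plane, motionEvent) -> {
                  Anchor anchor = hitResult.createAnchor();
                  AnchorNode anchorNode = new AnchorNode(anchor);
                  anchorNode.setParent(arFragment.getArSceneView().getScene());

                  // TransformableNode gives the user pinch-to-scale and drag gestures for free.
                  TransformableNode node = new TransformableNode(arFragment.getTransformationSystem());
                  node.setRenderable(renderable);
                  node.setParent(anchorNode);
                }));
      }
    }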

Developers can start building with these new capabilities today, and you can try augmented reality apps enabled by ARCore on the Google Play Store.

Google Lens: real-time answers to questions about the world around you

There’s so much information available online, but many of the questions we have are about the world right in front of us. That’s why we started working on Google Lens, to put the answers right where the questions are, and let you do more with what you see.

Last year, we introduced Lens in Google Photos and the Assistant. People are already using it to answer all kinds of questions—especially when they’re difficult to describe in a search box, like “what type of dog is that?” or “what’s that building called?”

Today at Google I/O, we announced that Lens will now be available directly in the camera app on supported devices from LGE, Motorola, Xiaomi, Sony Mobile, HMD/Nokia, Transsion, TCL, OnePlus, BQ, Asus, and of course the Google Pixel. We also announced three updates that enable Lens to answer more questions, about more things, more quickly:

First, smart text selection connects the words you see with the answers and actions you need. You can copy and paste text from the real world—like recipes, gift card codes, or Wi-Fi passwords—to your phone. Lens helps you make sense of a page of words by showing you relevant information and photos. Say you’re at a restaurant and see the name of a dish you don’t recognize—Lens will show you a picture to give you a better idea.  This requires not just recognizing shapes of letters, but also the meaning and context behind the words. This is where all our years of language understanding in Search help.


Second, sometimes your question is not, “what is that exact thing?” but instead, “what are things like it?” Now, with style match, if an outfit or home decor item catches your eye, you can open Lens and not only get info on that specific item—like reviews—but see things in a similar style that fit the look you like.


Third, Lens now works in real time. It’s able to proactively surface information instantly—and anchor it to the things you see. Now you’ll be able to browse the world around you, just by pointing your camera. This is only possible with state-of-the-art machine learning, using both on-device intelligence and cloud TPUs, to identify billions of words, phrases, places, and things in a split second.


Much like voice, we see vision as a fundamental shift in computing and a multi-year journey. We’re excited about the progress we’re making with Google Lens features that will start rolling out over the next few weeks.

Introducing the first Daydream standalone VR headset and new ways to capture memories

Back in January, we announced the Lenovo Mirage Solo, the first standalone virtual reality headset that runs Daydream. Alongside it, we unveiled the Lenovo Mirage Camera, the first camera built for VR180. Designed with VR capture and playback in mind, these devices work great separately and together. And both are available for purchase today.

More immersive

The Mirage Solo puts everything you need for mobile VR in a single device. You don't need a smartphone, PC, or any external sensors—just pick it up, put it on, and you're in VR in seconds.

The headset was designed with comfort in mind, and it has a wide field of view and an advanced display that’s optimized for VR. It also features WorldSense, a powerful new technology that enables PC-quality positional tracking on a mobile device, without the need for any additional sensors. With it, you can duck, dodge and lean, step backward, forward or side-to-side. All of this makes for a more natural and immersive experience, so you really feel like you’re there.


Lenovo Mirage Solo

With over 350 games, apps and experiences in the Daydream library, there's tons to see and do. WorldSense unlocks new gameplay elements that bring the virtual world to life, and more than 70 of these titles make use of the technology, including Blade Runner: Revelations, Extreme Whiteout, Narrows, BBC Earth Live in VR, Fire Escape, Eclipse: Edge of Light, Virtual Virtual Reality, Merry Snowballs, and Rez Infinite. So whether you’re a gamer or an explorer, there’s something for everyone.

Point and shoot VR capture


Alongside the Mirage Solo, we worked with Lenovo to develop the first VR180 consumer camera, the Lenovo Mirage Camera. VR180 lets anyone capture immersive VR content with point-and-shoot simplicity. Photos and videos taken with the camera transport you back to the moment of capture with a 180-degree field of view and crisp, three-dimensional imagery.


There’s no better place to relive your VR180 memories than in the Lenovo Mirage Solo headset. And with support for VR180 built into Google Photos, you can easily share those moments with your friends and family—regardless of what device they have.


Lenovo Mirage Camera

We can’t wait for you to try out the Lenovo Mirage Solo and Mirage Camera to dive into new immersive experiences, and to start capturing your favorite moments in VR.


Premiering now: The first-ever VR Google Doodle starring illusionist & film director Georges Méliès

In 1902, Georges Méliès sent his audience on a trip to the moon in his adventure film “Le Voyage dans la Lune.” That was more than half a century before humans ever landed on the moon, and more than a century before people around the world started blasting into space via virtual reality technology.


Méliès pioneered film techniques that immersed people in unfamiliar experiences—an early precursor to today’s virtual reality. So it’s only fitting that today’s first-ever VR-enabled and 360° video Google Doodle celebrates Méliès, and debuts on the anniversary of the release of one of his greatest cinematic masterpieces in 1912: “À la conquête du pôle” (“The Conquest of the Pole”). Created in collaboration with the Google Spotlight Stories, Google Arts & Culture, and Cinémathèque Française teams (as well as production partners Nexus Studios), the Doodle represents some of the iconic techniques and styles found in Méliès’ films, including strong colors, engaging storylines and optical effects.


An illusionist before he was a filmmaker, Méliès discovered and exploited basic camera techniques to transport viewers into magical worlds and zany stories. He saw film and cameras as more than just tools to capture images; he saw them as vehicles to transport and truly immerse people in a story. He played around with stop motion, slow motion, dissolves, fade-outs, superimpositions, and double exposures.

“Méliès was fascinated by new technologies and was constantly on the lookout for new inventions. I imagine he would have been delighted to live in our era, which is so rich with immersive cinema, digital effects, and spectacular images on screen,” says Laurent Mannoni, Director of Heritage at the Cinémathèque Française. “I have no doubt he would have been flattered to find himself in the limelight via today’s very first virtual reality / 360° video Google Doodle, propelled around the world thanks to a new medium with boundless magical powers.”

Enjoy the full Google Doodle VR experience on mobile, Cardboard or Daydream by downloading the Google Spotlight Stories app on Google Play or in the App Store. You can also experience the Doodle without a headset as a 360° video on the Google homepage for 48 hours, or on the Google Spotlight Stories YouTube Channel anytime. Bon voyage on your fantastical VR adventure to the moon!

Behind the scenes: Coachella in VR180

Last weekend, fans from all around the world made the trek to Southern California to see some of music’s biggest names perform at Coachella. To make those not at the festival feel like they were there, we headed to the desert with VR180 cameras to capture all the action.

Throughout the first weekend of Coachella, we embarked on one of the largest VR live streams to date, streaming more than 25 performances (with as many cameras to boot) across 20 hours and capturing behind-the-scenes footage of fans and the bands they love. If you missed it live, you can enjoy some of the best experiences—posted here.

VR180 can take you places you never thought possible—the front row at a concert, a faraway travel destination, the finals of your favorite sporting event, or a memorable location. This year at Coachella, we pushed the format even further by adding augmented reality (AR) overlays on top of the performances—like digital confetti that falls when the beat drops, or virtual objects that extend into the crowd.


AR Confetti in the VR180 stream.

To add these overlays in real time, we used our VR180 cameras together with stitching servers running a custom 3D graphics engine and several positionally tracked cameras. This allowed us to add a layer of spatially relevant visuals to the video feed. Simply put, it's like AR stickers for VR180.

In addition to the responsive AR elements during performances, we also featured Tilt Brush art by artist-in-residence Cesar Ortega, who drew his live impressions of the iconic Coachella landscape in daylight, at dusk and at night. We then inserted Cesar’s designs into the VR180 video stream so viewers could see the art.


Cesar Ortega’s recreation of Coachella in Tilt Brush

Watch the festival footage, including performances and behind-the-scenes footage from the point of view of both the fans and the bands, here. And for the most immersive experience, check it out in VR with Daydream View or Cardboard.

How to publish VR180

Last year we introduced VR180, a new video format that makes it possible to capture or create engaging immersive videos for your audience. Most VR180 cameras work just like point-and-shoot models. However, what you capture in VR180 is far more immersive. You’re able to create VR photos and videos in stunning 4K resolution with just the click of a button.

Today, we’re publishing the remaining details about creating VR180 videos on GitHub and photos on the Google Developer website, so any developer or manufacturer can start engaging with VR180.

For VR180 video, we simply extended the Spherical Video Metadata V2 standard. Spherical V2 supports the mesh-based projection needed to allow consumer cameras to output raw fisheye footage. We then created the Camera Motion Metadata Track so that you’re able to stabilize the video according to the camera motion after capture. This results in a more comfortable VR experience for viewers. The photos that are generated by the cameras are written in the existing VR Photo Format pioneered by Cardboard Camera.
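
As a loose, hypothetical illustration of the layers named above (not the actual box or track syntax, which lives in the published specs), a VR180 video effectively carries a stereo mode, a per-eye mesh projection for the raw fisheye footage, and a timed track of camera-motion samples used for stabilization:

    import java.util.List;

    // Hypothetical, simplified model of the metadata layers described above; the real
    // formats are the Spherical Video Metadata V2 boxes and the Camera Motion Metadata Track.
    public class Vr180MetadataSketch {
      enum StereoMode { MONO, LEFT_RIGHT, TOP_BOTTOM }

      // Mesh-based projection: per-eye geometry that maps raw fisheye pixels into 3D.
      static class MeshProjection {
        float[] vertices;       // illustrative layout: x, y, z, u, v per vertex
        int[] triangleIndices;
      }

      // One stabilization sample: the camera's orientation at a point in time,
      // applied at playback to counter camera shake.
      static class CameraMotionSample {
        long timestampUs;
        float[] orientation;    // e.g. a per-frame rotation
      }

      StereoMode stereoMode = StereoMode.LEFT_RIGHT;
      MeshProjection leftEyeMesh;
      MeshProjection rightEyeMesh;
      List<CameraMotionSample> motionTrack;
    }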

When you use a Cardboard or Daydream View to look back on photos and videos captured using VR180, you’ll feel like you’re stepping back into your memory. And you can share the footage with others using Google Photos or YouTube, on your phone or the web. We hope that this makes it simple for anyone to shoot VR content, and watch it too.

In the coming months, we will be publishing tools that help with writing appropriately formatted VR180 photos and videos and playing them back, so stay tuned!

Announcing high-quality stitching for Jump

We announced Jump in 2015 to simplify VR video production from capture to playback. High-quality VR cameras make capture easier, and Jump Assembler makes automated stitching quicker, more accessible and affordable for VR creators. Using sophisticated computer vision algorithms and the computing power of Google's data centers, Jump Assembler creates clean, realistic image stitching resulting in immersive 3D 360 video.

Stitching, then and now

Today, we’re introducing an option in Jump Assembler to use a new, high-quality stitching algorithm based on multi-view stereo. This algorithm produces the same seamless 3D panoramas as our standard algorithm (which will continue to be available), but it leaves fewer artifacts in scenes with complex layers and repeated patterns. It also produces depth maps with much cleaner object boundaries, which is useful for VFX.

Let’s first take a look at how our standard algorithm works. It’s based on the concept of optical flow, which matches pixels in one image to those in another. When matched, you can tell how pixels “moved” or “flowed” from one image to the next. And once every pixel is matched, you can interpolate the in-between views by shifting the pixels part of the way. This means that you can “fill in the gaps” between the cameras on the rig, so that, when stitched together, the result is a seamless, coherent 360° panorama.


Optical-flow based view interpolation
Left: Image from left camera. Center: Images interpolated between cameras. Right: Image from right camera.
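
Here’s a minimal sketch of that interpolation step, assuming the left-to-right optical flow field has already been computed: each pixel of the left image is pushed a fraction alpha of the way along its flow vector to synthesize an in-between view. A production stitcher also fills holes and blends pixels from both source images.

    public class ViewInterpolationSketch {
      // Forward-warp the left image a fraction `alpha` of the way along the
      // left-to-right optical flow to synthesize one in-between view.
      static int[][] interpolateView(int[][] left, float[][] flowX, float[][] flowY, float alpha) {
        int height = left.length, width = left[0].length;
        int[][] out = new int[height][width];
        for (int y = 0; y < height; y++) {
          for (int x = 0; x < width; x++) {
            // Shift the pixel part of the way toward where it "flows" in the right image.
            int nx = Math.round(x + alpha * flowX[y][x]);
            int ny = Math.round(y + alpha * flowY[y][x]);
            if (nx >= 0 && nx < width && ny >= 0 && ny < height) {
              out[ny][nx] = left[y][x];
            }
          }
        }
        return out;  // Unfilled pixels (holes) would be filled from the right image in practice.
      }
    }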

Using depth for better stitches

Our new, high-quality stitching algorithm uses multi-view stereo to render the imagery. The big difference? This approach can find matches in several images at the same time. The standard optical flow algorithm only uses one pair of images at a time, even though other cameras on the rig may also see the same objects.

Instead, the new, multi-view stereo algorithm computes the depth of each pixel (i.e., the distance to the object at that pixel, which gives a 3D point), and any camera on the rig that sees that 3D point can help to establish its depth, making the matching process more reliable.
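
Here’s a deliberately simplified sketch of that idea: for a single reference pixel, sweep over a set of candidate depths, reproject the implied 3D point into every other camera that sees it, and keep the depth at which the views agree best. The Camera interface and the color-difference cost below are hypothetical stand-ins for the rig’s real calibration and matching cost.

    import java.util.List;

    public class MultiViewDepthSketch {
      // Hypothetical calibrated camera: these methods stand in for the rig's real geometry.
      interface Camera {
        float[] unproject(int x, int y, float depth);  // pixel + depth -> 3D point
        float[] project(float[] point3d);              // 3D point -> pixel coords, or null if not visible
        boolean contains(float[] uv);                  // is the pixel inside the image?
        float[] colorAt(float[] uv);                   // sampled RGB at that pixel
      }

      // Pick the candidate depth whose 3D point looks most consistent across all cameras that see it.
      static float estimateDepth(Camera ref, int x, int y, List<Camera> others, float[] candidateDepths) {
        float bestDepth = candidateDepths[0];
        float bestCost = Float.MAX_VALUE;
        float[] refColor = ref.colorAt(new float[] {x, y});

        for (float depth : candidateDepths) {
          float[] point3d = ref.unproject(x, y, depth);
          float cost = 0;
          int views = 0;
          for (Camera cam : others) {
            float[] uv = cam.project(point3d);
            if (uv != null && cam.contains(uv)) {      // any camera that sees the 3D point helps
              cost += colorDifference(refColor, cam.colorAt(uv));
              views++;
            }
          }
          if (views > 0 && cost / views < bestCost) {
            bestCost = cost / views;
            bestDepth = depth;
          }
        }
        return bestDepth;
      }

      static float colorDifference(float[] a, float[] b) {
        float d = 0;
        for (int i = 0; i < 3; i++) d += Math.abs(a[i] - b[i]);
        return d;
      }
    }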


Standard quality stitching on the left: note the artifacts around the right pole. High quality stitching on the right: artifacts removed by the high quality algorithm.


Standard quality depth map on the left: note the blurry edges. High quality depth map on the right: more detail and sharper edges.

The new approach also helps resolve a key challenge for any stitching algorithm: occlusion. That is, handling objects that are visible in one image but not in another. Multi-view stereo stitching is better at dealing with occlusion because if an object is hidden in one image, the algorithm can use an image from any of the surrounding cameras on the rig to determine the correct depth of that point. This helps reduce stitching artifacts and produce depth maps with clean object boundaries.

If you’re a VR filmmaker and want to try this new algorithm for yourself, select “high quality” in the stitching quality dropdown in Jump Manager for your next stitch!