Tag Archives: Google VR

Daydream Labs: Bringing 3D models to life

Blocks is a tool that lets anyone create beautiful 3D objects in virtual reality, with no prior modeling experience required. You can use the objects or characters that you make for many applications, like VR games or augmented reality experiences. Check out the Blocks gallery for some fantastic examples of what people are creating—we’ve been blown away by the quality of what we’ve seen so far.

As we explored all these quirky creations, we imagined how great it would be if the models could come to life. Right now, even the best creations are still static, and our team at Daydream Labs took that as a challenge. So, during a one-week hackathon, we prototyped ways to make Blocks scenes feel dynamic and alive. Here’s what we came up with:

introdancing

Animating 3D models for use in virtual reality or augmented reality is a three-step process. First, you need to set up the model so it can be moved. Then, you have to figure out how to control it. Last, you need a way to record the movements.

Step One: Preparing the Model

Before animating a character in Blocks, some prep work is required to get it ready. We explored two methods of doing this: inverse kinematics and shape matching.

Inverse kinematics is a common technique for animating characters in video games, and it’s even used in other fields like robotics. At a super high level, the character automatically positions its body based on where you want the hands and feet to go. So if you raise the character’s hand over its head, the elbow and other joints will be realistically positioned thanks to some nifty calculations done by inverse kinematics. Instead of posing every part of the character, you just move a hand or a foot, and the rest of the character’s body position adapts.
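
To make the idea concrete, here’s a minimal sketch of two-joint inverse kinematics in 2D: given a hand target relative to the shoulder, the law of cosines gives the shoulder and elbow angles analytically. (This is an illustrative toy, not the solver used in the prototype; real rigs solve in 3D across many joints.)

```python
import math

def two_bone_ik(tx, ty, upper, lower):
    """Given a hand target (tx, ty) relative to the shoulder, return the
    shoulder angle and the elbow's relative bend (radians) for a 2D arm
    with segment lengths `upper` and `lower`."""
    # Clamp so the target is always reachable (at most a fully extended arm).
    dist = max(min(math.hypot(tx, ty), upper + lower - 1e-9), 1e-9)
    # Law of cosines: interior angle at the elbow between the two bones.
    cos_int = (upper**2 + lower**2 - dist**2) / (2 * upper * lower)
    interior = math.acos(max(-1.0, min(1.0, cos_int)))
    # Angle between the upper arm and the shoulder-to-target direction.
    cos_off = (upper**2 + dist**2 - lower**2) / (2 * upper * dist)
    offset = math.acos(max(-1.0, min(1.0, cos_off)))
    shoulder = math.atan2(ty, tx) - offset
    elbow = math.pi - interior  # relative rotation applied at the elbow
    return shoulder, elbow
```

Move the hand target and both angles update together, which is exactly the “move a hand, the rest of the arm follows” behavior described above.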

KeyedWalkCycleLoop

This makes inverse kinematics great for characters with rigid “skeletons,” such as humans, animals and robots—but shape matching is a new technique for characters with less well-defined physiques, such as a sentient blob or a muppet. Shake a character’s foot, and its leg wiggles around like rubber. The jiggly quality of the movement adds character and playfulness to things like a chair or a boombox with legs. Best of all, it works with objects of any shape.

Jiggly

You can check out the specific shape-matching algorithm we used here. Our current prototype requires you to spend a minute setting up an object for shape matching, but the process could eventually be automated. Then, you’d be able to get a creation wiggling without any additional work.
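
For a flavor of how shape matching works, here is a simplified sketch of its core step (in the spirit of meshless-deformation shape matching, not the exact algorithm linked above): find the best-fit rotation of the rest shape onto the current, deformed points, then pull every point toward its rigidly transformed “goal” position. The strength of that pull is what makes an object feel jiggly or stiff. The function names are illustrative.

```python
import numpy as np

def shape_match_goals(rest, current):
    """Best-fit rigid transform (rotation + translation) of the rest shape
    onto the current, deformed points; returns per-point goal positions."""
    rest_c = rest - rest.mean(axis=0)
    cur_c = current - current.mean(axis=0)
    # Optimal rotation via SVD of the covariance between configurations.
    U, _, Vt = np.linalg.svd(cur_c.T @ rest_c)
    if np.linalg.det(U @ Vt) < 0:       # guard against reflections
        U[:, -1] *= -1
    R = U @ Vt
    return rest_c @ R.T + current.mean(axis=0)

def jiggle_step(rest, pos, vel, stiffness=0.5, dt=1.0 / 60):
    """Pull each point toward its goal; lower stiffness = more wobble."""
    goals = shape_match_goals(rest, pos)
    vel = vel + stiffness * (goals - pos) / dt
    pos = pos + vel * dt
    return pos, vel
```

Because the goal positions come from a best-fit of the whole point cloud, this works for any shape, which is why no rigid skeleton is needed.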

Step Two: Controlling the Model

Once the model is prepped and ready to go, VR helps you move it using three techniques: direct control, grab points and posing.

You can directly control a character by connecting its hands and head to the user's headset and controllers. This is similar to the performance technique used by other VR creativity apps such as Mindshow.
MotionControl

You can also place Vive trackers on your feet to control the character’s legs. Look at that move!

FootControl

Alternatively, you can control the model by grabbing specific points and manipulating them, sort of like how you’d make a teddy bear wave by grabbing its arm and moving it. Here, someone is flapping Chairy’s gums.

ChairGums

In testing, this even worked with multiple players—you and a friend could wiggle characters in a shared environment. It was neat to be moving a character together, almost like playing with toys.

For humanoids, you can directly pose the character’s skeleton, similar to posing an action figure or art mannequin. In VR, spatial awareness and control allow armatures to be posed much more intuitively than in traditional apps. This is great when precise control of all parts of a 3D model is important, such as setting poses for keyframed animation.

ToffPose

Each of these control schemes has its strengths. People loved “becoming” the object when in direct control—many would role-play as the character when using this interface. When more precision is required, inverse kinematic posing is a good option that's more intuitive in VR than in a traditional desktop environment. We found the rubbery shape-matching effect to be particularly interesting. The stretch and jiggle make this technique less precise than posing a skeleton, but definitely more playful.

Step Three: Recording Motion

Lastly, we experimented with two techniques to record and play back the movements: pose-to-pose and live-looping.

Pose-to-pose animation is similar to current 3D animation techniques and works for complex movements like jumping into a chair. You set a pose, take a “snapshot” (or keyframe), and then repeat the process to create a sequence of poses. When you play this, the character moves between those poses. VR makes the process more intuitive, allowing people to create expressive animations without needing to learn complex animation software.
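
Under the hood, pose-to-pose playback is just interpolation between snapshots. A minimal sketch (the `sample_pose` helper and the list-of-joint-angles pose representation are illustrative, not our actual data model):

```python
def sample_pose(keyframes, t):
    """Linearly interpolate a pose at time t from a sorted list of
    (time, pose) snapshots, where a pose is a list of joint angles."""
    if t <= keyframes[0][0]:
        return list(keyframes[0][1])
    if t >= keyframes[-1][0]:
        return list(keyframes[-1][1])
    for (t0, p0), (t1, p1) in zip(keyframes, keyframes[1:]):
        if t0 <= t <= t1:
            u = (t - t0) / (t1 - t0)  # normalized position within the segment
            return [a + u * (b - a) for a, b in zip(p0, p1)]
```

Production animation systems use easing curves and quaternion interpolation for rotations, but the snapshot-then-blend idea is the same.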

JumpFast

For simpler animations, live looping lets you record an object’s movements in real-time and then play them back as a repeating loop. Press the record button, move, press the button again, and you’re done—the animation starts looping. We got these two characters dancing in under a minute.
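
The record-and-loop flow can be sketched as a small buffer of captured poses that playback wraps around (class and method names here are hypothetical, for illustration):

```python
class LoopRecorder:
    """Record a stream of poses, then play them back as a repeating loop."""

    def __init__(self):
        self.frames = []
        self.recording = False

    def toggle_record(self):
        """One button press starts a fresh recording; the next stops it."""
        self.recording = not self.recording
        if self.recording:
            self.frames = []

    def capture(self, pose):
        """Call once per frame while the user is moving the object."""
        if self.recording:
            self.frames.append(pose)

    def playback(self, frame_index):
        # Wrap around so the animation loops forever.
        return self.frames[frame_index % len(self.frames)]
```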

FinalBlocks

Live looping is easy and great for quickly creating rough animation, whereas pose-to-pose is better for more precise or complex movements.

Mapping your movements to any Blocks creation is magical, and as this prototype demonstrates, technically feasible. A person with no animation experience can easily breathe life into one of their 3D models. The only limit is imagination.

Introducing an original VR video series by MLB and Daydream

Virtual reality helps filmmakers tell stories from a new perspective, bringing you into the action. The Daydream team works directly with creators of all types—movie studios, TV networks, musicians and YouTube Creators—to help bring their awesome ideas to (virtual) reality.

We’ve partnered with Major League Baseball on some exciting VR experiences for Daydream, including the MLB.com At Bat VR app and the MLB.com Home Run Derby VR video game. Today, we’re debuting our latest collaboration: “On the Verge,” an original VR video series that provides an up-close, behind-the-scenes look at the lives of young MLB stars around the game.

The first episodes of “On the Verge” will take you on the field, inside the batting cage, in the clubhouse and to more places with young MLB stars Josh Bell (Pittsburgh Pirates), Mookie Betts (Boston Red Sox), Manuel Margot (San Diego Padres), and Jose Berrios (Minnesota Twins). These four episodes are available today in the recently released MLB.com At Bat VR Daydream app, which combines live video streaming and real-time stats for a complete live game sports experience in VR. They'll also be available on MLB’s official YouTube account soon.

We worked closely with MLB to tell these stories from a new perspective, with unique access around the ballparks, using Jump, Google’s platform for VR video capture that combines high-quality VR cameras and automated stitching. Because the Jump cameras don’t take up a ton of space, MLB could capture memorable moments on and off the field, ultimately producing fun stories of what it’s like to be big leaguers.

btsmlb

“On The Verge” joins a number of videos and series already created with Jump, including The New York Times’ Great Performers collection, Within’s “The Possible” series and Wevr’s “Internet Surfer” video.

Grab your Cardboard or Daydream View and check out the first handful of episodes of “On The Verge” today. Additional episodes will drop at key moments throughout the 2017 MLB season.

Daydream Labs: Teaching Skills in VR

You can read every recipe, but to really learn how to cook, you need time in the kitchen. Wouldn't it be great if you could slip on a VR headset and have a famous chef walk you through the basics step by step? In the future, you might be able to learn how to cook a delicious five-course meal—all in VR. In fact, virtual reality could help people learn all kinds of skills.

At Daydream Labs, we tried to better understand how interactive learning might work in VR. So we set up an experiment aimed at teaching coffee making. We built a training prototype featuring a 3D model of an espresso machine that reacts like a real one would when you press the buttons, turn the knobs or drop the milk. We also added a detailed tutorial. Then, we tasked one group of people with learning how to pull espresso shots by doing it in VR. (At the end, we gave people a detailed report on how they’d done, including an analysis of the quality of their coffee.) For comparison, another group learned by watching YouTube videos. Both groups were able to train for as long as they liked before trying to make a coffee in the real world; people assigned to the YouTube tutorial typically watched it three times, and people who took the VR training typically went through it twice.

othercoffeegif
A scene from our coffee training prototype

We were excited to find that people learned faster and better in VR. Both the number of mistakes made and the time to complete an espresso were significantly lower for those trained in VR (although, in fairness, our tasting panel wasn't terribly impressed with the espressos made by either group!). It's impossible to tell from one experiment, of course, but these early results are promising. We also learned a lot about how to design future experiments. Here's a glimpse at some of those insights.

othercoffeegif2
Another scene from our coffee training prototype

First, coffee making was a bad choice. The physical sensation of tamping simply can't be replicated with a haptic buzz. And no matter what warning we flashed when someone virtually touched a hot steam nozzle, they frequently got too close to it in the real world, and we needed a chaperone at the ready to grab their hand away. This suggests that VR technology isn’t quite there when it comes to learning some skills. Until gloves with much better tracking and haptics are mainstream, VR training will be limited to inputs like moving things around or pressing buttons. And if the digital analog is too far removed from the thing it's simulating, it probably won’t help all that much with actually learning the skill.

We also learned that people don’t follow instructions. We see this in all of the prototypes made in Daydream Labs, but it was especially problematic in the trainer. Instructions on controllers? People left their hands by their sides. Written on a backboard? They were too busy with what was right in front of them. Delivered as a voiceover? They rushed ahead without waiting. We even added a “hint” button, but people thought that it was cheating—and forgot about it after a step or two anyway. We ended up needing to combine all of these methods and add in-scene markers, too. Large green arrows pointing at whatever the user was supposed to interact with next worked well enough to allow us to run the test. But we’ve by no means solved this problem, and we learned that lots more work needs to be done on incorporating instructions effectively.

Finally, we discovered that it was too difficult to track all the steps a person took. Every choice we gave a user led to an exponential growth in the number of paths through the tutorial. Worse, people didn't always follow our linear “railroad-style” path, so we had to model all kinds of situations; for example, letting the user steam the milk before grinding the coffee. In the end, it was much easier to model the trainer like a video game, where every object has its own state. So instead of the trainer keeping track of all the steps the user did in order (“user has added milk to the cup”), we had it track whether a key state had been reached (“cup contains milk”).
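
That state-based approach can be sketched like this (the world layout and helper names are hypothetical, for illustration): each object carries its own state, and the tutorial checks goal predicates instead of step orderings, so steaming the milk before or after pouring it both count.

```python
def new_world():
    """Each object tracks its own state, video-game style."""
    return {"cup": {"contains": set()}, "milk": {"steamed": False}}

def steam_milk(world):
    world["milk"]["steamed"] = True

def pour_milk(world):
    world["cup"]["contains"].add("milk")

def milk_step_done(world):
    # Goal predicate: "cup contains steamed milk" rather than
    # "user performed steps 4 and 5 in that order".
    return "milk" in world["cup"]["contains"] and world["milk"]["steamed"]
```

Because the check depends only on the resulting state, every ordering the user invents is handled for free, instead of exploding the number of tutorial paths to model.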

Despite these challenges, we consider this prototype a success: people learned something new in VR and enjoyed the process. Some of them even came back to use the espresso trainer again after they’d tried to make a real coffee. In fact, once they had the real-world experience, the virtual training had more context and was more meaningful to them. It may be that VR is a useful way to introduce people to a new skill, and then can help them practice and consolidate once they’ve tried it in the real world. One thing’s for sure—there’s a lot more to learn about learning in VR!

Adventures abound: Explore Google Expeditions on your own

Google Expeditions makes it possible for teachers to take their classrooms on virtual reality field trips to amazing places like the Taj Mahal or Machu Picchu. Today, we’re starting to roll out a new solo mode of Expeditions for Android, so that anybody can explore more than 600 different tours on their own. Just download the Expeditions app (coming soon for iOS), drop your phone into Cardboard and get ready for an adventure.

For the past two years, Expeditions has been a tool to extend learning inside the classroom, helping students see and experience the world in new ways, visit college campuses, gain exposure to new career paths and role models, and learn about social impact initiatives happening around the globe. During this time, we've heard from students, teachers, and even our friends that they'd love to explore and learn from Expeditions outside the classroom.

SGE_Body1

Self-Guided Expeditions let anyone explore anywhere. Students can go on tours at home and share the experience with their family. Teachers can assign tours as homework to complement in-class work. What better way to round out textbook reading about the Founding Fathers than an Expedition about the Hamilton-Burr duel narrated by Lin-Manuel Miranda? And of course, anybody who loves to learn and explore can experience all the tours for themselves.

SGE_Body2

It’s easy to use. All you need is your smartphone, Google Cardboard and the Expeditions app. If you have a Daydream-ready phone, it also works with Daydream View. Simply launch the app, pop your phone in your viewer and you’re ready to go. You can take tours as either an Explorer or a Guide. As an Explorer, you experience the tour on your own, and you’ll see points of interest highlighted with more information about the incredible sights you’re seeing. Guide mode is especially handy if you’re a teacher and you want to preview a tour before leading your students on it.

We’ve also heard from teachers that they want more tools to help explain and highlight things within Expeditions panoramas and environments. The new “Annotations” tool lets a Guide draw within a scene using their finger or a stylus. Each of the connected Explorers will instantly see that same annotation in the scene.

SGE_Body4

To get started with Self-Guided Expeditions, check out the Seven Modern Wonders of the World, or dive into the beautiful and fragile Great Barrier Reef. Or, if you love baseball, check out one of the game’s great cathedrals with a tour of Oriole Park at Camden Yards. Wherever you choose to go, there’ll be something amazing to see.

Tilt Brush Artist in Residence: Meet Estella Tse

Editor’s note: Tilt Brush lets you paint in 3D space with virtual reality. Earlier this year, we launched the Artist in Residence (AiR) program to showcase what’s possible when creative artists experiment with this new medium. The resulting works of art have been amazing, and you can check some of them out on our website, or right in the Tilt Brush app itself.

In this series, we go deeper into these artists’ process, explore their creative influences, hear about their experience using Tilt Brush and share any tips they have for aspiring VR artists. Want more? Check out our previous posts on Steve Teeps and Isaac Cohen.

As an artist in residence, Estella Tse created Metamorphosis, which celebrates the beauty of our individual journeys of growth, transformation and self-discovery. We caught up with Estella to hear more. 

Walk us through your creative process in Tilt Brush. How do you use it?

I got comfortable with Tilt Brush immediately! I felt like I could summon light out of my fingertips. And it's so intuitive. Ideas flow out of me.

My VR painting technique isn’t very different from designing an illustration on paper. I start with fast, loose and long lines. Then I tighten up and work on details, going from big to small and general to specific.

I usually have an idea of the mood or aesthetic I want to create in VR. I like to design with intent. Everything from shape to scale to color, all elements serve the mood and feeling of my pieces. Every mark counts. I want my viewer to feel inspired when they step into my pieces. I want them to feel the magic.

How is Tilt Brush different from working in other mediums?

It's almost as if I'm working with a whole new dimension! The vastness of seemingly infinite space is exhilarating, and also too much at times. I've been making skyboxes to close off my space.

Tilt Brush is not like any other art form. It's kind of a hybrid between drawing and sculpting. I liken it to sculpting with line. It's so easy to wireframe and plan out a scene, making it a great tool for quick prototyping. For the first time, we can sketch in 3D without having to use complex modeling software. Thinking and working in 3D has never been more intuitive and natural.

One of the most fascinating things about Tilt Brush is that this is the first time we as humans have ever been able to fully immerse ourselves in hand-drawn paintings—you can look around and through my paintings. From an art history point of view, this is incredible.
EstellaBodyImg

What inspires you?

On a high level, I'm really interested in exploring the potential of creating a new art form in VR, similar to how Walt Disney and his team iterated over and over to learn the balance of storytelling in animation. This is just the beginning for VR and AR. I'm excited to experiment with different techniques, and to explore the evolution of art with innovative technology.

In my residence program with Tilt Brush, I used the “playback” feature when loading a sketch as an animation tool. Instead of having my final piece be the piece, the process is the piece. I painted a caterpillar going through the phases of metamorphosis, then blossoming into a butterfly in front of your eyes. I believe growth, process, and the journey are really important aspects of creativity, as well as life.


Do you have any advice for other Tilt Brush creators?

Try everything. There’s no right or wrong way to do anything right now. There are no rules. The best part about Tilt Brush is that anyone can draw. It's fun. It's not intimidating. It brings out the sense of wonder we had as kids. I've seen that childlike spirit come out in even veteran animation artists while using Tilt Brush.

Create things beyond reality. We've been given a very special opportunity to create things that are out of this world, defying the rules of physics. Forget trying to make something look "real." What's next? You've been given the power of magic. What will you do?

Experience Dunkirk in WebVR

It’s World War II. You’re trapped on a beach, deep within enemy territory, with no place to take shelter or even hide. All you can do is wait for the rescue boats to come. Would you escape? 

You can see for yourself with the Dunkirk WebVR game for Chrome, a unique, collaborative VR experience based on the upcoming movie. It transports you back to the siege of the beaches of Dunkirk in June 1940, and it’s available now. Playing with a friend or solo, you can experience the battle as either of two soldiers, both trying to survive. You’re plunged into this moment in history through alternating perspectives—and every choice could mean the difference between life and death. In the end, just like the movie, the game shows the power of people working together in extreme circumstances and how the human spirit can persevere when all looks lost.

Because the game is available in WebVR, instead of as a native app, it’s easier for anyone to experience it. It works across all devices—you can play it in a web browser like Chrome, on a phone, or with any VR headset that supports WebVR, like Cardboard and Daydream.

DunkirkBody

The Dunkirk WebVR game for Chrome is a collaboration between Warner Bros, the Google Chrome VR team, Jam3, and Google Zoo, our in-house creative think tank for brands. Check it out now and catch Dunkirk (the film) in theaters on July 21, 2017.

Exploring virtual worlds with WebVR and Matterport

Editor’s Note: When you build with WebVR, anyone can explore VR experiences with Chrome and Daydream View. Ashish Agrawal is the head of new initiatives and virtual reality at Matterport. In this post, he explains why WebVR was a great tool for Matterport and how the team added WebVR support to their platform.

MP1.5
Matterport tours combine 360º panoramas with a unique Dollhouse View (3D data) to give users a complete view of a space.

Individual 360º photos are great for a quick preview of what a place looks like, but with a Matterport Space, you really feel like you’re actually there. Our virtual tours combine many 360º panoramas along with 3D data to create an immersive, interactive experience.

By combining our 3D camera hardware with custom vision processing in the cloud, we’ve built a platform to create virtual environments from real places. Thousands of people have used our technology to digitize more than 450,000 real-world places into 3D experiences: celebrity homes, museums, canyons, iconic architecture and much more.

While you can access Matterport Spaces through a web browser on desktop or mobile, exploring them in virtual reality takes an additional step. Previously, you had to download the Matterport VR app from the App Store, Oculus Store, or Google Play and switch to the app whenever you wanted to go into VR. Now with Matterport’s implementation of WebVR, no external app download is required—everything you need is right in the Chrome web browser. You can go to a mobile website that has a Matterport Space embedded on it, tap the VR button, put the phone into your Daydream View headset, and you’re ready to explore.

Matterport was one of the first companies to partner with Google to bring VR to Chrome. We started integrating our product with the WebVR API while it was still in beta. We took this early step because we really believe in WebVR as a future platform, and we wanted to be there right when it launched.

Let’s dig a little deeper on how we implemented it. Because our WebGL-based player works on desktop and mobile platforms, you could already explore Matterport Spaces through a browser. We wanted to ensure that we retained this optimal performance with added WebVR support. However, a full optimization, including all the features we’ve added to our WebGL-based player in the last few years, would have taken time. So in the true spirit of keeping it simple, we made a version of the player with the WebVR Boilerplate and a stripped down version of our loading and rendering code, and we optimized everything for performance.

Here are a few of the things we did to optimize performance in VR:

  • To shorten load time, we preload the WebVR version immediately after the user invokes the player rather than after the hand controller calibration.

  • For the initial launch, we removed all the features included in our normal desktop/mobile version.

  • We render only what’s necessary and nothing more.

  • We profile on the actual target device, not on a developer’s computer.

To learn more, read our in-depth case study.

Now when you enter WebVR, we switch to the stripped-down WebVR-based version of the player. And when you exit, we switch back to our normal WebGL-based player. You don’t even notice the change.

Matterport2

‘Hawaii Oceanside Villa’ in WebVR. The white line is the ray-tracing from the controller. Blue circles are locations the user can teleport to.

There’s more to virtual reality than gaming, and we here at Matterport are committed to making it simple to create, discover, and share great VR content. WebVR is helping us accomplish these goals by making it even easier for our users to explore. Check it out for yourself by visiting the Hawaii Oceanside Villa. Just tap the VR button in the corner and pop it into a Daydream View. It’s the next best thing to being there.

Blocks: Easily Create 3D Objects in VR

Today, it takes complex software and a specific skillset to create compelling VR and AR experiences. That software also requires building 3D objects on a 2D screen—something our brains aren’t wired to do. It occurred to us that creating the objects while in virtual reality could make this easier. So we developed Blocks, a VR app for the HTC Vive and Oculus Rift that lets you easily create beautiful 3D objects in no time.

Blocks is simple enough for anyone to use, even those without any prior modeling experience. It’s designed to feel more like playing with children’s blocks than working with traditional 3D modeling software. Starting with a simple set of shapes, a color palette, and an intuitive set of tools, you’re able to naturally and quickly create almost anything you can imagine, from a piece of watermelon to a whole forest scene.

When your creation is ready, you can export it as an OBJ to use in AR or VR apps you're developing. You can also share it to the web or generate an animated gif. Explore other people’s creations at vr.google.com/objects for inspiration or as a starting point for your own remix.
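
The OBJ format Blocks exports to is plain text and easy to generate or inspect; a minimal serializer looks something like this (a sketch of the file format, not Blocks’ actual exporter):

```python
def obj_text(vertices, faces):
    """Serialize a mesh in Wavefront OBJ format: one 'v x y z' line per
    vertex and one 'f i j k' line per triangle (OBJ indices are 1-based)."""
    lines = [f"v {x} {y} {z}" for x, y, z in vertices]
    lines += ["f " + " ".join(str(i + 1) for i in tri) for tri in faces]
    return "\n".join(lines) + "\n"
```

Because the format is this simple, OBJ files from Blocks drop straight into game engines and other 3D tools.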

We’ve already seen amazing creations and use cases from 3D modelers, artists, developers and even people with no modeling experience at all—ranging from a robot that could star in a game, to a skyline meant as a backdrop for a Tilt Brush sketch, to some cacti made just for fun.

Get Blocks for free starting today on Oculus Store and Steam.

How journalists can tell compelling stories using VR

Over the past few years, we’ve seen the rise of a new medium for storytelling in journalism: virtual reality. From the printing press to radio, from television to the internet, and now VR, technological innovation has changed how journalists gather, report and deliver the news. VR is already making an impression on journalism by immersing an audience in a story, offering unlikely perspectives and creating connections to emotional moments.

At the Google News Lab, we help journalists develop a better understanding of how to tell stories in VR. So, for the past six months, we've conducted a research study that offers insight into what makes VR a distinct storytelling medium, why it’s alluring to people, and what that means for storytellers. We also partnered on this study with a team at Google called ZOO, a creative think tank for brands and agencies.

The study used a method of qualitative research called ethnography, which uses in-field observations and interviews to understand a person’s relationship with an experience. We conducted 36 interviews with a diverse range of participants, observing them as they interacted with their favorite VR pieces and asking them to reflect on how the experience made them feel.

Our study found that VR was distinct from other storytelling mediums in a few key ways. First, it conveys the sense that the viewer is “living the story” as opposed to passively consuming it (“storyliving” rather than storytelling). VR also allows people to dramatically expand their perspective on a story and can leave them with strong emotional experiences, but sometimes that comes at the expense of conveying information.

Participants found VR alluring for a few reasons: viewers can participate rather than simply be immersed in an experience; they can seek out a specific emotion, like happiness, sadness or fear; and they can embody someone or something else—a bird, a tree, or a person living on the other side of the world.

Storyliving: a study of how audiences experience VR and what that means for journalists


So, what do our findings mean for journalists who want to tell compelling stories in VR? Here are three factors journalists should consider, plus some tips for how you can incorporate VR into your reporting:

  1. VR is effective when it’s focused on conveying an emotional experience
    Given that VR is a medium that privileges storyliving over storytelling, journalists should approach how they structure and frame a story differently than they would with more traditional mediums. 

    Journalists should focus on conveying an emotional impression, rather than telling a story that follows a traditional narrative arc with a beginning, middle, and an end. Consider the emotional state you want the viewer to experience and find the moment within your story that can best deliver it. A viewer will often seek out more information about the subject they have just been immersed in, so it makes sense to package that detail or backstory alongside the VR experience.

  2. Play with perspective in new ways and create opportunities for participation
    Conveying perspective—or encouraging people to see a story through someone else’s eyes—is critical to good journalism. VR has the unique ability to produce a sensation of embodiment which can be a powerful tool to expand perspective. 

    So journalists should let viewers choose a perspective. Can you let a viewer experience a story about a political crisis from a particular side of the conflict? Watch a baseball game from either team’s perspective? See outer space from the inside of an astronaut’s helmet?

  3. Consider the heightened vulnerability of subjects when telling a story
    VR can leave viewers in a state of vulnerability, both physically and emotionally. A person can feel surprised or shocked when entering the virtual experience or re-integrating into reality at the end of an experience.

    That means journalists should consider the ethics (both pitfalls and advantages) of making the viewer feel vulnerable when constructing a story about an emotionally sensitive topic. Journalists take this into account when constructing stories in a traditional medium, but the vulnerability is more pronounced in VR.

    You should also signal to a viewer when they’re entering a story and when they’re exiting from it (similar to how movies begin with a title and end with the credits). This is especially important at the end of a VR story since viewers typically piece together their understanding of the story after it’s over. 

VR creates an opportunity for journalists to tell stories in a new way. Insights from our study can help journalists use VR to expand perspectives, create strong emotional connections to a story, and spread knowledge that matters. Go ahead, immerse yourself.

Tilt Brush AiR: Isaac Cohen

Editor’s Note: As part of his residency, Tilt Brush artist Isaac Cohen—aka Cabbibo—used the Tilt Brush Toolkit to create a VR picture book titled “Delila's Gift,”  which tells the story of a small sea creature named Delila and her struggles to comprehend the darkness around her and what it means to belong.

“Delila’s Gift” is free to play on Steam. We caught up with Isaac to hear more about what inspires him and what it’s like working with Tilt Brush.

1. Walk us through your creative process in Tilt Brush. How do you use it?

The Tilt Brush interface is really intuitive, so it’s easy to get started. For me, it’s the first step of the creative process. After painting in Tilt Brush, I take my paintings and use them as the basis for a simulation that is coded in Unity. I end up with these tiny little sprites that recreate the form of the painting.

Because I'm recreating the painting with a limited number of particles, the paintings need to be simple. This is hard for me because I really can't draw, so making simple paintings means repainting each page many times until it feels right.

ICDG_Pic

2. What inspires you?

My inspiration comes from my personal experiences, as well as from nature and reading about fantastic natural phenomena and creatures—things like nudibranchs, jellyfish, redwood trees, the reflection of light on the water, prisms, iridescent beetles, and the clouds rolling in over the edge of a mountain.

For “Delila’s Gift”, it was an experience I had one day when I was biking, and a car almost hit me. The driver started yelling at me and continued to follow me. I was angry, and I entered fight or flight mode, which wasn’t helpful. I believe that had I taken a deep breath, I could have had a meaningful interaction with that person and showed them love. Taking a deep breath in moments of fear and loneliness can help calm our minds and remember how miraculous it is to be alive. It's a gift; hence the name.

I think that there is a simplicity to the way that I can paint with Tilt Brush that makes it more humane, genuine, and approachable. It helps to have paintings that feel “handmade,” like they are made by a real person from the heart. I wanted to reflect that in “Delila’s Gift.”

ICDG_Pic2

3. Do you have any advice for other Tilt Brush creators?

Tilt Brush is the equivalent of my VR journal. I use it as a place to sketch and think, to imagine and inhabit the space that I am about to create.

I’d suggest that any time you create a new project in VR, whether it’s a game, a narrative, or some other weird experience, you should spend some time in Tilt Brush, mocking up what the scenes, characters and interfaces look like. It’s such a special way to get used to the space you are about to inhabit, and allows for dramatically quicker iteration in terms of understanding scale, size, physical aesthetic, and practicality.