Journey through memories

 

Fast prototyping and UX development for an ongoing virtual reality narrative experience.

 

 

"Heart of a survivor" (working title) is a demo for a still-unreleased virtual reality experience that takes the participant in a journey through the heart and memories of a real-life survivor story.

In 2019 I was hired, together with technical director Nicolás Escarpentier, to adapt a movie script into an immersive and interactive virtual reality experience. Since the final version is still unreleased as of September 2019, I will not disclose details about the plot, focusing instead on the creative process and techniques used.

 
One of my early concept art pieces illustrates the idea of a heart full of memories. The participant would embody the owner of the heart as both try to make sense of the tragic, real-life ordeals that unfold.


Ideation

The creative process was highly iterative, focusing on rapid prototyping of concepts together with the director. For this we had to explore several different visualizations and storyboarding tools, often having to expand them to comply with the needs of Virtual Reality.

Who are you?

When creating an immersive narrative, the first question I posed was, “who is the viewer (or participant)?” Unlike most traditional media, there are no walls separating viewers from the story being told around them. Usually, the first thing immersive media viewers do when putting on a headset is look at their hands and body - even if there are none - in order to “anchor” themselves in the new environment.

Since this is a biographical story, the straightforward approach would be to tell it through the main character’s eyes and have the participant simply accept this. But we decided to play with the ambiguity of viewer/participant presence and deliberately invite them to step in and out of the main character’s point of view. Even the world where the story happens would be an extension of the main character’s body: his heart and memories become a place to be explored by both the participant and himself.

I was really excited by the opportunity to turn embodiment into the core narrative mechanic rather than just a problem to be solved.

VR naturally turns the viewer into part of the story - a participant. Even when there are no interactions, the participant is still inside the scene and shouldn't be treated as a passive camera. This opens up many novel storytelling opportunities, but it demands rethinking what narrative means in VR.

My idea was to embrace embodiment and use the user's self-perception as a narrative mechanic. For this, the team had to be immersed in VR as soon as possible.

I created sketches and storyboards on paper, then worked with Nicolás to test them inside the Unreal game engine using placeholder characters and objects.

We recorded the dialogue ourselves, added it to an animated 2D storyboard, and then to the 3D blocking inside Unreal using rudimentary animations. This gave the director a clearer view of how to further develop the story for VR.

One of the storyboard frames for the camp scene.

There is a gap between script, screen, and VR that makes it hard for the team to visualize the final result. To bridge it, I made heavy use of existing VR modeling tools like Oculus Quill and Medium. These allowed me to sketch 3D models and entire environments in minutes and quickly have the director experience them inside VR headsets.

These VR sketches could be visualized inside the modeling tools themselves.

Or we would recreate them in Unreal to try more complex interactions.

This is my first rendition of a heart chamber using Quill. VR modeling tools allowed me to bring the team inside VR from day one of the project.


My goal was to have the team sharing our work inside VR as quickly as possible.

Software like Oculus Medium and Quill allowed me to create VR sketches in minutes instead of days.

 

World building

For the characters, we tested several technologies: 3D modeling, motion capture, and several types of volumetric capture, including full 3D mesh and point-cloud capture, Depthkit, and hybrid solutions.

Concept art I created for one of the characters. The script demanded a more naturalistic approach to the characters, but the art direction used a painterly effect, not only to avoid the “uncanny valley” but also to bring a poetic approach to the painful subject matter.


 

Exporting to game engines

One downside of using VR modeling programs like Quill and Medium is that they don't generate efficient, game-engine-ready models. This is not a problem for small objects, but for a big environment like this heart scene, I had to rework their topology using traditional 3D software like ZBrush and Autodesk Maya.

Again, speed and iteration were key, so I made full use of the latest remeshing tools available - downsizing a 3D model with millions of triangles into a more manageable few thousand in a fraction of the time it would take by hand in Maya.
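As an illustration, here is a minimal sketch of how a batch decimation step like this could look using Maya's Python API (mayapy). The file names and reduction percentage are hypothetical, not the project's actual settings.

```python
# Minimal batch-decimation sketch using Maya's Python API (mayapy).
# File names and the reduction percentage are hypothetical.
import maya.standalone
maya.standalone.initialize()

import maya.cmds as cmds

cmds.loadPlugin("objExport")  # OBJ translator
cmds.loadPlugin("fbxmaya")    # FBX translator

# Import the dense Quill/Medium export (millions of triangles).
cmds.file("heart_chamber_quill.obj", i=True, type="OBJ")

for mesh in cmds.ls(type="mesh"):
    transform = cmds.listRelatives(mesh, parent=True)[0]
    # Reduce the triangle count by ~95%, favoring quad-friendly topology.
    cmds.polyReduce(transform, version=1, percentage=95, keepQuadsWeight=1.0)

# Export a single game-ready file for Unreal.
cmds.file("heart_chamber_gameready.fbx", exportAll=True,
          type="FBX export", force=True)
```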

 

Shooting 360 with professional actors

To further understand character development and dialogue in VR, we were joined by Chris Hall, an award-winning producer of 360 VR experiences. We had real actors act out the scenes around a Ricoh Theta 360 camera. The insights generated were used to create the motion script and to find the best ways to direct the participant's attention. It was also a lot of fun to see professional actors add real emotional depth to scenes I had been working on for days.

Motion Capture or Volumetric Capture

After the 360 shoot, we had to decide how best to convey complex human emotions and capture the actors’ performances.

My six-year-old daughter’s motion capture performance at a 15-camera OptiTrack studio.

The initial idea was to capture body animations using motion capture solutions like OptiTrack or standalone suits. This would allow us to create character models from scratch and move them freely in the virtual environment - a pipeline commonly used in video games. The actor would have to act twice, though: once in a motion capture studio wearing a mocap suit, and then a second time standing still in a facial capture rig. Solutions for capturing face and body at the same time exist, but they are prohibitively expensive.

After the virtual body is modeled, the facial animation and the motion capture are joined. Although it provides more creative freedom, motion capture was deemed too labor-intensive, as well as challenging for the performers and the director.

Volumetric capture technologies, on the other hand, allow for “full performance capture” of the actors, registering their bodies, motion, texture, and three-dimensional data in a single take. This system appeals to directors coming from video and film, since it shares many of the procedures of a traditional movie set and allows for more natural, emotional performances.

The cons of volumetric capture, besides cost, are that performances are limited to the area covered by the cameras, and that the animation data generated is usually “baked” into the 3D model, making alterations difficult.

Hybrid solutions with Depthkit and Azure Kinect

One of the most promising solutions we explored was using Depthkit in conjunction with the Azure Kinect - the latest version of Microsoft’s Kinect, aimed at developers. Together, they generate a depth-field point cloud capture of the area right in front of the camera. The setup is easy, requiring only a tripod for the Kinect, lights, and a green screen, making it a much more portable solution. But this simplicity comes at the cost of not capturing the entire scene: if you move around the captured 3D object, you’ll see there’s nothing behind it.
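To give a sense of how little code a capture like this needs, here is a sketch that grabs a single point-cloud frame from an Azure Kinect. It assumes the community pyk4a Python bindings rather than Depthkit's own tooling.

```python
# Sketch: grab one point-cloud frame from an Azure Kinect.
# Assumes the community pyk4a bindings (pip install pyk4a), not Depthkit itself.
import numpy as np
from pyk4a import PyK4A

k4a = PyK4A()
k4a.start()

capture = k4a.get_capture()
# (H, W, 3) XYZ positions in millimeters, in the depth camera's space.
points = capture.depth_point_cloud.reshape(-1, 3)

# Drop invalid (zero-depth) samples before meshing or exporting.
points = points[np.any(points != 0, axis=1)]
print(f"captured {len(points)} valid points")

k4a.stop()
```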

To bypass this, we experimented with joining the Depthkit capture with fully rigged 3D models in Maya. In one experiment, we separated the facial performance and used it as a “mask” on a 3D character whose body was running a different animation file.
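Conceptually, the “mask” works like a per-vertex blend between two animation sources. The NumPy sketch below illustrates the idea with placeholder data; the real blend happened on rigged characters in Maya.

```python
# Conceptual per-vertex "mask" blend between two animation sources.
# All arrays here are placeholder data, not the project's actual meshes.
import numpy as np

n_vertices = 5000
body_frame = np.random.rand(n_vertices, 3)  # vertex positions from the body animation
face_frame = np.random.rand(n_vertices, 3)  # vertex positions from the facial capture
face_mask = np.zeros((n_vertices, 1))       # 1.0 on face vertices, 0.0 elsewhere
face_mask[:800] = 1.0                       # assume the first 800 vertices form the face

# Vertices inside the mask follow the facial capture; the rest follow the body.
blended = face_mask * face_frame + (1.0 - face_mask) * body_frame
```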

Another experiment consisted of capturing volumetric data from the front and back of the actor, then “baking” its texture and volume onto a template 3D character. This generated a 3D character with the realistic texture and volume of the actor, on a simplified, game-ready 3D model.
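One way to picture this baking step is as a nearest-neighbor transfer from the merged capture onto the template mesh. The sketch below uses SciPy's KD-tree with placeholder arrays; the actual pipeline used Maya and Depthkit tooling.

```python
# Nearest-neighbor "bake" from a merged front+back capture onto a template mesh.
# Placeholder arrays stand in for the real capture and character data.
import numpy as np
from scipy.spatial import cKDTree

capture_points = np.random.rand(200_000, 3)  # merged front + back point cloud
capture_colors = np.random.rand(200_000, 3)  # per-point RGB from the capture
template_verts = np.random.rand(8_000, 3)    # simplified, game-ready template character

tree = cKDTree(capture_points)
_, idx = tree.query(template_verts)          # nearest captured point per template vertex

baked_colors = capture_colors[idx]                    # transfer the actor's texture
baked_offsets = capture_points[idx] - template_verts  # push the template toward the capture
```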

The process and both examples are shown in the video below.

Conclusion

The project was recently submitted to the film festival circuit. The work created here has proven key to building a better understanding of the affordances of virtual reality, and it is now being used to streamline the onboarding of the larger production teams who will work on the final piece. I'm excited to see what’s ahead for this beautiful project.

 

Thank you to all involved in prototyping this amazing project

Victoria Bousis - Director

Nicolás Escarpentier

Todd Bryant and the R-Lab NYC

Misha Zabranska

Mathew Niederhauser and John Fitzgerald - Sensorium

Chris Hall

All of the studios, professionals and friends who opened their doors and offered their insights and support