


Virtual Reality Filmmaking

Virtual Reality Filmmaking presents a comprehensive guide to the use of virtual reality in filmmaking, including narrative, documentary, live event production, and more. Written by Celine Tricart, a filmmaker and an expert in new technologies, the book provides a hands-on guide to creative filmmaking in this exciting new medium, and includes coverage of how to make a film in VR from start to finish. Topics covered include:

• The history of VR;
• VR cameras;
• Game engines and interactive VR;
• The foundations of VR storytelling;
• Techniques for shooting in live-action VR;
• VR post-production and visual effects;
• VR distribution;
• Interviews with experts in the field including the Emmy-winning studios Felix & Paul and Oculus Story Studio, Wevr, Viacom, Fox Sports, Sundance's New Frontier, and more.

Celine Tricart is a filmmaker and founder of Lucid Dreams Productions, whose work was showcased at the Sundance Film Festival and gathered numerous international awards. Her past credits include "Stalingrad" and "Transformers: Age of Extinction." Celine produced and directed Maria Bello's "Sun Ladies," a VR experience about the Yazidi women fighting ISIS in Iraq. She has previously published another book, 3D Filmmaking: Techniques and Best Practices for Stereoscopic Filmmakers (2016).

Virtual Reality Filmmaking
Techniques & Best Practices for VR Filmmakers

Celine Tricart

First published 2018 by Routledge
711 Third Avenue, New York, NY 10017
and by Routledge
2 Park Square, Milton Park, Abingdon, Oxon, OX14 4RN

Routledge is an imprint of the Taylor & Francis Group, an informa business

© 2018 Taylor & Francis

The right of Celine Tricart to be identified as author of this work has been asserted by her in accordance with sections 77 and 78 of the Copyright, Designs and Patents Act 1988.

All rights reserved. No part of this book may be reprinted or reproduced or utilised in any form or by any electronic, mechanical, or other means, now known or hereafter invented, including photocopying and recording, or in any information storage or retrieval system, without permission in writing from the publishers.

Trademark notice: Product or corporate names may be trademarks or registered trademarks, and are used only for identification and explanation without intent to infringe.

Library of Congress Cataloging-in-Publication Data
A catalog record for this book has been requested

ISBN: 978-1-138-23395-9 (hbk)
ISBN: 978-1-138-23396-6 (pbk)
ISBN: 978-1-315-28041-7 (ebk)

Typeset in Times New Roman and Optima by Keystroke, Neville Lodge, Tettenhall, Wolverhampton

Contents

Acknowledgments
List of Abbreviations

Introduction and Definitions

Part I  Theoretical and Technical Foundations
1  History of VR (with Eric Kurland)
2  Live-Action VR Capture and Post-Production
3  Game Engines and Interactive VR
4  VR Headsets and Other Human–VR Interfaces

Part II  Virtual Reality Storytelling
5  VR: A New Art?
6  VR as a Storytelling Tool
7  Make a Film in VR from Start to Finish

The Future of VR/Conclusion
List of Interviewees
About the Author
Index

Acknowledgements

I wish to thank everyone who helped and supported me during the writing of this book. In particular, thank you to all the interviewees, who devoted considerable time to answer my questions and review my transcripts. The VR community is incredibly generous and supportive. I feel grateful to be part of it. I would also like to thank Carrie Shuchart, Matthew Blute, and Buzz Hays for proofreading this book, as well as my mentors who guided me and supported me throughout my career, Yves Pupulin and Steve Schklair. Finally, I warmly thank my editor, Simon Jacobs, for his patience and support to make this first-ever book on VR filmmaking happen.

Abbreviations

2D    two-dimensional
3D    three-dimensional
AAA   games with high production budgets and extensive promotion
ADR   additional dialogue recorded
AR    augmented reality
AV    augmented virtuality
CAVE  Cave Automatic Virtual Environment
CEO   chief executive officer
CGI   computer-generated imagery
CRT   cathode ray tube
DP    director of photography
FOMO  fear of missing out
FoV   field of view
FPS   first-person shooter
fps   frames per second
HD    high definition
HFR   high frame rate
HMD   head-mounted display
IMU   inertial measuring unit
IP    intellectual property
ITD   initial time delay
LARP  live-action role-playing game
LCD   liquid crystal display
LED   light-emitting diode
LiDAR light detection and ranging
MR    mixed reality
NFL   National Football League
OB    outside broadcast
OLED  organic light-emitting diode
PCM   pulse-code modulation
POI   point of interest
POV   point of view
QR    quick response (code)
R&D   research and development
RGB   red, green, blue
S3D   stereoscopic 3D
SDK   software development kit
SVOD  subscription video on demand
TOF   time of flight
TVOD  transactional video on demand
VFX   visual effects
VOD   video on demand
VR    virtual reality

Introduction and Definitions

For most people, virtual reality (VR) is first and foremost a gaming medium. It is true that numerous studies show how massive and important this market is about to become with the release of the PlayStation VR headset as well as the HTC Vive with its dedicated VR-ready Steam interface. For others, VR will revolutionize healthcare, architecture, education, etc. However, what about storytelling? This book is specifically about "VR filmmaking," which means it is about how to create virtual reality experiences for entertainment purposes, whether fiction or documentary, and whether shot with a VR camera (this is called "live-action VR") or created in game-engine software ("game engine-based VR").

The book is divided into two parts. Part I is devoted to theoretical fundamentals and techniques. There you will find a summary of VR history, an in-depth exploration of the current technologies for both live-action VR (cameras, workflow, sound, etc.) and game engine-based VR (software, photogrammetry, volumetric capture, etc.), and a chapter dedicated to VR headsets and other human–VR interfaces. In Part II, we focus on the storytelling aspect of VR in order to provide a comprehensive and global description of it. We compare VR with other existing arts, explore what VR brings to the table in terms of storytelling, and show how to use it. Finally, the last chapter describes the making of a VR experience from start to finish, from script to distribution. You have in hand two books in one, and I warmly encourage you to focus on the chapters that interest you the most, even if it means skipping some others if necessary.

Shari Frilot, Founder/Chief Curator of New Frontier at Sundance Film Festival I never really thought about this but I had a flashback moment where I remember starting Sundance’s New Frontier VR selection and one of the reasons why I think we were able to succeed was because we were able to make the case with YouTube becoming a new venue for filmmakers and YouTube also becoming the second largest search engine after Google. You were starting to see films evolve in their role in society. They were not just films that we watched and were entertained. They weren’t even things that we learned from, but they were things that we were using to communicate to one another. We would send shorts back and forth to describe and communicate things, and that has very much come to bear in terms of how we communicate with one another using the moving image as the media landscape and the infrastructure. The media architecture, the story architecture has enabled the role of cinematic storytelling to evolve. VR presents the same kind of thing. We’ll see how it’s adopted on the consumer market and how many people get into headsets, but certainly the technology of being able to go online and share VR is here and will get exponentially better. VR as a gadget that delivers an experience is really compelling but it can also change our storytelling culture profoundly. Not only how we tell stories but why. The role of storytelling as a communications platform.

First and foremost, let's define what virtual reality really means as well as other terms that are used in this book. "Virtual reality" is an ensemble of visuals, sounds, and other sensations that replicate a real environment or create an imaginary one. A person using virtual reality equipment like a VR headset is able to "look around" the virtual world and sometimes interact with it (see below).

Virtual Reality and 360° Video

There is an ongoing debate regarding the difference between the terms "virtual reality" and "360° video." Some like to say that VR only applies to game engine-based experiences as it allows for freedom of movement. The most common opinion – and the one adopted in this book – is that we use the term "VR" when the content is watched in a VR headset and "360° video" when it is watched on a flat screen, using a VR player. This means that the same content can be both VR and 360° depending on which platform is used to display it. VR platforms include (but are not limited to) Steam, Oculus Store, or Samsung Gear VR. 360° platforms include YouTube 360, Facebook, etc.

Figure 0.1  VR vs 360°

In VR, the audience has a more active role. Or should we say the players? Virtual reality as an art form seems to be the missing link between gaming and traditional filmmaking, and the audience is neither exactly a spectator, nor a player. This is the reason why we will use the term "participant" to describe the viewer/user/audience from now on.

Cinematic VR and Interactive VR

Virtual reality is divided into two categories: cinematic VR and interactive VR. Cinematic VR is closer to filmmaking than gaming, and interactive VR is the other way around. In a cinematic VR experience, the participants are immersed in a 360° sphere and the only agency they have is to look around. In an interactive VR experience, the participants can sometimes interact with objects and characters, choose various outcomes of a situation, and move freely in the environment. Cinematic VR is akin to traditional storytelling, where the audience is invited to "sit and relax" and witnesses the story being told.

Maxwell Planck, Founder Oculus Story Studio, Producer of “Dear Angelica,” “Henry” I believe that cinematic VR and interactive VR are two different media. The visitor feels dramatically different when they can only look around vs. when they can move around in a space. Both have amazing potential for art, but I personally find interactive VR to give the visitor a stronger feeling of presence which I find most compelling in VR. The challenge of interactive VR is that our visitors have such a strong feeling of presence that they want agency as well. It’s much easier to tell a story where the storyteller is in full control and the audience is a passive participant. But, even though it’s an incredibly hard problem, I believe that if you can create an experience where a visitor feels like they are part of a story, and not simply listening to a story, it can have a memorable and powerful impact. We’ve started exploring how to tell a story and reward agency at Oculus Story Studio by making stories that allow our visitors to have different views of the story since they can move around, sit down, lean forward, etc. We’ve also played with having story moments that would wait for the visitor to look in a particular direction, or get close to a character or set piece. And on “Dear Angelica,” we actually change how the illustration behaves depending on how close our user gets to the line work. I’m a big believer that we’ll be able to get to storytelling with agency in VR, but it will take time and lots of experimentation. And the best way to experiment is to create many experiences, innovating with each project.

Interactive VR is akin to gaming, where the audience must have a more active role. This passive/active separation does not necessarily mean cinematic VR is third-person only. Some cinematic VR experiences are indeed first-person: the participant is a character in the story; he/she is acknowledged but cannot interact with the story. Conversely, an interactive VR experience can very well be third-person.

AR, MR, and AV

Virtual reality is only the tip of the iceberg when it comes to new immersive technologies and the impact they will have on our daily life. Augmented reality (AR) is projected to become a much bigger market than VR, especially in industries such as defense, architecture, engineering, and health. Augmented reality overlays digital imagery onto the real world with the use of dedicated glasses like the Microsoft HoloLens, Google Glass, or Magic Leap. MR means "mixed reality." As of today, the definition of what MR really is remains unclear. Some say it is the merging of real and virtual worlds where physical and digital objects co-exist and interact in real time.

Figure 0.2  A mixed reality system

The term "mixed reality" is also used when a participant in VR is filmed and composited (comped) into the virtual world he/she is in. This technique is sometimes used to advertise VR headsets and apps. Last but not least, AV means "augmented virtuality," which is a form of mixed reality where physical objects are dynamically integrated into, and can interact with, the virtual world in real time. This book focuses solely on the art and craft of virtual reality, and not AR, MR, or AV. Let's dive in!

Part I
Theoretical and Technical Foundations

Chapter 1
History of VR
with Eric Kurland

Eric Kurland is an award-winning independent filmmaker, past president of the LA 3-D Club, Director of the LA 3-D Movie Festival, and CEO of 3-D SPACE: The Center for Stereoscopic Photography, Art, Cinema, and Education. He has worked as 3D director on several music videos for the band OK Go, including the Grammy Award-nominated "All Is Not Lost." He was the lead stereographer on the Academy Award-nominated 20th Century Fox theatrical short "Maggie Simpson in 'The Longest Daycare'," and served as the production lead on "The Simpsons VR" for Google Spotlight Stories. In 2014, he founded the non-profit organization 3-D SPACE, which will operate a 3D museum and educational center in Los Angeles.

While virtual reality is a relatively new innovation, the state of the art is greatly informed by the many forms of immersive media that have come before. For practically all of recorded history, humans have been trying to visually represent the world as we experience it. Primitive cave paintings, Egyptian hieroglyphs, and Renaissance frescos were early attempts to tell stories through images, and while these would not be considered true representations of reality, they do illustrate the historical desire to create visual and sensory experiences.

Third Dimension

Figure 1.1  Hugo Gernsback

Virtual reality as we know it today has some of its earliest roots in the 19th century. In 1838 scientist and inventor Sir Charles Wheatstone theorized that we perceive the world in depth because we have two eyes, set slightly apart and seeing from two different points of view. He surmised that the parallax difference between what our eyes see is interpreted into depth, and proved this by designing a device to allow the viewing of two images drawn from different perspectives, one for each eye. His device, which he called

the stereoscope (from the Greek meaning "to see solid"), proved his theory to be correct. Wheatstone's stereoscope, a cumbersome device using mirrors to combine the two views, was the first mechanical means of viewing a reproduced three-dimensional image. With the invention of photography in the decade that followed, we finally had a method of capturing multiple still images from real life and creating stereograms: immersive images for stereoscopic viewing.

Figure 1.4  Holmes' stereoscope

Figure 1.2  Wheatstone’s stereoscope

Another scientist, Sir David Brewster, refined the design of the stereoscope into a handheld device. Brewster’s “lenticular stereoscope” placed optical lenses onto a small box to allow the viewing of pairs of backlit glass photographic slides and front-lit photo prints. His viewer was introduced to the public at the Great Exhibition of 1851 in London, England, where it became very popular, helped greatly by an endorsement by Queen Victoria herself. An industry quickly developed to produce stereoscopes and 3D photographic images for viewing.

Throughout the latter half of the 19th century and until the 1920s, stereoscopes became a ubiquitous form of home entertainment. Companies such as the London Stereoscopic Company, the Keystone View Company, and Underwood & Underwood sent photographers around the globe to capture stereoscopic images, and millions of images were produced and sold. Stereo cards depicted all manner of subjects, from travel to exotic locations, to chronicles of news and current events, to entertaining comedy and dramatic narratives. Stereo viewers were also used in education, finding their way into the classroom to supplement lessons on geography, history, and science. Much of the appeal of these stereoscopic photographs was the ability of a 3D image to immerse the viewer and virtually transport them to faraway places that they would never be able to visit in person.

Figure 1.3  Brewster’s stereoscope

In the United States, author and physician Oliver Wendell Holmes, Sr. saw a need to produce a simpler, less expensive stereoscope for the masses. In 1861 he designed a stereoscope that could be manufactured easily, and specifically chose not to file a patent in order to encourage its mass production and use.

Figure 1.5  Stereoscopes in use in a classroom, 1908

Immersive Presentations

Another popular form of immersive entertainment in the 18th and 19th centuries was the panorama. The term panorama, from the Greek meaning "to see all," was first used by artist Robert Barker in the 1790s to describe his patented large-scale cylindrical paintings which were viewed from within and surrounded the viewer. Barker constructed a rotunda building in London in 1793 specifically for the exhibition of his panoramic paintings. The popularity of Barker's panorama led to competition and many others being constructed throughout the 1800s. Historically, a panorama (sometimes referred to as a cyclorama) consisted of a large 360° painting, often including a three-dimensional faux terrain and foreground sculptural elements to enhance the illusion of depth and simulated reality, and building architecture designed to surround the spectator in the virtual environment. The grand panoramas of the period created the illusion for the audience of standing in the middle of a landscape and scene, while the depicted events were happening. These paintings in the round served both to entertain and to educate, often depicting grandiose locations or great historical events. Panoramas proved to be very successful venues, with over 100 documented locations in Europe and North America. Some notable installations include the Gettysburg and Atlanta Cycloramas, painted in 1883 and 1885, which depicted scenes from those American Civil War battles, and the Racławice

Figure 1.6  A panorama

Figure 1.7  Illustrated London News. “Grand Panorama of the Great Exhibition of All Nations,” 1851

Panorama in Poland, a massive painting 49 feet high and 374 feet long, painted by artists Jan Styka, Wojciech Kossak, and a team of assistants over the course of nine months from 1893 to 1894 to commemorate the 100th anniversary of the Polish Battle of Racławice. During their heyday in the

Figure 1.8  Cinéorama, invented by Raoul Grimoin-Sanson for the Paris Exposition of 1900

Victorian period, the panoramas saw hundreds of thousands of guests each year. The post-Industrial Revolution brought about a new age of technological advances. The birth of cinema in the 1890s brought to the public a new form of media – the moving image. The apocryphal story of early filmmakers Auguste and Louis Lumière's 1896 film "L'arrivée d'un train à La Ciotat" sending audiences screaming out of the theater, believing the on-screen train was going to hit them, may be an often-told myth, but it still demonstrates the sense of reality that early motion picture attendees reported experiencing. Throughout the 20th century, cinematic developments such as color, sound, widescreen, and 3D added to the content creation toolset, as inventors sought new methods and technologies, first analog then digital, to build realistic immersive experiences. One early attempt was the Cinéorama, devised by Raoul Grimoin-Sanson for the Paris Exposition of 1900, which combined a panorama rotunda with cinema projection to simulate a ride in a hot-air balloon over Paris. Footage was first filmed with ten cameras mounted in a real hot-air balloon, and then presented using ten synchronized projectors, projecting onto screens arranged in a full 360° circle around the viewing platform. The platform itself was large enough that 200 spectators were able to experience the exhibit at the same time. The 1927 French silent film "Napoleon" by director Abel Gance also used a multi-camera widescreen process. Gance wanted to bring a heightened impact to the climactic final battle scene and devised a special widescreen format, given the name Polyvision, which used three stacked cameras to shoot a panoramic view. Exhibition required three synchronized projectors to show the footage as a triptych on three horizontally placed screens to ultimately display an image that was four times wider than it was high. While the impact of such a wide-screen picture was dramatic, it was technically very difficult to display properly, as projector synchronization was complicated, and there was no practical method to hide the seams between the three projected frames. In 1939, at the World's Fair in New York, filmmaker and special effects expert Fred Waller introduced an immersive theater for the Petroleum Industry exhibit, called Vitarama, which used an array of 11 projectors to project a giant image onto a dome-like spherical screen, designed by architect Ralph Walker.

In 1941, Waller took elements of his Vitarama immersive theater and invented a multi-projection simulator for the military. Called the Waller Flexible Gunnery Trainer, its purpose was to train airplane gunners under realistic conditions. They would learn to estimate quickly and accurately the range of a target, to track it, and to estimate the correct point of aim using non-computing sights. To create the footage for the machine, five cameras were mounted in the gun turret position of a bomber, and filmed during flight. Combat situations were depicted by having “enemy” planes fly past the cameras. The Waller Flexible Gunnery Trainer itself used a special spherical screen designed by Ralph Walker, and five projectors to create a large tiled image that surrounded four gunner trainees. It featured a 150° horizontal field of view and a 75° vertical. The trainees were seated at mock gun turrets and engaged in simulated combat with moving targets. In addition to the visual feedback on the spherical screen, the trainees also received audio via headphones, and vibration feedback through the dummy guns. A mechanical system kept score of their hits and misses and provided a final tally. Waller’s system was used by the US military during World War II, and the first installation was in Honolulu, Hawaii, following the Japanese attack on Pearl Harbor.

Figure 1.9  Waller Flexible Gunnery Trainer

Following World War II, Waller devised another multi-camera/multi-projector system for entertainment, and named it Cinerama (for cinema-panorama). Similar to the projection used for Gance's "Napoleon," Cinerama used a triptych of three projectors, projecting a seamed three-panel image onto a giant curved screen. A special system was designed to shoot for Cinerama, with three cameras mounted together at 48° angles to each other. The interlocked cameras each photographed one-third of the full wide scene, which filled a 146° arc. The Cinerama theaters used three synchronized projectors spread out across three projection booths, each projecting one-third of the full image onto a curved screen made of forward-facing vertical strips. A mechanical system of vibrating combs in the projectors was meant to limit the light output in the overlapping areas of the three panels, effectively blending the three images into one. In practice, this method did not prove to be overly effective, as parallax differences at the seams still made them apparent. Cinerama was also one of the first theater experiences to utilize stereo surround sound, with a total of seven tracks – five behind the screen and two in the back of the auditorium.

Figure 1.10  Waller's Cinerama

The First Head-Mounted Displays (HMDs)

The View-Master was conceived by organ maker and stereo photographer William Gruber as a 20th-century redesign of the Victorian stereoscope. The View-Master provided a sequence of seven stereoscopic slide views on a backlit circular reel that could be manually advanced by the viewer. The View-Master was put into mass-production by the Sawyers Company of Portland, Oregon, which also hired photographers to create content. As with the stereocards that preceded it, the stereo reels consisted of all manner of subject matter, from travel and scenic wonder, to popular culture and entertainment. Thanks in part to its small form factor and ease of use, it proved to be a very popular consumer product. The View-Master also found a market in education, being used extensively in medical training where it could provide realistic views of human anatomy and disease. View-Masters have remained in production continuously since their launch, and while the size and shape of the viewers have changed over the years, the design of the reels themselves has changed very little, allowing nearly 80 years of content to continue to be viewed even in modern viewers.

Figure 1.11  The View-Master

Figure 1.12

In 1936, science-fiction writer and inventor Hugo Gernsback (after whom the Hugo Awards for sci-fi literature were named) conceived of what he called Tele-Eyeglasses. He described the future device as a pair of eyeglasses utilizing two miniature cathode ray tubes (CRTs) to provide television images to each eye. His device was intended to be connected by wire to a television set, which would receive and feed the signals to the wearer. Gernsback created mock-ups of his invention, but never produced a working prototype.

Inventor Morton Heilig may have been the first person actually to use the term "virtual reality," which he used to describe "a simulation of reality." Heilig designed a head-mounted CRT-based viewing system, which he patented in 1960, and he went a step further than Gernsback, actually building working prototypes of his invention. His device, which he named the Telesphere Mask, offered stereoscopic viewing of moving images by positioning two miniature televisions in front of the viewer's eyes. Heilig used a special lens arrangement to bend the light from the TV tubes to be visible in the viewer's peripheral vision, and effectively provided a 140° angle of view both horizontally and vertically. The Telesphere also included built-in headphones to deliver binaural sound, and it had a pair of air discharge nozzles which, according to Heilig's patent, could be used to "convey to the head of the spectator, air currents of varying velocities, temperatures and odors." Heilig also developed and built an immersive experiential device he called the Sensorama. Rather than a wearable viewer like the Telesphere, his Sensorama was essentially an immersive theater for an individual viewer who would be seated inside the machine. Again, Heilig relied on optics to fill the viewer's field of view with a moving 3D image, multiple speakers to provide surround sound, and air valves to simulate

Figure 1.13  Morton Heilig’s invention

Figure 1.14  Morton Heilig’s Sensorama

wind and deliver scents. The Sensorama also added the ability to vibrate the seat for a tactile element. Heilig designed his own 3D camera to capture content for the Sensorama, and produced five demonstration films, four of which were ride films depicting the point of view from a bicycle, motorcycle, go-kart, and helicopter. The fifth demo film was an interaction with a human subject, a belly dancer, and featured a fragrant perfume smell as she appeared to approach the viewer. Heilig’s intention was that his devices could be used for training and education, similar to how Waller’s flight simulators had been utilized, but Heilig was never able to make his inventions commercially viable. Ivan Sutherland is often referred to as “the father of computer graphics” thanks to his development of the first graphical user interface, Sketchpad, programmed while earning his PhD at the Massachusetts Institute of Technology in 1962. By 1968, Sutherland was an associate professor of electrical engineering at Harvard University, and he and his student Bob Sproull constructed what is considered the first true VR head-mounted display. Nicknamed the “Sword of Damocles” because the weight of the helmet required that it be suspended from a ceiling-mounted arm, Sutherland’s system was able to track the wearer’s head to determine where they were looking and then render a real-time stereoscopic image as a computer-

generated vector wireframe. The HMD was somewhat see-through, so the user could see some of the room, making this a very early augmented reality system as well. Sutherland went on to teach at the University of Utah, where he partnered with another professor, David Evans, to found the computer graphics company Evans and Sutherland, which specialized in creating high-end computer-generated imagery for, among other purposes, flight simulators and planetariums. The company is still one of the largest distributors of media for full-dome immersive presentations.

Figure 1.15  The “Sword of Damocles”

The 1980s saw the rise of the video game, and several manufacturers ventured into early digital attempts at interactive immersive games. The Vectrex system in 1981 offered a stereoscopic imager that used a head-mounted spinning mechanical shutter to display stereoscopic graphics on its vector display screen. Sega released a stereoscopic option for its gaming system which used liquid-crystal active shutter glasses. In 1993 Sega announced, but never actually released, an HMD-based VR system for its home video game console. Nintendo also released the ill-fated Virtual Boy game system, an attempt at a portable standalone stereoscopic VR game which ultimately failed due to its limited graphics and gameplay capabilities. 1991 saw a first attempt at a VR arcade with the Virtuality system and its game "Dactyl Nightmare." Some 350 of these coin-operated game systems were produced and placed in shopping malls. Customers would pay their fee and step up onto a large round platform, surrounded by a magnetic ring. The player would don the VR visor and gun, and was required to wear a heavy belt at their waist, containing the wires that tethered the devices to the computer controller (either an Amiga 3000 or a 486 computer). The graphics were very primitive polygons, but the tracking worked rather well for its time. Sega's VR-1 motion simulator arcade attraction was introduced in 1994 in SegaWorld amusement arcades. It had the ability to track head movement and also featured stereoscopic 3D polygon graphics. In 2011, Nintendo released the 3DS handheld game system, which included a built-in stereoscopic camera, positional tracking, and a glasses-free autostereoscopic screen for AR gaming.

Figure 1.16  Nintendo's Virtual Boy

Theme Parks

Theme parks and entertainment venues have also made attempts at bringing VR to the public. The Disney VR lab developed a prototype of Aladdin’s magic carpet ride in 1994 and installed it at EPCOT in Orlando, Florida. They refined their HMD and in 1998 opened DisneyQuest, a theme park specifically for VR which featured the Aladdin VR. Circarama (later renamed Circle-Vision 360°) was a 360° panoramic theater, introduced at the Disneyland theme park in 1955, which used nine projectors and nine huge screens arranged in a circle around the audience. Disney theme parks have also used stereoscopic films and moving theater seats to create immersive movies, such as their “Honey I Shrunk the Audience,” which simulated the entire theater changing in scale.

Figure 1.17  Circle-Vision 360°

The "Soarin' over California" ride (now remade as "Soarin' over the World") at Disney's California Adventure Park and EPCOT features large-format projection onto a giant dome screen to simulate flight for riders who are suspended in front of it. That attraction also releases various scents into the audience to enhance the realism of the environments. Universal Studios theme parks also used stereoscopic film and immersive practices with many of their attractions. Their "T2 3-D: Battle across Time," directed by James Cameron, combined live performance, environmental effects, and 3D film. The show culminated in a climactic scene projected in 3D across three giant surround screens. Universal's "Back to the Future" ride, later replaced by "The Simpsons" ride, is another ride that suspends the audience in front of an 85-foot IMAX dome screen. Recently Six Flags amusement parks, in a partnership with Samsung and Oculus, have even introduced VR rollercoaster rides to their parks, where the riders wear Samsung Gear VR HMDs to experience an interactive simulated adventure while actually riding in a moving rollercoaster car.

Figure 1.18  A CAVE system

Computer Power and Tracking

By the 1990s, workstation computers were becoming powerful enough to render computer graphics in real time, and researchers found new ways to track users' movements to create interactive immersive experiences, utilizing both HMDs and immersive rooms. In 1977, Daniel Sandin, Tom DeFanti, and Rich Sayre designed the Sayre Glove, the first wired glove which allowed hand gesture control of data by measuring finger flexion. The CAVE (Cave Automatic Virtual Environment) is an immersive room environment invented in 1991 by Sandin at the Electronic Visualization Laboratory at the University of Illinois, and developed by Sandin, David Pape, and Carolina Cruz-Neira. The CAVE is a cubic room using rear-projection material for walls. Real-time computer graphics were projected on the walls, and sometimes also on the floor and ceiling. The user would wear 3D glasses to

Figure 1.20  Fakespace's "BOOM" system

Figure 1.19

view the stereoscopic projections, and the glasses were tracked within the space so that the projected images could be manipulated to match the viewer's perspective as one moved around. This gave the illusion of actually standing in a real environment, with objects appearing to be floating in the virtual room. The tracking information was also used to create three-dimensional audio through multiple surround speakers. A control wand allowed the user to virtually interact with objects and to navigate through larger environments. A more recent iteration, the CAVE2 developed in 2012, uses liquid crystal displays (LCDs) instead of projection. One of the most advanced immersive VR rooms is the StarCAVE, installed at the University of California, San Diego, under the supervision of Tom DeFanti. It utilizes 17 individual screens to create a pentagon-shaped five-walled space, each wall being three screens high, and includes two screens of floor projection. Some 34 individual projectors provide high-resolution left and right stereoscopic video for the screens. The upper and lower wall screens tilt toward the viewer at 15° angles, and the entryway wall rolls into place to completely enclose the user for a true 360° immersive VR experience. Further development of data gloves was carried out by Thomas Zimmerman at the company VPL Research. VPL was founded by Jaron Lanier, who popularized the use of the term virtual reality in the

late 1980s. VPL's data glove added the ability to track hand position in addition to finger flex. This led to the development of the DataSuit, a full-body tracking system for VR. VPL became the first company to sell a commercial head-tracking HMD, and the DataGlove was licensed to Mattel in 1989 and released as the PowerGlove accessory for the Nintendo Entertainment System home video game console. In 1988, Mark Bolas, Ian McDowall, and Eric Lorimer founded the company Fakespace to engineer VR hardware and software for government and scientific use. Fakespace's innovations included their own version of a data glove, the Pinch Glove, and a VR imaging and tracking system called the BOOM (Binocular Omni-Orientation Monitor), which placed a high-resolution computer monitor inside a stereoscope on the end of a boom arm. The user would look through the wide-angle optics and view stereoscopic real-time computer-generated imagery (CGI), and the arm sensors tracked the stereoscope's position through six axes of motion.

The "VR for Everyone" Revolution

A "do-it-yourself" handheld VR viewer, called the FOV2GO, leveraged the computing and graphics power of available smartphones, and was developed in 2012 at the University of Southern California (USC) by a team of students and their instructor, multimedia innovator and educator Perry Hoberman, in conjunction with USC's Institute for Creative Technologies. The FOV2GO led directly to Google's open-source design for their Cardboard VR viewer, an inexpensive, paper-based VR headset that is compatible with practically any Android or iOS phone. As computing power has continued to increase and computing devices have become miniaturized, the feasibility of a self-contained handheld device with high-resolution displays and a full complement of positional and rotational tracking sensors has become

Figure 1.21

a reality. A new wave of public and corporate interest has spawned a new period of heavy development in virtual reality. Companies like Sony and HTC have released their own proprietary virtual reality systems. Oculus Rift, a consumer head-mounted display VR device based on inexpensive cellular phone parts and originally developed in 2012 through a crowdfunding campaign on Kickstarter by its developer, Palmer Luckey, was purchased by Facebook for billions of dollars. It seems that new immersive devices and experiences, from personal VR HMDs to dedicated VR theaters, gaming centers, and theme parks, are being announced every day. None of these new inventions and innovations would be happening if it were not for

Figure 1.22

their predecessors. In many ways things have come full circle: Google's Cardboard, as an inexpensive mass-produced VR viewer, is almost identical in purpose and design to its analog ancestor, Holmes's patent-free stereoscope from over 150 years ago.

Shari Frilot, Founder/Chief Curator of New Frontier at Sundance Film Festival The first time I experienced VR, it was Nonny de la Peña’s “Hunger in Los Angeles” in 2011. I saw this piece on a really expensive headset. It was like a $50,000 headset on track with the development of VR since the ’90s. When I selected “Hunger” for New Frontier, the owners of the headset were like, “Well, you are not taking our $50,000 headset to Sundance and putting hundreds of people inside it!” So Nonny and Palmer Luckey, who was her intern at the time, worked together to patch up a prototype that would eventually become the first prototype of the Oculus Rift. They showed up at Sundance with a headset that was basically duct tape and a cell phone and it worked fine. That was the only VR that had been shown anywhere since the ’90s really. To my knowledge, it was the first time in a context like a film festival. In 2014, I invited five VR works. That’s when we really started to get lots and lots of lines. In 2015, because of that level of engagement, I made the decision to put together a show that was primarily VR work. The works were coming from a lot of different places. There was a performance artist from Chile, there was an American filmmaker, a music video, there were a lot of different artists and imaginations and storytellers, a lot of different kinds of practitioners creating work for this technology, and it was really after that year, 2015, that the scene exploded at the festival. It spiked our attendance and just general interest in the showcase. In the meanwhile, Google is making Cardboard, Oculus Rift is coming up with the DK2, and The New York Times is giving away a million Cardboard viewers to their subscribers. There’s this momentum that was starting to happen but everybody was asking, “Well, where’s the content?”

Chapter 2
Live-Action VR Capture and Post-Production

Basics of VR Workflow

There are two methods to capture live-action 360° footage: one is to shoot with an array of cameras facing all directions; the other is to use only one camera (or a 3D pair of cameras) and rotate it to cover the whole 360° environment. The latter is called the "nodal technique." When shooting 360° live action, it is vital to use the exact same cameras, lenses, and settings to facilitate the process of creating the final 360° sphere. This process is called "stitching." During stitching, footage from all

the cameras (or the various takes of the same camera in the case of the nodal technique) is put together to recreate a full representation of the surrounding environment. In the example in Figure 2.3, footage from seven cameras (five all around, one facing the sky, and one facing down) is stitched together. Once stitched, the final shot can be rendered in various formats and played either in a VR headset or on a flat screen with the use of a VR player.

Figure 2.1  360° capture

Figure 2.2  Nodal technique

Note: The pictures above show a top view of the two different techniques but are not exactly accurate: the number of cameras varies as well as the way they are put together. Most VR cameras also feature top and bottom cameras.
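For readers who like to see the idea in code, here is a minimal sketch of the blending logic at the heart of stitching: each camera is assigned a sector of yaw in the final equirectangular frame, and adjacent sectors are cross-faded ("feathered") where they overlap. It is only an illustration of the concept; a real stitcher also undistorts each fisheye image, aligns features, and matches exposure, and the five-camera layout, 100° field of view, and 4096-pixel output width used here are assumptions for illustration, not the spec of any particular rig.

```python
# Toy illustration of the stitching idea: each camera covers a yaw sector of the
# final equirectangular frame, and adjacent sectors are cross-faded ("feathered")
# where they overlap. Real stitching software also undistorts the fisheye images,
# aligns features, and matches exposure; the camera count and field of view below
# are illustrative assumptions, not a real rig spec.
import numpy as np

NUM_CAMS = 5          # five cameras around the horizon (top/bottom cameras ignored here)
CAM_FOV = 100.0       # assumed horizontal field of view per camera, in degrees
OUT_WIDTH = 4096      # equirectangular output width (a common 4K delivery size)

cam_centers = np.arange(NUM_CAMS) * 360.0 / NUM_CAMS       # optical axis yaw of each camera
yaw = np.linspace(0.0, 360.0, OUT_WIDTH, endpoint=False)   # yaw of each output column

def blend_weights(yaw_deg):
    """Per-camera blend weight for every output column (columns sum to 1)."""
    weights = np.zeros((NUM_CAMS, yaw_deg.size))
    half_fov = CAM_FOV / 2.0
    for i, center in enumerate(cam_centers):
        # angular distance from this camera's optical axis, wrapped to [-180, 180)
        delta = (yaw_deg - center + 180.0) % 360.0 - 180.0
        # full weight near the axis, linear falloff toward the edge of the lens coverage
        weights[i] = np.clip(1.0 - np.abs(delta) / half_fov, 0.0, None)
    return weights / weights.sum(axis=0)                    # normalize the overlap zones

w = blend_weights(yaw)
print(w.shape)                            # (5, 4096): one weight row per camera
print(np.allclose(w.sum(axis=0), 1.0))    # every output column is fully covered
```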

Figure 2.3  The stitching process
Note: Certain VR rigs do not have individual cameras facing up and down. In this case, it is called cylindrical VR, versus spherical VR.

Production Hardware

VR Cameras

A number of VR cameras are currently available to buy/rent, from amateur-level to the most high-end rigs. Technical specifications (specs) constantly change and can be confusing and misleading, and it is often difficult to determine which camera is best to use. There are many considerations when it comes to choosing the right VR camera for your project, including the size of the camera, the weight of the camera, the number of camera modules, and the quality of the camera modules. Is the final project mono (2D) or stereo (3D)? The workflow can sometimes influence the choice of camera, as stitching is currently the most difficult/expensive part of creating VR. Just like when shooting in 2D, you are not going to use the same camera for every situation. First, let's study the most important technical specs of VR cameras impacting the quality of the final product.

Frame Rate

In cinema and television, we traditionally shoot and project/screen content at 24 frames per second (fps). At this frame rate, camera movement and movement within the frame typically result in a good amount of motion blur. In 3D films, this frame rate and resulting blur can cause a visual discomfort known as stroboscopy. In order to avoid this issue, the capture and projection frame rate must be higher, either at 48 fps or 60 fps, in order to reduce the motion blur. This technique is called HFR, for high frame rate. Experienced directors have already chosen to shoot in HFR for their 3D films: Peter Jackson filmed the entire "The Hobbit" trilogy at 48 fps and James Cameron is considering shooting the "Avatar" sequels in 60 fps or higher.

Steve Schklair, Founder, 3ality Technica and 3mersiv

From my point of view, anything that does not shoot 60 frames a second is not a camera I really want to use, as frame rate has a huge effect on perception. Motion artifacts have a deep effect on viewer comfort and perception. For me everything needs to be at 60 fps or above, which eliminates so many of the available systems. Other criteria really depend on what you're trying to shoot. Camera sensitivity is a big factor. There is often a need for camera sensors that produce quality images in low-light situations. That narrows the field even more. A lot of content is being shot without any artificial lighting because this is a medium where we see everything and a lot of projects do not have the budget to paint the lights out later. It's kind of limiting.

In VR, things are different because of the use of a VR headset to display the content. It is common industry knowledge that the higher the frame rate, the better, so that it matches the display's refresh rate for heightened realism and immersion. Entry-level headsets like the Samsung Gear VR have a 60Hz refresh rate, while high-end headsets like the PlayStation VR can clock in at 120Hz, depending on the game/application in use. In the best-case scenario, the frame rate of the 360° capture matches the end device's refresh rate, but very

few VR cameras are capable of shooting at such a high frame rate. To future-proof your content, it is advisable to shoot at 60 fps and above if possible.
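As a rough way to compare capture rates against headset refresh rates, the snippet below assumes (our assumption, not a rule stated by any manufacturer) that playback cadence is most even when the refresh rate is an integer multiple of the capture rate, so every frame is repeated a whole number of times; the 90 Hz entry stands in for typical PC headsets.

```python
# Quick check of how a capture frame rate maps onto common headset refresh rates.
# Assumption (not from the text): playback looks smoothest when the refresh rate
# is an integer multiple of the capture rate, so each frame is shown a whole
# number of times; fractional ratios force uneven frame repetition.
CAPTURE_RATES = [24, 30, 48, 60]     # candidate shooting frame rates (fps)
REFRESH_RATES = [60, 90, 120]        # Gear VR, typical PC headsets, PSVR (Hz)

for fps in CAPTURE_RATES:
    for hz in REFRESH_RATES:
        ratio = hz / fps
        cadence = "even cadence" if ratio.is_integer() else "uneven cadence"
        print(f"{fps:>3} fps on a {hz:>3} Hz display: {ratio:.2f} refreshes per frame ({cadence})")
```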

Resolution

Resolution takes on a different meaning when it comes to VR. Indeed, each camera that composes the VR rig has its own resolution, but once the images are stitched together, the final resolution is not a simple addition of all the pixels. A good stitch requires overlap between the cameras, hence shooting with four high-definition (HD) cameras will not add up to a final 4K file, but most likely a 2K file. While 2K might sound good enough, once again, things are different in VR. What really matters is the number of pixels composing the field of view of the participant. Let's take the example of the Oculus Rift CV1:

• Horizontal field of view: approx. 90° (1/4 of the full 360° sphere).
• Resolution: 2160x1200, hence 1080x1200 per eye.

To achieve an acceptable total resolution, the horizontal resolution of the final output should be at least 4 x 1080 = 4320 pixels. Of course, this number is going to change rapidly as new HD headsets arrive on the market. 8K headsets have already been announced. The industry standard as of 2017 is to deliver a final 4K output (4096x2048), and if possible a 6K output (6144x3072) to future-proof the content.
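The Rift CV1 arithmetic above generalizes into a simple rule of thumb: if the headset spreads a given number of horizontal pixels per eye across a given horizontal field of view, the full 360° master needs roughly per-eye pixels × (360° / field of view) horizontal pixels to avoid upscaling. The sketch below simply reproduces that calculation; it ignores lens distortion and any oversampling headroom.

```python
# Back-of-the-envelope output resolution: if the headset shows `eye_h_pixels`
# across `fov_deg` of the sphere, the 360° equirectangular master needs roughly
# eye_h_pixels * (360 / fov_deg) horizontal pixels to avoid upscaling.
# This ignores lens distortion and any oversampling headroom.
def required_output_width(eye_h_pixels: int, fov_deg: float) -> int:
    return round(eye_h_pixels * 360.0 / fov_deg)

# Oculus Rift CV1 example from the text: 2160x1200 panel, i.e. 1080x1200 per eye,
# with roughly a 90° horizontal field of view.
print(required_output_width(1080, 90))    # -> 4320 pixels, hence the 4K/6K delivery advice
```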

Sensor Size

The main effect of sensor size is on depth of field. Large sensors favor shallow depth of field, but the wider the lens, the less sensor size impacts it. There is no depth of field on a fisheye lens, which is the most commonly used lens in VR. That being said, large sensors have some undeniable benefits, like their increased ability to catch photons, hence a better low-light performance.

Figure 2.4  Final output resolution vs headset resolution

Dynamic Range

Dynamic range in photography describes the ratio between the maximum and minimum measurable light intensities (white and black, respectively). Dynamic range is measured in F-stops by determining the number of stops a certain sensor can "see" with detail between the blacks and the whites. The F-stop scale is exponential. In photographic terms, a stop is simply a halving or doubling of light: if you want to increase your exposure by one stop, you can either double the length of your exposure or double the size of your aperture. The reverse is true if you want to reduce your exposure by a stop. As a reference, human vision has a dynamic range of about 15 stops, and high-end professional cameras like the ARRI Alexa or RED Weapon are measured at around 14 stops. Good dynamic range is vital when capturing 360° video, as many outdoor situations will have large differences in brightness, and lighting VR in a traditional way is challenging because everything is in the frame.

Figure 2.5  Comparison of dynamic ranges
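Because the F-stop scale is base-2, the contrast ratio a sensor can cover is simply 2 raised to the number of usable stops, which the few lines below illustrate with the figures quoted above (the 10-stop consumer-camera entry is an assumed, purely illustrative value).

```python
# Stops are a base-2 (exponential) scale: each stop doubles the amount of light,
# so a sensor with N usable stops spans a contrast ratio of 2**N between the
# darkest and brightest values it can resolve. The first two figures are the
# ones quoted in the text; the consumer-camera value is an assumption.
examples = [
    ("Human vision", 15),
    ("ARRI Alexa / RED Weapon class", 14),
    ("Typical consumer 360 camera (assumed)", 10),
]
for label, stops in examples:
    print(f"{label}: {stops} stops = roughly {2**stops:,}:1 contrast ratio")
```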

Compression

Compression codecs are used to encode the signal coming from the sensor into a file that is smaller and optimized compared to recording the "raw" signal. The quality of compression codecs varies and can sometimes greatly limit the dynamic range of the camera by "crushing" details in the shadows or highlights. Considering the difficulty of controlling the lighting in VR, it is best to shoot in RAW format or with the lightest compression possible. This will facilitate the color grading, especially the camera-matching process during stitching. The downside of using a RAW workflow is increased post-production complexity and cost, due to the size and nature of the files, which must be transcoded into a more manageable format.

VR Rig Design

A VR camera is made of a certain number of sensors/cameras, and the quantity and positioning of these sensors is a very important factor in the quality of the final stitched image. For example, a VR camera made of only two sensors will be much easier to stitch than one made of 14 sensors (in this case there is technically only one stitch line to adjust), but it will have a much lower resolution/optical quality. In the example in Figure 2.6, the left rig design is much more difficult to stitch than the right design, but the resolution of the final output is four times better (if the sensor is the same in each situation). Also, the optical quality of the extreme fisheye lenses needed in the case of the right design is often degraded compared to longer lenses, especially on the periphery of the field of view, resulting in a blurry effect around the stitch line zone. Design of VR cameras is truly an art. Many manufacturers try to improve the quality/complexity ratio and also reduce the minimum acceptable distance to the camera.
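A simple way to see the trade-off between camera count and lens width: for identical cameras spaced evenly around the horizon, adjacent optical axes sit 360°/N apart, so each seam overlaps by roughly the per-camera field of view minus 360°/N. The sketch below applies that approximation to a few illustrative configurations (the lens values are assumptions, not specific products); it ignores lens distortion and vertical coverage, but it shows why rigs with fewer cameras need much wider, and optically softer, fisheye lenses.

```python
# Rough seam-overlap estimate for an evenly spaced ring of identical cameras:
# adjacent optical axes sit 360/N degrees apart, so each seam overlaps by roughly
# (per-camera FOV) - (360/N) degrees. A simplifying assumption that ignores lens
# distortion and vertical coverage, but it illustrates the camera-count trade-off.
def seam_overlap_deg(num_cams: int, cam_fov_deg: float) -> float:
    return cam_fov_deg - 360.0 / num_cams

# Illustrative configurations (assumed lens values, not specific products).
for n, fov in [(2, 200), (4, 180), (6, 120), (14, 60)]:
    print(f"{n:>2} cameras with {fov}° lenses -> {seam_overlap_deg(n, fov):.0f}° overlap per seam")
```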


Eve Cohen, Director of Photography, “The Visitor VR” Usually the choice of the VR camera comes down to where the final delivery of the project is going to be, whether that’s in a certain kind of headset or whether that’s just 360° video. And then the budget, ultimately. Usually the creative decision for the camera is pushed on the back burner after delivery and budget. After delivery and budget are figured out, then creative elements come into play depending on how close you need to be able to get the camera to a certain object, or what you have to be able to do with the camera, whether that’s going to be movement, or a POV [point of view]-style camera. It’s similar to choosing lenses, I guess, in standard cinematography. I kind of think of each different VR rig as a different lens. I want to have as much control over the image as I can as a cinematographer so that I can really make sure it’s the right image for the project. To me it comes down to how much control I am going to need to have now versus how much control I am going to rely on later in post-production. I was just grading something recently from a GoPro rig and none of them are matching. Like not even the same cameras match between them, and I have zero control over that. That’s not ideal, but it might have been the ideal camera for that shoot knowing that I’d have to put in more time later to kind of help match those up. So when it comes to choosing a camera and what things I would look for, I guess it would be the kind of setting that it’s in. So if it was a daytime exterior, it might be different than if it was a night-time interior with very little light. There isn’t one array out there that can really cover everything. I prefer to use multiple cameras within a project. Again, kind of looking at each camera as if it’s a lens and choosing the best lens for that shot.

Figure 2.6  Examples of VR camera configurations

Minimum Acceptable Distance to the Camera

Due to the design of VR cameras, there are blind-spot zones around the stitch lines. This occurs between the cameras themselves and where the optical axes of two adjacent lenses cross. If objects or actors were to cross or stand in these zones, the stitching would appear broken. Figure 2.7 shows how different configurations can reduce or reshape the blind-spot zone. The third example shows two pairs of cameras arranged on top of each other. This configuration is very efficient at

reducing the blind-spot zone, but introduces a vertical disparity between the cameras which is very difficult to fix during stitching. This type of configuration is therefore not commonly used. As a general rule, the bigger the overlap between the lenses, the easier the stitch. On most current VR cameras, it is recommended not to have anything come closer than 5 feet from the camera around the stitch line zones. The minimum distance does not apply when standing right in front of one lens: in this case the minimum focus distance is the limiting factor.
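A back-of-the-envelope way to see why close subjects break the stitch: two adjacent lenses whose entrance pupils sit a few centimetres apart see a nearby object from slightly different directions, and that angular disagreement, roughly baseline/distance in radians under a small-angle approximation, turns into a pixel offset on the equirectangular master. The sketch below uses an assumed 6 cm baseline and a 4096-pixel output width; the exact numbers vary from rig to rig, but the trend matches the 5-foot (about 1.5 m) rule of thumb above.

```python
import math

# Why close subjects break the stitch, in rough numbers: two adjacent lenses whose
# entrance pupils are `baseline_m` apart see a nearby object from slightly different
# directions. The angular disagreement is about baseline/distance radians (small-angle
# approximation), which becomes a pixel offset on the equirectangular master.
# The 6 cm baseline and 4096-pixel output width are assumptions for illustration.
def stitch_error_px(distance_m: float, baseline_m: float = 0.06, out_width_px: int = 4096) -> float:
    parallax_rad = baseline_m / distance_m
    return parallax_rad / (2 * math.pi) * out_width_px

for d in [0.5, 1.5, 3.0, 10.0]:   # subject distance in metres (5 feet is about 1.5 m)
    print(f"subject at {d:>4} m -> ~{stitch_error_px(d):.1f} px of disagreement at the seam")
```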


Figure 2.7  Different VR camera configurations have different minimum acceptable distance to the camera

Figure 2.8  Stitch lines, blind spot zones, and minimum focus distance

The general rule when working with a VR camera is to identify where the stitch lines are and avoid staging anything important there.

Nodal Technique

Another way of shooting 360° is to use a nodal head and rotate a single camera around its nodal point. The nodal point is the center of the lens’s entrance pupil, a

virtual aperture within the lens. This specific point is also known as the “no-parallax point.” This allows for a perfect and easy stitch as there is no parallax between the various takes. As the various slices of the full 360° sphere are shot separately, the blocking and staging are limited to the frame of each take. It is not possible to have actors walk around the camera, for example. However, some VR filmmakers use a nodal head and pan the camera


Alex Vegh, Second Unit Director and Visual Effects Supervisor There are a couple of different approaches to solving the parallax issue. One approach is optical, where you're trying to get the cameras as nodal as humanly possible. The closer the cameras are to nodal, the closer the objects can be to camera. The other one is more of a software-derived solution where depth information is acquired from the cameras and used to create geometry onto which the photography is projected. Our solution was to try and use as few cameras as possible to have as few stitch lines as possible. In a four-camera solution, each camera has to have a field of view of 180° to get the correct amount of overlap. We looked at many different lenses, ranging from lenses made by a company that specializes in back-up cameras for cars to a 6mm Nikon made in the 1970s. Those are huge – they look like a giant pizza plate. They have a 220° field of view so they see behind themselves. We ended up with a Canon fisheye zoom 8–15mm which was fairly sharp – not necessarily the sharpest, but quite even across the entire lens. You wouldn't have softness on edges and sharpness in the center. That was very important when blending the images from each camera.

We tried two-camera, three-camera, and four-camera solutions. We decided on a four-camera rig to protect for resolution. That's the other side of the story. A lot of the pre-made consumer-grade VR solutions that existed at the time did not have the resolution or color depth. Also the rolling shutter presents a large issue. We decided to go with four 6K RED Dragons for image quality and resolution. The cameras were mounted sideways due to how the lenses projected. When the lens would project that image onto the chip, the top and bottom would get cropped a little bit, but the sides would get the full image. So we rotated them sideways because we wanted more overlap. Not as much overlap was needed on the top because you'd be seeing the rig anyway (and it would be painted out). So we adjusted the camera rigs to maximize the overlap. Our primary concern was when a character crosses frame. We knew those would be our critical moments and we wanted to give as much overlap as humanly possible. Excerpt from "fxpodcast #294: Making the 360 degree short HELP," Mike Seymour, Sydney University, Fxguide.com

to follow moving elements around, and then re-map the footage onto the final 360° sphere. This requires some visual effects (VFX) skills, but is an interesting way of shooting VR.

Figure 2.9  The Mill VR camera rig designed for "Help"
Figure 2.10  A nodal head for panoramic photography

2D vs. 3D

Last but not least, the choice between 2D and stereoscopic 3D VR is an important factor to determine the right VR camera to use. Based on the principles of human vision, a stereoscopic film is made of two separate views shot from two different points of view (one for the left eye, one for the right eye). A number of adjustments and alignments must be carefully executed in order to guarantee a good 3D result. If these adjustments are not made, the resulting 3D will likely cause fatigue and headaches. 3D stitching is particularly difficult as two different types of parallax come into play: the horizontal parallax between the two cameras in a 3D pair (left eye, right eye), and the parallax between the various pairs composing the VR camera. This results in 3D artifacts that appear around the stitch lines when stitching all the left cameras together and then all the right cameras together. Fixing these artifacts is a tedious process that requires a VFX pass.

Steve Schklair, Founder, 3ality Technica and 3mersiv To me, real VR needs stereo. People call this virtual reality but a 2D 360° image is far from virtual reality. It’s just a flat image projected onto a sphere that you can view in 360°. You at least need good stereoscopic images to come closer to the idea of VR. I have seen a lot of content that purports to be in 3D, but most of it is not good 3D so it’s subtractive more than additive. It’s distracting. For me there’s delineation between 360° video, 2D, 3D, and actual room scale where you can move around in the picture. So, 60 frames per second and 3D is the minimum requirement for 360° video in my opinion.

More and more VR companies are opting for a different method to obtain stereoscopic 3D VR: the use of optical flow algorithms. Optical flow is mathematically trickier than other stitching solutions but delivers better results. The algorithms compute the left–right eye stereo disparity between the cameras and synthesize new views separately for the left and right eyes. It is similar to the method used to create alternative points of view between two camera positions, or to doing time interpolation in editing. Optical flow for 3D VR remains an open research area since it presents a lot of artifacts caused by occlusions – one camera not being able to see what an adjacent camera can see. VR systems using the optical flow technology feature sphere-looking cameras where the sensors are arranged on a circle, instead of in left/right pairs.
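To make the idea of view synthesis concrete, here is a minimal sketch of the general principle (not any vendor's actual pipeline): estimate dense optical flow between two overlapping, roughly aligned frames and warp one of them halfway toward the other to approximate a viewpoint in between. The file names are hypothetical, and the sketch deliberately ignores occlusions, which is exactly where the artifacts described above come from.

```python
import cv2
import numpy as np

# Two overlapping, roughly aligned frames from adjacent cameras (hypothetical files)
img_a = cv2.imread("camA.png")
img_b = cv2.imread("camB.png")
gray_a = cv2.cvtColor(img_a, cv2.COLOR_BGR2GRAY)
gray_b = cv2.cvtColor(img_b, cv2.COLOR_BGR2GRAY)

# Dense optical flow from A to B (Farneback); flow[y, x] = (dx, dy)
flow = cv2.calcOpticalFlowFarneback(gray_a, gray_b, None,
                                    0.5, 3, 31, 3, 5, 1.2, 0)

# Backward-warp A by half the flow to approximate a viewpoint midway between
# the two cameras: content that moves by "flow" from A to B is sampled half
# a step back. Real optical flow stitchers also detect occlusions and blend
# warps in both directions; this sketch does not.
h, w = gray_a.shape
grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))
map_x = (grid_x - 0.5 * flow[..., 0]).astype(np.float32)
map_y = (grid_y - 0.5 * flow[..., 1]).astype(np.float32)
midway = cv2.remap(img_a, map_x, map_y, cv2.INTER_LINEAR)

cv2.imwrite("midway_view.png", midway)
```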

Figure 2.11  The complexity of stitching 3D pairs of cameras
Figure 2.12  360RIZE 3DPro and 360RIZE 360Orb


List of Existing VR Cameras

As a reference, Figure 2.13 lists the current VR cameras and their main specs. By the time this book is published, new cameras will probably be announced, and I encourage every aspiring VR filmmaker to do some additional research and consult with professional VR directors of photography and camera rental houses.


Figure 2.13a and b


Figure 2.13b

Figure 2.13c


Jessica Kantor, VR Director, “Ashes” I switch up my camera quite frequently depending on the project. The big choice for me is if you choose to do stereoscopic, making sure the team has the skill and production has the budget to properly process. It’s much harder than monoscopic, and if not done well it is not worth the extra headache. Bad stereoscopic can also break presence but good stereoscopic is amazing, so it is all part of the budget tradeoff.

Live View

In these early days of VR, only a few camera systems can deliver a live VR signal for monitoring purposes. This lack of live monitoring increases the difficulty of staging and directing VR, as the crew is usually hiding in a different room. The creative team (and the client/agency in the case of a VR commercial) needs to see the shots in VR as they

Alex Pearce, VR Director and Producer There’s a lot of affordable VR cameras out there. Theta S – great tool to get your feet wet, or for previz [previsualization]. The resolution and image quality isn’t really fit for professional work, but a fun camera nonetheless. Gear 360 – great quality for the price. Control app works great with a Samsung phone. Overheats way too easily and doesn’t work for Mac/iOS without a hack. Too unreliable for me to depend on it, unfortunately. Kodak 4K – I have a love/hate relationship with this camera. It’s very versatile and can be modified to do many things. The image quality and auto exposure is hit or miss, and the app to control it doesn’t work that well. That being said, I have used it in places that most cameras at the time would not have worked. Their small form factor and super-wide-angle lenses make it a good choice for run and gun-type filming, or when you need to “set it and forget it.” With the batteries out, external power in, I once shot for over three hours continuously with four Kodak rigs. This was for Hans Zimmer live in concert in Europe. The stage manager wouldn’t let us use the GoPro rigs we had because they were too big.

Figure 2.14  Hans Zimmer covers Purple Rain in Tribute to Prince in Oberhausen, Germany. Shot with the Kodak Pixpro 4k


Figure 2.15  Director Andrew Wilson using the Nokia live monitoring system – ©ReelFX

are being shot to be able to make improvements. Some of the most affordable VR cameras like the Samsung Gear 360 can be monitored live on a compatible smartphone but the live stream cannot be recorded and replayed. The Nokia Ozo was the only professional camera to be released with an integrated live view solution, although in low resolution.

The Jaunt One has recently been upgraded and up to eight modules can be seen live in the Jaunt Controller software. Unfortunately, the live view shuts down when recording. When working with other cameras that do not have an integrated live VR monitoring, some companies have decided to build their own system.

Gawain Liddiard, Creative Director, “Help,” The Mill As soon as we started to get into the camera rigs and how this was going to work, it became very clear that we wanted more than just sitting there looking at these very warped fisheye lenses to give Justin (Lin, director, “Help”) something that was more tangible. So the key to the “Mill Stitch” was to figure out how to do a rough stitch in real time and pulling together our understanding of hardware. We also wanted to give the director a nice way of controlling it, so we gave him a little Xbox controller to look around with. We also gave him a touch screen so he could really jump from looking one direction to jumping all the way over to another. It was sort of a really bulky version of that, because it had to have a wireless transmitter on it, a motion detector and all these things duct-taped together. But it worked really well to be able to stand next to the camera rig on set and look in any direction. Excerpt from “fxpodcast #294: Making the 360 degree short HELP,” Mike Seymour, Sydney University, Fxguide.com


Figure 2.16  The “Mill Stitch” live stitch monitoring tool

Teradek offers a live monitoring device called the “Sphere.” It provides real-time compositing of up to eight 1080p camera feeds, which can be broadcast live to multiple iPads. The stitched video can be recorded and replayed at will. The quality of the stitch/video signal is quite good but does not/should not replace the traditional high-quality post-production workflow.

Figure 2.17  Teradek Sphere – ©Teradek

Eve Cohen, Director of Photography, “The Visitor VR” Live view is essential in VR otherwise you have no way to know what you’re getting. You can’t just stand behind the camera and kind of watch and see how it’s going to unfold. The ability to live monitor also gives your director and your client the opportunity to really see what it’s going to look like once put together. It’s really hard for your brain to translate what these fisheye lenses are doing, so being able to actually watch it, either pre-stitched or in quadrant mode, or in just every single lens, makes it feel like you know what you’re doing versus guessing and watching imaginary lines that you otherwise wouldn’t be able to see. If the VR camera I’m using doesn’t have live view, I usually put a Gear VR or some kind of GoPro setup with a Teradek Sphere on top of the camera. I hope that as VR cameras progress, a live feed while you’re recording becomes standard as part of the build.

Post-Production Solutions

VR post-production is undoubtedly the most costly and challenging part of the workflow. Stitching can be particularly expensive depending on the chosen camera system. The rest of the post workflow stays more or less the same as "traditional" 2D, as most post software solutions are now compatible with VR. Keep in mind that VR is a fast-evolving medium and it is vital to test your proposed post workflow before starting production. Ask your camera rental house for sample footage from the VR camera you are planning on using and "post-produce" it from start to finish. You will learn some valuable lessons that could otherwise have cost you a lot of money later.

Stitching

During stitching, footage from all the cameras (or the various takes of the same camera in the case of the quadrant technique) is put together to recreate a 360° sphere. The process may be complex depending on the number of cameras to stitch, whether the deliverable is 2D or 3D, and the proximity of foreground objects/characters.

Figure 2.19  Control Point Editor in AutoPano Giga

Figure 2.18  Synchronization tool in AutoPano Video Pro

The first step of the stitching process is to synchronize all the cameras. Most stitching software has a semi-automated "sync" tool (see Figure 2.18) but often it does not work and the footage must first be processed in editing software. Adobe Premiere Pro, for example, has an excellent auto-sync tool using sound to align all the cameras in a multi-camera sequence. To ensure an easy sync of your footage, slate all the shots properly, if possible at the beginning and at the end. Stitching software can also detect movement in the frames and use the motion blur as a sync tool. If the camera is on a monopod, a quick rotation at the beginning of the shot does the job. Once properly synced, the footage can now be stitched. The most common method is to let the stitching software do an automatic pass first, which can sometimes produce quite a good result depending on the complexity of the shot. The stitch can then be improved "by hand," linking specific points in one frame to their corresponding area on a different camera (Figure 2.19). The fine-tuning of the stitch makes all the difference between an amateur and a professional final product.
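The audio-based auto-sync mentioned above boils down to cross-correlating the cameras' scratch tracks and finding the lag where they line up best. Below is a minimal sketch of that idea, assuming mono WAV files at the same sample rate have already been exported from two cameras; the file names are hypothetical and real tools add filtering and drift handling on top of this.

```python
import numpy as np
from scipy.io import wavfile
from scipy.signal import correlate

def sync_offset_seconds(ref_wav, other_wav):
    """Estimate the offset between two cameras by cross-correlating their
    audio. The returned value is how many seconds of silence to pad at the
    start of 'other' so it lines up with 'ref' (negative: trim instead)."""
    rate_a, a = wavfile.read(ref_wav)
    rate_b, b = wavfile.read(other_wav)
    assert rate_a == rate_b, "resample first if the sample rates differ"
    a = a.astype(np.float64)
    b = b.astype(np.float64)
    if a.ndim > 1:                     # mix any stereo track down to mono
        a = a.mean(axis=1)
    if b.ndim > 1:
        b = b.mean(axis=1)
    corr = correlate(a, b, mode="full")
    lag = np.argmax(corr) - (len(b) - 1)
    return lag / rate_a

# Hypothetical per-camera scratch audio exported as WAV
print(sync_offset_seconds("cam1.wav", "cam2.wav"))
```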

Figure 2.20  Before/after the color-match pass

Once the footage is stitched, the last step is to color-match all the cameras so that the exposure looks seamless around the stitch lines. Once again, most stitching software has a color-match tool (Figure 2.20), but some prefer to do it in dedicated software before the stitch to preserve quality and dynamic range. Stitching 3D footage uses a similar method: synchronization, rough stitch, fine-tuning, and color-match. Usually, each eye is stitched separately and part of the fine-tuning process is to fix the artifacts that arise when a stitch line is different from one eye to the other. The stitching software usually "understands" 3D and lets the operator choose whether a specific camera belongs in the left eye or the right eye (Figure 2.21). Achieving a perfect 3D stitch is a complicated process that very few have mastered. A more VFX-oriented method is usually preferred, involving tools like Nuke and experienced stitchers, which is a costly and long process.

Editing Tools

VR footage can be cut using traditional editing software such as Avid, Premiere Pro, and Final Cut Pro. Premiere Pro CC 2017 is now equipped with a set of basic VR tools including a VR player. It is best to edit proxies as the final VR files are usually too large to be played in real time. Common industry practice is to do a 2K rough stitch of the footage first, edit it, and then online it with the final high-resolution stitch. This is a good way to save time and money as quality stitching is expensive and should therefore only be done on the locked cut.

Figure 2.21  3D selection tool in AutoPano Video Pro

When using a cloud-based optical flow solution, the camera manufacturer usually provides filmmakers with access to a designated cloud folder where the footage is uploaded. Once the 3D stitch is ready, the user can then download the 3D VR files at the desired resolution. Even though the optical flow technology is not seamless yet, it is by far the easiest and most cost-effective solution for live-action 3D VR. The main stitching packages are Autopano Video Pro + Giga, Video Stitch, PTGui, and The Foundry's Nuke. Step-by-step tutorials can easily be found online, and dedicated workshops are available in most big cities in the United States.

Alex Pearce, VR Director and Producer I shot a 15-minute narrative short film called “Max and Aimee,” for which we used an optical flow workflow. We finished shooting on a Monday, had the proxy footage to cut with on Tuesday, made the rough cut by the following week, and had the high-resolution stereoscopic picture lock less than two weeks after we shot! If we used a rig that needed stereo stitching, it would have taken months and a team of experienced stitchers, compositors, etc., to get similar results.

Paul Raphaël, Co-Founder/Creative Director, Felix & Paul Studios The edit changes constantly from the first 2D version to the pre-stitch version that we look at in the headset, to the better stitched version, to the even better stitched version, etc. We are constantly getting a different feel for the film, for the experience, as it becomes closer to its true form. There's a lot of back and forth, as much as we try to lock things down as early as possible. We often do the first edits ourselves as well, and usually we start with a sort of a 2D previz. We do a first version, then we do a rough 360° version, look at it, and then go back and forth.

It is recommended to use VR-specific plug-ins to be able to rotate the VR sphere, add effects like glow or motion blur, do transitions between shots, and enable the editor to watch the edit directly in a headset like the Oculus Rift. Dashwood's 360VR Toolbox™ (now available for free in Final Cut Pro) and Mettle's Skybox 360/VR Tools are the most commonly used VR plug-ins. They are great tools for enhancing the final VR piece, and the next pieces of software to buy after your stitching and editing software.
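For the yaw axis, "rotating the VR sphere" is simpler than it sounds: because the horizontal axis of an equirectangular frame maps linearly to longitude, a yaw re-orientation is just a horizontal shift with wrap-around. The sketch below shows that idea on a single stitched frame (the file name is hypothetical); pitch and roll re-orientation require a full spherical remap, which is what the plug-ins above handle.

```python
import numpy as np
import cv2

def rotate_yaw(equirect, degrees):
    """Re-orient an equirectangular frame around the vertical axis.
    This is a horizontal roll with wrap-around, since image x maps
    linearly to longitude. Pitch/roll need a full spherical remap."""
    h, w = equirect.shape[:2]
    shift = int(round(degrees / 360.0 * w))
    return np.roll(equirect, shift, axis=1)

frame = cv2.imread("stitched_equirect.png")   # hypothetical stitched frame
recentered = rotate_yaw(frame, 30.0)          # shift the point of interest by 30 degrees of yaw
cv2.imwrite("recentered.png", recentered)
```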


Duncan Shepherd, Editor, “Under the Canopy,” “Paul McCartney VR” In terms of putting together a VR experience in post-production, it’s pretty obvious that you have to view your work in context. This means at least a head-mounted display, and if possible, the ability to edit and monitor in surround sound. But crucially important is the maintenance of editorial flexibility, to respond to ideas, and make changes in structure and visual design. Editing has always been about manipulating narrative, pace, and juxtaposition in order to inject the viewer with a sense of immersion in the director’s vision. We must not get bogged down in the technical difficulties and complications involved in stereo 3D VR. The way I do this is using an editing system I’m fully conversant with, that has the power and tools to operate VR in real time, and can collaborate with co-workers and suppliers from the shoot, all the way to delivery. Personally I edit in Final Cut Pro X for its unbridled flexibility; however, it is only currently supported by the Dashwood set of plug-in tools for VR. However, one advantage of Apple’s ecosystem is the ability to build VR plug-ins and tools myself, in Apple Motion. (It’s not too hard with a bit of practice.) I also use After Effects with Mocha for stabilization and horizon re-orientation, which is vital to prevent nausea in the viewer. Tim Dashwood has published a great tutorial on how to do this. I also use the Mettle plug-ins in After Effects for spherical correction of 2D “flat” imagery to appear flat in VR, although I still manage the left eye/right eye offsets in FCPX for flexibility.

The key to good VR editing is flexibility. As the technology progresses, new software and plug-ins are released that improve the workflow and reduce the amount of back and forth between software. For more information on the art of editing for VR, see the section on "Editing" in Chapter 7.

2D to 3D Conversion

2D to 3D conversion recreates a "second eye" from a 2D image by extrapolating another point of view. The conversion methods are diverse and depend mainly on the time and budget available. In the best-case scenario, the original 2D shot goes through a virtual reconstruction of the space in software like Maya or 3DSMAX. The original textures are then applied to the reconstructed volumes, and the missing parts are filled in by hand. Then the shot is "re-filmed" with the use of a virtual 3D rig and the assistance of a stereographer for the creative aspect of stereoscopy. This very time-consuming and expensive technique was used for the 3D conversion of Tim Burton's "The Nightmare Before Christmas" by ILM. The process took 19 weeks, with almost 80 people working on the project. Other less complicated and less expensive 2D to 3D conversion techniques do exist, such as the technique of using a "depth map."

Figure 2.22

Figure 2.23  Depth map

A depth map is a type of grayscale version of the shot, where each gray value corresponds to a given horizontal offset of the associated pixel, thus placing it in depth. In the example from "The Lion King" by Disney Studios shown in Figure 2.22, the stereographer (Robert Neuman) first analyzed the shot and roughly divided the space, indicating offset values in pixels for each zone. Then the depth map was drawn by an automatic border recognition algorithm, and refined by hand afterwards. The obtained depth map is then used in combination with the original shot to automatically render the second point of view. The offsets applied to the original shot create "holes" in the image that must then be filled automatically or by hand. The amount of automation and the quality of the algorithms employed will determine the final quality of the conversion.
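A bare-bones sketch of the depth-map approach described above is shown below: each pixel is shifted horizontally by an amount derived from its gray value, and the resulting holes are filled crudely by dragging the nearest valid pixel. The file names and the maximum offset are illustrative assumptions; production conversion adds internal roundness, occlusion handling, and hand paint, which is exactly what separates good conversion from the artifacts discussed next.

```python
import numpy as np
import cv2

def synthesize_second_eye(image, depth_map, max_offset_px=20):
    """Create a crude 'second eye' from a 2D image plus a grayscale depth map.
    Brighter depth values are shifted further, creating horizontal parallax.
    Holes are filled by copying the last valid pixel from the left, which is
    the part done by better algorithms (or by hand) in real conversion work."""
    h, w = image.shape[:2]
    second = np.zeros_like(image)
    filled = np.zeros((h, w), dtype=bool)
    offsets = (depth_map.astype(np.float32) / 255.0 * max_offset_px).astype(int)

    for y in range(h):
        for x in range(w):
            nx = x + offsets[y, x]
            if 0 <= nx < w:
                second[y, nx] = image[y, x]
                filled[y, nx] = True
        last = image[y, 0]                 # naive hole filling
        for x in range(w):
            if filled[y, x]:
                last = second[y, x]
            else:
                second[y, x] = last
    return second

left = cv2.imread("shot.png")                               # hypothetical source frame
depth = cv2.imread("shot_depth.png", cv2.IMREAD_GRAYSCALE)  # hypothetical depth map
cv2.imwrite("shot_right_eye.png", synthesize_second_eye(left, depth))
```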

Gawain Liddiard, Creative Director, “Help,” The Mill For “Help,” we wanted cameras that we were more familiar with, more reliable, and fewer scenes. Specifically for this project, as well, we were looking at shooting it as a mono. It was not designed as a stereoscopic experience for something like the Oculus Rift headset, so that also pushed us more in that direction where we were just considering how to stitch this into that perfect, single equirectangular image. I think the not having stereoscopic aspect to it was certainly a blessing. I think that would have been a huge extra complication, even though when we were stitching for this project, it was much more of a deconstruction in a 3D sense of projecting all the footage out and reconstituting the plates in a 3D environment and then filming that back through a virtual camera, and that process would lend itself to some kind of a stereoscopic conversion. So even if we went back to square one and the mandate was that this needed to be stereoscopic, I would probably choose the same camera rig and probably opt for a more post-stereoscopic conversion. Excerpt from “fxpodcast #294: Making the 360 degree short HELP,” Mike Seymour, Sydney University, Fxguide.com

In the best case, 2D to 3D conversion can be as flawless as native 3D; however, in the worst case, the foreground objects and backgrounds are roughly positioned in space but show no internal roundness at all, and the resulting 3D shot suffers from a variety of visible aberrations and artifacts. 2D to 3D conversion is a great alternative when shooting native 3D is too expensive or technically challenging. Native 3D and conversion are often used in the same film; a couple of converted shots within a native 3D scene go unnoticed most of the time. This is called "hybrid 3D." However, the 2D to 3D conversion of VR footage is more challenging than traditional "flat" footage. The process must take into account the distortions of the 360° sphere and render much bigger files and resolutions, which makes it time-consuming and expensive. It is often less expensive to shoot in 3D and go through the difficult 3D stitch process than to convert from 2D to 3D.

VFX

Most VFX software is format-agnostic and can process VR files just like any other file. However, there are some VR-specific issues that require special attention. One of them is due to the nature of the super high-resolution equirectangular format which is then wrapped onto itself to form a sphere. Gawain Liddiard, creative director, and Alex Vegh, visual effects supervisor, on Justin Lin's VFX-heavy film "Help," describe the challenges of VR VFX work.

Alex Vegh, Second Unit Director and Visual Effects Supervisor One of the first challenges was the amount of data. 6K RAW files, four cameras shooting multiple takes. It was a tremendous amount of data to manage and deal with. There's a lot of information you have to push and pull around. When you review a traditional shot from a film, you watch through them several times and make comments. But with VR, you sit in a room in an office chair, turn off the lights, you play it in one direction, over and over, rotate. Play it in the next direction, over and over, rotate. Then again. It would take me two or three hours to review just one section of it.


You have to look in every single direction – up, down, all over the place to make sure you're hitting every single note. With this there are so many notes, it's exponential. Where normally you might take a matte and pull it off frame, it's OK, but with this there is no off-frame, so you can't do that. That was interesting. The trickiest thing for me in post was to keep the live action tied into the CGI. There was a lot of detail work and resolution that you wouldn't normally do. In a traditional film, you may have a CGI object come in for a few frames and then leave. In 360° you can see the CGI object coming towards camera and going away from camera. Excerpt from "fxpodcast #294: Making the 360 degree short HELP," Mike Seymour, Sydney University, Fxguide.com

You need to take into consideration that you no longer do 2D; there's just a lot of problem-solving going on, which is all so slow and tedious. Basically a compositor might be used to a 20-minute render of a comp. You know that's a pretty long render and now we're talking days before you can review it yourself. You've rendered for two days and then you see what you did wrong or what you missed. Also, in "Help," it's a continuous camera shot and I don't know any compositor that actually likes working on two-and-a-half-minute shots. Rendering for two days and then realizing what you missed in all four directions, that sucks. Excerpt from "fxpodcast #294: Making the 360 degree short HELP," Mike Seymour, Sydney University, Fxguide.com

The VFX are also made more complicated by the fact that everything is in the frame in VR. We see everything, in every direction, which multiplies the number of effects and their duration considerably. When you stop to look at the continuous nature of the shot and the fact that you cannot escape your framing, you do not get to wipe a simulation like the spaceship crash in “Help.” There is no way of wiping it off the frame. It must resolve in an elegant manner. Things had to work in a much more realistic way. The same goes for everything: assets must hold up to this constant viewing. If the viewer just stares in one direction, does the image maintain its realism? Does it keep going? Does it hold up throughout all those different angles?

Due to the extreme distortion of the fisheye lenses and the geometrical modifications of the stitching process, tracking and solving a shot accurately can be almost impossible because there is no off-the-shelf de-lensing of the fisheye.

Gawain Liddiard, Creative Director, “Help,” The Mill It robbed us of a lot of the cheaper ways of making environments. We did use a lot of matte painting, but it was slightly more distant objects. Just because you can’t have that feeling of cards. You can’t have that feeling of a traditional directional view where it’s quite 2D. But then, having said that, I consider the way that the 2D team worked was quasi-3D really. We used all kinds of tools.

Gawain Liddiard, Creative Director, "Help," The Mill For "Help," we went through a lot of interesting iterations of trying to find a way of measuring this distortion more practically, mounting a camera nodally and then rotating it and watching how a point tracks across the screen, but the mechanics of it just weren't accurate enough. So R&D here used the same sort of maps libraries that we had used for something like photogrammetry. We came up with the idea that if you do what we termed the "lens dance," where we wave a smaller lens grid around in front of the fisheye field of view and track how that's distorting and shifting in that field of view, that's what we use as sort of a baseline to unwrap into the equirectangular view. Our method was much more than a stitch. We took the fisheye, we de-lensed it into an equirectangular format, and then picked a rectilinear crop from that.


That rectilinear crop, I wanted to give it some real-world basis, so I based it off the camera that we were using, the RED Dragon film back, and then gave it a virtual lens of about 4.5 millimeters, so that you came out with this sort of hugely wide, but still rectilinear, image that we could track very accurately. We had to track each plate individually then sort of weight them together to get the tracking software to solve it as a combined camera rig as opposed to having cameras that move independently. And then once we had a very solid camera track, we had taken LIDAR information about every location and our 2D guys had also figured out the position of all our characters. Excerpt from "fxpodcast #294: Making the 360 degree short HELP," Mike Seymour, Sydney University, Fxguide.com
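The general idea behind pulling a trackable rectilinear "virtual lens" view out of a spherical frame can be sketched in a few lines. The snippet below is a simplified illustration of that kind of gnomonic crop from an already equirectangular frame, not The Mill's actual tooling; the field of view, yaw, and pitch values are illustrative, and the file name is hypothetical.

```python
import numpy as np
import cv2

def rectilinear_crop(equi, out_w, out_h, fov_deg, yaw_deg=0.0, pitch_deg=0.0):
    """Render a pinhole ("rectilinear") view from an equirectangular frame."""
    H, W = equi.shape[:2]
    f = 0.5 * out_w / np.tan(np.radians(fov_deg) / 2.0)   # virtual focal length in pixels

    # Output pixel grid centered on the optical axis (x right, y down, z forward)
    x = np.arange(out_w) - out_w / 2.0
    y = np.arange(out_h) - out_h / 2.0
    xx, yy = np.meshgrid(x, y)
    dirs = np.stack([xx, yy, np.full_like(xx, f)], axis=-1)
    dirs /= np.linalg.norm(dirs, axis=-1, keepdims=True)

    # Point the virtual camera: yaw around the vertical axis, then pitch
    yaw, pitch = np.radians(yaw_deg), np.radians(pitch_deg)
    Ry = np.array([[np.cos(yaw), 0, np.sin(yaw)], [0, 1, 0], [-np.sin(yaw), 0, np.cos(yaw)]])
    Rx = np.array([[1, 0, 0], [0, np.cos(pitch), -np.sin(pitch)], [0, np.sin(pitch), np.cos(pitch)]])
    dirs = dirs @ (Ry @ Rx).T

    # Ray direction -> longitude/latitude -> equirectangular pixel coordinates
    lon = np.arctan2(dirs[..., 0], dirs[..., 2])
    lat = np.arcsin(np.clip(dirs[..., 1], -1.0, 1.0))
    map_x = ((lon / np.pi + 1.0) * 0.5 * W).astype(np.float32)
    map_y = ((lat / (np.pi / 2) + 1.0) * 0.5 * H).astype(np.float32)
    return cv2.remap(equi, map_x, map_y, cv2.INTER_LINEAR, borderMode=cv2.BORDER_WRAP)

equi = cv2.imread("help_equirect_frame.png")   # hypothetical frame
cv2.imwrite("virtual_lens.png", rectilinear_crop(equi, 1920, 1080, fov_deg=120.0, yaw_deg=45.0))
```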

Not everyone has the luxury of having an entire R&D team who can code VR-specific scripts for VFX software. The "DIY" (do it yourself) method is often preferred, which can lead to great VR effects if the team understands the challenges of the virtual reality workflow. The Foundry's Nuke now has a complete set of VR tools known as "Cara VR" for stitching, compositing, and other effects in 360° videos.

Live Broadcast in VR

There's strong potential for VR when it comes to live broadcast: sports, concerts, and news can benefit greatly from the sense of presence, of "being there," offered by 360° filmmaking. One of the pioneers of live VR is the company NextVR, which has built live solutions for Fox Sports and others, but its coverage is often limited to a 180° field of view instead of 360°. So far, VR live broadcast is limited to "big events" like the Olympic Games or the Super Bowl. NBC Universal covered the Rio Olympics in VR through their app NBC Sports. They provided more than 100 hours of VR programming including the Opening and Closing Ceremonies, the men's basketball final, gymnastics, track and field, beach volleyball, diving, boxing, and fencing. Fox Sports has experimented with VR broadcasts for a variety of sports including boxing, American football, and golf.

Ted Kenney, Director Field and Technical Operations, Fox Sports When I came to Fox Sports, VR was a high priority for us. Two years ago at Fox, with high interest in live VR, we decided to test and shoot with NextVR, which at the time was providing stereoscopic (3D) 180° cameras that we could shoot live. At the time we also had different options NextVR could provide that we could pass on to the fans. For instance, when we shot the US Open in Chambers Bay, we gave users the option of seeing a line cut where we moved you around from hole to hole, or you could watch individual feeds and stay in one place. For this first big test the event was broadcast live to a VIP tent onsite where we could see what people's reactions were. In the early days of VR, it was more experimental. Now everyone is getting onboard and pushing boundaries. Once you have companies pushing each other as well as manufacturers launching new products, then everything starts to get better. For us clearly resolution and frame rate are the biggest things, both on the camera and headset side of things. In the early days, we worked with 4K 3D cameras with very crisp images. At least when you start with a sharper image, no matter how you compress it, it's going to be sharper at the end as well. Resolution and frame rates matter a lot in VR, especially for sports. Golf, for example: how will you see a little ball traveling away from you into a blue and white sky at over 100 MPH [miles per hour]? Also, if you just have a straight VR shot, a single seat at a golf event, it's going to be kind of boring for you. You're not going to be able to see where the ball lands. Implementing augmented reality, like graphics or a ball tracker, is something we're looking into, so that you see the flight of the ball and where it lands, but also giving people more entertainment value by having multiple cameras on a golf course.

Figure 2.24  Fox Sports VR app

VR can become a second screen 2.0. It is not about getting people away from their TV or away from watching a game that is produced by the best producers, the best directors, and the best audio mixers. That is what people want to watch, so how do we supplement VR into that world? Fox Sports VR features a downloadable app containing past and present VR content, as well as a VR theater to watch "flat" content on a big screen in a virtual room. There are many technical challenges when it comes to shooting live VR. First, very few VR cameras can actually do live broadcasts. The Nokia Ozo is one of the few, as is the Teradek Sphere system.

Bandwidth is another issue, as most live-broadcast channels are now optimized for an HD signal of 1920x1080 pixels, which is not high enough for VR. Finally, the lack of tools for replay, slow motion, and long lenses is a roadblock for the development of live VR on a larger scale. Depending on the sport and depending on the rights, the live-produced feed can also be comped directly into the virtual suite. For example, for a NASCAR event, first the participant sees the track from the high view. Above that, the 2D feed is comped in and the participant can choose to listen to that audio or not. This method puts together the best of both

Ted Kenney, Director Field and Technical Operations, Fox Sports Our VR app uses the LiveLike system, which features an augmented reality suite surrounding the viewer in 3D and the live broadcast streamed live in 180° 2D. So you are in a 360° world and we're giving the viewer a lot of options like VOD [video on demand] content and short-form content: anywhere between one-minute and three-minute pieces, behind the scenes, announcers, graphics, etc. We basically want to allow the user to choose their experience, give them as much entertainment as we can in one headset. Let's say you have ten cameras on an event. You're looking from a high camera and you can see all nine cameras out there and you can jump to any one of them. So that's your live option. If you look down, there's a dashboard, and that's where you can put in the player stats, the leaderboard, everything they want that revolves around the game that you're watching. Now when you turn around, we have branding, we have an option to possibly purchase a shirt or other products. You can have VR features that we pre-shot in VR, such as highlights or even replays. And then there's another section where you can go and look at a short piece on your favorite player. We're giving our audience all this entertainment value in one blast, so for one person there's a lot to look at.


Ted Kenney, Director Field and Technical Operations, Fox Sports Our setting for the Super Bowl 2017 was as follows: first, a camera up in the announcer booth and you’re looking out in the field, so you’re seeing a wide picture of the presentation of the Super Bowl. That was 2D, 180° but once again, you’re in a 360° virtual suite. From there you can jump to any number of cameras. We had a camera on each goalpost, so on the Patriots’ goalpost as well as the Falcons’ goalpost, so the players would be coming towards you and also when they are in the end zone, you’re right there with them. And then we had one camera each on the cart cameras. The cart cameras are the Chapman cars and they follow the play of the game along the sidelines but they are always looking down the line. Then we also had a camera on a rail in front of the cheerleaders on the 20-yard line of the Patriots’ sideline. We also did shoot 360° separately around the stadium. For this, we used different cameras from Samsung to Black Magic for Facebook, for YouTube, etc. It’s again short form, quick, throw it up on YouTube, Facebook, and get an immediate response. As far as our content that’s live, everything goes through the Fox Sports VR app. Our 2D 180° cameras were connected to our control room and we were going live with that. So ultimately if you’re watching Fox Sports via our app, you would go on there and you would see it populate with new clips. It wasn’t the traditional OB [outside broadcast] van situation because we were cutting just clips. One of the big differences for us in VR is static cameras where you jump around and choose where to go, but also the option of having those static cameras as well as another feed which is a produced feed where we take you around.

worlds: being immersed in a VR environment and benefiting from the sense of presence while watching or listening to the produced 2D feed with its replays, slow-motion shots, etc. It is unlikely that live sports, concerts, events, or news will be compelling in VR until live streaming quality dramatically improves, which possibly has more to do with data speeds and the impending move to 5G networks than it does with live VR capture and production. Real-time stitching also must improve, and it will. Watching an NFL game on a 4K TV is often a better experience than being there live, as you can see and understand more of what is going on in the game, with incredible camera angles, close-ups, and commentary. VR must bring something new to the table to become an interesting medium for live. Companion pieces, enhanced and augmented experiences, as well as interactive "choose your own camera" settings seem to be the most promising avenues.

VR Sound

The de facto standard audio format for VR sound is called "spatial." Spatial audio features a 360° sound sphere that matches the visual sphere, where directional sounds link to specific visual objects in the sphere. When done well, spatial audio helps to immerse the participant in the VR world and make it believable and compelling.

Sound Recording

A generally accepted approach to recording sound for a VR project would be to use a combination of an ambisonic microphone at the camera position, and lavalier mics to record principal dialogue. Sometimes additional mics are hidden in the scene to capture any desired specific sounds.

Ambisonics

Ambisonics is a sound recording and reproduction technique. An ambisonic microphone is in fact four separate mics in a specific tetrahedral configuration: one omnidirectional and three directional (one for the left–right axis, one for the front–back axis, and the last for the up–down axis). This system captures more information than a traditional stereo mic. It also allows for the re-creation of a very realistic sound sphere that sound editors and mixers are able to refine in post-production. An ambisonic recording captures a full spherical audio image, which makes it possible to choose what portion of that image one would like to listen to at any given moment. This can be very useful when attempting to immerse the viewer in an enveloping soundscape.

Here are some of the commonly used ambisonic mics for VR:
• Soundfield: Soundfield has three different ambisonic mics of excellent quality, but they can be expensive for smaller productions. Additionally, these mics can sometimes be too large for VR productions where the mic needs to be hidden underneath or above the camera, or when shooting with a Steadicam.
• Sennheiser Ambeo® VR: The Ambeo is a good compromise between size and quality. It delivers A-format, a raw four-channel output that has to be converted into a new set of four channels, the ambisonic B-format. This is done by the specifically designed Sennheiser Ambeo A–B format converter plug-in, which is available as a free download.

Figure 2.25  TetraMic (small one) and Soundfield SPS200 ambisonic microphones

• Core-Sound TetraMic: Very light, very small, and easy to hide. The TetraMic also delivers an A-format/four-channel output (see the conversion sketch after this list).

Any sound recorder will work for VR sound, but when the ambisonic mic is hidden underneath the camera it is recommended to use a small-footprint recorder. For example, the Tascam DR-701D is small and lightweight and can be powered by a USB portable charger.
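At its core, the A-format to B-format conversion mentioned above is a fixed combination of the four capsule signals. The sketch below shows the classic first-order matrixing only, assuming the capsule naming and ordering given in the comments; scaling conventions vary, and real converter plug-ins also apply frequency-dependent per-capsule correction filters that are omitted here.

```python
import numpy as np

def a_to_b_format(flu, frd, bld, bru):
    """Convert raw tetrahedral capsule signals (A-format) to first-order
    B-format (W, X, Y, Z). Assumed capsule naming:
      flu = front-left-up, frd = front-right-down,
      bld = back-left-down, bru = back-right-up.
    Only the basic matrixing step; capsule correction filters are omitted."""
    w = (flu + frd + bld + bru) * 0.5   # omnidirectional component
    x = (flu + frd - bld - bru) * 0.5   # front-back figure-of-eight
    y = (flu - frd + bld - bru) * 0.5   # left-right figure-of-eight
    z = (flu - frd - bld + bru) * 0.5   # up-down figure-of-eight
    return w, x, y, z

# Tiny smoke test with made-up one-second capsule buffers at 48 kHz
rng = np.random.default_rng(0)
caps = rng.standard_normal((4, 48000))
W, X, Y, Z = a_to_b_format(*caps)
print(W.shape, X.shape)
```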

Figure 2.26  Jaunt One with a TetraMic and a Tascam recorder

Lavalier Microphone

Any lavalier ("lav") mics that are used for "flatties" work perfectly for VR. However, in VR the audio recordist is often hidden away, far from the actors, which increases the risk of audio transmission issues. The Zaxcom system has a micro SD card at the transmitter which redundantly records while the transmitted signal is recorded at the recordist's position. This redundancy serves to protect the dialogue from any possible transmission losses. This detail can be a lifesaver as boom mics are typically not used in VR as they are in normal 2D productions, making the lav recordings the single point of success or failure.

Binaural Sound

Binaural audio is a sound capture technique that takes into account the characteristics of our internal and external ear as well as those of our skull to deliver sounds that our brain would interpret as “real” in terms of direction and distance. It creates the illusion that sounds produced in the headset actually emanate from specific directions. But because binaural sound only works through headphones, and not speakers, it is incompatible with traditional theater audio playback. However, it is perfect for VR as the participants usually wear headphones alongside the HMD.

Figure 2.27  Neumann KU 100 Dummy Head Microphone

To record binaural sound, two microphones are placed in the ears of a "dummy head" designed to imitate human anatomy, which allows extremely realistic capture of sound fields. It is also possible to convert "normal" sound into binaural with the help of post-production plug-ins.

Sound Editing/Mixing

Paul Raphaël, Co-Founder/Creative Director, Felix & Paul Studios We usually combine traditional sound-recording methods with 360° audio recording. We have lavs on actors. In the case of interviews, we'll usually have a boom that we paint out in post. You're not taking chances when you have a 20-minute interview with Barack Obama; you want to get the shot with perfect sound quality. We used to use a binaural mic, but now we're using ambisonics. The problem with using a binaural mic is that it bakes your audio. You have little to no control over the individual sounds in the scene. You get a better sound out of the box from the binaural mic, but with ambisonics we have much better control over individual sounds and direction.

It is not currently possible to connect a VR headset directly to Pro Tools, so sound editing has to be done with an equirectangular file (although some smart DIYers have managed to find a way to check the edit in a headset with speakers arranged around the room to create the sound sphere). Dolby has recently released a VR player that plays in sync with Pro Tools and allows the editor to view equirectangular or 3D video on an Oculus headset, while sending positional data for head tracking back to the VR renderer. There are two main spatial audio formats for VR:
• ambiX format, an open ambisonic format (compatible with YouTube, Facebook, Samsung Gear VR, Jaunt, Littlstar, and others);
• Facebook ambisonic format, .tbe (for "Two Big Ears," the company that created this format and was then bought by Facebook).
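For a first-order ambisonic mix, the head-tracking data mentioned above is used by the renderer to counter-rotate the sound field before binaural decoding. For the yaw axis this is a simple rotation of the X and Y channels, as sketched below; the sign convention and the channel ordering differ between tools, so this is an illustration of the principle rather than any plug-in's exact behavior.

```python
import numpy as np

def rotate_yaw_foa(w, x, y, z, head_yaw_deg):
    """Counter-rotate a first-order ambisonic (FOA) mix so sources stay put
    in the world while the listener turns their head by head_yaw_deg.
    Only yaw is handled; pitch and roll also mix in the Z channel.
    Sign conventions vary between renderers."""
    psi = np.radians(head_yaw_deg)
    x_rot = x * np.cos(psi) + y * np.sin(psi)
    y_rot = -x * np.sin(psi) + y * np.cos(psi)
    return w, x_rot, y_rot, z            # W and Z are unchanged by yaw

# Example: a source dead ahead (all directional energy in X); after a
# 45-degree head turn the energy is redistributed between X and Y
n = 48000
w, x, y, z = np.full(n, 0.707), np.ones(n), np.zeros(n), np.zeros(n)
_, x2, y2, _ = rotate_yaw_foa(w, x, y, z, 45.0)
print(round(x2[0], 3), round(y2[0], 3))   # roughly 0.707 and -0.707
```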

Facebook 360 has its own plug-in to make .tbe spatial audio files directly from Pro Tools. Dolby has audio tools for VR as well, to create spatial audio for any of the major platforms apart from Facebook 360. The toolset is called Dolby Atmos for VR and includes the following:

• the Dolby Atmos Panner Plug-in for Pro Tools (lets you place audio objects in 3D space and generates object metadata that is authored with the final content);
• the VR Renderer (takes the audio and metadata from Pro Tools, and creates the mix in the Dolby Atmos® environment, returning a binaurally rendered mix that is encoded into standard Dolby Digital Plus™ format);
• the Monitor Application (provides signal metering and a dynamic view of all mix objects, so you can see where each object is placed in the 3D space).

Jean-Pascal Beaudoin, Co-Founder, Headspace Studio We often talk of or see visual representations of 3D or spatial audio as a sphere of sound enveloping the viewer. Perhaps it is because this image contrasts so perfectly with the one of horizontal (planar) surround sound, and also because it mirrors the traditional equirectangular layout used for 360° videos. Although it is an efficient and conceptually sound metaphor, this representation seems to suggest limits or boundaries to the sphere, as if the viewer’s point of audition was located at the center of a spherical dome. And by so doing, it obfuscates what is for me one of the most interesting – and sometimes challenging – dimensions of designing and mixing spatial audio for VR: depth of sound field. Anyone who has ever opened a spatialization plug-in knows that sounds (objects, sources, emitters – pick your preferred semantic flavor) are positioned according to parameters of azimuth, elevation, and distance. The initial time delay (ITD), ratio of direct sound to reverberant sound, and motion parallax between sounds in relation to the point of audition all contribute to our perception of distance. Of course, distance alone does not make depth; it requires a succession of layers perceived to be at significantly different distances.

Figure 2.28  “Nomads” ©Felix & Paul Studios


The effect of depth can be dramatically enhanced by taking advantage of visible limits in the experience such as inside/outside, obstruction, and/or occlusion, thereby revealing events unfolding beyond the reach of the viewer’s gaze or outside of the immediate field of view. Depth is an element that I have been exploring further along my spatial audio journey. The interior ger (Mongolian yurt) scene that can be seen in both “Nomads: Herders” and Oculus’ “Introduction to Virtual Reality” (Felix & Paul Studios) is a good example. The recorded diegetic audio in that scene was already so rich and immersive that I could have easily gotten away with leaving it untouched except for some volume leveling. But having been on location, I knew that their livestock – in this family’s case dozens of cows and hundreds of sheep – were meters away from their ger. Although you can hear the livestock faintly in the scene’s original location audio, we decided to enhance their presence around the ger to highlight the close physical proximity of the herders with their cattle. If you watch the experience now, knowing this, you will notice how subtle this effect is. It is likely that you would think it was originally recorded that way, which, in the case of a subjective documentary such as “Herders,” is exactly what you are looking for. One of the collateral advantages of creating more depth with this additional and separate layer of audio is that we often use them to bring more continuity in scene transitions. These transitions – typically consisting of a fade out to black and fading back into picture – are particularly fragile moments in regards to the viewer’s sense of presence, which can be lost in an instant. I have to confess that I spend an unusual amount of time crafting these transitions. An example of this is the transition between the same interior ger scene to the next one – a breathtaking exterior of a remote plain of Outer Mongolia with a large group of horses. On top of the sonic similarity between the livestock outside the ger and the horses, a distinct sound effect of a bird “fly-by” was added – from roughly -90° in the ger and ending its course in the opposite direction in the next scene. It was sweetened to be gradually perceived as being located outside the tent to being in the open air in the next shot.

Conclusion

Stories are both visual and aural experiences. In VR, because you can hear the soundtrack no matter which direction you are looking in, you can always lean more heavily on audio to tell your story. The adage is that in movies, audio is 50% of the experience, so maybe we have to rely a little more heavily on audio for key story points than we are used to in traditional content. Many of the basic tenets of doing good sound work for traditional feature films also hold true when doing good sound work for virtual reality experiences. Storytelling, intelligible dialogue, appropriate music score/composition, Foley, additional dialogue recording (ADR), sound effects design, supervision, and mixing are but a few of the traditional disciplines required in feature film sound work that are also required in VR. It is in the execution of these disciplines that differences begin to appear – some slight, and some more significant. One could look at these differences as a function of the final product, and what demands are put on process in order to achieve that final product.

Tim Gedemer, Sound Supervisor, Owner Source Sound, Inc. One important thing to note about doing VR sound work is that the energy of scenes is not concentrated in rectangular areas like all of our other media viewing. This means that we are accustomed/conditioned to pushing a lot of energy into a space that would only represent a fraction of the full 360° VR space. Even in feature films, and despite modern immersive audio systems like Dolby Atmos, most of the soundtrack's energy is coming from the physical space inhabited by the screen. So, when working in full 360°, you find that the energy of scenes needs to be defined in the context of the full sphere.


Figure 2.29

To make a scene in 360° VR have as much perceived energy as the same scene in traditional rectangular playback, you need to boost/bolster the "negative space." By negative space I'm talking about all the areas of the 360° soundscape that are not the focus of the track as a whole in any given moment. This, of course, is a very dynamic thing if your VR experience is a good one and is taking advantage of the full 360°. You will have to constantly, moment to moment, make adjustments to the energy of the negative space in order to make the experience feel "right" or "complete" or "like real life." Immersion can be achieved in a lot of different ways, but bolstering negative space energy levels can be a good place to start.

The ambisonic and binaural technologies combine specific microphones and new post-production techniques, and constitute the first steps towards what we can call a fully spatial soundscape for VR storytelling.

Deliverables and Distribution

Diffusion Standards and Formats

The standard format for live-action VR is the equirectangular format, a 2:1 ratio rectangle containing the entirety of the 360° sphere. The equirectangular format (also named lat-long) has been used for centuries; its invention is attributed to Marinus of Tyre in AD 100. The meridians of the sphere become vertical straight lines of constant spacing. The parallels become horizontal straight lines of constant spacing. The equirectangular format is therefore heavily distorted and is not an accurate representation of the sphere. It is widely used to show planet Earth and leads to confusion regarding the actual size of countries and continents. In 2017, in the US state of Massachusetts, Boston Public Schools became the first public school district in the United States to adopt a different projection, the Gall–Peters projection, as their standard. This projection maps all areas such that they have the correct sizes relative to each other (the final result is still distorted but is more accurate geographically). The same applies to VR: the equirectangular format is not an accurate representation of the final 360° sphere and its distortions complicate post effects even more.


Figure 2.30

Figure 2.31  “Marriage Equality VR” directed by Steve Schklair and Celine Tricart ©3ality Technica

The equator region appears compressed while the poles are dilated, which means that more pixels are allocated to the top/bottom of the sphere than to the equator (if you look at the example in Figure 2.30, Greenland appears much bigger in the equirectangular format than it really is). This is a problem as most people focus on the center of the sphere when watching VR content and not on the poles, so a lot of resolution is wasted when using the equirectangular format.
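The waste can be quantified in a few lines: every row of an equirectangular frame occupies the same image area, but the solid angle that row represents on the sphere falls off with the cosine of its latitude. The short sketch below, assuming a 4096x2048 frame and a band of 30 degrees above and below the equator as the region viewers mostly look at, shows that the band covers about half of the sphere while getting only about a third of the image rows.

```python
import numpy as np

height = 2048                                                   # rows in a 4096x2048 frame
lat = (np.arange(height) + 0.5) / height * np.pi - np.pi / 2    # latitude of each row
weights = np.cos(lat)                                           # relative solid angle per row

# Compare image rows vs. sphere coverage for the band within 30 degrees of the equator
band = np.abs(lat) < np.radians(30)
print("image rows in the band:  %.1f%%" % (100 * band.mean()))                     # ~33.3%
print("solid angle in the band: %.1f%%" % (100 * weights[band].sum() / weights.sum()))  # ~50%
```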

The Montreal-based company Felix & Paul uses its own proprietary format and player, which separates the top/bottom from the rest of the sphere to optimize the quality. Jaunt VR uses a similar method, as seen in the example in Figure 2.31. This format uses the traditional "top/bottom" layout for 3D, which means the left eye is positioned on top of the right eye and the two are rendered together. The uniqueness of this format comes from the fact that the top and the bottom of the sphere are separated and placed together on the right side of the file.

This way, the resolution is optimized for the equatorial region of the sphere instead of the poles. Unfortunately, most of the mainstream VR players, like YouTube 360, currently only accept the traditional equirectangular format, which has become the norm for 360° content.

To make an equirectangular export, the aspect ratio must be 2:1, otherwise a black bar might appear in the final render. For example, acceptable resolutions are 4096x2048 pixels for 4K, 6144x3072 pixels for 6K, etc. In the case of a 3D project, place the left eye on top of the right eye and the ratio hence becomes 1:1 (4096x4096, 6144x6144, etc.). While your master should be in the highest resolution possible with very little compression, the deliverables for the most common distribution platforms are usually capped at 4K. Very few VR headsets/VR players can play anything bigger than 4K in real time, especially at high frame rates such as 60 fps. The go-to codec is usually H264 MP4, but more and more players are becoming compatible with the new H265 codec. This is good news, as the H265 codec considerably reduces the size of files, which helps VR streaming performance. Each platform and each VR headset has different optimum requirements when it comes to deliverables (and these are constantly improving!). Table 2.1 is an example of the recommended codecs/resolutions for specific headsets. Be mindful that these numbers might have changed by the time this book is published.

Some camera manufacturers have created their own proprietary formats and VR players. For example, the Nokia Ozo records ".mov wrapped OZO Virtual Reality" with eight channels of raw video and eight channels of PCM audio that can only be played by the Nokia Presence player. Jaunt VR uses a similar strategy. Felix & Paul Studios releases their films wrapped in downloadable apps that contain the player. It's the Wild West.
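The aspect-ratio rule above is easy to encode as a small helper, shown here as a minimal sketch: 2:1 for mono 360° video, and 1:1 when the left eye is stacked above the right eye for top/bottom 3D. The numbers only follow the rule of thumb in the text, not any specific platform's delivery spec.

```python
def equirect_export_size(horizontal_px, stereo_top_bottom=False):
    """Return (width, height) for an equirectangular export.
    Mono 360 video is 2:1; top/bottom stereoscopic 3D stacks the left eye
    above the right eye, so the frame becomes square (1:1)."""
    if horizontal_px % 2:
        raise ValueError("use an even horizontal resolution")
    height = horizontal_px // 2
    if stereo_top_bottom:
        height *= 2                          # two stacked 2:1 eyes -> 1:1 frame
    return horizontal_px, height

print(equirect_export_size(4096))                          # (4096, 2048) mono "4K"
print(equirect_export_size(4096, stereo_top_bottom=True))  # (4096, 4096) top/bottom 3D
```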

Gawain Liddiard, Creative Director, "Help," The Mill One of the key examples of what we had to take on was that we brought our own version of essentially what the final app was going to be. Google was sort of waiting for the project to be finished to optimize it for the chosen platform. And we had their earlier version of their encoders right from the outset of the project, but it took days to encode just 45 seconds of footage to get it to play on the phone or to get to view it in some kind of way that resembled what the final device would be. There was a lot of debate at the very start of the project about what resolution we were going with. I think we ended up at a 4K by 2K equirectangular file, which is sort of a nice sweet spot of what's achievable, what you can actually manage. I didn't think much above that would fit through an effects pipeline. And it works really well; as we started to review it on tablets, it looked great. Excerpt from "fxpodcast #294: Making the 360 degree short HELP," Mike Seymour, Sydney University, Fxguide.com

360° and VR Diffusion Platforms

There are many VR platforms available that provide quality content to a growing audience. The main ones are listed in Figure 2.32, as well as whether they are VR-compatible (can be browsed in a VR HMD) and/or 360°-compatible (can be browsed outside a VR HMD, for example, on a laptop or smartphone). Most of the listed platforms are for curated content only in order to guarantee a top-notch experience for the participants. The two main non-curated platforms

Table 2.1

Headset                      Codec                          Resolution   fps   Average bit rate
Samsung Gear VR              h.265                          3840×2160    30    10–20Mbps
Google Cardboard (Android)   h.264 (Baseline, level 4.2)    3840×2160    30    20–30Mbps
Google Cardboard (iOS)       h.264 (Baseline, level 3.1)    1920×1080    30    10–14Mbps
Oculus Rift                  h.265 / h.264                  4096×4096    60    40–60Mbps


Figure 2.32  360° and VR diffusion platforms

for VR content remain YouTube 360 and Facebook 360, which are already hosting tens of thousands of 360° videos. (For game engine-based VR projects, the deliverables and platforms are listed in the next chapter.)

Figure 2.33  HTC’s Viveport Arcade

Location-Based VR/Arcades

Last but not least, more and more “VR arcades” are opening all around the world, featuring high-end VR solutions such as the HTC Vive. These room-scale systems and compatible signature pieces are still fairly

expensive and therefore not accessible to the general public. Location-based VR is the ideal solution for the masses to enjoy high-end virtual reality experiences. The US arcade gaming market has not been viable for some time now, in contrast to its continued strength in China. In terms of VR theme parks, there are currently two major operators, The Void and Zero Latency. The Void, a US company, was the first in the United States to have a VR arena location with their flagship location in Utah. The Australian company Zero Latency currently has two locations in the United States for location-based VR experiences. For VR installations, there have been some minor, short ones associated with movie opening tie-ins, with the latest being the Assassin's Creed VR Experience,

which is viewed via a kiosk. Another company focusing on location-based VR is the video game company Starbreeze and its joint venture with IMAX. VR experiences are expected to range between 5 and 15 minutes, at US$1 per minute. The first IMAX VR Experience Center, which is viewed as a test facility, opened in Los Angeles in mid-January 2017, with another opening soon at the ODEON & UCI Cinemas Group's Printworks multiplex location in Manchester, United Kingdom. Other test facilities in the works are located in China (Shanghai), Japan, the Middle East, and Western Europe. Many of the interior walls of the facility were made movable by design to accommodate planned multi-player game-playing capability.

Chapter

3

Game Engines and Interactive VR

Game engines play an important role in the VR industry, not only for games, architecture, or education purposes, but also for narrative VR. Since the subject of this book is specifically VR filmmaking, this chapter focuses on how game engines can enhance a VR experience featuring some kind of live action. Whether the project was shot more or less in 360° and then projected into a game engine, or if advanced techniques like volumetric capture were used, the

Anthony Batt, Co-Founder and Executive Vice President, Wevr At some point, we don’t really think about the underlining engine. We really just ask ourselves: Is it cool? What audience is it for? At the end of those questions, then we think about how to best make it, and then that starts making some decisions regarding game engines or live action. I think all those distinctions will go away. I think they’re slightly irrelevant in a sense because you really don’t think about the pot you’re going to cook in to determine what food you’re going to make or how good the food is going to be. You just make a wonderful meal. You use the pots and pans that are available to achieve the goal. In the future, those distinctions won’t be something people will discuss. We use all sorts of craft tools that are available to honor the created intent of the story. We use all the colors of the rainbow. We use certain colors when necessary to tell a part of the story.

following pages describe how the two worlds of live capture and game engines can meet and improve the art of VR.

Introduction to Game Engines

A game engine is software that displays a 3D environment in which 3D assets and effects can be placed and animated. Scripts using various coding languages like C# (pronounced C-sharp) are written to command how each asset moves/reacts/interacts depending on various stimuli. The experience can then be "played" to test the scenes and scripts, modified, and tested again until finished. The final experience is exported as an executable file named a "build," compatible with a specific platform like a cell phone, PC, or any other gaming device. Functionalities typically provided by a game engine include a rendering engine ("renderer") for 2D or 3D graphics, a physics engine or collision detection, sound, scripting, and animation, and may include video support for cinematics. This video support is what comes into play when projecting live-action VR content into a game engine. A well-known genre in VR games is the first-person shooter (FPS), in which the player sees the environment through the eyes of the game character. Due to the nature of VR, where the participants have agency (at least when it comes to deciding where to look within the VR sphere), an FPS character/camera is created in virtual reality experiences as an interface/avatar for the participants. A lot of different game engines exist and many of them are becoming compatible with VR. However, most of the VR game engine-based experiences are

currently being made in one of the two following engines: Unity or Unreal. These two engines jumped on the VR bandwagon early on and are regularly updated. Some VR companies swear only by Unity and its enhanced compatibility with all the VR headsets, while others prefer Unreal and its great light-render performances. At this point in time, the final quality of the experience relies first and foremost on the quality of the programmers, designers, and coders rather than on the engine itself.

How to Build a VR Experience with Unity

Here is a short and easy tutorial to create a basic VR experience in Unity compatible with the Oculus Rift or HTC Vive:

• Download and install Unity Personal (Free).
• Open Unity.
• Select "New Unity Project."

Welcome to Unity! The interface is split up into different sections: the Scene tab, the Game tab, the Asset Store tab, the Inspector, the Toolbar, the Hierarchy tab, the Project tab, and the Console tab. For the purpose

Figure 3.1  Unity main interface

Maxwell Planck, Founder, Oculus Story Studio, Producer, “Dear Angelica,” “Henry” At Oculus Story Studio, we used Unreal Engine 4. We’ve found it has better fidelity and performance out of the box. Furthermore, since it’s an engine that’s being built by a team that is also creating content, their roadmap matches our needs because they need the same tools we do. Finally, the big reason we use UE4 is that we have full access to the source code. We are a team that is technical enough that if we need a new feature or to fix a bug we found, we can do that ourselves and we don’t have to rely on convincing the engine’s team to do the work for us. We’re too early in the lifetime of creating interactive VR experiences to say we have a typical workflow. For the four projects we’ve worked on so far, we’ve changed our workflow dramatically each time, as we’re trying to evolve towards a mature process, and each project has been unique enough that the pipeline has had to be reinvented.

of this tutorial, we will focus on the Scene tab where objects (assets) can be placed and a 3D environment can be built.

How to Move in the Environment

The use of a mouse is recommended in Unity. In order to move within the 3D environment, use the shortcut "Q" to activate the hand tool. Click-drag in the Scene tab to drag the camera around; right click-drag in the Scene tab to look around.

Skybox

Let's create a sky for our VR environment. If you do not have assets already installed, you can download them directly in the Unity interface in the Asset Store tab.

• In the Asset Store tab, write "skybox" in the search bar.
• Select a skybox you like (many are available for free).
• Click on download.
• Once downloaded, a pop-up window will open. Select all the elements and "import."
• Your skybox is now ready to be used. In the Project tab, go to "Assets" and locate your skybox.
• Click on the Scene tab and drag your skybox file directly onto the Scene. It will load the skybox directly. (Note: There might be a lot of different files in your Assets folder, so make sure to select the full skybox one and not the individual sides, which cannot be dragged onto the Scene.)

Figure 3.2  Adding a skybox in Unity

Terrain

Let's now create a terrain/ground for your environment.

• In the main menu bar, select GameObject – 3D Object – Terrain.
• Look at the Inspector tab. A Terrain layer has appeared.
• In the Inspector tab, use the different tools to sculpt the terrain to your liking (raise/lower terrain to create mountains).
• Go to the Asset Store and download/import terrain textures.
• In the Inspector tab, select the brush tool – edit texture – add texture. Select the textures you have downloaded and "paint" your terrain with it. Do not hesitate to mix different textures to make it look more realistic.
• You can download/import trees and more from the Asset Store and add them to your terrain.

Figure 3.3  Adding a terrain and assets in Unity

Create a First-Person Shooter Character

• In the main menu bar, select Asset – Import Package – Characters.
• A pop-up window opens. Import everything.
• In the Project tab, browse to: Assets – Standard Assets – Characters – FirstPerson – Prefabs – FPSController.
• Drag the FPSController onto the Scene tab directly on your terrain. Make sure the object is placed above the ground, not buried inside a mountain.
• Click on the play button just above the Scene tab (or CTRL+P). You are an FPS in your own world! You can look around using the mouse and walk using the arrow keys.
• Click on the play button again (or CTRL+P) to stop the game preview.
• Select the FPSController in the Hierarchy tab. The Inspector tab now contains all the options and variables for your FPS. You can change the speed of walking and running, and change the gravity multiplier settings so that you can make giant jumps as if you were on the moon!
• To deactivate the "Mouse Look" option, go to Mouse Look – X sensitivity and Y sensitivity, and set them both to 0. This way, only the VR headset can look around in the environment, and not the mouse.

Make It VR and Build

• In the main menu bar, select Edit – Project Settings – Player – Other Settings – check "Virtual Reality supported."
• In the main menu bar, select File – Build Settings – choose PC, Mac & Linux – Build.

Unity creates an .exe file. If you have a VR headset like the Oculus Rift or HTC Vive connected to your computer, the build will automatically start in VR. Congratulations! There are countless Unity and Unreal tutorials online or in books. It is relatively easy to learn how to make a very basic VR game and improve the VR experience you have just created.
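To give a concrete taste of the C# scripting mentioned at the start of this chapter, here is a minimal example script (the class name and values are hypothetical, and it is not one of the tutorial steps above). Saved as a .cs file in the Assets folder and dragged onto any object in the Scene, it makes that object rotate slowly, which is a simple way to add some life to the world you have just built:

using UnityEngine;

// Minimal illustrative script: attach to any GameObject in the Scene to make
// it spin slowly around the vertical axis. The speed can be edited in the
// Inspector like any other public variable.
public class SlowSpin : MonoBehaviour
{
    public float degreesPerSecond = 20f;

    void Update()
    {
        // Frame-rate independent rotation around the world "up" axis.
        transform.Rotate(Vector3.up, degreesPerSecond * Time.deltaTime, Space.World);
    }
}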


Figure 3.4  Build as seen with an HTC Vive connected to the computer

Robyn Tong Gray, Chief Content Officer and Co-Founder, Otherworld Interactive We at Otherworld use the game engine Unity – it’s great for smaller teams and offers a ton of flexibility when working on multiple platforms. We create for as many different head-mounted displays (HMDs) as possible, and Unity makes it really easy to port projects to basically any major platform. Unity is heavily supported by the various HMDs and updates/patches really frequently.

Live-Action versus Game Engine VR

Live-action VR limits the agency of the participants, who can only choose where to look in the 360° sphere. Game engine-based VR opens up a lot more possibilities, ranging from branching storytelling to "room-scale" games. The ability to move physically in the real world as well as the virtual world is called "six degrees of freedom." There are three different types of translation achievable (forward/back, up/down, left/right), and three different types of rotation (pitch, yaw, and roll), hence the "six" degrees. Live-action VR has three degrees of freedom only (the rotation of the head); game engine-based VR has six degrees of freedom.

Figure 3.5  The degrees of freedom


David Liu, Creative Director of Virtual Reality at Viacom NEXT The types of experiences I find people going back to are the ones that don’t just sell the visual illusion of a space, but also meet the expectations and affordances of that space. I suspect that there are both physiological and psychological reasons for this. If you show someone a three-dimensional space, they would want to move or walk in it. They would move their heads and lean to peek around objects. If you gave them some sort of hand presence, they would try to reach out and grab the objects lying in it. As designers, we should aim to build worlds that meet these affordances – particularly six degrees of freedom positional tracking for the head. If these affordances aren’t met, there is a clear break in how much they feel that they are actually in this world. For some, their vestibular senses are particularly less forgiving, and protracted leaning and shifting in their seats leads to a violent hurling of the morning’s breakfast. This is one of the reasons why we are predominantly working on headsets like the HTC Vive and the Oculus Rift for all our experiences. It isn’t because we want to cater to an elite subset of guests with high-end computers, but rather because we feel that it is critical that your head is positionally tracked. Our bar isn’t between mobile or tethered – in fact we feel that mobile headsets with positional tracking are coming soon, and once that happens, it’s likely that we’ll be developing for those headsets too. I’m not sold on completely passive experiences with only three degrees of freedom. When you can’t lean or crouch or stand up, it’s almost like being gagged and paralyzed from the neck down. You can scream but no one will hear you. It’s frankly a little terrifying. You’re being teased with a spatial world but all you can do is look around, wide-eyed. When the novelty of telepresence wears off, people will be chasing after other experiences where they have at least the agency to move in the space; or go back to their 2D screens where they don’t have to strap a hot, suffocating pair of goggles on their face, and where there’s a clear, authored progression of sound and images via age-old motion-picture practices we call direction, cinematography, and editing.

Figure 3.6  “Slices of Life” – ©Lucid Dreams Productions

Figure 3.7  Fox Sports VR app

Figure 3.8  One of "Miyubi"'s hidden scenes – ©Felix & Paul Studios

Six degrees of freedom is indeed achievable in a game engine environment, but it requires playback on specific VR HMDs with positional tracking. For example, the Oculus Rift, the HTC Vive, or the StarVR HMDs are capable of six degrees of freedom. It is currently not compatible with the Samsung Gear VR, Daydream, or any other smartphone-based headset. The extraordinary freedom of movement described by David is indeed the holy grail of VR, and many filmmakers are trying to figure out how to bring it into the world of live action. One possibility is to project live-action content into a game engine environment. For the VR project "Slices of Life," scenes were shot in "traditional" live-action 3D and placed in a game engine scene. Each live-action scene shows a specific memory of a woman who is dying and remembering her life (see Figure 3.6).

Another option is to project “traditional” footage onto a virtual theater screen so that the participants feel like they are watching it in an IMAX cinema or the VIP suite of a sporting event (Figure 3.7). The next step is to figure out a way to use the game engine to create interactive experiences with the live-action footage. For example, it is possible to project live-action VR content in a game engine and use the “gaze-select” interaction to activate different storylines. Indeed, the game engines can detect where in the scene the participants are looking and change scenes accordingly. This is called “branching storytelling.” In Felix & Paul Studio’s “Miyubi,” there are three objects hidden in the film, three “Easter eggs” that can be activated if looked at. When the participants find all three objects, a secret scene is unlocked (Figure 3.8).
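To make the "gaze-select" mechanic described above more concrete, here is a rough Unity C# sketch (hypothetical, not taken from "Miyubi" or any specific production): a ray is cast from the center of the participant's view each frame, and an object tagged as a hotspot is activated once it has been gazed at for a set dwell time.

using UnityEngine;

// Hypothetical gaze-select sketch: attach to the main camera. Objects tagged
// "Hotspot" are activated after being looked at for dwellSeconds, which can
// then trigger a branch in the story or unlock a hidden scene.
public class GazeSelect : MonoBehaviour
{
    public float dwellSeconds = 2f;   // how long the gaze must rest on the object
    public float maxDistance = 50f;   // how far the gaze ray reaches

    private GameObject currentTarget;
    private float gazeTimer;

    void Update()
    {
        // Cast a ray straight out of the center of the participant's view.
        Ray gazeRay = new Ray(transform.position, transform.forward);
        RaycastHit hit;

        if (Physics.Raycast(gazeRay, out hit, maxDistance) && hit.collider.CompareTag("Hotspot"))
        {
            if (hit.collider.gameObject == currentTarget)
            {
                gazeTimer += Time.deltaTime;
                if (gazeTimer >= dwellSeconds)
                {
                    // Tell the hotspot it has been found (branch the story, count an Easter egg, etc.).
                    currentTarget.SendMessage("OnGazeActivated", SendMessageOptions.DontRequireReceiver);
                    gazeTimer = 0f;
                }
            }
            else
            {
                currentTarget = hit.collider.gameObject;
                gazeTimer = 0f;
            }
        }
        else
        {
            currentTarget = null;
            gazeTimer = 0f;
        }
    }
}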


Maxwell Planck, Founder, Oculus Story Studio, Producer, “Dear Angelica,” “Henry” I’m very excited for giving the visitor more affordance for interaction, but it does make it that much more difficult to deliver a compelling story when the user has choice. Games have been working with this problem for a long time, and I believe there are some very compelling examples where the interaction amplifies the story experience. One of the reasons I’m excited about VR is that the interface for interaction is intuitive for everyone and not just those of us who grew up knowing how to use joysticks and keyboards.

Unfortunately, there are lots of examples of how interaction has ruined the ability for an audience to suspend disbelief. When making a film, having full control over every single frame the audience sees makes it easy to control the experience. However, if we do find ways to solve the problem and have interaction and story work together, the end result can be more powerful than a film.

Skybound Entertainment's "Gone" is an 11-episode interactive VR series. Participants can explore the live-action scenes, looking for active clue points as the story unfolds around them. Finding these hotspots (which are only active for a limited time) allows them to see events at different times and from different perspectives. In short, exploring the environment will shape the way they experience the story. Miss a hotspot and they might miss an aspect of the story.

Jessica Kantor, VR Director, “Ashes” What makes virtual reality completely unique is that the art comes alive as it is experienced. The relationship to the user experience is unique and can be designed to be as obtuse or subtle as the director/designer would like. For instance, as the participant looks at something in a space, a reaction could happen unconsciously based on their gaze which would be a subtle interaction, or the user could use controllers and make a conscious decision that will move the experience forward. Another factor is the users themselves. They will have different degrees of boldness in how they explore the experience and a great experience is fun, both passive and active. As the collective of artwork in the space grows, there will most likely be a rating system on how active or passive the game will be, which will better manage the expectations of the participants.

Photogrammetry

Photogrammetry is a technique using photography to recreate environments at a high level of realism. The chosen environment is photographed and then a 3D model is calculated from the photos using dedicated software. More pictures from more angles make better photogrammetry. The most detailed environments are often recreated from hundreds of photos, although as few as two photographs taken from different angles are enough for the software to identify visual similarities and, using math, figure out where these similar points are located in space. This does mean the technique is limited to static scenes containing opaque, non-specular surfaces. The most common photogrammetry software tools are RealityCapture, PhotoScan, Autodesk ReMake

Figure 3.9  “Gone” hotspots mechanics – credits: http://the-digital-reader.com/2016/03/21/samsungmilk-vr-gone/ ©Skybound Entertainment

(formerly Autodesk Memento), 3DF Zephyr, and Regard3D. The software will take the photos and automatically calculate all the camera positions if there are enough common features between different photos. This is akin to an automatic stitch for a panorama. The software generates a point cloud from which a mesh is extrapolated. The photos are used to create textures for that mesh. VR creators use photogrammetry to create environments in a game engine that are just as detailed and real as if they were live action. This allows the participants to move freely within this environment and achieve the six degrees of freedom. However, this technology is limited to static environments and specific types of surfaces. The most successful examples of VR photogrammetry are "The Lab" by Valve and the "Realities" app on Steam.

Volumetric Capture

Volumetric capture is a technique similar to the "Bullet Time" system invented for the movie "The Matrix." An actor or object is placed in the middle of multiple cameras and recorded through as many different

Shari Frilot, Founder/Chief Curator of New Frontier at Sundance Film Festival I believe the real power and wonder of VR is realized in animated environments that you can experience and interact with. On the other hand, The New York Times and National Geographic and the Discovery Channel have a commitment to the documentary form, and live action is so critically important to that. There are and will continue to be advances in live-action capture, whether it’s volumetric or light field capture which lets you interact with the live video. What you can achieve with animation and a headset like Oculus or Vive is something that live action yearns for. There’s technology that’s being developed as we speak; it’s really exciting. It is also so hard to answer the question of where this is going because it is happening in real time. It would be a fool’s errand for me to actually say where it’s going to land because I’m just trying to keep up with it. This field takes a quantum leap every three months.

Figure 3.10  A stunning example of photogrammetry – “Realities” ©Realities.io


Figure 3.11  The volumetric capture studio of the company 8i – ©8i

Figure 3.12  “#100humans” experience showcasing a volumetric capture gladiator – ©8i

angles as possible. The footage is stitched together to create a three-dimensional model that can seamlessly be placed in any environment. While live-action VR is commonly described as "inside-out" (the cameras are facing outwards), volumetric capture, on the other hand, is "outside-in." Because the cameras must "see" every angle of the filmed subject, overlap and occlusions can become an issue. If two actors are facing each other, the volumetric render and texture of the front of their bodies is compromised. However, the technology is progressing rapidly thanks to the successful rounds of venture capital financing for the companies working on volumetric capture, like 8i or Uncorporeal.

The company HypeVR is working on volumetric video-capture technology. They not only capture live-action 360° footage, but also the volumetric data of the scene for each frame so that when the world is played back, the information is available to enable the user to move inside the video. The combination of high-quality video capture and depth-mapping LiDAR opens up a whole new world of possibilities for VR (light detection and ranging, or LiDAR, sensors measure distances by measuring the time of flight (TOF) it takes a short laser pulse to travel from the sensor to an object and back, calculating the distance from the known speed of light). The texture data from the video are fused with the depth data to create 60 volumetric "frames" of the scene per second. That means

you will be able to see waves moving or cars driving, but still maintain the volumetric data that give users the ability to move within some portion of the capture. Six degrees of freedom is achieved in a cinematic “live-action” environment.
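To make the time-of-flight principle concrete, here is a small illustrative calculation in C# (a sketch with example numbers, not tied to any particular LiDAR unit):

using System;

class TimeOfFlightSketch
{
    static void Main()
    {
        // The laser pulse travels to the object and back, so the one-way
        // distance is half of (speed of light x round-trip time).
        const double speedOfLight = 299792458.0;   // meters per second
        double roundTripSeconds = 66.7e-9;         // example pulse: 66.7 nanoseconds

        double distanceMeters = speedOfLight * roundTripSeconds / 2.0;
        Console.WriteLine(distanceMeters);         // roughly 10 meters for this example
    }
}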

Figure 3.13  HypeVR camera + LiDAR system for volumetric videos

Figure 3.14  Light field technology – ©Lytro

Light Field Technology

Last, but not least, light field technology shows promising progress that would make live action + six degrees of freedom achievable and accessible to the general audience. The leading company in light field cameras is Lytro, which commercialized its first pocket-size light field photo camera in 2012. A light field camera (also known as a plenoptic camera) captures information about the light field emanating from a scene, instead of only the light intensity as a normal camera does. This includes the intensity of light in a scene and also the direction in which the light rays are traveling in space. Light field cameras usually use an array of micro-lenses placed in front of an otherwise conventional image sensor to sense intensity, color, and directional information. Multi-camera arrays are another type of light field camera. The core principle in both cases is that the light field capture system needs to be able to record the path of light rays from multiple viewpoints. A captured light field consists of rays of light traveling in every direction, with their brightness, color, and their path, which is the direction and position of those rays. When light illuminates a scene, light rays

Figure 3.15  Lytro Immerge VR camera – ©Lytro

bounce off objects in the scene, reflecting in every direction, mixing colors of each surface they have hit. Rays bounce off surfaces in their path, eventually dissipating their energy as the light is absorbed gradually with each bounce. Some of those rays are occluded (blocked) to create shadows, while other rays bounce at different intensities, appearing to viewers as reflections or highlights. From any location within that spherical light field volume, every viewpoint of the surrounding 360° scene can be recreated virtually, from the furthest left to right, top to bottom, and front to back, in every direction and from every angle. When viewing a light field volume from within it, the participant is able to enjoy a lifelike reproduction of reality with the full six degrees of freedom. In 2016 the first light field VR experience was released, called "Moon." According to Lytro, the piece demonstrates several key benefits of light field tech for VR:

• Parallax, or the ability to look around objects.
• Truly correct stereo. No matter what part of the scene you are looking at or how you have tilted your head, "Moon" displays correct stereoscopic images. In the case of live-action stereoscopic 360° videos, it only really works if the participant's head is level, and if they are looking at the horizon.

• Seamlessly integrated live-action and film-quality computer graphics. The CG in "Moon" is not bound by the constraints of real-time rendering so it can seamlessly integrate with the live-action elements.
• View-dependent lighting effects like reflections. Light field technology can accurately reproduce shiny or mirror-like real-world objects like the astronaut's helmet in "Moon."
• No stitching artifacts. Because capturing the light field gives an accurate reconstruction of the scene, "Moon" exhibits none of the stitching artifacts common to 360° video.

Light field still has to be considerably improved before it can become the go-to technology for virtual reality, especially when it comes to the size of the capture device and the quantity of data it captures. The music video "Hallelujah!" is the latest example of Lytro's VR light field experience, with a much larger viewing volume than the original "Moon" piece.

Deliverables

Diffusion Standards and Formats

Game engine-based VR experiences are usually packaged into an .exe file, a program containing both the experience itself and the “player” to explore it. Unity


Maxwell Planck, Founder, Oculus Story Studio, Producer, “Dear Angelica,” “Henry” We are creating what we call 360° video adaptations of our experiences. We say they are adaptations because we believe the 360° video medium is very different from a real-time medium in which you can move around as well as look around. The feeling of presence is dramatically different for the visitor and so the creative decisions also need to be different. As we start having more interaction in our narrative experiences, it requires us to make creative decisions on how we would transfer from a real-time experience. For example, when we made “Henry” for the Oculus Rift, we made specific creative decisions about the pacing and design, since you could lean forward and look around corners. While we adapt “Henry” for Gear VR, we have had to make slight adjustments to how much time we give the visitor at the beginning of the experience. Furthermore, we’ve had to adjust the “default position” of the visitor to be different in 360° video vs. real time. Luckily, we do not need to create the experience from scratch for the adaptation. There is a straightforward way to export a 360° video from our real-time engine just as you might put a 360° camera rig in a live-action shoot. The export process (also known as rendering) can take days, depending on the length of the experience, but once the export is complete, we can do post-production just like you can for 360° camera-captured content.

and Unreal can export experiences directly for the Oculus Rift, the HTC Vive or any other HMD through the use of specialized software development kits. A software development kit (or SDK) is a set of software development tools that allows the creation of applications for a certain software package, software framework, hardware platform, computer system, video game console, operating system, or similar development platform. Developing a VR experience for the Oculus Rift and its Touch controllers will require a different SDK than developing for the HTC Vive. The same rule applies to other game engines,

with varying degrees of complexity to make them compatible with the various HMDs. When a VR experience must be compatible with multiple headsets, it can sometimes be a simple re-build with a different SDK, or it may have to be rewritten almost from scratch, making the whole process very time-consuming and expensive. A game engine VR experience can also be compiled into a video file and played on compatible platforms and players as listed in the previous chapter. This technique greatly limits the potential of the piece as it becomes a non-interactive, three degrees of freedom VR film, but the process makes it available to a wider

Figure 3.16  Steam interface (left) and Oculus Store interface (right)

audience and not just the small community who can afford an Oculus Rift or HTC Vive.

Game Engine-Based VR Diffusion Platforms

Game engine-based VR experiences can be purchased and downloaded from a number of platforms, the most well-known being the Oculus Store (for Oculus-

compatible experiences) and Steam (for both Oculus and Vive experiences). Once the participants have created an account for one of the platforms, they can browse and install the VR app they want directly onto their VR-ready computer. Other platforms include Transport and Google’s Daydream.

Chapter

4

VR Headsets and Other Human–VR Interfaces

In this chapter are listed the current means of interfacing with virtual reality. From VR headsets to controllers and haptic vests, the hardware market is booming, and engineers are on the lookout for new, intuitive ways to feel even more present in the virtual world. The complexity and cost of current VR headsets are often listed as potential threats to the success of the VR industry. It is vital to improve the interface technology to make it more affordable and intuitive. This chapter lists the

most important specs and requirements of the current VR headsets and what to look for in the near future.

A Note on Virtual Reality Sickness

What is the cause of virtual reality sickness? Our bodies are trained to detect and react to conflicting information coming from our vision and from the vestibular system, commonly known as the inner ear.

Figure 4.1  The human vestibular system and how it detects head movements – source: https://goo.gl/LTeKiK

The vestibular system is the sensory system that provides the leading contribution to the sense of balance and spatial orientation for the purpose of coordinating movement with balance. The brain uses information from the vestibular system to understand the body's position and acceleration from moment to moment. When this information conflicts directly with the information coming from the visual system, it can create symptoms like nausea and vertigo. A fun yet unverified theory on why this conflict leads to nausea states that a long time ago (a few thousand years), humans were at risk of eating a lethal mushroom. The first symptoms being vertigo and a dysfunctional vestibular system, our bodies learned to automatically "expel" whatever is in the stomach to get rid of the dangerous mushroom, hence the nausea. The danger is long gone, but our bodies remember, and this is why some people are subject to motion sickness. In virtual reality, the field of vision of the participant is completely filled by the VR experience, and when moving in the virtual world while standing/sitting in the real world, the conflict between what we see (we move in VR) and what we feel (we are static in the real world) leads to feelings of vertigo and nausea. This specific type of motion sickness is called virtual reality sickness.

How to avoid virtual reality sickness? Moving the camera (or the FPS character in the case of a game engine-based VR experience) while the participant stays still is the leading cause of virtual reality sickness. Here are a few basic rules to follow to prevent it:

• Absolutely no rotating/panning of the camera. The left–right movements are left to the participants themselves, who can choose to look around or not.
• Absolutely no tilting of the camera. The up–down movements are also left to the participants themselves, who can choose to look around or not.
• Make sure the horizon is level. If the horizon is not flat but the vestibular system of the participant detects it should be, it can cause vertigo and loss of balance.
• If moving the camera, favor slow, motion-controlled movement. The hair cells in the inner ear can only detect acceleration, not constant movement. Acceleration is a change of speed, so if the camera speed is perfectly constant, the chances of motion sickness are reduced.

• Changes of direction are technically a form of acceleration. It is best always to move the camera in a straight line.

In the case of room-scale VR experiences, the participants are free to move physically in the real world and their movements are tracked and matched in the virtual world. If the latency is under a certain threshold, this can be perfectly VR sickness-free as the information coming from both the vestibular and the visual systems matches. For live-action VR, the use of "motion-controlled chairs" inspired by the 4D theaters (with seats programmed to match the exact movement on the screen) is a great avenue to prevent motion sickness. The company Positron offers a motion simulator that matches the camera movement of each VR experience, but this type of solution is unfortunately too big and expensive to become a producer/consumer ("prosumer") product and is for now only available at location-based VR events like festivals or arcades.

Figure 4.2  Positron Voyager chair at the SXSW festival in 2017
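In a game engine, the camera-movement rules above translate into very simple code. Here is a minimal Unity C# sketch (the class name and values are hypothetical) of a camera-rig mover that follows them: it travels in a straight line at a constant speed and never rotates, leaving all rotation to the participant's head tracking.

using UnityEngine;

// Hypothetical comfort-first camera mover: attach to the camera rig (not the
// camera itself). It moves in a straight line at a constant speed and never
// rotates or tilts.
public class ComfortDollyMove : MonoBehaviour
{
    public Vector3 direction = Vector3.forward; // straight line, no changes of direction
    public float metersPerSecond = 1.0f;        // slow, perfectly constant speed

    void Update()
    {
        // Constant velocity: no acceleration for the vestibular system to miss.
        transform.position += direction.normalized * metersPerSecond * Time.deltaTime;
    }
}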

Head-Mounted Displays (HMDs)

A head-mounted display or HMD is a device that is worn on the head or as part of a helmet and has a small display optic in front of the eyes. In Chapter 1, we saw that the history of VR is linked to and dependent on the development of HMD technology. In fact, the current VR craze started in 2012 when Oculus successfully raised over US$2 million on Kickstarter for the making of their VR headset, only to be bought by Facebook for $2 billion shortly afterward. The smartphone technology, as well as the

affordability of high-resolution OLED screens, accelerometers, and gyroscopes finally made the potential of a high-quality, six degrees of freedom VR interface accessible, which kick-started the booming VR industry as we know it today.

Basics

A VR HMD is usually made of two basic components: a high-definition screen to display stereoscopic content and head motion-tracking sensors. These sensors may include gyroscopes, accelerometers, cameras, etc. When the participants move their heads to look in different directions, the sensors send corresponding information to the player, which adjusts the view. Virtual reality headsets have very strict requirements for latency – the time it takes from a change in head position to "seeing" the effect on the display. If the system is too slow to react to head movement, then it can cause the participant to experience virtual reality sickness, as explained above. According to a Valve engineer, the ideal latency would be 7–15 milliseconds. A major component of this latency is the refresh rate of the high-resolution

Figure 4.4

display. A refresh rate of 90Hz and above is required for VR. There are two types of VR headset: mobile VR and high-end “tethered” VR. Mobile VR uses a smartphone as both the screen and the computing power for the headset; it is currently only capable of three degrees of freedom. The most well-known mobile VR

Figure 4.3  OSVR Hacker Dev Kit (2015)

solutions are the Samsung Gear VR, compatible with selected Samsung phones, and the Google Cardboard and Daydream. High-end "tethered" VR uses an external computer and cameras/sensors for positional tracking to allow six degrees of freedom. The most well-known tethered VR headsets are the Oculus Rift and HTC Vive, which use a Windows PC and high-performance graphics cards for computing power, and the PlayStation VR, which uses the PlayStation 4 and subsequent models for computing power. Figure 4.4 lists the current most-used VR headsets. A third category is worth mentioning: stand-alone VR headsets, which will feature internal computing capabilities and inside-out tracking to achieve six degrees of freedom. Many headset manufacturers are currently working on this technology and we can expect a new generation of VR headsets to come to light in the next two or three years.

Tracking

When it comes to presence, after the visuals, accurate sub-millimeter tracking is the next most important feature. This makes the visuals in VR appear as they would in reality, no matter where your head is positioned. Different technologies are used separately or together to track movement with either three or six degrees of freedom. In the first case (pan-roll-tilt), it is called "head tracking"; in the second case (pan, roll, tilt, front/back, left/right, and up/down), it is called "positional tracking."

The IMU, or inertial measurement unit, is an electronic sensor composed of accelerometers, gyroscopes, and magnetometers. It measures a device's velocity, orientation, and gravitational forces. In virtual and augmented reality, an IMU is used to perform rotational tracking for HMDs. It measures the rotational movements of the pitch, yaw, and roll, hence "head tracking." The lack of positional tracking on mobile VR systems (they only do head tracking) is one of the reasons why they lack the heightened sense of presence that high-end VR offers. Good tracking relies on external sensors (like the Oculus Rift's camera), something that is not feasible on a mobile device.
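As an illustration of how an IMU drives head tracking in practice, here is a deliberately simplified Unity C# sketch (hypothetical, and not how any specific headset's SDK works): the phone's gyroscope orientation is fed into the camera every frame, giving three degrees of freedom and nothing more.

using UnityEngine;

// Hypothetical, simplified head-tracking sketch for a phone: the gyroscope's
// attitude drives the camera rotation (three degrees of freedom only, no
// positional tracking). Axis conventions differ between the device and Unity,
// so a real implementation needs a coordinate-conversion step omitted here.
public class GyroHeadTracking : MonoBehaviour
{
    void Start()
    {
        Input.gyro.enabled = true;   // switch the hardware gyroscope on
    }

    void Update()
    {
        // Feed the sensor's orientation straight into the camera.
        transform.localRotation = Input.gyro.attitude;
    }
}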

Accelerometer and Gyroscope

An accelerometer measures proper acceleration in an absolute reference system. For example, an accelerometer at rest on the surface of the Earth will measure an acceleration due to Earth's gravity. In tablet computers, smartphones, and digital cameras, accelerometers are used so that images on screens are always displayed upright. Accelerometers are also used in drones for flight stabilization. Originally, a gyroscope is a spinning wheel or disc in which the axis of rotation is free to assume any orientation by itself. It is the same system that is used in Steadicams. When rotating, the orientation of this axis is unaffected by tilting or rotation of the mounting, according to the conservation of angular momentum. Gyroscopes are therefore useful for measuring or maintaining orientation. The integration of the gyroscope into consumer electronics has allowed for more accurate recognition of movement within a 3D space than the previous lone accelerometer within a number of smartphones. Gyroscopes and accelerometers are combined in smartphones and most VR headsets to obtain more robust direction and motion sensing. Together, an accelerometer and a gyroscope can achieve accurate head tracking.

Figure 4.5  The components of an IMU: accelerometer, gyroscope, and magnetometer

Magnetometers

A magnetometer is a device that measures magnetic fields. It can act as a compass by detecting magnetic North and can always tell which direction it is facing on the surface of the Earth. Magnetometers in smartphones are not used for tracking purposes (they are not accurate enough), but some developers have repurposed the magnetometer for use with the Google Cardboard: where a magnetic ring is slid up and down another magnet, the fluctuation in the field is then registered as a button click.

Laser Sensors

Valve’s Lighthouse positional tracking system and HTC’s controllers for its Vive headset use a laser

Figure 4.6  HTC Vive's base station and its laser emitter

system for positional tracking. It involves two base stations around the room which sweep the area with flashing lasers. The HTC Vive headset and SteamVR controllers are covered in small sensors that detect these lasers as they go by. When a flash occurs, the headset simply starts counting until it "sees" which one of its photosensors gets hit by a laser beam – and uses the relationship between where that photosensor sits on the headset and when the beam hit it to mathematically calculate its exact position relative to the base stations in the room. Hit enough of these photosensors with a laser at the same time, and they form a "pose" – a 3D shape that not only lets you know where the headset is, but the direction it is facing, too. No optical tracking required; it is all about timing. The system cleverly integrates all of these data to determine the rotation of the devices and their position in 3D space. High-speed on-board IMUs in each device are used to aid in tracking. This system is extremely accurate: according to Oliver "Doc-Ok" Kreylos (PhD in Computer Science), with both base stations tracking the Vive headset, the jitter of the system is around 0.3mm. This means that the headset appears to be jumping around in the space of a sphere that is about 0.3mm across in all directions, though in reality the headset is sitting absolutely still. Fortunately, this sub-millimeter jitter is so small that it goes completely unnoticed by our visual system and brain. Although the base stations need to be synced together via Bluetooth (or the included sync cable) and require power, they are not connected to your PC or the HMD. Unlike the camera sensors of the other systems that track markers on the headsets, it is the data collected by the HMD (along with information from the IMUs) that get sent to the PC to be processed.
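The "all about timing" idea can be illustrated with a toy calculation (a C# sketch with assumed numbers, not Valve's actual implementation): since each base station sweeps its laser across the room at a known rate, the delay between the sync flash and the moment a photosensor is hit encodes an angle.

using System;

class LighthouseAngleSketch
{
    static void Main()
    {
        const double sweepsPerSecond = 60.0;      // assumed rotor speed of a base station
        double secondsSinceSyncFlash = 0.002;     // delay measured by one photosensor

        // One full sweep covers 360 degrees, so the delay encodes an angle:
        double angleDegrees = secondsSinceSyncFlash * sweepsPerSecond * 360.0;
        Console.WriteLine(angleDegrees);          // 43.2 degrees from the sweep's start

        // Combining such angles from many photosensors (and both base stations)
        // is what lets the system solve for the headset's pose.
    }
}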

Visible Light

The PlayStation VR headset uses a "visible light" system for positional tracking. The idea is as follows: the participant wears a headset with nine LED lights on it and the PlayStation camera tracks them. The significant difference is that the PlayStation camera has two cameras for stereo depth perception. The camera can only see items within its cone-shaped field of view, and sometimes occlusion or lighting can cause a problem (strong natural lighting in the room, TV screens, and reflective surfaces like mirrors). The PlayStation VR headset only has nine LEDs, but they are shaped in such a way that their orientation can be determined by the camera, and the stereo depth perception can work out their position.

Figure 4.7  The PlayStation VR and its nine LED lights

The PlayStation VR also features an IMU for accurate head tracking.

Infrared Light

A series of 20 infrared LEDs embedded in the Oculus Rift headset are what Oculus calls the Constellation Tracking System. These markers – laid out almost like a constellation – are picked up by the Oculus sensors, which are designed to detect the light of the markers frame by frame. These frames are then processed by Oculus software on your computer to determine where in space you are supposed to be. Just like the visible light tracking system, occlusion can become a big issue for infrared light positional


Figure 4.8  The Oculus Rift’s infrared LEDs (not visible to the naked eye)

Figure 4.9  The Razer Hydra controllers and base station

tracking: if anything gets between too many infrared markers and a Rift sensor, then the infrared markers will be blocked, or occluded, and tracking will become impossible. It is now possible to use up to three cameras/sensors to avoid this issue and make the Oculus “room-scale.” The tracking range of a camera is going to be limited by both its optics and image resolution. There will be a point where the headset is too far away for it to resolve enough detail to track accurately. There is a dead zone of 90cm in front of the camera where it cannot track the HMD, probably because it cannot focus properly on the headset. The headset retains the gyroscope, accelerometer, and magnetometer that track 360° orientation.

Magnetic Tracking

Magnetic tracking relies on measuring the intensity of the magnetic field in various directions. There is typically a base station that generates AC, DC, or pulsed DC excitation. As the distance between the measurement point and base station increases, the strength of the magnetic field decreases. If the measurement point is rotated, the distribution of the magnetic field is changed across the various axes, allowing determination of the orientation as well. Magnetic tracking has been implemented in several products, such as the VR controllers Razer Hydra. Magnetic tracking accuracy can be good in controlled environments (the Hydra specs are 1.0mm positional accuracy and 1° rotational accuracy), but magnetic tracking is subject to interference from conductive materials near emitter or sensor and from magnetic fields generated by other electronic devices.

Optical Tracking

Fiducial markers: A camera tracks markers such as predefined patterns or QR codes. The camera can recognize the existence of this marker and, if multiple markers are placed in known positions, the position and orientation can be calculated. An earlier version of the StarVR headset used this method.

Figure 4.10  An earlier version of the StarVR headset with fiducial markers

Active markers: An active optical system is commonly used for mocap (motion capture). It triangulates positions by illuminating one LED at a time very quickly or multiple LEDs with software to identify them by their relative positions (a group of LEDs for a VR headset, another one for a controller, etc.). This system features a high signal-to-noise ratio, resulting in very low marker jitter and a resulting high measurement resolution (often down to 0.1mm within the calibrated volume). The company PhaseSpace uses active markers for their VR positional tracking technology. The new version of the StarVR headset uses this system in combination with an inertial sensor (IMU).

Figure 4.11  StarVR headset + controller with active markers

Structured Light

Structured light is the process of projecting a known pattern (often grids or horizontal bars) onto a scene.

The way that the light deforms when striking surfaces allows vision systems to calculate the depth and surface information of the objects in the scene, as used in structured light 3D scanners. No VR headset uses this technology, but Google's augmented reality solution "Project Tango" does.

Inside-Out Tracking and Outside-In Tracking

All of the current VR headsets with positional tracking use an “outside-in” tracking method. This is when tracking cameras are placed in the external environment where the tracked device (HMD) is within its view. Outside-in tracking requires complex and often

Figure 4.12  Structured light 3D scanning – photo credit: Gabriel Taubin / Douglas Lanman

expensive equipment composed of the HMD, one or multiple sensors/cameras, a computing station, and connectivity between all the elements. On the other hand, inside-out tracking is when the tracking camera/sensor is placed on the HMD itself, which then detects the environment around itself and its position in real time, like Google's "Project Tango." Most VR headset manufacturers are currently developing inside-out tracking for their HMDs, including Oculus, as announced during the Oculus Connect 3 event in 2016. Meanwhile, outside-in tracking is limited by the area set by the cameras/sensors. Figure 4.13 shows a comparison of the positional tracking "play area" of three of the headsets mentioned above. The StarVR uses active marker technology, which means the play area is scalable depending on the number of cameras used to track the headset, up to 30x30 meters.

Visual Quality

The different display technologies and optics each headset uses greatly impact image quality, as well as

Figure 4.13

the feeling of immersion. There are four important elements when it comes to high-end VR: display resolution, optics quality, refresh rate, and field of view.

Resolution

Both the Rift and the Vive have a resolution of 1080x1200 pixels per eye, while the PlayStation VR is 960x1080 pixels per eye, but its screen's RGB (red, green, blue) stripe matrix is superior to the Samsung PenTile matrix of the other HMD displays. On a PenTile matrix screen, there are fewer subpixels and more of them are green. This is the reason why the PlayStation VR's perceived resolution is similar to that of the Vive or Rift. The StarVR headset, however, has a whopping resolution of 5120x1440 pixels, but its field of view is also much larger (210°), so the perceived resolution per degree is supposedly bigger (but has not been confirmed by the manufacturer). Human vision has an angular resolution of 70 pixels per degree while current VR headsets usually offer around 10 pixels per degree, which explains why people often complain about "seeing the pixels." The


Figure 4.15  HMD lens comparison – source: https://goo.gl/JY1Jwp

Figure 4.14  RGB stripe (left) vs. PenTile (right)

magnification factor due to the fact that a single display is stretched across a wide field of view makes the flaws much more apparent. For mobile VR, the resolution depends on the smartphone being used in the headset. The Pixel XL (compatible with Daydream) has a resolution of 2560x1440 pixels, while the Samsung Galaxy S8 (compatible with Samsung Gear VR) is 2960x1440 pixels. These numbers can be misleading: mobile VR seems to have higher resolution than high-end “tethered” VR headsets. However, they will not necessarily have the better visual presentation. Even the cheapest compatible graphics card inside the Rift’s connected PC will allow for a much richer graphical environment than you will typically find in Gear VR games or apps.
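The pixels-per-degree figure quoted above is easy to estimate for any headset from its per-eye resolution and field of view. A quick back-of-the-envelope sketch in C# (approximate numbers, assumed for illustration):

using System;

class PixelsPerDegreeSketch
{
    static void Main()
    {
        // Roughly: horizontal pixels per eye divided by horizontal field of view.
        double horizontalPixelsPerEye = 1080.0;   // Rift/Vive-class panel
        double horizontalFovDegrees = 100.0;      // approximate headset FoV

        double pixelsPerDegree = horizontalPixelsPerEye / horizontalFovDegrees;
        Console.WriteLine(pixelsPerDegree);       // about 10.8, far below the ~70
                                                  // pixels per degree of human vision
    }
}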

Lenses

The choice of lenses for VR headsets determines the final field of view as well as image quality. The screen being placed very close to the eyes, the focal length must be short to magnify it. Some headset manufacturers like HTC and Oculus have opted for Fresnel lenses. Fresnel lenses reduce the amount of material required compared to a conventional lens by dividing the lens into a set of concentric annular sections. The Oculus Rift's lenses are hybrid Fresnel lenses with very fine ridges combined with a regular convex lens, which reduces the "I can see pixels" problem. The Rift's hybrid lenses also have a larger sweet spot and more consistent focus across their visual field, meaning they are more forgiving about how you position the HMD in front of your eyes. Fresnel lenses can help achieve a wider field of view, but the ridges of the lens can become visible in high-contrast environments, even more so when the headset is not perfectly aligned with the eyes. The PlayStation VR, on the other hand, has opted for a regular lens and is capitalizing on the quality of its screen instead. VR headset lenses often produce an important distortion and chromatic aberration which have to be compensated for in the VR player itself.

Refresh Rate

As stated above, a refresh rate of 90Hz is a requirement for virtual reality in order to limit the effects of virtual reality sickness and offer a heightened sense of presence. Unfortunately, most of the mobile VR headsets are limited to 60Hz. PlayStation VR, on the other hand, offers a 120Hz refresh rate.

Field of View

There are no discernible differences between the horizontal field of view (FoV) of the HTC Vive, the Oculus Rift, the mobile VR headsets, and the PlayStation VR, which are all announced at approximately 100°, while human vision horizontal FoV is (when including peripheral vision) 220°. This means that VR headsets can only cover half of our normal vision, as if we were wearing blinders, which greatly impacts the sense of presence. However, the StarVR and an upcoming Panasonic VR headset use a double-Fresnel design allowing for a super-wide field of view of approximately 210° horizontally.

Figure 4.16  StarVR headset with 210 degree horizontal field of view

Minimum PC/Mobile Requirements

Most of the VR headsets are compatible only with a few specific smartphones in the case of mobile VR and graphics cards/computer power in the case of high-end tethered VR. The only exception is that some VR headsets are compatible with most current smartphones as long as they have an IMU for head tracking – for example, the affordable Google Cardboard.

Figure 4.17  Minimum mobile requirements for mobile VR HMDs

Steve Schklair, Founder, 3ality Technica and 3mersiv
What's limiting a lot of film and television professionals from getting into VR is simply the quality of the image in the mobile viewing device. Mass market displays look terrible. People who have never worn a headset, after they get over the shock of being able to look around and see everything and think it's so cool, their first question is why are the images so fuzzy? It's not fuzzy, there's just lack of resolution. The fact is, you can see all the pixels. That's got to go away.


Figure 4.18  Minimum PC requirements for high-end VR HMDs

Current minimum requirements for VR headsets are shown in Figure 4.18.

Controllers and Accessories

Controllers and accessories can make virtual reality even more immersive by improving the way we interact with the virtual environment. Hand tracking and

the possibility of easily manipulating virtual objects are of the utmost importance. Treadmills, haptic vests, and wireless solutions for the high-end VR market are booming. Figures 4.20 to 4.24 give a short list of controllers and accessories available as of the first quarter of 2017. The list should grow exponentially as new VR headset solutions hit the market.


Figure 4.19

Figure 4.20

Figure 4.21

Figure 4.22


Figure 4.23

Part

II

Virtual Reality Storytelling

Chapter

5

VR: A New Art?

Randal Kleiser, Director, “Grease,” “Defrost”
Shooting in 360° is very specific and is best used when there is a story reason. In the case of my VR series “Defrost,” we want the viewer to feel they are actually experiencing what the character is going through. This means that action can be staged anywhere around the viewer, and this needs to be thought out very carefully to be sure that the staging forwards the story. There are thousands of young film students beginning to figure out VR techniques, and I’m sure they will lead the way on the new language. They have energy and time to experiment. Because this is a new, emerging format, I’m very excited by what I’m learning and figuring out. I haven’t been this energized since I was a student at USC studying film. It’s déjà vu all over again.

How does one know when a technology truly becomes an art, a new language with which we can communicate differently? Some might say it is too early to give virtual reality this honor. Others think it does not matter if it is too early. The latent potential, the promise of something to come, is already there. In this second half of the book, we will focus on the storytelling aspect of VR, not the technical. VR is a new language, but is there somewhere a Rosetta Stone that can help us understand it better? To what other arts can we compare virtual reality in order to decipher it? Cinema? Theater? Dance? In this

Figure 5.1  The Rosetta Stone Discovered in 1799, the Rosetta Stone is an engraved stele dating from Ancient Egypt inscribed with an administrative decree in three different languages: Ancient Greek, Ancient Egyptian hieroglyphs, and Demotic. This stone was the key that unlocked the mysteries of the Egyptian hieroglyphs.

chapter, we will study more well-established arts that can be compared to virtual reality and, based on those understandings, try to define what VR could become once it reaches its full maturity.

Virtual Reality vs. Theater and Dance

Traditional Theater

One of the most common mistakes made by VR filmmakers is to compare VR to theater. The comparison is tempting: in theater there is no frame and the audience is free to look wherever they want. On the other hand, the mise-en-scène for theater is on a stage, facing the sitting spectators. There is a very precise direction of attention (forward), with only very few exceptions (when, for example, the actors enter from the back of the theater, or walk into the audience).

Figure 5.2  The preferred direction of attention in theater

That being said, it is true that the art of theater shares some common characteristics with virtual reality:
• There is no camera, and thus no rectangular framing and no composition within that frame.
• Editing is challenging. In both theater and VR, the audience has to resettle in the new situation after a cut (or a scene change), which can potentially throw them out of the story if it is not well crafted.
• In VR and in theater, the story is usually told with uncut scenes rather than shots. The actors must learn large amounts of dialogue and stay in character even when the spotlight is not on them.
• The audience is free to look at something other than the subject of interest, although this is limited to the stage in theater.
Theater purports to take a whole, detailed chunk of reality and stage it for us. VR goes further, displaying it in a 360° environment, as though we were surrounded by stages on all sides.

Immersive Theater

One particular form of theater is more akin to virtual reality storytelling than others: immersive theater.

Figure 5.3  “Sleep No More,” an immersive theater experience

Immersive theater usually breaks the fourth wall and invites its audience to take a more active part in the story being told. It can be as simple as having an audience member holding a prop, or having a simple interaction with the performers. It can also be more engaging, such as walking freely in a space filled with actors (like the traditional Halloween haunted houses). The most striking example of immersive theater is “Sleep No More” by the British theater company

Punchdrunk. Audience members are invited to wear masks and are free to walk at their own pace through a variety of staged situations. “Sleep No More” is based on William Shakespeare’s Macbeth, but has no dialogue: it is up to the audience to understand and immerse themselves in the story that unravels around them.

Eve Cohen, Director of Photography, “The Visitor VR” If I have to give one piece of advice to aspiring VR directors of photography, I think I’d tell them to watch a ton of live-action VR and go to some immersive theater experiences. And instead of talking to the director about the frames, they should talk to their director about perspective and viewpoints into the world that they’re creating versus something that has limitations to it. The view is limitless in VR so that’s the biggest shift I think for DPs [directors of photography] is understanding that there isn’t a frame. You don’t have control over a frame but you have control over the vantage point. I see some great pieces done where somebody makes a frame. They still can’t lose this idea of a frame, so inside of a VR space they create a frame of a window or something where something happens. You start there and then I think you kind of grow out of that space and you’re like, “Okay. Figured that out. How do I make it a little bit bigger? How do I broaden it?” And then immersive theater is really a great connection point to understanding virtual reality.

Escape Rooms

Immersive theater is also similar to “escape rooms,” which have been gaining popularity as a recreational activity. An escape room is an interactive game where the participants are locked in a room and must find their way out using clues to solve riddles and mysteries. Escape rooms sometimes feature performers who interact with the players, either to help them or enhance the immersion.

Live-Action Role-Playing Games

Live-action role-playing games (LARPs) are a blend of role-play gaming and theater. LARPs are similar to escape rooms, but each participant is assigned a more-or-less defined character and is invited to fulfill various objectives. Immersive theater, escape rooms, and LARPs are similar to the language of virtual reality as they all provide a strong sense of presence, immersion, and agency. Those terms will be defined in the following chapter.

Dance

The art of dance is similar to theater when it comes to comparing it to virtual reality: the audience is free to look wherever they want, but the story is usually playing out on a stage in front of them and there is a clear separation between the participants (dancers) and the spectators.

Jessica Kantor, VR Director, “Ashes”
Virtual reality lends itself beautifully as an extension of our current arts, most specifically theater and dance. As with capturing any art form, simply sticking a camera in the space without adapting for the medium inevitably falls flat. But when you adapt the arts, especially dance for 360° video, something incredible happens. Dance captures humanity in a very pure way. At its most simple, dance is a sequence of movement in space. With VR, the participant is experiencing the same space in which the dance is moving, creating an intimate relationship between the participant and the dancer(s). Each gesture moves through the same space and moves what feels like the same universe the participant is experiencing. The proximity of the dance to the participant can raise their heartbeat, and can propel the participant into movement. In each of these examples the audience/participant becomes a player. Even if they are passive, acting simply as a voyeur, they have a place in the virtual world. In creating these experiences the designer/director needs to take all the players into account, using sound, light, and action to guide the participant along the journey. They can be completely emotionally connected without having to think about their actions or how to move a story forward, seamlessly dancing with the dancer, following the actors through the experience as it unfolds around them.

Figure 5.4  “Ashes” directed by Jessica Kantor

Virtual Reality vs. Sculpture

There is an interesting similarity between virtual reality and how people generally look at sculpture: people walk around a sculpture, looking at it from every angle until they find the best position from which to admire it or take a photo. If you take a moment to study that process, you will soon realize that most people stop at the same place to take their photo. Despite the fact that a “sculpture in the round” is designed to be seen from every angle, there is often one particular angle that is more striking than the others. Figure 5.5 shows the famous “Venus de Milo,” which you have probably seen countless times. Now ask yourself: do you remember seeing the profile or the back of this statue, or are all the published photos you remember taken from the same direction? There is a strong perspective that dominates. This phenomenon is very interesting for virtual reality and, more specifically, “room-scale” VR where the spectator is free to move physically in the story world. As a storyteller, you lose the ability to frame and contain the action when doing virtual reality, but you can propose a strong perspective “point of interest” that will encourage the audience to follow your guidance organically.

Figure 5.5  “Venus de Milo” statue

Virtual Reality vs. Cinema

This is a big one. You have probably heard many times that “VR is the future of cinema,” but that would be just like saying that photography was the future of painting. When a new art form is created, it does not necessarily mean that it will replace another one; rather, it can spur innovation. For example, when the painter Paul Delaroche discovered Louis Daguerre’s invention of photography in 1839, he declared, “From now on, painting is dead.” However, rather than retreat into the shadows, painters found themselves freed. No longer obligated to copy reality, painters turned to non-representative forms and thus the medium acquired a new creative dimension. The incredible number of styles and the richness of 19th- and 20th-century painting prove this.

In 1895, cinema was invented and we were finally able to capture movement. As the years passed, the technique was perfected, and then sound, color, and the third dimension were added in an increasingly frantic race for the most perfect reproduction of reality. Virtual reality and holography are the next logical steps. The biggest difference between cinema and virtual reality is the notion of the frame. The frame is the limit between what is seen and what is not seen. In cinema, the director sets the frame and the viewer cannot change it. In virtual reality, the frame is now set by the FoV of the headset being used and by the viewers themselves. The storyteller cannot impose a frame on the viewer, but can influence this choice, as we will see in the next chapter. In VR, the storyteller offers a “space” which is set and staged and lets the audience do the rest.

Anthony Batt, Co-Founder and Executive Vice President, Wevr
People are often comparing VR to Hollywood movies. It’s a likely comparison, but you might as well compare VR to plastic bottle manufacturing. You only compare Hollywood to VR because there’s a visual image and there is a story, but they’re really different in everything. The three-act structure probably doesn’t exist. We don’t really want to use Hollywood’s framework to describe what simulation-based stories are going to be. I think what is wise is to say that this is a new medium and the creators have to find the best way to communicate what they want to do, and that certainly doesn’t have to follow a three-act story. I believe a museum curator does the same thing. They actually decorate, and design, and curate a show in which the consumer can go through there and enjoy themselves, but they don’t think about necessarily a three-act structure. They just think about, “We’re going to create a space that surrounds you and hopefully delights you, and you’ll have a really lovely experience.” So we really don’t think about Hollywood that much. In fact, we try to actually stay away from it because it tends to bring in a language that has an overreaching effect on everything. What we try and do is say, “This is new.” Then, we look for places to actually borrow stuff from. But we don’t really sit in the shadow of Hollywood very often. VR is more like a simulation. We can give you a feeling without us giving a story – that’s what a simulation is supposed to do, in our opinion. We look for those but we don’t exclusively focus on that. I think that the harder part for people today is to actually work outside of the shadows of old media that influence them. Look at “Gnomes & Goblins” by Jon Favreau. He’s a unicorn in the sense that he is Hollywood at the highest and finest and smartest levels but also, at the same time, he recognizes this is a new medium and he has to think about this thing differently and approach it differently.


Figure 5.6  VR space vs. cinema frame

Paul Raphaël, Co-Founder/Creative Director, Felix & Paul Studios Prior to getting into VR, we were doing these 3D films that were mostly documentaries, and then a couple of fictions. The way we were doing 3D was very VR-like, many of the same principles like long takes, static cameras, perspectives that gave the viewer the impression of being there. Our first VR film was an extension, a continuation of the stuff that we were doing before. “Strangers with Patrick Watson” was a first step into the VR documentary world, and then we shot “Herders” in Mongolia, and then expanded into the “Nomads” series. With the first three episodes, we really wanted to have as radical environments as possible because the environment is really a character in VR. When you’re doing fiction, whether it’s in cinema or in theater, you’re faking things, you’re fooling the viewer into believing something that isn’t true. Depending on the medium, there are better and worse ways of doing that. We have figured out ways of suspending this disbelief in cinema over the course of the last 100 years. We need to figure that out for VR. In documentaries, it’s easier: you need to find a good and interesting subject; you need to find the right position for the camera/viewer, choose the right moment, and edit together these different shots. But the raw material is already real. Of course you need to be careful not to ruin it, as your presence as a filmmaker and the camera can crush reality.

Virtual Reality vs. Gaming

It is no secret that a big part of the VR market is video games. The most used VR platforms to this date are Steam and PlayStation Home. A big percentage of the available VR content consists of games, and most of the non-game VR experiences are built using game engines such as Unity and Unreal. Most non-VR games are actually VR-ready: their worlds are built in three dimensions and the player can choose to look around using traditional controllers. The use of an HMD is merely an interface update. In video games, just as in interactive VR, the player has agency, the power to control and have an impact on the surrounding environment. Gamers usually have a much easier time learning how to use the VR HMDs and controllers, as well as the “rules” of each VR experience. In virtual reality, just as in gaming, the viewer must take a more active role and “look for” the story rather than being served with it.

Conclusion

Virtual reality has some elements in common with other art forms, yet it is a new, unique way to tell stories and should be considered as such. The position of the viewer/spectator is completely redefined by the

fact that he/she is wearing an HMD and not facing a stage/screen. As with established art forms, the participants can look around in search of the story being told, and sometimes they can interact with it.

Figure 5.7 Virtual Reality is the missing link between Gaming and Cinema/Theater

The “killer app” for virtual reality is going to be a combination of cinema, gaming, and interactive theater. We all talk about how VR is a new medium, and it is. The last new medium we had was video games, and for the last 20 years, people have been trying to figure out how to tell good stories in video games. How do you have a satisfying narrative while still giving the viewer some control over the situation? A lot of writers, producers, and directors in Hollywood have studied games, but most of them have looked at them as a medium beneath them. Now we have virtual reality, where a lot of these same mechanics come into play and where you need to give some agency and control over your world.

Robyn Tong Gray, Chief Content Officer and Co-Founder, Otherworld Interactive Games are to VR a little bit what theater was to movies. In some ways, VR is like the next evolution of games and we can begin to pull the basics of VR language from them: indirect control, environmental, and interactive design, etc. VR will never replace games – they’re not a one or the other sort of thing, and they’ll continue to fulfill different needs and functions – but games are a great jumping-off point from which we can start figuring out the basics of VR and evolving from there. Games have spent years figuring out indirect control. A good game (in my opinion, since I like my games heavy on the narrative and light on the win/lose states) allows players to sink into the story and guides their actions discreetly. It doesn’t make the player feel like they’re being required to do something; it sets up expectations and allows players free will to play within those expectations. Games are a non-linear medium compared to film. They’ve been working with 360° for the last couple of decades and so they’ve evolved to use visual, audio, and narrative cues to get players through the experience. More traditional AAA titles [those with a high development budget and extensive promotion] may rely on cut scenes to convey story, but the recent flood of indie games has seen a lot of experimentation with cut scene alternatives. Games have also long experimented with environmental storytelling which is a great use of VR. In VR players gain a sense of the space around them. Even AAA titles have made a habit of embedding mini narratives in their environments.


Grant Anderson, VR Executive Producer
There’s going to be all kinds of VR experiences. Some of the most interesting ones are the full room-scale VR where you’re moving around and you’re picking up objects and interacting with things. You’re creating your own story, just like in real life – your own narrative that you share with other people. There’s still going to be people sitting in 360° videos and just watching. But there’s also going to be hybrids, and that’s what I’m interested in – the mix of cinema, gaming, and theater, where people won’t necessarily even know that they’re making a choice and it will be very seamless.

Chapter 6
VR as a Storytelling Tool

Ever since our ancestors gathered around campfires and told tales, storytelling has been an important part of our lives. Virtual reality storytelling will become more and more important as this technology progresses and HMDs reach the general public. In this chapter, we will look at the narrative aspect of virtual reality.

Immersion, Presence and Embodiment

The first question storytellers ask themselves when starting a virtual reality project is “Why VR?” Why use this specific medium and not another to tell this story? Virtual reality offers three unique elements to the narrative, no matter if cinematic or interactive VR is used: immersion, presence, and embodiment. These special attributes are the reason one would choose virtual reality over any other art form to tell a story.

Immersion

Virtual reality allows the audience to be surrounded by images and sound that create a realistic environment. When done right, VR experiences surround the player’s senses so that the environment is believable and responsive.

Presence

Presence is the feeling of actually existing within an environment. It is indeed at the core of virtual reality and is truly achieved when the technology fades away and the player reacts to virtual stimuli as they would in a non-virtual world. Presence has often been conceived as a sign of potential positive transfer of skills or knowledge learned in a virtual environment to the

real world. When achieved, the participants are likely to be able to transfer the skills learned in the virtual environment to a real context.

Katy Newton and Karin Soukup, Filmmakers and Experience Designers VR promises to create virtual worlds so real that the audience feels as if they are physically present in a digital space. That sensation of “being there” is called presence. Presence is partly achieved through the technology  –  the processing power, the graphics, the display, but it’s also achieved through the consistency and richness of the storyworlds we create. It’s up to us to convince the audience to suspend disbelief enough to feel present in mind, body, and soul. Excerpt from “The Storyteller’s Guide to the Virtual Reality Audience,” Medium, https://goo.gl/RE1sZL

Embodiment

Embodiment is the perception that you are physically interacting with a virtual environment. It is most often achieved through the use of an avatar. An avatar creates continuity for the player by providing a virtual body that aligns with the participant’s real body, uses precise one-to-one tracking of his/her every move, and emotes just as realistically as he/she does. Even without a virtual body or avatar, the sense of embodiment can be created if the participants understand “who” they are in the scene. Embodiment can be separated into three subcomponents: the sense of self-location, the sense of agency, and the sense of body

ownership. Embodiment is the most powerful contribution of virtual reality as it tricks our senses into thinking we are physically present in the virtual environment, not just a mental projection. This incredible aspect of VR explains how some scientists are

currently using it to help treat phantom pain in amputees. It is a powerful tool for empathy, although it is difficult to achieve in cinematic VR with no game engine and motion trackers.

Robyn Tong Gray, Chief Content Officer and Co-Founder, Otherworld Interactive I think everyone is naturally excited by the ability of VR to create presence, but I also think it’s time to start moving beyond it and seeing what’s next. I like talking about the idea of emotional presence in VR. It’s the idea of, okay, it’s cool you’re hanging out on this VR beach, but now what? What’s the intriguing hook that makes someone want to be in this world you’ve created? As for embodiment, I can’t wait for it. Right now the technology demands you get creative with any attempts at embodied characters. I think it works but with pretty specific scenarios. VR tech is still incredibly new and, in a few years, we’ll look back and think about how primitive our current headsets are. The project “Café Âme” falls into this weird in-between area – it’s not really a game or a film; it’s mostly a place and a feeling. I wanted to create a place that I’d like to be in, somewhere that felt nostalgic and melancholy and peaceful. I think part of the beauty of VR that people got over way too quickly and massively underestimate is its power to bring you somewhere new and different and to just be in that place. The premise that I imagined for “Café Âme” was this empty, rainy Parisian café on a lonely night and, for whatever reason, I thought of this idea of this robotic shopkeeper hanging out alone with an undrinkable (because robots can’t drink) cup of coffee.

Figure 6.1  “Café Âme” – © Otherworld Creative


It’s a really simple experience with a tiny bit of narrative evolving on the table in front of you. But mostly it’s about being dropped into the body of this robot and being in this place and, as a non-VR experience, it just wouldn’t have the same impact. In VR, you get the sense of space and the sounds of the rain muffled by the window. You get to stare up and gauge just how high the slowly revolving ceiling fan is above you. And most of all you get this magical moment when you decide to glance out the window and you realize there’s someone staring back at you. It’s you! But it’s not you. The moment when our players spot their new identity in the window is always magical. There’s always some kind of reaction – usually a smile or a laugh. Because it’s a window reflection rather than a mirror it’s a subtle confrontation of this new identity that, I think, as a result is more intriguing. We’ve hooked the robot’s body up to the HMD’s depth tracking so if players shift from side to side or stand up a little, the robot’s body follows along. A lot of the magic of the experience lies in how unexpected it is with current technology. I’m not sure we’d be able to keep up the magic in a long-form experience or if people get used to this particular rig. On the other hand, we’ve had people tell us stories about brewing themselves a real cup of coffee and just sitting in the experience for almost an hour! And it’s really not an hour-long experience. Not even close.

Figure 6.2  The three main contributions of virtual reality

Figure 6.2 summarizes how the senses of immersion, presence, and embodiment are all steps leading to a full simulation of reality, embodiment being one of the most difficult to achieve.

Why VR?

Before getting started on a virtual reality project, storytellers should ask themselves whether their story can be enhanced by the senses of immersion, presence, and embodiment. Is the physical location of the story of particular importance and will being immersed in it improve the experience tremendously? Will my story benefit from an experiential setting where the participants will feel present, in the moment? Do I need my audience to feel physically there (and potentially endangered)? If the answer is yes, then virtual reality might be a good medium for the project.

Katy Newton and Karin Soukup’s study highlights a fundamental aspect of virtual reality storytelling: when surrounded by a 360º sphere of potential information, participants are less likely to catch the subtleties of the story, especially when those subtleties are delivered through audio. Instead, they find themselves more connected to the characters’ emotions and the tone of the narration. The sense of presence increases empathy compared to traditional media, where the distance between the viewer and the rectangular screen creates an emotional safety net. Presence is a valuable tool for storytellers but also raises the question of responsibility and ethics when it comes to certain extreme VR experiences.

Jean-Pascal Beaudoin, Co-Founder, Headspace Studio
It is not only important to distinguish immersion from presence (form), but also to separate it from emotion (content). Immersion is what the technology delivers from an objective point of view. Presence is a “response” – it is not a given to the medium. It is something we can nurture and deepen, and sound is one of its key components. I believe that creating the conditions for presence to arise is a craft, almost an art. In that regard, the context of the viewer’s role in the experience and whether it is fiction or non-fiction fundamentally influence the way I will approach the sound design. My first ever shot at creating a 360° video VR experience with spatial audio back in 2014 – “Strangers with Patrick Watson,” with directors Félix Lajeunesse and Paul Raphaël (before there was the Studios suffix) – was particularly revealing. It was mid-January and perhaps I simply got inspired by that day’s unusually warm temperature for that time of year. A couple of takes into the shoot, I had the idea of suggesting that we open one of the windows in Patrick’s studio that overlooks a landmark boulevard of Montréal, to let the city’s soundscape enter the scene.

Figure 6.3  “Strangers with Patrick Watson” ©Felix & Paul Studios

I’m glad I followed my instincts because that is one thing people often mention when explaining how immersed or present they felt after trying the experience. What seemed like an almost trivial decision at the time has in retrospect made me realize the presence-inducing power and importance of creating depth in VR in order to heighten the viewer’s sense of presence and emotional impact.

Katy Newton and Karin Soukup, Filmmakers and Experience Designers Over ten weeks, we conducted sets of experiments with over 40 participants and interviewed experts from multiple perspectives, from design-thinking, theater, gaming, architecture, journalism, science, and film. To explore the audience’s experience in VR, we partnered with Stanford’s d.school Media Experiments, the National Film Board of Canada, and independent filmmaker Paisley Smith. To anchor the testing, we used scenes and locations from Paisley Smith’s VR documentary, “Taro’s World.” The documentary explores the death of her Japanese exchange student brother, Taro, and the impact his suicide had on the people around him. “Taro’s World” has been released in 2016 for mobile VR  –  Google Cardboard and the Samsung Gear VR. We mimicked the constraints of VR technology, restricting our participants’ movements and interactions to match the affordances of Google Cardboard. We created “magic goggles” (actually


made of plastic, paper, tape, and a front-facing camera) that limited the audience’s peripheral view while simultaneously recording their head movements. When participants wore the magic goggles, their head movements replicated those of someone in a mobile VR headset, compelling them to “stitch” the scenes of the 360° story-world together themselves.

In one of our tests, participants were placed in the center of a room simulating Taro’s bedroom. While wearing headphones with 360° sound, they watched a scene play out in the room. The participants were divided into three groups with three varying degrees of restriction on what they could see:

Figure 6.4  Photo: Karin Soukup

Audiences with a 90° range of vision could recall nearly every event in the story, whether the information was physically in the room or relayed through the audio. However, audiences in the 360° view recalled fewer details of the story and the environment. For example, in the 90° scene, all of the participants in the debriefing referred to Taro by name. In the 180° scene, Taro was sometimes referred to by name, but was more often given descriptors like “young man.” By the 360° scene, few remembered Taro’s name; instead they referred to him offhandedly as “the kid at the computer.” Much of the story information, including character names, was delivered through the audio. The fact that participants in the 360° scene couldn’t remember Taro’s name (among other story details) suggests that they were focusing less on the audio in 360° than in the 180° or 90° scenes. Perhaps there was too much information in 360° for the audience to process. When telling a story in 360°, we need to consider how to combine audio and visual elements without overloading the audience.

BUT: Audiences in the 360° scene were more aware of the tone of the piece, which they attributed to the pacing and shifts in the lighting. They were so attuned to the tone that when asked who was in control of the story, they described the storyteller as the mise-en-scene itself, or used some abstraction, like the storyteller was the “rhythm” of the scene. Audiences in the 360° scene were also more attuned to Taro’s feelings. They could clearly and unequivocally identify that Taro was feeling “lonely,” and sometimes felt that Taro’s feelings were reflected in the mise-en-scene itself. Whereas those in the 90° and 180° scenes really struggled to characterize Taro, claiming that they did not have enough information to draw conclusions about him.

There’s something interesting happening here. It may be that when you feel present in an experience, you are more likely to rely on abstractions and pick up on feelings, and when you are in “detective mode” you are more likely to pick up on story details, but have difficulty accessing feelings. Perhaps being present and retaining story details are fundamentally at odds. With each new bit of information you add to the VR storytelling experience, you should ask yourself, “Does this information lend to feeling present, or will it send the audience into their heads – and which mode do I want them in right now?” Excerpt from “The Storyteller’s Guide to the Virtual Reality Audience,” Medium, https://goo.gl/RE1sZL


Shari Frilot, Founder/Chief Curator of New Frontier at Sundance Film Festival In 2010 a journalist, Nonny de la Peña, was working in “Second Life” and creating documentaries that were based in real-life recordings in “Second Life.” It was very interesting and I started to follow her. In 2011 she reached out and she invited me to do a studio visit with her. I went to her studio and they put a helmet on me, some earphones. The room was full of trackers and the helmet was connected to a big backpack that was quite heavy. There was a couple of people behind me with a cord following me around and I experienced virtual reality for the first time. It was her piece called “Hunger in Los Angeles.” I was inside of a very rudimentary animated world that was created in Unity. Characters that were rendered were definitely skirting the uncanny valley but I noticed that after three minutes inside the experience, I became one of those uncanny valley characters and all of a sudden it was a normal thing to be in there and I was walking around and characters were reacting to me. Suddenly, one of the characters passes out right in front of me. It was very emotional. I got down on my knees and the ambulance came. I had never experienced anything like that, ever. So that was the first time I invited a VR piece to New Frontier at Sundance. It was incredible to watch audiences respond to this work, coming out of the headset in tears, time and time again people falling to their knees, trying to help the character in distress in the experience. The look of awe and transcendence that people had on their face, that was unforgettable.

“Where Am I?” The Importance of Location

The first few seconds a participant spends in VR are usually used to look around frantically, trying to understand where he/she has been transported to. The fact that the “where” comes before the “why” and “what” is another important difference between the VR and the cinema languages. In virtual reality, the location (whether real or virtual) is of foremost importance. When scouting for location or designing the environment in a game engine, VR creators must always keep in mind how relevant it is for the audience. A cinematic VR experience named “The Recruit VR” is using this peculiarity in an interesting way:

we first find ourselves sitting in an interrogation room with a hostile guard. After a moment, the interrogator “loads” different environments around us, including the top of a high-rise. In virtual reality, the environment tells the story just as another character might. This requires a special attention to detail. In “Blocked In,” one of the first experiences developed for the Oculus DK1 in 2013, the participants find themselves in a room filled with computers and props from the 1980s. Outside the window, Tetris blocks are slowly falling. There is no story per se, but the room is full of intriguing clues, such as a calendar on the wall with a circled date: 1984. After doing some research, the participants can figure out that they are standing in a reproduction of the

Figure 6.5  “The Recruit” – cinematic VR experience produced by MetaverseVR starring Daisy Betts


Figure 6.6  “Blocked In” VR diorama by Daniël Ernst

workspace of Alexey Pajitnov, the Russian scientist who created Tetris and released it on June 6, 1984. In this narrative, the location is the storyteller and the story itself. The creator of “Blocked In,” Daniël Ernst, uses the term “diorama” to describe his VR experiences. On his website he explains: “Each diorama is a fantastical hand-painted environment in which interaction is used to tell a story and convey a sense of wonder” (www.theshoeboxdiorama.com). Ernst’s dioramas are an interesting way of telling stories in virtual reality as they use the importance of the location and the sense of presence at their best. Another way of using the 360° space creatively is to separate it into different storylines or time frames. In the VR film “Ashes” by Jessica Kantor, three stories happen simultaneously, one on the viewer’s left (two dancers fall in love), one in front of the viewer (the male dancer drowns in the ocean), and one on the viewer’s right (the female dancer mourns her loss). Each of these stories explores memories in the same space as the piece unfolds. As a viewer, you lose the sense that these events happened linearly as you experience them simultaneously.

Jessica Kantor, VR Director, “Ashes” The story itself could be told in a thousand different ways across many different media, but the way I constructed the story was specifically for virtual reality. Most people who watched the piece found themselves looking around. It was exciting to discover what would compel [the viewer] to look right or left. In the design of the piece, the titles moved across the field of view driving the viewer to explore the space. Once they did that, they moved between the stories at their own pace, in some ways editing the piece how they saw fit. Some people wanted to watch it twice while others were overwhelmed with a fear of missing out. Very subtly the solo cello, which is the key music for the work, is panning through space while the ocean waves are always staying to the relative front. This subtle addition helped guide the viewers to where I (as the designer/director) was hoping the participant would look. Sometimes that worked and other times it failed completely. But I still felt they received a unique experience no matter how the participant experiences “Ashes.”

The Camera’s Position

In creating a sense of presence, it is important to choose the camera’s position (hence the participants’ position) with great care. For example, if shy participants find themselves on stage facing an audience, they will most likely feel discomfort and anxiety. If you want to prevent this discomfort, make sure your participants understand the context of the story and the reason for their presence. To find the best camera position, the director can look at a couple of rehearsals when setting up a scene, and study where he/she would naturally position him/herself to achieve the desired effect. The camera can then be placed at this exact position.

World Building

In the moment we experience a work of art, we experience reality through the perspective of the artist. When Cartier-Bresson took a photograph, he was rebuilding our world through his lens. Whether it is a Renaissance portrait, a daytime soap opera, or a child’s doodle, a work of art defines a new world from its perspective. So, as an art form, all VR stories are cases of world building. How then do we create stories that invite behavior? As soon as you give participants the freedom they have with their own bodies, let alone a modified body, you will have behavior. What makes it worthwhile? Having agency, having an impact on the virtual world, validation that your choices make a

Katy Newton and Karin Soukup, Filmmakers and Experience Designers In another one of our tests, some participants were seated in the front row of a classroom. While the goal of the scene was to observe Taro, without any prompting, the participants felt the need to pay attention to the teacher and to decipher a note other students were passing. Based only on the environment and their position within it, participants took on the social script of “student.” When placed at the front of the classroom, they either took on the role of “lecturer” or expressed anxiety, as if they were actually standing in the front of the class. One participant was so uncomfortable she asked to be moved: “Can I stand in the back against the wall?” Being at the front prompted questions like “Can the students see me?” . . . and “Can I stand in the back against the wall?” In the experiment set in Taro’s bedroom, participants stood in the middle of the room, replicating the position of a typical 360° camera. Unlike the classroom, which has strong behaviors associated with it, standing in the middle of a stranger’s bedroom prompts few appropriate social scripts. However, multiple participants ascribed themselves roles, describing themselves as a “voyeur” or like a “fly on the wall.” Others felt vulnerable and ill-at-ease standing in the middle of the room. We suspect this is partially attributed to not knowing how to act in this setting, a setting that is typically private, and partly to feeling physically exposed in the middle of the room. To feel bodily present, these tests suggest, the audience should understand who they are in the scene (even if who they are is a “fly on the wall”) as much as where they are. Excerpt from “The Storyteller’s Guide to the Virtual Reality Audience,” Medium, https://goo.gl/RE1sZL

Figure 6.7  Here we see an audience member in three different positions in a classroom scene with Taro. Photos: Alexandra Garcia

difference, makes that behavior worthwhile. At the barest level of behavior, the viewer turning their head in the scene, the pleasure comes from constructing a sense of the virtual world.

David Liu, Creative Director of Virtual Reality, Viacom NEXT This is going to be an unpopular opinion, but I’m convinced that virtual reality is a spatial medium first. While we can and have told stories with it, the affordances of this medium seem to suggest that it is capable of a whole lot more. I get it. Many of us who are now working in virtual reality are filmmakers where stories took precedence, where the epitome of a pleasurable experience that we can grant to audiences is to tell a “good story.” As such it feels like it’s either we forget to or are afraid to ask ourselves what else this medium can do for entertainment. What we’ve discovered is that virtual reality does an extremely good job of transporting guests to spaces, spaces where you have full control over what they see, hear, and feel. Spaces with unlimited potential, that can shift and react to you. Spaces that you can potentially share with friends. Why, then, are we so fixated with telling stories? Surely the potential of delivering entertainment in ways no other prior medium can do supersedes the need to “tell a story” as we know it. Imagine this: if stories are a linear chain of moments driven by time, consider a form of “story” being instead a chain of moments driven by both time and intrinsically motivated agency. I would rather think of content creators in this space as narrative architects. Our job isn’t just to tell a story, but to craft a world of narrative potential. I often come back to this example – no one created a playground with a specific story in mind, but in many a playground great stories have been created and shared.

“Who Am I?”

After answering the “Where am I?” question, VR participants are faced with a dilemma: they wonder if they are present and part of the story being told,

or if they are invisible and unacknowledged. In traditional filmmaking, we would ask, is it first person or third person? In cinema, there is no question – the audience is invisible, unless the fourth wall is clearly being broken. However, in virtual reality, there is no fourth wall as the participants are surrounded by the story world. This is perhaps one of the most important differences between traditional storytelling and virtual reality. When reading a book, watching a movie or a play, one almost never questions his/her involvement in the story. We know we are outsiders. This is the reason why moments when the audience is directly acknowledged are extremely rare and powerful. For example, in the Netflix series “House of Cards,” the main character, Frank Underwood, occasionally addresses the camera directly, creating a powerful moment that sticks in the audience’s mind. In theater, this device is an “aside,” as if talking to the audience is a parenthetical phrase, rather than the main point. In Shakespeare’s Henry V, the opening monologue is an extended apologia for not showing the tremendous battles that are going on in between the play’s scenes.

Figure 6.8  The 4th wall

In virtual reality, things are completely different. The participants have a central position and the story unravels around them. There is no “fourth wall” per se, and therefore the participants are usually expecting to be acknowledged. This is the reason why the “first-person” format is widely used in VR. There are different degrees of agency within the “first-person” realm. The stronger one is when the


Figure 6.9  In VR, there is no 4th wall per se

participant is playing an active and important role in the story. This is the case for most VR games. Then there is “audience-aware” storytelling: the participants are acknowledged but cannot interact. For example, in Randal Kleiser’s “Defrost,” the various characters look at the participant straight in the eyes, but the

Figure 6.10 “Henry” ©Oculus Story Studio

story unravels the same no matter what the participant does. Sometimes, the participants are not acknowledged but can influence the outcome of the experience, such as with branching storytelling (when participants’ choices determine which storyline will play from a set of predetermined options). It is vital to establish the “rules of the game” early when designing a VR experience, which means deciding whether it will be participant-driven or storyteller-driven. Going from one to the other can be disturbing and throw the participants off, but when carefully planned and executed, it can also be a powerful storytelling tool. For example, there is a switch from observatory to participatory in Oculus Story Studio’s “Henry”: at some point, the lead character looks at the participants straight in the eyes after ignoring them for most of the story (thanks to the use of a game engine which allows the experience to determine the spatial location of the participant and adjust the character’s performance accordingly). The participants thought they were “flies on the wall,” an invisible presence, but suddenly realize that Henry can see them, which engenders a greater sense of empathy for him. Does it mean that participants sometimes have agency, and at other times do not? For some VR creators,


Paul Raphaël, Co-Founder/Creative Director, Felix & Paul Studios I think you can’t not be first-person in VR, at least I haven’t figured out how that’d work in the sense that even if a camera was floating around and you were clearly not a character in the scene, you’d still be first-person in the sense that you are not watching a frame that’s representing a reality that’s completely abstracted from yours. VR is inherently first-person regardless how you decide to tell the story. So far we’ve been leaning into that in making the viewer a plausible presence in the room. In our VR film “Through the Ages” about America’s national parks, no one ever acknowledges your presence. However, a lot of the shots are still very much akin to the shots we’d do in projects like “Nomads” where you are just standing in nature or sitting in nature. In fact, almost everything we’ve ever done has been from sitting height. That’s one rule that we have not really broken yet.

Figure 6.11  “Nomads” ©Felix & Paul Studios

virtual reality is agency. Even if the participants cannot interact with the story directly, they still choose where to look. By doing so, they mentally piece together elements of the story and create meaning of their own. No two participants will see the same things in the same order; no two experiences will be the same. This delicate question of “Who am I?” is a fascinating aspect of VR storytelling that should not be underestimated. The balance between the observatory and the participatory is one of the reasons why virtual reality can be considered the missing link between traditional filmmaking and gaming.
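Under the hood, the branching storytelling mentioned above can be pictured as a simple graph of scenes. The sketch below is a generic Python illustration, not code from any particular experience or engine; the scene names and choice labels are invented for the example.

```python
# A branching narrative as a dictionary: each scene lists the choices it offers
# and the scene each choice leads to. Scenes with no choices end the experience.
STORY = {
    "intro":         {"follow the guide": "market", "stay on the rooftop": "rooftop"},
    "market":        {"enter the workshop": "workshop", "turn back": "rooftop"},
    "rooftop":       {"watch the sunset": "ending_quiet"},
    "workshop":      {"pick up the lantern": "ending_reveal"},
    "ending_quiet":  {},
    "ending_reveal": {},
}

def play(scene="intro", pick=None):
    """Walk the graph. `pick` chooses among the offered branches; a real
    experience would read this from gaze, a controller, or a timer instead."""
    path = [scene]
    while STORY[scene]:
        choices = list(STORY[scene])
        choice = pick(choices) if pick else choices[0]
        scene = STORY[scene][choice]
        path.append(scene)
    return path

# Two participants making different choices see different storylines.
print(play(pick=lambda c: c[0]))   # always the first option
print(play(pick=lambda c: c[-1]))  # always the last option
```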

The Point of Interest

After wondering where they are and what their role is in the experience, participants can then focus on the story itself. Generally, they look for something interesting on which to focus their gaze. This is the point of interest, or POI. In the example in Figure 6.12, Luke Wilson is the obvious POI as he is the only human visible in this shot. Additionally, he moves and talks directly to the camera. Most of the participants will look first at Wilson, and not the rest of the sphere (“frame” for VR). Later in this shot, Wilson points at something behind the


Figure 6.12  “A 360 VR Tour of the Shinola Factory with Luke Wilson” directed by Andrew & Luke Wilson – ©ReelFX

Figure 6.13  “A 360 VR Tour of the Shinola Factory with Luke Wilson” directed by Andrew & Luke Wilson – ©ReelFX


Figure 6.14  “Austin(s), Texas VR” ©Lucid Dreams Productions

Figure 6.15  “Marriage Equality VR” directed by Steve Schklair and Celine Tricart ©3ality Technica

participant and says, “Check out this skyline behind you.” The new POI then becomes the city of Detroit, located off in the distance behind the participant. In certain cases, there are no obvious POIs and the participants feel free to look wherever they want (Figure 6.14). Sometimes there are a lot of POIs within the same sphere (Figure 6.15). Identifying and understanding the POI is a very useful tool that directors can use to tell their story in a 360º sphere.
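In interactive VR built with a game engine, a director can go one step further and detect whether the participant is currently looking toward a POI, and only then trigger a line of dialogue or a cue. The following sketch shows the basic geometry only; it is a generic illustration (the 20° threshold is an arbitrary example value), not code from any specific engine or experience.

```python
import math

def looking_at(gaze_dir, poi_dir, max_angle_deg=20.0):
    """True if the participant's gaze direction is within `max_angle_deg`
    of the direction toward a point of interest (both given as 3D vectors)."""
    dot = sum(g * p for g, p in zip(gaze_dir, poi_dir))
    norm = math.sqrt(sum(g * g for g in gaze_dir)) * math.sqrt(sum(p * p for p in poi_dir))
    angle = math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))
    return angle <= max_angle_deg

# Example: the POI is straight ahead; the participant has turned 15° to the right.
gaze = (math.sin(math.radians(15)), 0.0, math.cos(math.radians(15)))
poi = (0.0, 0.0, 1.0)
print(looking_at(gaze, poi))  # True: close enough to count as "looking at" the POI
```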

Jessica Brillhart, Principal Filmmaker for VR at Google Points of interest (or POIs) are elements within an experience that attract a visitor’s attention. They could be extremely obvious or more nuanced. Now virtual reality being what it is, I can never be 100% certain that you’re going to look somewhere, but I can make some solid bets on


where you’re most likely going to look by evaluating the entirety of the experience [. . .] Cues can be pieces of music, a sound effect, a haptic response, a color shift, an animation, etc. How intensely noticeable these cues are is entirely up to you. Cues aren’t anything new. A slew of other mediums use them for various purposes. For VR they’re particularly useful because you can plant them to create attention spots or reinforce pre-existing POIs to strengthen the kind of experience you hope the visitor will have. As you can imagine, it’s a bit of a give-and-take. The more obvious your cue, the more likely a visitor will pay attention to something – but the less immersed that visitor will feel in doing so. The less obvious your cue, the more likely a visitor will feel compelled naturally – which is the goal – but less likely the visitor will catch that cue and respond. Excerpt from “In the Blink of a Mind – Attention,” https://goo.gl/eY8P4f

Katy Newton and Karin Soukup, Filmmakers and Experience Designers
In our tests, some audiences expressed FOMO. For example, in the classroom scene, we observed participants working really hard to read a note some of the students were passing. Participants were so curious about the note that they brought it up repeatedly in the debrief, “I still want to know what’s in that note!” FOMO could definitely distract, taking the audience out of the experience, but may also be a storytelling tool to create suspense or elicit curiosity. Remember when Bill Murray whispered in Scarlett Johansson’s ear at the end of “Lost In Translation”? Sometimes, not knowing is a powerful thing.

The Fear of Missing Out

When multiple POIs are present in the same scene, the participants work hard to see everything as they fear missing something important to the story. This need to capture all of the details can potentially derail the experience. Participants may say things like, “I didn’t get that, was it important?” or “I’m not sure what to do,” which implies that they are stepping outside the story world itself to contemplate their own actions. However, this “fear of missing out,” known as FOMO, can also be harnessed as a storytelling tool, to create stress and tension. It can drive the participants to try the experience multiple times so that they can catch what they missed. One interesting example is the VR experience “Cirque du Soleil’s Kurios,” by Felix & Paul Studios. In this experience, the participants find themselves on a stage, surrounded by multiple circus artists and performers. There are so many things going on at once that one has to watch it three or four times in order to see everything. “Cirque du Soleil’s Kurios” does not have a linear storytelling structure but multiple smaller stories attached to each of the performers.

Figure 6.16  “Lost in Translation,” Photo: Focus Features

When thinking about how the story unfolds, ask yourself, “How can you draw attention to the most important story points?” And “Can you use FOMO to your advantage?” Excerpt from “The Storyteller’s Guide to the Virtual Reality Audience,” Medium, https://goo.gl/RE1sZL

Directing Virtual Reality

When coming from the traditional movie world (the “flatties”), directing for VR can be frustrating. The loss of the frame, choice of the lens, and depth of field can be unsettling even for seasoned directors. Virtual reality should be approached as something completely different and new, where the known cinematic language does not apply. Instead of designing

a story through the successive transformations of the camera and the projector, VR directors must build an experience where their audience will physically be the camera. In virtual reality, it is more about influencing and less about controlling where the participants look and what they get out of the experience.

Influence vs. Control

VR directors are matadors, shaking the red cape left and right to guide the participants. They are magicians, using the art of misdirection, not editing or framing, to create suspense. They are spatial storytellers, designing the 360° space so that the “rebels” who do not want to look at the obvious POI are still entertained. They also have to accept the fact that each participant will get a slightly different experience and see and

Paul Raphaël, Co-Founder/Creative Director, Felix & Paul Studios Right after film school, the kind of films I wanted to make were very much about not directing, not controlling the viewer as much as possible to create a sense of presence. You always know where to look in real life. There is no question that if I don’t look at the right thing at the right time, I’ll somehow miss the purpose of why I’m here. In life, you usually know where to look instinctually. It’s about applying these principles to whatever we do, whether it’s a documentary or a fiction. If information is too brief and too ephemeral, then the scene will fall apart because the viewer may or may not look at the right place at the right time. In a “flat” film, you can cut to the thief stealing the watch and putting it in his pocket. In VR, you wouldn’t tell that moment in that way or you’d tell a different story. You’d tell a story that you couldn’t tell in a film because film doesn’t have that same feeling of presence that allows you to do different things. There is a subtlety to reality that you can communicate in VR. It’s amplified in many ways, for example, the shift in a character’s posture or expression is much louder in VR. It’s about focusing on different things and telling stories in a different way that doesn’t rely on such precise clockwork.

hear different things. The letting-go of control over the experience can be frustrating for directors, but it can also be exhilarating. Initially, we believed VR technology would usher in a new role for the audience, moving them from simple “observer” to the more active state of “influencer” (having impact on the story, but not changing the outcome of the narrative). However, observing is already an active state. Looking is doing, and it requires a lot of work from the audience. It is actually not the audience that feels the need to influence the story  –  they have enough to “do.” Instead, the storyteller needs to shift how they think of themselves, moving away from “director” and towards the role of “influencer.” After all, influencing the audience is all that directors can do: we cannot frame the shot for them; we cannot cut away. We can borrow techniques from other media  –  from theater, art, film, and design  –  to draw the audience’s focus, but in order to choose whether to show a color, break the fourth wall, etc., we first have to put ourselves in the audience’s shoes and understand their cognitive, emotional, and physical experience. We need to embrace a human-centered design lens of “audience experience,” and let that guide our choices. To influence the audience, the VR director has plenty of tools at his/her disposal: the POI as described earlier, movement in the frame, sound, acting, lighting, etc. Think outside the box, think immersive theater. The possibilities are endless.

Eve Cohen, Director of Photography, “The Visitor VR” My background is in painting, drawing, and photography when I was much younger. In painting, there are a few things that draw your eye to a specific spot, like the size of the object, the focus, and the greatest point of contrast. In VR, you don’t necessarily have control over the size of the object unless you have somebody really close in the frame. You certainly don’t have focus. So what I really rely on is the greatest point of contrast, and that still applies in VR, where you can use lighting to draw somebody’s eye; that I find is one of the easiest, kind of subtle, tools to help provoke where somebody might be looking at any given time.


Because lighting is really about shaping the space and shaping the world in which the actors are existing or the story is being told, and when you don’t have a lot of the same regular film rules, you need to go back to the basics of, well, if this is a really big, wide painting, how do we get somebody to look over there? I’ve learned a lot about how lighting and sound draw the attention of the viewer from immersive theater. In immersive theater, you are following certain actors and you’re really following a story. But there can be a sound or a lighting cue that happens somewhere else and it kind of draws your attention and you get sucked into that world. So seeing and experiencing that inside an immersive theater space, and taking it into VR, helps you know how to guide the story. If I have this one white wall over here in the background, this probably isn’t that big of a deal, but inside of this space, if that’s the brightest thing in the frame, eventually somebody is going to keep coming back to that.

Blocking or Choreography

Due to the nature of virtual reality, the pace of the story is often set by blocking instead of editing. A carefully crafted choreography of each scene allows the participants to feel like they are in the story and reduces the FOMO effect. It is therefore advisable to prepare a storyboard for each scene and to rehearse them before the shoot. Directors can use one of the small and inexpensive VR cameras, like the Ricoh Theta or the Samsung Gear 360, to stage and shoot a rehearsal, and then review it in a VR headset. In the photo in Figure 6.17, cones are set all around the camera marking the position of objects and actors in the sphere. A VR photo is taken at the exact position and height of the future VR camera and is reviewed in an HMD. These tests allow the director to fine-tune the blocking and save some time on shoot day.

Jessica Kantor, VR Director, “Ashes” Distance from the camera is an incredible tool for emotional tension and presence, but I’ve also learned being too close can be just as ineffective as too far away. There’s a dance with the camera that needs to be in line with the emotional rhythm of the story. It is something still being explored and understanding how to choreograph a space is an incredible step to understanding how to use proximity as an emotional tool. In addition, I’ve begun playing with the participants’ field of view. Very subtly blocking two characters just out of the comfort zone so there’s a slight tension in viewing their conversation (not quite a two shot), and as the story progresses, changing their blocking in a participant’s field of view to show the two characters coming fully together or falling fully apart.

To facilitate communication when rehearsing and shooting, it is vital to find a way to efficiently describe the 360° space. In VR, “left of the frame” or “behind the camera” do not apply. For this, I choose to use the “clock position system.”

Figure 6.17  Celine Tricart on the set of “Marriage Equality VR”

A clock position is the relative direction of an object described using the analogy of a 12-hour clock to describe angles and directions. One imagines a clock face lying flat in front of oneself, and identifies the 12-hour markings with the directions in which they point. Using this analogy, 12 o’clock means ahead or above, 3 o’clock means to the right, 6 o’clock means behind or below, and 9 o’clock means to the left.

The other eight hours refer to directions that are not directly in line with the four cardinal directions. (Wikipedia, https://en.wikipedia.org/wiki/Clock_position)

In the example in Figure 6.18, the camera is at the center of the clock. “12 o’clock” (12OC) has been defined as the official podium. Therefore, “zone 1” is at 9OC, “zone 2” is at 3OC, etc. Once the point of reference – 12OC – has been defined, this system greatly facilitates communication between crew members. For example, “the lead actor is at 6OC, five feet from the camera.” This simple sentence delivers very precise information on the location of the actor in the VR sphere. Do not hesitate to use this method when storyboarding or doing a previz of the VR film as well.

Then comes the question of close-ups in VR. When telling an emotional story, how can VR filmmakers convey emotion without the well-used close-up shot? The nature of VR as an existential medium seems to make it incompatible with the constructed reality of traditional filmmaking. Yet, some VR filmmakers have started to find ways around this constraint.

Figure 6.18  The clock position system
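Returning to the clock position system shown in Figure 6.18: crews who want to fold the convention into previz spreadsheets or shot-logging tools can convert clock positions to angles directly. The sketch below is purely illustrative (Python, with hypothetical helper names, not a tool used on the productions described in this book); it assumes yaw is measured clockwise from the 12OC reference, with each “hour” equal to 30°.

```python
import math

def clock_to_yaw_degrees(hour: float) -> float:
    """Map a clock position (12 = the 12OC reference, 3 = right,
    6 = behind, 9 = left) to a yaw angle in degrees, clockwise."""
    return (hour % 12) * 30.0  # each hour mark on the dial is 30 degrees

def clock_to_floor_position(hour: float, distance_ft: float) -> tuple:
    """Return (x, y) floor coordinates in feet, with +y toward 12OC
    and +x toward 3OC; the camera sits at the origin."""
    yaw = math.radians(clock_to_yaw_degrees(hour))
    return (distance_ft * math.sin(yaw), distance_ft * math.cos(yaw))

# "The lead actor is at 6OC, five feet from the camera."
print(clock_to_yaw_degrees(6))        # 180.0 degrees, i.e. directly behind 12OC
print(clock_to_floor_position(6, 5))  # roughly (0.0, -5.0)
```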

Figure 6.19  Director Andrew Wilson and DP Celine Tricart on the set of “A 360 VR Tour of the Shinola Factory with Luke Wilson” – ©ReelFX


Paul Raphaël, Co-Founder/Creative Director, Felix & Paul Studios Close-up in VR is when someone comes close to the camera or the camera comes close to someone. That is a close-up. In fact, it’s better than a close-up because it’s a close-up plus presence, which brings you in closer. It’s about staging, which is where the theater aspect comes back in. It makes it less direct: in film you can just decide to cut to a close-up; in VR you need to construct the close-up. The close-up needs to organically happen within the scene, which is a fun way to think of a close-up. For example, there is this scene in “Miyubi” where the father is recording a video cassette for his son because he is about to go on a trip and he wanted to leave him a little video before he leaves for Japan. The scene begins and the father has his back to the VR camera but facing a video camera that is shooting a feed to a television that is right next to the VR camera. He is talking to the camera and you’re getting a close-up in video, yet he is a few feet away from you. Then, he backs away from the camera that’s in the scene and sits next to the VR camera. Now you’re getting a different close-up, so we really just go from one close-up to another, even though we are in 360°. There are ways to do that which are actually more fun than just cutting to a close-up; it’s more rewarding because the character actually comes closer to you in a way that is organically justified by the story. There is a beauty to that.

Figure 6.20  Close-up in “Miyubi” ©Felix & Paul Studios

Directing Actors

Once the VR director has taken into account the powerful effects of immersion, presence, and embodiment, and answered the questions of “Where am I?” and “Who am I?,” he/she has a very good idea of what the VR experience is going to be. The choice between a single POI, multiple POIs, and no POI has been made, and the blocking has been carefully choreographed around the camera. Now, how to direct actors in a three-dimensional 360° space where the camera is literally the head of the participant? It is advisable to show actors some relevant VR films before the shoot and to discuss them. It is important to understand that acting for VR is indeed very different from acting for film.



Figure 6.21  Actor Luke Wilson facing the VR camera on set of “A 360 VR Tour of the Shinola Factory with Luke Wilson” directed by Andrew & Luke Wilson – ©ReelFX

First, in VR, emotions are usually communicated by body language and voice rather than facial expression. Second, an actor must stay in character all the time, even when he/she is not the POI, as some participants might be looking elsewhere. Third, because there are fewer cuts in VR (and sometimes no cuts at all), the actors must be capable of doing 5–10-minute scenes in one take. These three aspects make acting in virtual reality very similar to acting for theater. In theater, the audience is usually further away and does not get close-ups of the actors’ faces. So long as they are on stage, actors must remain in character. Finally, theater actors must learn and perform hours-long plays and show precision and consistency in their performance and blocking. When casting for your VR project, make sure your actors can deliver this type of work.

Paul Raphaël, Co-Founder/Creative Director, Felix & Paul Studios For the acting to look “fresh,” in the sense that this is the first time all of this is happening, is a little unrealistic considering the complexity and the length of the VR scenes, unless it really is semi-scripted. In this case, you put actors in a situation and let them kind of bring the thing to life. In our first VR experience, “Strangers with Patrick Watson,” we just shot him for a couple of hours composing and rehearsing music. The five minutes that we ended up choosing were the ones that had the most imperfections. Those were the most beautiful and made his character “shine” the most. We wanted to make a portrait piece about an interesting character and find a way to make the viewer feel as present as possible and not have a story, not have really anything other than an interesting moment with an interesting person. That was all about the imperfections and bringing that into fiction is definitely something that I want to be able to do.

Actors who have experience only in film or commercials might find acting in VR difficult. If you find yourself working with an inexperienced cast, make sure to allow enough time during pre-production for rehearsals. If you can, shoot the rehearsals in VR and show them to the actors in a headset.

Harry Hamlin, Actor, “Mad Men,” “Defrost” Because there are no close-ups in VR, it’s up to the overall story to convey tension. The actors are limited in what they as individuals can accomplish. As for acting any differently in VR, I just acted as though there were three or four cameras. I tried not to play to the array. On “Defrost,” we rehearsed for half a day and shot every scene about ten times until we got it close enough to print. It was never perfect and since there is virtually no editing, the film will never be just right. Compromise was the word. I wanted to do it as an experiment to test what the limitations of VR are in live action. It turns out there are more limitations than I would like and I doubt I will do another one. When watching it, I long for the camera to go close and feature specific emotional moments or story points. Because that’s not possible, the whole thing operates on one level. One note. I’m not sure that the connection between actor and viewer is better with VR. For narrative storytelling it may be worse. The real value of VR is in first-person experiential modes where the wearer of the device is “in the game or experience.” When the wearer is a real part of the action and having an interactive experience, there is a true paradigm shift in terms of conscious experience. That’s the sweet spot of VR.

They will learn a lot from watching themselves and the other actors in virtual reality. In the case of a “first-person” VR film, the camera is acknowledged and is part of the story. The delicate task of the director is to make their audience believe that they are really in the scene, and this depends mostly on the acting. The actors must behave as though another human being is in the room with them, and look at the VR camera as if it were someone’s face. When done well, VR can be a powerful medium for actors to create a direct connection with their audience, to have a moment with them. For directors, it is often useful to rehearse the scene a couple of times on set without the camera, but while standing (or sitting) where the camera is supposed to be, as described earlier in this chapter. It not only simulates the participants’ experience of the scene and allows the director to change the staging or camera position if needed, but it also helps the actors find the right behavior to accommodate an extra body in the room.

Editing

Live-action virtual reality is about immersing the participants in the story world; therefore, the art of editing has to become something completely different. Every time there is a cut, the participants are forcibly removed from the location where they were standing and transported somewhere else. This can throw them out of the experience as they have to re-assess their surroundings and settle into the new scene. It is also a powerful tool that can be used to challenge the participants as long as it does not compromise their suspension of disbelief. Editing can thus be difficult and must be done with great care.

Steve Schklair, Founder, 3ality Technica and 3mersiv If I have a dialogue scene between two actors and I want to intercut “close-ups,” I can do that, but suddenly the 360° environment is rotating around at light speed. The building that was on the right is now the same building but it’s on the left. It’s very confusing spatially to have the environment spinning around like that. I’m actually finding that on straight cuts the background has to be completely different shot to shot. Or I fall back on the fade to black, fade up again instead of just straight cuts. If there are two scenes that take place simultaneously in two different locations, then I can cross-cut between those two locations without losing my audience. I can even dissolve those two. There’s a lot I can do, but when I’m in the same location trying to do multiple shots, all I have is an environment that’s spinning around the viewer. That doesn’t work so well. It’s disorienting. Editing is the vast unexplored wilderness in VR. It’s not an established art or science yet, but eventually the language of editing will build, and likely with techniques that we have not yet imagined.

Figure 6.22  360 VR editing – credit: dashwood3d.com

The longer the shot is, the more the participants can settle in, allowing them to freely explore the narrative sphere. This does not mean a VR film is slow. The language of VR is closer to our everyday experience of reality than cinematic storytelling, so the pace will not feel unnecessarily slow. A lot of VR filmmakers decide to cut only when there is a change of location and/or time, using the unsettling effect of the cut to their advantage. In “Cirque du Soleil’s Box of Kurios,” the participants first find themselves inside a wooden box with circus characters staring at them through cracks. Then the shot dissolves into the main scene where the camera is on stage and multiple little performances/stories unravel all around. The only cut of this VR film is used to transition from one location to the next, and then the editing is left to the participants themselves: by deciding where to look and which story/ies to pay attention to, they indeed make their own edit of the film, just like we all do in real life.


Figure 6.23  “Cirque du Soleil’s Box of Kurios VR” by Felix & Paul – ©Cirque du Soleil

It is possible to do multiple cuts in a VR film as long as the POIs are matched from one shot to the next. For example, in “A 360 VR Tour of the Shinola Factory with Luke Wilson,” there are a total of 14 shots in four minutes and 30 seconds, including one shot that lasts only five seconds. However, most people who watch this VR film do not feel bothered by the numerous cuts due to the use of a “POI-matching” technique.

In “A 360 VR Tour of the Shinola Factory with Luke Wilson,” the POI (actor Luke Wilson) is carefully tracked and matched from shot to shot, which allowed the editor to cut many times without compromising the participants’ immersion in the story.

Jessica Brillhart, Principal Filmmaker for VR at Google Placing bets on POIs helps to inform consequential editing decisions, like which world I’d want to go to next and how best to transition from one world to another. For instance, if I can make a solid bet on POI, I might be inclined to do a match on attention. This would involve identifying where a visitor’s attention lands and then cutting from that to something else I’d like for the visitor to pay attention to. By doing something as simple as identifying visitor attention, you can start to craft an edit that feels far more natural and purposeful than just cutting from whatever, whenever you feel like it. A great deal of confusion results from the latter, where someone is essentially pried from her experience and thrown somewhere else. It’s also important to note how a visitor’s attention shifts during an experience. By identifying this shift, we can essentially get our in and out points. In traditional editing, an in-point is where a shot begins and the out-point is where it ends. In VR, the in-point is where a visitor’s experience is most likely to begin and the out-point is where it’s most likely to end.


Figure 6.24  Jessica Brillhart’s POI-matching technique

Then it’s just a matter of rotating the worlds around each other to line up those in and out points. The result is a kind of mental pathway through the overall experience. I’ve been calling this method unlocking the Hero’s Journey [. . .] Excerpt from “In the Blink of a Mind  –  Attention,” https://goo.gl/eY8P4f
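The “rotating the worlds” step Brillhart describes can be prototyped in post as a simple yaw shift of the incoming equirectangular frame. The sketch below is illustrative only (it is not Brillhart’s or Google’s actual tooling); it assumes the common convention that yaw maps linearly onto the horizontal pixel axis of an equirectangular image, and the function names are hypothetical.

```python
import numpy as np

def rotate_equirect_yaw(frame: np.ndarray, degrees: float) -> np.ndarray:
    """Rotate an equirectangular frame (H x W x C) about the vertical axis
    by rolling its pixel columns; the column shift is proportional to the
    requested yaw because longitude maps linearly onto frame width."""
    width = frame.shape[1]
    shift = int(round(degrees / 360.0 * width))
    return np.roll(frame, shift, axis=1)

def match_poi(incoming_shot: np.ndarray,
              attention_yaw_out: float,
              poi_yaw_in: float) -> np.ndarray:
    """Rotate the incoming shot so its POI lands at the yaw where the
    visitor's attention most likely sits at the moment of the cut."""
    return rotate_equirect_yaw(incoming_shot, attention_yaw_out - poi_yaw_in)

# Example: attention leaves shot A at 90 degrees; shot B's POI sits at 30 degrees.
# Rotating shot B by +60 degrees lines the two up across the cut.
frame_b = np.zeros((1024, 2048, 3), dtype=np.uint8)  # placeholder frame
aligned_b = match_poi(frame_b, attention_yaw_out=90.0, poi_yaw_in=30.0)
```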

There are a lot of different avenues to explore when it comes to editing a VR story, as the medium is still being invented. For example, different VR shots can be mixed together using the split-screen technique (imagine a phone call scene where the VR sphere is divided in two and each 180° half shows one of the protagonists on the phone), or 2D shots can be comped into the VR sphere (for example, the dearly missed close-ups).

Figure 6.25  Example of POI-matching technique in “A 360 VR Tour of the Shinola Factory with Luke Wilson” – ©ReelFX

Steve Schklair, Founder, 3ality Technica and 3mersiv For “Marriage Equality VR,” I came up with the idea to comp in windows with close-ups of the talent. I really believe in layering 360° content. First of all, it’s a lot of real estate to fill. Generally most of that real estate is wasted. It’s just environmental. So let’s use that space. If you’re shooting full 360°, making close-ups is difficult; they just don’t look great because of the foreshortening of the lenses. It’s really difficult to get a close-up and it’s really difficult to convey the nuances of emotion of an actor or performance in a wide shot. I miss the close-ups a lot. The answer in that piece and some other things we’ve done is to start doing picture in picture, putting it in windows. In “Marriage Equality” we found an integrated narrative way to insert pictures. As it was a recreation of a news event, we had a prop cameraman in the scene shooting close-ups during interview sequences.


Figure 6.26  2D “flat” video comped in a VR shot – “Marriage Equality VR” by S. Schklair and C. Tricart ©3ality Technica

Except that he was not just a prop, he was actually recording. As we are so used to the convention of picture-in-picture inserts in news broadcasts, it was totally in place to use the close-ups of what the actor cameraman shot as picture in picture. We wrote it into the script so it wasn’t gratuitous. How am I going to read what’s in the actors’ eyes on a 180° lens? Their eyes are dots. Drama, comedy, pretty much everything is in the nuanced performance; it’s in the facial expressions. It’s in performance delivery. We’re missing some of that in VR. I’m still a big believer in finding ways to do picture in picture, inserts, segments, montages... Plus, you’ve got a lot of real estate to work with, why waste it?

New techniques for virtual reality editing are being invented every day. The language evolves very quickly thanks to the boldness of the VR pioneers. Transitions from one shot to the next can themselves be used to tell the story of why there is a cut. For example, “The Strain” VR is a first-person horror experience which takes the participant into the world of the television series. To go from one shot to another, a transition simulating blinking eyes is used, reinforcing the sense of presence. For some VR filmmakers, it is preferable to use a fade to black/fade from black when cutting from one location to the next; for others, a cut is acceptable as long as the POIs match.

Duncan Shepherd, Editor, “Under the Canopy,” “Paul McCartney VR” There has been a lot of thought put into how to manipulate, or goad, the viewer into looking in a particular direction or at a particular object inside the VR view. When reviewing my favorite pieces of work, I have found it’s far more engaging if there is something interesting to look at in every direction. For example, when I was working with Tony Kaye on the Paul McCartney project, at every point during the edit Tony was adamant that we must have something for the viewer in every direction.


It can’t matter where they look, and you can’t force them, no matter how hard you try. And that’s turned out to be true in all of the things that I enjoy viewing and reviewing again and again: there is always something for me to discover wherever my attention has wandered. My personal view on this, and I’m by no means an academic in the subject of film theory, is that by giving the viewer agency and the opportunity to essentially edit parts of the film themselves, we create a stronger connection between the viewer and the story we are trying to tell. There are many examples of VR viewers reacting in a deeply emotional and visceral way to content they are exposed to, so that even with a passive viewpoint, they have an active connection to the story that will linger with them on a truly meaningful level. The possibilities for empathizing, educating, and entertaining people have never been greater than in the era we are entering right now.

Sound Design

When it comes to designing a story in virtual reality, sound is a key tool in the director’s tool belt. Indeed, the spatialization of sound has improved tremendously over the last century, from mono to stereo and then to the 5.1 surround system. Take the example of a VR 3D shot where the audience has the complete freedom to look wherever they want. Somewhere in the sphere is the point of interest, where the director ostensibly wants the audience to look. Ideally, the director wants to guide the audience’s gaze to the POI in a subtle and organic way, as if the audience were discovering it by chance. In film, storytellers can use close-ups, tracking shots, or they can rack focus to the POI. In VR, a relatively accurate positioning of the sound in the 360° sphere and in depth can achieve that, and will bolster the realism and the immersiveness of the scene. “Notes on Blindness” is an immersive virtual reality project based on John Hull’s sensory and psychological experience of blindness. Each scene addresses a memory, a moment, and a specific location from John’s audio diary, using binaural audio and real-time 3D animations to create a fully immersive experience in a “world beyond sight.”

Tim Gedemer, Sound Supervisor, Owner Source Sound, Inc. Sound has been used relentlessly by directors through the years to call attention to plot points and/or action that is either happening “off camera/off screen,” to emphasize certain moments, or to simply shift the viewer’s gaze to the other side of the movie screen. Sound used in this way is deliberately influencing the viewer’s gaze by focusing attention to one sound event over all others. We are simple creatures this way, and follow our instincts by giving our attention to the loudest and most conspicuous sounds in our path. This ties into a phenomenon I call “persistence of information.” The concept is that if you provide people with aural or visual information (or both) from a certain direction long enough, people will begin to expect all of their important information to be coming from that certain direction. For example, in a piece of 360° video content, if the director shoots action to be in only 100° of the sphere, the viewer quickly adapts to that, and will largely ignore the other 260° of visual and aural space. Even if you deliver a loud sound somewhere in that other 260° of space, the viewer will not turn to look, as they are already conditioned to receive their most important information in the 100° area of information persistence. So, in VR experiences, we need to consciously avoid this phenomenon by deliberately conditioning the viewer to receive aural and visual information from all directions. Our recommendation is to “calibrate” viewers in the first 30–60 seconds by making sure audio is present and “conspicuous” in as large a range of the full 360° space as possible, forcing the viewer to turn their head and body fully around. Once this calibration is achieved, as filmmakers we can begin to direct the viewer’s gaze simply by deftly providing them with an audio track highlighting any needed plot point or event anywhere in the full spherical space. The thing is, if the content itself has any kind of “gimmicky” visual element to it, just creating sound that “goes along” with the picture will result in a gimmicky audio track. In other words, just like in any other medium, audio treatment needs to serve the overall creative direction.


If a given piece sounds more gimmicky with a given audio treatment, the audio professional will need to take a different approach and possibly do something that is completely different from what is presented visually. This is the nature of sound design, and its tenets hold true in VR as much as they do elsewhere. A way to avoid issues like this is to resist creating sound that is “see/do,” or “what you see on screen is what you hear.” Try using sound substitutions as a way to stay away from common mistakes like this. For example, instead of using a “fast car going by” sound for a fast car going by on screen, take an animal scream or growl, make a Doppler out of it, and use that instead.

This is an excellent example of how important sound is when it comes to VR storytelling.
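To make “positioning the sound in the 360° sphere” concrete, here is a minimal, hedged sketch of encoding a mono cue into first-order ambisonics (traditional FuMa B-format, W/X/Y/Z), which many 360° video platforms accept as spatial audio. The gains follow the standard B-format equations; the exact channel ordering, normalization, and azimuth sign convention depend on the delivery spec, so treat those details as assumptions to verify.

```python
import numpy as np

def encode_b_format(mono: np.ndarray, azimuth_deg: float,
                    elevation_deg: float = 0.0) -> np.ndarray:
    """Encode a mono signal as first-order FuMa B-format (W, X, Y, Z).
    Azimuth is measured counter-clockwise from straight ahead, so a cue
    at the listener's right ("3 o'clock") sits at -90 degrees."""
    az = np.radians(azimuth_deg)
    el = np.radians(elevation_deg)
    w = mono / np.sqrt(2.0)               # omnidirectional component
    x = mono * np.cos(az) * np.cos(el)    # front-back
    y = mono * np.sin(az) * np.cos(el)    # left-right
    z = mono * np.sin(el)                 # up-down
    return np.stack([w, x, y, z])

# A one-second 440 Hz cue placed 90 degrees to the listener's left:
sr = 48000
cue = 0.2 * np.sin(2 * np.pi * 440 * np.arange(sr) / sr)
b_format = encode_b_format(cue, azimuth_deg=90.0)  # shape (4, 48000)
```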

As always when working with a tool as powerful as VR sound, there is a risk of overdoing it. Sound design must first and foremost move the story forward and not just act as the matador’s red cape. If a specific sound is added just to attract the participants’ interest to the POI, then they will feel as if they are being spoon-fed by a director who cannot let go of control. Gimmicks can be good in certain types of experiences (horror, comedy, etc.), but not when trying to immerse the participant in a naturalistic story. Misdirection, yes, but the more subtle it is, the more immersive the experience will be.

Jean-Pascal Beaudoin, Co-Founder, Headspace Studio Too often in film, sound design is being relegated to post-production. In cinematic VR, and especially in fiction, this approach will almost certainly lead to an unrealized potential, if not outright dramatic failure. Sadly, VR platform stores are filled with examples.

Figure 6.27  “Notes on Blindness VR” by A. La Burthe, A. Colinart, J. Spinney, and P. Middleton ©Arte


I think it’s important to clarify that by “sound design,” we’re not merely referring to creating sound effects but to the overall creative and technical approach to sound for the project, for which the term “VR audio direction” is perhaps more accurate. Perhaps it is because there is still a (justified) presumption that 360° video VR production is – at the very least technically – not business as usual, and also because, after experiencing enough good and bad VR, the industry is starting to realize the crucial impact audio can have, but on our end at least, people come to us quite early in the creative process. The few times we’ve been asked to come in at the post stage, I could not help but witness missed creative opportunities, let alone struggle to salvage the project because audio had been captured without understanding the needs of 3D audio post. That being said, the projects I’ve worked on where spatial audio has been used to its full narrative potential – not just to support the story – are ones that were written with sound in mind. For that reason, I cannot stress enough how crucial it is for VR directors and screenwriters to understand and leverage the potential of “sound design” in VR storytelling.

Conclusion

The “rules” of VR filmmaking are constantly evolving and becoming more flexible as we understand this new medium more and more. Breaking those rules without taking the time to understand them can lead to mistakes that disappoint early adopters and threaten VR’s nascent audience. It is our responsibility as VR storytelling pioneers to ensure the delivery of great immersive content. Moving the camera, cutting quickly, and creating close-ups are all possible, but they must be done properly to protect comfort and bolster the senses of immersion, presence, and embodiment. We should build on what we know in these early days of VR. By leaning into things that reinforce presence, we get results that we know are going to work at least to some degree. At the same time we gain experience, and we learn, and then we get better at breaking those rules in different ways.

Paul Raphaël, Co-Founder/Creative Director, Felix & Paul Studios One example is in the White House project, “The People’s House,” where we broke the principle of the viewer sitting down – therefore the camera is at sitting height and not moving. We have “unjustified” movement and positioning for the first time. In the case of “Miyubi,” you’re a toy robot so everything is justified: your point of view, your height, when you move and why and how. In the case of most of our documentaries, and almost everything we’ve ever done, you’re sitting down as a real person, and therefore you’re sitting down in the experience, and if you’re moving, it’s because you’re in a vehicle. In the White House project, it’s the first time we did more cinematic camera positioning and movement. You are floating through space and you’re not in any sort of vehicle or anything like that. By abiding by these rules up until that point, we learned to frame for VR; we learned how to pace for VR. We know what people will look at, and when, and how, and how this or that will feel. By sort of letting go of one of these life rafts of presence, but having enough expertise in the other ones, I feel like we were able to sort of emancipate ourselves from one of these principles. I feel that maybe a few years from now we’ll be emancipated from all of them, but we’ll have a foundation of what makes a shot work without it necessarily abiding by all these principles of presence. As your foundation gets stronger, you can maybe break down a pillar, and then start exploring something else. I think that today if you just kind of jump in and you’re quick cutting and moving your camera without any sort of logic, without really knowing much about the medium, the chances of it really feeling good are pretty slim. In fact, I haven’t seen it.

VR Documentaries and VR Journalism

Whether it is standing at the top of Mount Everest, sitting courtside with the Warriors, walking along with a protest in real time, or witnessing what happens inside a refugee camp, VR is giving audiences the opportunity to embody and experience journalistic stories beyond reading, hearing, or watching.

Journalists are offering glimpses into new worlds, and showing familiar stories in a new light.

Melissa Bosworth and Lakshmi Sarah, VR Journalists and Filmmakers For creators, these expressions of immersion signal that virtual reality, 360° video, augmented reality and mixed reality give us access to a new channel of understanding that can be used to convey meaning to our viewers. We now get to put our audience within a scene, and allow them to experience something in a physical and emotional way that stands apart from understanding it in an intellectual sense. With that in mind, a major goal for today’s immersive journalists is to find out what can be done with immersive media that couldn’t be achieved with any other medium to date. Our challenge is to understand what we stand to gain through this new dimension of access to our audiences, and how we can use immersive media to better accomplish our journalistic directives.

The subject matter of most recent immersive journalistic projects fits into at least one of a few categories:
• Stories for which the location matters, and the experience of standing there is a large part of the piece.
• Stories in which the audience has access to a place they could not or would not otherwise go.
• Stories that attempt to help the audience feel something as if they had experienced it first-hand.
• Stories in which the audience can experience something from a new perspective.
That is, of course, not an exhaustive list of what can be or has been done with immersive journalism, but here are a few projects that exemplify these categories:
• Location: Number four on The New York Times travel section’s “52 Places to Go in 2017” is Zermatt, Switzerland. Integrated into the article is a 360° video moment from the gondola ride over the snow. The piece has a simple goal: to place the viewer in the scene, emulating to the best extent possible what it might feel like actually to travel there.

• Access: For the 2015 interactive piece “Discovering Gale Crater,” The Los Angeles Times used geographical data mapped into a virtual reality environment to create a 360° guided tour on the surface of Mars, allowing users to feel as though they were standing on the surface of a distant planet. The use case is clear: there is no other medium through which the audience could have that experience, and no possibility of actually traveling to Mars. The piece offers an opportunity for the audience to gain a deeper understanding of – or at least a new perspective on – a world beyond our reach.
• Experience: The Emblematic Group and USC School of Cinematic Arts’ “Project Syria,” which was commissioned in 2014 by the World Economic Forum to draw attention to children impacted by the Syrian war, includes a re-creation of the scene of a bomb blast using actual audio from the event. The graphics in the visual reconstruction are based on photo and video from the scene, and the end product approximates a first-hand experience of the horror and disarray of war. The effect is a visceral sense of shock and remorse. The content was originally aimed at world leaders, and the goal was to create an experience powerful enough to motivate people to do something to address the humanitarian crisis in Syria.

Figure 6.28  “Project Syria” – ©Emblematic Group

• Perspective: The United Nations’ 2015 project “Clouds over Sidra” took viewers into a refugee camp in Jordan, on a tour guided by a young refugee girl named Sidra, with a voice actor narrating a touching and endearing first-hand account based on interviews with Sidra herself. The piece offers the viewer a chance to stand where Sidra might stand, and gain insight in a fundamental, sensory way into the experience of living in a refugee camp. It seeks to make its audience care about Sidra and people like her, and does so by guiding viewers along in a world colored by her perspective.

VR documentaries will continue to grow and provide journalists and filmmakers with a new way to tell stories. When it comes to journalism, VR forces journalists to really think through their story because it forces a certain authenticity: what you see is what you get. With traditional methods of storytelling, some journalists may manipulate the atmosphere, but with VR it is much more difficult to do so.

Melissa Bosworth and Lakshmi Sarah, VR Journalists and Filmmakers In addition to experimentation into what we can do with the power of immersive media to take us places we’ve never been able to go, it’s also the moment to consider how we might best use this new way of storytelling to enhance already-existing journalistic formats. As journalists experiment with immersive projects, ethics continue to play a key role in our work, and must be adapted to the new medium. There are ethical considerations around recreating scenes and editing out interviewers, tripods or gear left in the shot. These considerations are not new; they are simply codes that must be carried forward. Virtual reality may be lauded as “the empathy machine,” but we must remember that our job is to do more than spark emotion based on others’ suffering. If we do that, it must be in the course of shining a light on something our audience needs to see. Thus, as we experiment, one major consideration is to always be vigilant of the line between journalism and entertainment. What kind of immersive stories can we tell and how do we create interactives without losing sight of why we are telling those stories? When does an interactive become merely a game? When does an experiential story that witnesses real suffering cross over into exploitation for the sake of entertainment? Journalists’ challenge, whether we are working in print, audio, or immersive media, is to understand how to do our work and reach audiences in the most effective way possible. It is with that in mind that we ask ourselves what kinds of stories should be told in VR. The answer may change and evolve with advances in the medium. Experimentation will continue, and research into what does and doesn’t work will expand, but the realities we witness in the course of our work – the lucky accidents that make for moving and impactful storytelling – will also show us how immersive media can push journalism forward, probably in ways we never predicted.

Dylan Roberts and Christian Stephen, War Zone Journalists and VR Filmmakers Probably the worst and most unsafe thing to do is filming 360° in a conflict zone. Mainly because you need to leave the VR camera standing alone and then you somehow need to get out of the shot, which is hard to do, especially when there isn’t much room to hide from crossfire. Also, VR cameras are not 100% foolproof operational-wise, and you don’t know what you filmed until you ingest everything to review. I was filming in West Mosul, and was planning to use the six-camera Freedom 360 rig. As I was getting ready to begin filming an Iraqi military helicopter shooting at ISIS, one of the GoPros wouldn’t turn on. To fix this, I would need to quickly unscrew the camera, take it out, remove the battery, and reboot the camera.


Figure 6.29  Christian Stephen shooting “Welcome to Aleppo”

Instead, I ended up just using the Samsung 360 camera as a backup! Always try to bring two VR cameras if you can. The main cameras we’ve been shooting with are GoPro rigs, such as the Freedom 360 and GoPro Omni rigs, and for run-and-gun situations, using the Samsung Gear 360 has been fantastic, especially shooting in complex territories such as Iraq. Most recently, we have enjoyed shooting with the Z Cam S1 on the front line of Mosul. I bring lots of SD and micro-SD cards since I don’t normally bring a computer to download the footage in the field. It depends on what kind of shoot, honestly; ideally I would like to take my laptop at all times to pull in footage whenever, but working in conflict zones, the less equipment you have, the better. It’s all about the story. Never forget that and don’t be afraid to make mistakes in this industry – it’s the only way to learn. Make sure to focus on what your story is and the access to pull it off. What’s great about VR is it allows filmmakers and journalists new ways to pitch stories and hopefully provide another revenue source, since the freelance market for journalists isn’t too good right now.

Chapter 7

Make a Film in VR from Start to Finish

This chapter provides step-by-step instructions for making a narrative live-action VR film. A similar structure can be used for both live broadcast and documentaries; simply omit the irrelevant steps.

Development

Screenwriting for VR

When CinemaScope was invented in the 1950s, “Metropolis” director Fritz Lang mocked its superwide aspect ratio and said, “Cinemascope is not for men, but for snakes and funerals” (in “Contempt,” dir. Jean-Luc Godard, 1963). However, CinemaScope became incredibly successful and its anamorphic format has continued to this day. Similarly, one can wonder: what is VR good for, story-wise? Data gathered on the most-watched VR films show that the most successful genre is horror.

Robyn Tong Gray, Chief Content Officer and Co-Founder, Otherworld Interactive “Sisters” is the first project we created at Otherworld. Personally I love horror movies – bad movies, good movies, I enjoy them all. It was nearly Halloween season and so the timing was ripe to play around with the horror genre. Until then, I’d only ever been a consumer of horror, never a creator.

Anthony Batt, Co-Founder and Executive Vice President, Wevr Storytelling will move from three-act narratives with cameras over their shoulder and cutaways, to a completely different form of immersed storytelling, simulation-based. You will be able to live the hero’s journey, the victim’s journey, etc. I can’t even explain what it’s going to be like because it’s so different. I can’t conceive it. But if we keep working and figuring out how to deliver really amazing, agency-rich, sim[ulation]-based content stories, then we’re opening up the opportunity for a new medium to emerge and to decide what that really, frankly is.

Figure 7.1  “Sisters” – ©Otherworld Creative

In two weeks with a production team of three, we created “Sisters” and a few months later we polished it up a bit and released it. Since then, it’s been downloaded over 2.5 million times across mobile platforms and is our most popular experience. Something about the particular subsection of horror makes it accessible to everyone (kids, young adults, even moms), and it also, unintentionally, is a very social experience.


People love playing it but they also really love watching (and filming!) others playing it. A big part of indirect control is about getting players to react naturally to authored stimuli. A horror atmosphere calls for heightened awareness and responses. The spatial sound cue of a door opening in a café on a rainy day elicits a very different response than if the same sound cue is played against a dark and stormy haunted house. It’s also just fun! I think there’s something about the horror genre that allows people to loosen up. The threat isn’t real, they can cede control and indulge in fear in a safe environment. When we craft horror experiences, we shoot for a specific subsection of horror – ghosts and creepy dolls and spooky atmospheres are fair game, intense blood and gore and hyper-realistic visuals are not. We aim for that level of fun horror, and work to create a visual and audio style that complements the story but also leaves plenty of room to remind the player it isn’t the real world. I think as creators in VR we have a responsibility to our audience to take care of them, and with genres like horror we have to remember that VR really is intense. It really can feel like you’re there and the experience we give our audience in VR creates memories much more akin to real-world experiences than any stories we might tell on a 2D screen or on paper.

Shari Frilot, Founder/Chief Curator of New Frontier at Sundance Film Festival I think people get excited about VR and excited about the feelings that it creates because it engages with your biochemistry. What makes you laugh, what makes you cry, that’s biochemistry. Not only does VR do that, but it engages with your sense of place, where your body is, and that engages your survival instincts, what makes you live or die, what helps you instinctually avoid the bullet, avoid the tree that’s falling, and that’s something that you don’t get so much in a theater. You always know you’re in a theater, even though you’re losing it in a tragedy or a horror story. You always know that you’re in the safety of that theater. When you’re in the VR headset, you don’t know that. You really don’t. You have to remind yourself that you’re not actually in the situation that you perceive when you’re in a VR experience.

Horror is one of the easiest genres to play with in VR. It is very visceral and that makes it really easy to evoke player response. With horror movies and even video games, there is a safety net – players can only see this curated, two-dimensional framing contained within their screen. In VR the safety of the 2D screen has been removed and they are forced to start literally looking over their shoulder.

Certain genres are indeed more successful in VR than others. However, VR opens up a lot of opportunity for all genres and all stories as long as its immersive factor is taken into account and the storytellers use the power of presence to their advantage. When it comes to screenwriting, it can be difficult to describe the newly acquired 360° space using the traditional script format. For example, see Figure 7.2. What is happening and where? How do you indicate what the point of interest is? Traditional formatting is optimized for describing actions and characters that fall within the edges of a rectangular frame. Things can become very confusing when multiple actions are happening at the same time in different zones of the sphere. Given these inadequacies, try dividing the sphere into its four main quadrants and giving each its own column. In the example in Figure 7.3, the POI is highlighted, allowing the reader to understand how the 360° sphere is utilized. However, this formatting is time-consuming and can be tedious. Another method is to use a type of formatting that is sometimes used for commercials. No matter which format the writer decides to use, the entirety of the 360° sphere should be described at the beginning of each new scene to help readers get situated. For this, I recommend the use of the “clock position system,” as described in Chapter 6 (see Figure 7.4).

Figure 7.2  Traditional script formatting

Figure 7.3

Richard L. Davis, Screenwriter There is a tradition in screenwriting that the length of a description of an action or dialogue on the page roughly equates to how much time it takes up on screen. This is how the guideline “one minute per page” came to be. Writing in VR, there are several actions going on simultaneously, several points of interest that should reinforce the theme or build out the world. Settings, the reality around the viewer, can be described at length, sometimes taking up a quarter of a page. Time described is simultaneous, but the page is sequential. Some pages are three minutes, others are 30 seconds. “One minute per page” is out the window – positively defenestrated!


Figure 7.4  Commercial script formatting

Budgeting/Scheduling

Jessica Kantor, VR Director, “Ashes” The biggest learning is to design an experience within your budget. I’ve found it far more effective to experience something simple but beautifully executed than something beyond the scope of the team, which risks the participants escaping the experience emotionally. For example, camera movements without proper stabilization, or without reasoning that makes sense to the participants’ brains. The intense feeling of nausea instantly drives participants to take off the headsets and can even keep them from putting on a headset in the future. It is also really expensive in post to clean up the artifacts from a moving camera, so if the budget doesn’t allow for that to happen properly, I’d recommend redesigning the concept.

When it is time to start working on the budget of a VR experience, a lot of producers have the same questions: Is it more expensive to shoot VR than traditional film? Is it slower? How big should my crew be? These are all important questions, but the answers depend first and foremost on the creative. A line producer who has experience in virtual reality can help you adapt your generic budget to VR. In general, shooting live-action VR can be faster than shooting traditional film. This is due to the fact that we tend to do fewer shots and less coverage in VR. Also, the lighting is often limited to practicals and the camera to static shots. On the other hand, the entirety of the 360° sphere must be dressed and staged, and long scenes require a lot of rehearsals (which are not usually scheduled on expensive shooting days). Long scenes (5–10 minutes) can take anywhere from half a day to a full day to film, while shorter scenes can be done in a matter of hours.

On a standard film production, you can expect to shoot about five pages of script per day, which usually equates to five minutes of final product footage. The average on VR sets is closer to 5–8 minutes a day. Again, this depends on the story, number of actors, locations, etc. For example, “Marriage Equality VR,” a seven-minute single shot, was filmed in just one day.

Paul Raphaël, Co-Founder/Creative Director, Felix & Paul Studios In the case of “Miyubi,” we did about 12 shots total for a running length of close to 40 minutes. If you do the math, that’s an average of about three minutes per shot. Some are a little shorter, some are a little longer. We did four days of principal photography with about two shots a day. Then we shot one scene in Cuba and we shot one scene in LA, so a total of six shooting days.

In terms of crew size, VR crews are usually smaller than traditional film crews. The camera, electrics, and grip departments are the ones that will be the most different, due to the reasons explained above: little-to-no camera movement, practical lighting, etc. In the case of a VR shoot using the “nodal” technique, the crew is often the same as on a regular shoot, as non-VR cameras and lights are required. When it comes to the budget itself, there is usually only one additional line item specific to VR: stitching. Everything else remains similar to a “normal” budget. Stitching prices vary greatly depending on the resolution, whether it’s 2D or 3D, and the quality of the stitch itself. As the technology progresses and automatic stitching algorithms are developed, costs will go down. As of 2017, good stitching services for stereoscopic content can run as high as US$10,000 per minute. It is important to note that these budget caveats do not necessarily apply to game engine-only VR experiences. A line producer with extensive experience in gaming budgets should be considered for those projects.
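The rough arithmetic above can be folded into a back-of-the-envelope estimator. The numbers below are only the ballpark figures quoted in this chapter (roughly 5–8 finished minutes per shooting day, and stereoscopic stitching topping out around US$10,000 per minute as of 2017); they are placeholders for illustration, not a rate card.

```python
def estimate_shoot_days(finished_minutes: float,
                        minutes_per_day: float = 6.0) -> float:
    """VR sets often average roughly 5-8 finished minutes per day."""
    return finished_minutes / minutes_per_day

def estimate_stitching_cost(finished_minutes: float,
                            usd_per_minute: float = 10_000.0) -> float:
    """High-end stereoscopic stitching was quoted at up to ~US$10,000
    per finished minute (2017 figure)."""
    return finished_minutes * usd_per_minute

# A hypothetical seven-minute stereoscopic piece:
print(estimate_shoot_days(7.0))       # ~1.2 shooting days
print(estimate_stitching_cost(7.0))   # up to ~70,000 USD at the top rate
```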

Steve Schklair, Founder, 3ality Technica and 3mersiv


If you’re using a GoPro setup, two or three people can go out and make a project. There is a camera person, a sound person, and a director-type person. A lot of 360° content is shot that way, especially documentaries. Some of the more dramatic narrative material where there is extensive lighting or VFX, then you may have normal-size crews as if it were a film. A small independent film has a very small crew and a large feature film has a very large crew, so it really depends on the scale of the 360° piece you’re shooting. The camera department is taking a big hit. You don’t have focus pullers, you don’t have operators. You’ve got somebody handling the camera so you can call them the first AC (assistant camera). They’re not pulling focus, they’re keeping the dirt off the lenses. They’re helping the engineer plug everything together. In a professional world you need an engineer and camera technicians. On bigger shoots you might bring in a secondary engineer to handle the recording.

Ryan Horrigan, Chief Content Officer, Felix & Paul Currently, non-gaming VR content (live-action 3D 360° video) is often much cheaper to produce than cable television content on a cost-per-minute basis. Animated VR content, however, can cost considerably more to produce than live-action VR content, but is still quite low on a cost-per-minute basis compared to feature film animation. I do believe there is a sweet spot for cost per minute for content being released in the near term, but as the user base across platforms doubles, triples, and eventually quadruples, budgets can and will increase to more closely reflect the cost-per-minute budgets of cable TV series. If VR/MR content someday becomes the most widely consumed content, and film and TV become secondary, then we expect VR/MR production budgets to rise as a reflection of the global audience size.

Financing

The million-dollar question is how to finance VR experiences when audiences are still small and limited to early adopters.

Paul Raphaël, Co-Founder/Creative Director, Felix & Paul Studios Most of the projects we’ve made to date were in partnership with one of the platforms or studios, and without any hope of making money from sales as of today. The platforms really want there to be content. Some have a lot of faith in 360° filmmaking; some are more focused on computer-generated, interactive, positional VR. I think you can make a pretty ambitious project today and have it funded by one of these platforms if you have a good track record. That’s going to shift in the next two years, when you should be able to sell a piece to the public and make money that way. The way I see it, there will definitely be fewer headsets out there than there are people with smartphones and TVs. However, there will also be less content. If you make a good piece of content, you are not competing against many others, like in TV or in film.

Figure 7.5  “My Brother’s Keeper” ©PBS

Ryan Horrigan, Chief Content Officer, Felix & Paul It’s our belief that there is one segment of the content market currently missing, which will come to fruition in the next few years and may actually drive the largest majority of user adoption. This type of content, in our opinion, is original premium content, perhaps episodic and serialized, in both fiction and non-fiction: VR native stories that can only exist in VR in the way that these stories are told, thus proving the industry’s assertion that this is an entirely new storytelling medium. In other words, stories that could just as easily be films or TV shows in the way in which they’re told don’t need to be produced in VR and won’t help drive viewership ultimately. As original VR native content emerges, the feeling may be reminiscent of the early days of cable and premium cable, where it took original IP [intellectual property], never before seen in film or on network TV, to give consumers a reason to care (“The Sopranos,” “The Shield,” etc).

The VR technology is still in its infancy, as is its business model. Investors have yet to understand fully how best to monetize VR content. As of today, most of the high-end VR content is either branded or created as companion pieces for big intellectual property tent poles.

In this case, the project’s financing usually comes from a separate marketing budget. Marketers and publicity people are very keen to produce engaging VR content as it is perceived as new and exciting.

Partnering in these situations offers VR filmmakers the opportunity to tell beautiful stories with a decent budget. For example, “My Brother’s Keeper” is a story-driven VR re-enactment of the Battle of Antietam from PBS Digital. The experience is a companion piece to PBS’ primetime Civil War series, “Mercy Street.” Other financing opportunities include grants from organizations such as the National Film Board of Canada (NFB), the CNC in France, Oculus (the “VR For Good” initiative, the “Oculus Launchpad” scholarship), HTC Vive (“VR For Impact”), etc. As the VR market develops and profits are realized, financing opportunities will increase accordingly and the process of developing VR experiences will become easier. Other interesting avenues are Google Daydream, PlayStation VR, Amazon, Hulu, Youku, Tencent Video, IQIYI, Fox, Sony, and Disney, which are all currently financing and/or buying virtual reality content.

Pre-Production

Cast and Crew

When the time comes to hire your crew and cast your actors/actresses for your VR project, a few things differ from traditional pre-production. First, it is important to hire a VR supervisor (this can be a director of photography who has experience in VR) as soon as pre-production starts so that he/she can advise both the director and the director of photography before the storyboards or the shot list is finalized. He/she then helps them to choose suitable VR equipment according to their needs and budget. Do not forget that the most important creative aspects of VR are decided in pre-production: artistic and narrative discussions, shot lists, storyboards, and so on. When it comes to casting, it can be wise to look for actors and actresses who have extensive experience in theater or improvisation. There is an ongoing debate regarding whether a director of photography is needed when shooting VR. It is true that part of the DP’s responsibility is to select lenses and help design shots, involving framing, camera movement, and depth of field, much of which does not apply when shooting 360°.

Paul Raphaël, Co-Founder/Creative Director, Felix & Paul Studios Virtual reality is much closer to theater than cinema, so when we were casting, we were looking for people who have experience in theater. If they don’t, then we test them with scenes with long dialogue and improvisation. If another actor drops the ball, you need to be good at picking it back up because a long take will never be perfect unless you can spend weeks rehearsing it like you do in theater, and very few films have that luxury. I would say that none of the scenes we shot were what we had written because stuff happens. We have some scenes that have six or seven actors and they’re all interacting and moving around and talking to each other. It never exactly turns out how it was written; someone will skip a line or forget a line or say a different word so it has to be very organic and everyone needs to be working together. It’s also really fun and beautiful to see. The fact that none of the takes are perfect brings a bit of that into it as well. The happy accidents.

The cinematographer’s job is to interpret a script, and that is done through photographic tools: lighting, filtration, lens choice, depth of field, etc. In VR, it is really the director’s decision where he/she wants to stage the action and the camera. However, others argue that the cinematographer’s role in a VR project is almost the same as in traditional filmmaking, in the sense of being the visual storyteller for the director and helping get his/her vision across within this 360° space. There is also an added layer of technical understanding that goes beyond just knowing the camera: it is understanding all of the elements that go into creating a VR project.

Scouting Locations

The importance of location and world building is described at length in Chapter 6. It is useful to scout potential locations with these elements in mind, and to take photos in 360° (using, for example, the Ricoh Theta or Samsung Gear 360). Viewing these photos in a headset can help determine whether a location is “360° ready” or not.


Eve Cohen, Director of Photography, “The Visitor VR” I find myself on VR projects needing to know more technically than I would have before. But I’m hoping that it gets to a point where I can actually just be a cinematographer the way that I feel like I’m the best version of myself, which is just to help get the vision of the director across onto the screen, or onto the sphere. As a DP, I have a lot of conversations with the director about how we’re going to guide the story. It’s not so much about working with the director to just make a frame anymore. The DP really works with the director to help guide this story visually. They’re guiding the actors and they’re guiding the live action that’s happening around the camera. But as a cinematographer, you’re really in control of the entire sphere and the images and the way that people are looking. For example, if an actor is crossing left to right, are they going to be moving from a shadowy side of the world to a brighter side of the world? Or maybe as they’re walking to the right that’s going to be a cut point. And the next edit that we get to, what is that frame going to look like? And how are we going to fade into that one, using lighting perhaps? There are documentary projects and television projects that say you don’t need a cinematographer also. And if that’s the kind of project you’re making where you don’t want that role to exist, you have to know what it is that you’re missing. When it comes to VR, if you’re on a project that requires a very small crew and your director is skilled enough to also operate a camera and train their brain to think the way a DP does, then maybe you don’t need one. But just because it’s VR doesn’t mean you don’t need that creative collaborator who is going to be able to, say, let the director really do what they need to do, and then get all of the cameras, and all of the operators, and tech[nician]s, and really oversee the camera process. And if you just have a tech person who’s working on the camera, they’re not necessarily trained in the creative visual collaboration that goes along with making really great projects. I think a DP is really essential. I think it’s always great to have a collaborator in anything that you do, and you have another brain to make the film even better.

Paul Raphaël, Co-Founder/Creative Director, Felix & Paul Studios For “Miyubi,” the film is set in 1982 and takes place in a house. We decided to find a real house and re-did the decoration of almost every room in the house. People who were there on the day of the shoot thought, “How did you find a house that still looks like it was stuck in the 1980s?” There was very little that seemed fake. In VR, the camera ultimately becomes a person, and you really need to be able to convince the viewers they are there for real as well.

Figure 7.6  “Miyubi” ©Felix & Paul Studios

Rehearsals

If the director and the camera crew have not shot VR before, extensive tests and training are strongly recommended. The crew must learn the new language of VR and be prepared to face any technical and narrative challenges before the first shoot day. When rehearsing, the director should position him/herself where the VR camera will be, thus enabling him/her to fine-tune the blocking in the 360° space. Play with different heights for the camera and different staging until the scene feels organic and flows naturally. This process can also help the actors/actresses to understand that the VR camera is indeed the head of the participant and adjust their behavior accordingly.

Storyboard and Previz

Paul Raphaël, Co-Founder/Creative Director, Felix & Paul Studios Once we have the location, we would adapt our staging diagrams and then start rehearsals with the actors, but these rehearsals were not done on location. For “Miyubi,” we did rehearsals in Montreal first, while our actual location was outside of Montreal, about an hour away. We discovered a lot of things and adapted our staging accordingly. Then we had rehearsals on location, and finally a couple more the day we were shooting. In total, we had three sessions of rehearsals.

Richard L. Davis, Screenwriter The next generation of screenwriting software should directly incorporate elements of previz. This new kind of screenwriting software might look like a simplified game engine or a markdown language that compiles into an environment using a standard collection of objects and actors. Whether a writer is creating on a flat screen or directly in a virtual environment, the writer should have the ability to directly associate action and dialogue with a mock-up scene.

Figure 7.7  “Tilt Brush” VR app by Google

Richard Davis evokes the possibility of building a VR script within a game engine itself. This could soon become a reality thanks to software such as Mindshow, which makes the process of creating stories in VR simple and accessible. One can also use VR apps such as Google’s Tilt Brush and Quill (for HTC Vive and Oculus, respectively) to draw storyboards in VR and in three dimensions. You can also use Unity or Unreal to build a VR app that serves as a previz. This may seem overly complicated, but it can be a powerful asset when raising money for the final project.

Production

A Day on a VR Set

You made it! You have successfully written your VR project, raised money, and hired the best VR crew and great actors/actresses. It is now shooting day one. Shooting virtual reality is very different from shooting traditional cinematography. First, there is no behind-the-camera. There are a lot of things to consider in that: the number of crew you have, where they are going to hide, and not being able to have lighting rigs and a huge amount of equipment around, because you are either going to have to come up with a really tricky way of replacing or removing it, or you are going to have to tear it down and light organically. It impacts the way one lights and designs a scene, as well as how the set is run (where do you hide the video village? Crafty?). However, more importantly, it changes the relationship between the director and the actors/actresses, who often find themselves alone with the camera when it comes time to roll. Some of the current VR solutions do not have live VR monitoring. Therefore, the director and the rest of the creative team must find ways to look at the takes without being seen by the camera. Sometimes one person can sit right underneath the camera if it is not moving. Another possibility is to be in the same room with the smallest footprint possible and to shoot a plate that will replace this portion of the sphere in postproduction. Of course, this add-on comes at a cost. Another option is to use an additional small VR camera which can broadcast live, such as the Samsung Gear 360 or the Teradek Sphere system (see Chapter 2 for more details about VR cameras).

Maxwell Planck, Founder, Oculus Story Studio, Producer, “Dear Angelica,” “Henry” We’ve found it is helpful to storyboard out an idea, but it is dangerous to spend a lot of time discovering story in this process. It can be misleading to feel good about storyboards or 2D animatics when you’ll discover that the same story and pacing feels wrong when adapted to VR. It is better to quickly move to one of the three previz techniques we’ve used over the course of making “Lost,” “Henry,” and “Dear Angelica.” The first technique is to use a radio play. By crafting a paced audio experience that’s decoupled from visuals, it’s easier to close your eyes and imagine how an immersive experience could emerge from a radio play. We feel like you get better signal on having a good radio play turning into a good VR experience. The second technique is from game development and it’s called gray boxing. Similar to how a story layout department works in computer animation, gray boxing is the work of creating rough blocking of the story and interaction mechanics by using simple shapes and stand-in characters. It’s a great way to iterate quickly, but it’s sometimes hard to project how ugly proxy sets and characters will turn into an experience that will feel good. I believe with time, as we see more projects from beginning to end, we’ll have more confidence to project how interesting ideas presented as gray stand-ins will eventually turn into a compelling experience. The third technique we’ve just started working with is to have an immersive theater acting troupe workshop a script, and then over the course of several days, act out the experience with our director acting as the visitor. We can record the experience through several “playthroughs” and use that coverage to edit together a blueprint of what we would build in the game engine.


Randal Kleiser, Director, “Grease,” “Defrost” The fact that the crew is isolated from the shooting experience is a bit of a drag for them. In “Defrost,” I cast myself as a character so that I’d be on the set and able to watch and direct the actors. It isn’t easy to muster up enthusiasm for the day’s work when the crew is in the dark. I played a wheelchair nurse pushing the viewer along . . . so I was able to fulfill a longtime wish of being a dolly grip. By being behind the viewer, I knew that not many of them would turn around and watch me, but I had to mask any reactions to the performances, which was not easy.

Paul Raphaël, Co-Founder/Creative Director, Felix & Paul Studios In one of the last scenes of “Miyubi,” there’s a remote-controlled robot and the “puppeteer” had to be in the room to see it. He was therefore in the frame and I think (co-director) Felix probably was there, too, in an angle where very little action happened. I think even the sound guy was there. It was one of the scenes that we shot where there were the most people in the shot that weren’t supposed to be there. Sometimes it’s more complicated like, for example, when we were shooting “Sea Gypsies” in Borneo on the open sea, then often we’re on a boat that’s far enough that you can’t really tell that it’s a crew with people watching monitors on it. Felix and I will also be there for the first few rehearsals on set and then we will split, with Felix staying close to the camera and me going behind the monitors where we basically have a combination of a 360° unfolded view and a security cam-style version of the scene. The person who stays next to the camera has to be very close to the camera or somewhere that’s a dead angle. VR is the perfect medium to be two directors because there’s so much going on in all directions. Felix and I have been working together for 12 years. The first seven or eight years of our career where we were making movies, we did it because there was a great synergy between us, but we were never using ourselves at full capacity. But then as soon as we started working in VR, it all made perfect sense. In some cases, we even brought on a third director. In the case of the Cirque du Soleil series we’ve been doing, we’re working with François Blouin who has a lot of experience with Cirque du Soleil and really knows the shorthand of the acrobats and the performers.

Figure 7.8  Randal Kleiser on set of “Defrost VR”

In the example shown in Figure 7.9, the VR camera is a Google Odyssey, but you will notice that a small Samsung Gear 360 was also attached to the rover to deliver live VR monitoring to the director and the clients. The operator wears a McDonald’s costume, which allows her to pilot the rover while blending into the frame. When shooting using the “nodal” technique, you do not have to worry about these issues, as parts of the sphere are shot separately using a normal camera. When designing the shots, one must keep in mind the constraints inherent to the VR stitching process described in Chapter 2, and respect a minimum distance to the camera. Most of the current VR cameras can present stitching issues when objects or people come within 4–6 feet of the camera. It is vital to know the workflow well and make decisions regarding the minimum distance before the first day on set.

Cinematography

Figure 7.9  Celine Tricart on the set of “Building a Better McDonald’s, Just for You” – photo credit Benjamin Scott

Liddiard’s description of the work done on “Help” to select the right camera/lens setting and overcome challenges due to the nature of 360° filmmaking shows how technical VR is. An experienced VR DP or supervisor is a must, especially for ambitious projects such as “Help.” However, VR is a challenging medium for directors of photography. Not only do most current VR cameras lack the dynamic range and quality of professional film cameras, but they also provide limited choices of lenses and filtration. In the case of documentaries, we usually rely on natural lighting, with the exception of interviews where we will do some lighting and then erase the lights. For narrative, we will almost always light a scene through a mixture of practicals, natural lighting, and lights that we paint out in post-production. The lights can either be up out of the way on the ceiling, which can be relatively easily plated, or somewhere in the frame that has the least amount of crossing.

Gawain Liddiard, Creative Director, “Help,” The Mill For “Help,” we used a RED Epic with a Canon 8-15 lens. We found that the geometry was really good. And the 8-15 we can set up in such a way that you are cropping slightly. We had the cameras mounted in portrait so that you had the 180° sweep vertically, but then not the full 360° horizontally, and that’s actually why we went with four cameras. If you had the 360° sweep, you could knock it down to three cameras, but we wanted that boost in resolution, so at that point, there’s the decision to go with four cameras versus three. And so by going with four, you can get a lot more resolution – it’s not just that you had an extra camera, it’s the fact that you can allow a larger coverage of your film back. The only issue with this lens is that it’s an F4 so we had to boost the ISO. We involved the guys from the R&D department at RED. Discussed all the settings that we could possibly tweak to push out as much noise as possible. With such a slow lens, you want to pump your environment full of light, but in the VR world, it’s a really hard thing to do. Any light you put in, you’re gonna have to paint it out. So it’s a real catch-22 if you want a lot of light, but where do you put those lights? And not just seeing the light, but something we’ve found in proof-of-concept tests that we did is the camera shadow is horrendous. You really have to dance around your camera shadow because it’s almost guaranteed


Figure 7.10  From: Behind The Scenes: Google ATAP “Help”

that it’s going to ripple across an actor at one point. Obviously the camera shadow moves with the camera itself, so there’s no motion blur on your camera shadow and you get this horrendous, crisp, perfect image of your camera rippling across your scene that has to be taken out later on. So it was a big consideration and it was something we just had to accept, the fact that it was a relatively slow lens and another reason why we’re thankful that the RED camera has the range to sort of pull that up. We really pulled it up to that almost breaking point where you did start to see unpleasant grain. Excerpt from “fxpodcast #294: Making the 360 degree short HELP,” Mike Seymour, Sydney University, Fxguide.com

If the camera is moving, we often use a motion-controlled dolly, which can either be programmed or remote controlled. Depending on the surface you are on, you can get a relatively close-to-perfect repetition. Achieving beautiful visuals can be a challenge given all of these limitations. Some cinematographers find ways around them by picking different VR cameras depending on the light conditions and the type of shot needed. For example, three different VR systems were used when shooting “Under the Canopy,” a documentary about the Amazon rainforest. The daytime shots were made using the Jaunt One; the night-time shots with a VR rig that was custom-made for the Sony A7S; and the aerials were shot using a GoPro rig, as the Jaunt One was too heavy for the drone. The key is to adopt a more “documentary” approach to VR cinematography, even for narrative projects. On the bright side, it can be fun and exciting to find ways of making the best out of the current VR cameras and to use the subtleties in the lighting to guide the participant’s gaze to the POIs. When moving the VR camera, extra care must be taken to avoid exposing the viewer to motion sickness. Movement itself is not the issue; rather, discomfort is caused by acceleration (change of speed), rotation of the sphere, and variations in the horizon line. Moving shots in VR are possible but challenging to do right. Motion-controlled dollies, cable-cams, or drones often provide the answer.
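To make the acceleration point concrete, here is a minimal sketch (not from the book’s workflow; the comfort threshold and function name are arbitrary placeholders) that scans a sampled camera path and flags the frames where the speed changes abruptly, which is where viewers tend to feel discomfort:

```python
# Sketch: flag abrupt speed changes in a sampled camera move.
# The 0.5 m/s^2 threshold is an arbitrary placeholder, not a published comfort limit.
import numpy as np

def flag_uncomfortable_frames(positions, fps=30.0, max_accel=0.5):
    """positions: (N, 3) camera positions in meters, one row per frame.
    Returns the frame indices where acceleration magnitude exceeds max_accel."""
    positions = np.asarray(positions, dtype=float)
    dt = 1.0 / fps
    velocity = np.diff(positions, axis=0) / dt       # (N-1, 3) in m/s
    accel = np.diff(velocity, axis=0) / dt           # (N-2, 3) in m/s^2
    accel_mag = np.linalg.norm(accel, axis=1)
    return np.where(accel_mag > max_accel)[0] + 1    # rough alignment to frame numbers

# Example: a 2 m dolly slide that starts and stops instantly (flagged at both ends)
path = np.zeros((120, 3))
path[30:90, 0] = np.linspace(0.0, 2.0, 60)
path[90:, 0] = 2.0
print(flag_uncomfortable_frames(path))
```

A pre-programmed motion-controlled move can be checked this way before the shoot; steady cruising speed passes, while hard starts, stops, and direction changes are exactly the moments that get flagged.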


Eve Cohen, Director of Photography, “The Visitor VR” If I had all the control in the world, the sets would be built and I would be able to build my lighting into the set. It’s a lot of production design, and it’s a lot of very strategically placed props in the frame to hide your lights. The other end of the spectrum is that you’re outside and you don’t have any control over the lighting. You are left with choosing the right time of day and knowing where to place the camera so that the flares aren’t too extreme. But there are also ways of lighting that need to be cleaned up later if you have a budget that can allow for that. I tend to use lights that have a much smaller footprint, or lights that I can hide really easily. Whether that be light strips, like LED light strips that are getting wrapped around on the back side of a very thin lamppost to bring up the opposite side of where the light might need a little bit more ambience. I end up using a lot of LEDs, and a lot of practicals that are on dimmers. And a lot of production design to hide the cables. The production design department needs to do more than it normally would. A lot of times in flat movies they don’t actually have to light things, whereas in VR a lot of times these practical lights have to actually light things. I filmed a segment of “Memory Slave” at the Ace Hotel Theater, in downtown LA. There’s this whole lighting grid that’s set up because you’re inside of a theater space. So part of the production design and actually knowing that we were in a location where if you see all of our actual lights and units, it’s okay to see them because they’re on a lighting grid, and you know you’re in a theater, and that is part of the scene. I actually just repositioned almost every one of the lights that was on this grid to be doing something in the frame that I needed it to do.

Figure 7.11  “Memory Slave” ©Wevr, Seed&Spark

Halfway through the scene, there’s a lighting gag where everything turns off and just a spotlight comes up. And all of that was only possible because we staged this part of the scene in a theater space with somebody sitting up on the second level of the balcony. I don’t know that I would have been able to do that hiding of all of the lights. I probably would have had to paint them out.

Dailies

In VR, just as in traditional film/TV, reviewing dailies is a great way of making sure the needed shots were captured, and to learn from the day’s work. VR scenes can look and feel very different when watched stitched in a headset versus on a flat screen or a mobile 360° player. Most VR cameras come with dedicated stitching software that can render a low-resolution rough stitch quickly at the end of the day. This process can also potentially highlight technical issues and allow for a reshoot the next day, if needed.

Post-Production

It’s a wrap! Time has come to start post-production, which is where the real challenges of VR lie. The successive steps of a traditional VR post-production workflow are described below. Technical details and software are listed in Chapter 2.

Figure 7.12  Jaunt One camera mounted on a cablecam system to shoot “Under the Canopy” – photo credit Celine Tricart

Alex Vegh, Second Unit Director and Visual Effects Supervisor To move the camera in “Help,” we started off with a cart system. We found that we were better moving the camera from above. We had a great deal of elevation changes. At one point we talked about doing motion control [moco], but we really found that there wasn’t a moco unit out there that would give us the freedom of movement and the length of movement that we wanted. So we wound up using multiple cable-cam systems. It was all intended to try and capture everything in one take. There are three different sections – China Town, subway, and the LA River. The three sections were then tied together and seamlessly blended. One of the big things was allowing the director to have the creativity to do what he wanted to do, and not have technology get in the way of that. One of the things we did learn along the way was that the cablecam systems are very capable – they’re generally pre-programmed and you can slow it up or speed it down, but it’s always going to go along the preprogrammed path. Normally the actors would act and you have the camera following them. But our camera is going to move where it’s going to move – there’s no adjusting for that. So it’s a different sensibility. Excerpt from “fxpodcast #294: Making the 360 degree short HELP,” Mike Seymour, Sydney University, Fxguide.com

Figure 7.13  From: Behind The Scenes: Google ATAP “Help”

Pre-Stitch

Pre-stitch is a purely technical step consisting of rendering a rough stitched version of the footage in preparation for editing. It is similar to making a rough previz of the future CG shots to help the editor cut the scenes together. Most stitching software can render low-resolution/low-quality stitched footage, which will allow the editor to go through the footage and select the best takes.

Editing

During the editing process, the director and the editor work together to pick the best takes and piece the film together. The pace of a VR film can feel very different when watched in a headset compared to when it is watched in 360° on a flat screen. It is recommended to check the edit in a headset on a regular basis. It is also during editing that the “POI-matching” technique can be applied and the transitions from shot to shot be tested, as described in Chapter 6.

Final Stitch

Once the editing is locked, the final stitch process can start. This is the only post-production step that is unique to VR and can be time-consuming and expensive. It is vital to have tested the entire workflow and, more specifically, the various stitching solutions before entering production. Depending on whether the film is in 2D or stereoscopic 3D, the chosen VR camera, and the complexity of the shots, the stitch can become extremely complicated. The creative team reviews the stitched shots once they are ready, but it is advisable to involve the VR supervisor/DP in this process, as some stitching imperfections can be hard to detect. Someone who has experience in VR will be able to spot these and make sure the final stitch is as perfect as it can be. Some plug-ins specifically for VR, such as Mettle’s Skybox or Dashwood, provide tools to de-noise, sharpen, glow, blur, or rotate the sphere in a way that is compatible with Adobe or Final Cut Pro.
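As an illustration of the “rotate the sphere” operation mentioned above: on an equirectangular frame, a yaw re-orientation amounts to a horizontal pixel roll with wrap-around. A minimal sketch, assuming the frame is already loaded as a numpy array (the function name and sign convention are illustrative, not tied to any particular plug-in):

```python
# Sketch: yaw-rotate an equirectangular frame by rolling pixels horizontally.
# This works because the left and right edges of an equirectangular image meet
# behind the viewer, so a horizontal roll with wrap-around re-centers the sphere.
import numpy as np

def rotate_sphere_yaw(equirect, degrees):
    """equirect: (H, W, C) equirectangular image. The sign convention of 'degrees'
    depends on the player, so check the result in a headset."""
    width = equirect.shape[1]
    shift = int(round(width * degrees / 360.0))
    return np.roll(equirect, shift, axis=1)

# Example: re-center a 2048x1024 frame so the action sits 90 degrees to one side
frame = np.zeros((1024, 2048, 3), dtype=np.uint8)
recentered = rotate_sphere_yaw(frame, 90)
```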

Figure 7.14  Stitching imperfection

VFX

In certain cases, the stitching of specific shots can be too difficult for traditional stitching software. It is then necessary to use advanced VFX tools to do this, such as The Foundry’s Nuke and its “CARA VR” toolkit. Other types of VFX often used in VR include stabilization (to prevent motion sickness), tripod/drone/cable-cam replacement, and plates to hide lights or crew, etc. More traditional VFX, such as those used in film/TV, come second. A stunning example of VR VFX work is Google Spotlight Stories’ “Help,” directed by Justin Lin.

Color

Color is a post-production step carried out under the supervision of the director, the director of photography, and the colorist, yet the VR supervisor/DP can be valuable at this phase to share his/her knowledge of VR and, more specifically, the HMDs. The current HMDs have a much lower quality than calibrated digital cinema screens in terms of resolution, dynamic range, and color reproduction. Most VR headsets have OLED displays, which have known issues when it comes to true black and smearing. It is vital to adjust the color grading and the black level when exporting for HMDs.

Gawain Liddiard, Creative Director, “Help,” The Mill Something we found throughout the project was almost every tool broke. It was quite bizarre that everything we came across just didn’t work as expected, and the blur is one of the examples I always go to. Something as simple as a 2D blur doesn’t work in an equirectangular environment where you want to blur the horizon less than you blur the top and bottom. Part of it was just being diligent, our 2D team had to be incredibly careful, and simple tasks such as roto[scoping] just took longer. And then the other side to it was there wasn’t sort of an overarching, fix everything, R&D tool that was created. It’s much like plugging all those little gaps of writing, lots of little small tools that allow us to tie it together. Excerpt from “fxpodcast #294: Making the 360 degree short HELP,” Mike Seymour, Sydney University, Fxguide.com

Eve Cohen, Director of Photography, “The Visitor VR” I spent a while researching standards in television, but nobody has come up with this standardization for VR color grading. When you do export a VR file, it would go onto Gear, it would go onto Oculus, it would go onto the Rift, or HTC, and I kept asking, “Well, don’t they all look different? They’re all going to look different. How do we grade for this output? How do we grade and export for this output?” And in normal filmmaking that’s pretty standard that you do a different digital path than you would for a theatrical. But we don’t have that yet in VR, but that needs to happen. If you’re doing a car commercial and you have a client who is really particular about the color of that car, it is going to be different on each headset. And there’s no way to control that right now.

Gawain Liddiard, Creative Director, “Help,” The Mill There was a huge amount of back and forth when coloring “Help.” It was great that we did all the color here in-house. Greg Reese did all our color for us. He would work with grading a certain area that he knew he wanted to grab hold of separately, and then we’d work back and forth between Nuke and him. And then there was also ways that we found to solve the edge-of-frame issue. We would essentially overscan the edge, the entire frame, take a small sliver of left-handed frame and stick it on the right side, and a small sliver of the right-handed frame and stick it on the left side, so you had this safe zone buffer. Excerpt from “fxpodcast #294: Making the 360 degree short HELP,” Mike Seymour, Sydney University, Fxguide.com
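The overscan trick Liddiard describes (copying a sliver from each edge of the equirectangular frame onto the opposite side before filtering or grading, then cropping it back off) is easy to prototype. Below is a minimal sketch under the assumption that the frame is a numpy array and the operation is any kernel-based filter; it is an illustration of the idea, not tied to Nuke or any particular grading tool:

```python
# Sketch of the "safe zone buffer" idea: pad an equirectangular frame by wrapping
# its left/right edges, run a filter or grade, then crop the padding back off.
# This keeps kernel-based operations (blur, sharpen, grain) from leaving a visible
# vertical seam where the two ends of the sphere meet.
import numpy as np

def filter_without_seam(equirect, operation, pad=64):
    """equirect: (H, W, C) image; operation: any function image -> image of same size."""
    padded = np.concatenate(
        [equirect[:, -pad:], equirect, equirect[:, :pad]], axis=1)  # wrap both edges
    processed = operation(padded)
    return processed[:, pad:-pad]                                   # remove the buffer

# Example with a crude horizontal box blur standing in for a real filter
def box_blur(img, k=9):
    kernel = np.ones(k) / k
    return np.apply_along_axis(lambda row: np.convolve(row, kernel, mode="same"), 1, img)

frame = np.random.rand(512, 1024, 3)
seamless = filter_without_seam(frame, box_blur)
```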


Figure 7.15  Color grading session for “A 360 VR Tour of the Shinola Factory with Luke Wilson” – ©ReelFX

One possible workflow for VR color grading is as follows. The first linear “match” grading is done before or during the final stitch to make sure all the cameras are even. Next is the linear primary and secondary grading within the spherical frame composites. Finally, the spherical frame is sent to “traditional” non-linear grading (in DaVinci, for example). Beware when grading an equirectangular unwrapped frame: any grades that get close to the edges will create a visible line when the sphere is mapped.

Sound

Sound post-production happens in parallel to the final stitching/VFX/color steps. It usually starts right after the edit is locked (although some preliminary sound design and music work can be done beforehand). When it comes to sound editing, there are no differences between VR and a more traditional post-production: sound recordings are selected and assembled in a timeline. Things become more complicated when it comes to sound mixing. The positioning of the various sound elements in the VR sphere is a delicate, yet important, process.

Anthony Batt, Co-Founder and Executive Vice President, Wevr The early attempts that we were making were very visually focused. What we’ve learned through doing that is that humans are into relations and in our real lives, audio plays a much richer input device than we’ve realized. That’s a great creator of a simulation, whether it feels like a film or it feels like a video game, or it feels like anything. Not paying attention to the richness of how audio needs to be placed and played back is a big miss. We have learned and recognized and respect that. But it’s different than having Hans Zimmer’s score or something for us. It’s more about creating the right room tone and have the right spatial audio. A lot of people that jump into making VR really focus on the visuals first. They don’t realize how important sound is.

Dialogue, sound effects, soundtrack, and voice-over can be used as “cues” to influence the participants and set the POI. Just like in real life, sounds often precede visuals and allow us to create a mental representation of our surroundings, beyond the limits of our field of vision. Using the binaural or ambisonic technologies described in Chapter 2, we are able to create a realistic and precise sound sphere to accompany VR films. Unfortunately, sound often comes as an afterthought even though it is key to creating presence and immersion in the VR story world.
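To make “positioning sound elements in the sphere” concrete, here is a minimal sketch of first-order ambisonic panning using the standard B-format encoding equations. It is an illustration only, not the workflow described in Chapter 2; a real mix would be done in a dedicated spatial audio tool, and the function name is hypothetical:

```python
# Sketch: encode a mono cue (e.g., a footstep placed behind and to the left of the
# viewer) into first-order ambisonic B-format (W, X, Y, Z).
import numpy as np

def encode_first_order(mono, azimuth_deg, elevation_deg):
    """mono: 1-D array of samples; azimuth 0 = front, 90 = left; elevation 90 = up."""
    az = np.radians(azimuth_deg)
    el = np.radians(elevation_deg)
    w = mono * (1.0 / np.sqrt(2.0))       # omnidirectional component
    x = mono * np.cos(az) * np.cos(el)    # front/back
    y = mono * np.sin(az) * np.cos(el)    # left/right
    z = mono * np.sin(el)                 # up/down
    return np.stack([w, x, y, z])

# Example: a one-second 440 Hz tone placed 120 degrees to the left, slightly above
sr = 48000
tone = 0.2 * np.sin(2 * np.pi * 440 * np.arange(sr) / sr)
bformat = encode_first_order(tone, azimuth_deg=120, elevation_deg=15)
```

The B-format channels can then be rotated to follow the viewer’s head and decoded binaurally at playback, which is what keeps a cue anchored to its place in the scene rather than to the headphones.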

Titles

Titles, subtitles, and supertitles (supers) in VR can be hard to place in space (and in depth, in the case of stereoscopic 3D projects). If the POI and the corresponding title/subtitle are placed too far from each other, the participants might not see the title at all. Ideally, all the added elements should be reviewed and adjusted in an HMD. Sometimes a title is placed in multiple positions in the sphere to ensure it does not get missed, as in the example in Figure 7.16. Another possibility is to use a game engine to “host” the VR film and use its tracking capacities to make titles and supers appear in the participants’ field of vision, no matter where they are looking. Just like the previous post-production steps, quality control of the title placements should be done by someone who is knowledgeable about VR, which can only make the final project better.
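The game-engine approach boils down to reading the viewer’s head orientation every frame and anchoring the super at a fixed angular offset from the gaze. A minimal sketch of that placement logic follows; the coordinate convention, names, and offsets are illustrative, and in practice this would run inside Unity, Unreal, or a WebVR player rather than as standalone Python:

```python
# Sketch: compute where to anchor a subtitle so it always sits just below
# the centre of whatever the viewer is currently looking at.
import math

def subtitle_placement(head_yaw_deg, head_pitch_deg,
                       pitch_offset_deg=-10.0, distance_m=2.0):
    """Returns an (x, y, z) anchor around the viewer: same yaw as the head,
    slightly below the gaze, at a comfortable reading distance.
    Uses a right-handed frame with -Z forward; engines differ, so adapt the signs."""
    yaw = math.radians(head_yaw_deg)
    pitch = math.radians(head_pitch_deg + pitch_offset_deg)
    x = distance_m * math.cos(pitch) * math.sin(yaw)
    y = distance_m * math.sin(pitch)
    z = -distance_m * math.cos(pitch) * math.cos(yaw)
    return (x, y, z)

# Example: viewer looking 45 degrees to one side and slightly down
print(subtitle_placement(head_yaw_deg=45.0, head_pitch_deg=-5.0))
```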

Distribution

The audience for virtual reality is currently limited to early adopters due to the price and technological limitations of HMDs. However, the fast and steady growth of 360° platforms such as YouTube 360 and Facebook 360 considerably opens the market for VR creators who are willing to have their content seen on a flat screen instead of a headset. Most of the current high-end VR content is financed by the various distribution platforms themselves. This brings the VR industry closer to the TV industry than to the cinema system of distribution. Content is paid for upfront and then belongs to the platform, instead of being created in the hopes of finding distribution and monetization after the fact. This approach might change in the months or years following the publication of this book, but it is the case as of 2017.

Steve Schklair, Founder, 3ality Technica and 3mersiv This is where we get deeply into the realm of opinion. I am still skeptical about the viability of monetizing narrative scripted VR content. In today’s world, a majority of professional VR content is coming out of corporate marketing budgets. You could possibly monetize educational material.

Figure 7.16  “A 360 VR Tour of the Shinola Factory with Luke Wilson” directed by Andrew & Luke Wilson – ©ReelFX

You could possibly monetize industrial and corporate projects. You can monetize product demos as they come in on a work-for-hire basis. The traditional scripted narrative short film that you view for entertainment purposes, I don’t know how you monetize that, but since so much is being done in that area maybe I’m just missing something. I’ve been involved with a few projects of this nature, but it has been difficult to monetize those. Maybe someone in the future will figure it out or when the audience hits critical mass it becomes obvious. The idea behind a lot of companies is that there’s going to be a “Netflix” of VR, an entertainment channel. I’m having trouble buying into that because what happens to your business model when Netflix wants to be the Netflix of VR? Or maybe it’s YouTube being the Netflix of VR, if you want to use that as a way to describe it. I’m not sure what the potential is for monetizing or not. You’d need a very large audience in order to monetize content in that way. I’m not sure they’re going to get that size of an audience for this type of content.

Festivals

A lot of prestigious festivals now have a VR selection, which has encouraged storytellers to create bold VR experiences that push the boundaries of the medium. One of the most renowned is the Sundance Film Festival and its “New Frontier” program. New Frontier has been accepting VR submissions for many years, but the overwhelming quantity of virtual reality submissions has led to the creation of VR-specific venues that are now ticketed. With Sundance leading the way, additional festivals are adding VR.

Sundance

The Sundance Film Festival committee encourages the submission of inventive, independently produced works of fiction, documentary, and interactive virtual reality projects. Projects must be viewable via Oculus Rift, HTC Vive, Samsung Gear VR, or Google Cardboard. The late submission date is usually in August.

Figure 7.17  Sundance festival-goers immerse themselves in new-media artist Oscar Raby’s virtual-reality piece “Assent.” Photo by Neil Kendricks

SXSW

South by Southwest is an annual film, interactive media, and music event that takes place in mid-March in Austin, Texas. The last date to apply is usually in October. Tribeca Film Festival

Tribeca exhibits experiential work throughout the festival to showcase new forms and uses of media, including virtual and augmented reality. The last date for submission to this category is usually in December. Atlanta Film Festival

Atlanta Film Festival is now open for groundbreaking works in virtual reality. The late deadline for submission is October. Cinequest Film & VR Festival

This vanguard film festival is organized in San Jose, Silicon Valley. Cinequest fuses the world of the filmed arts with that of Silicon Valley’s innovation to empower youth, artists, and innovators to create and connect. Dubai International Film Festival

“DIFF” is a leading film festival in the Arab world. Since its inception in 2004, the festival has served as an influential platform for Arab filmmakers and talent at an international level. Kaleidoscope VR

Kaleidoscope is not only a film festival but a community for virtual reality creators to share their work, build a fan base, secure funding, and discover new collaborators. There is no deadline for submissions; Kaleidoscope accepts projects on a rolling basis. World VR Forum

The World VR Forum is a Swiss non-profit organization dedicated to advancing the virtual, augmented, and mixed reality industries.

Shari Frilot, Founder/Chief Curator of New Frontier at Sundance Film Festival The things I look for as a film programmer are a bit different than what I look for in VR. With film, it’s rarely different because it’s a very mature medium and it’s a mature industry. With VR, every year it’s different. Last year, in 2016, I looked for innovation and got excited about demos that showed a new kind of technology. For example, I got really excited about volumetric capture. There wasn’t a lot of story there but I knew what they had achieved was significant. Sometimes, we bring technologies that are not necessarily driven by story, but to expose our storytelling audience to these technologies so that maybe they could do something with it. Maybe it could expand their storytelling culture. Now, last year, I knew going in judging that I did not want to show any demos. I really wanted to show work that was breaking ground in storytelling. There are so many films about Africa that it was really hard for a film about Africa to make it to the selection. There was one, but it wasn’t going to be ready on time for the festival. The VR experiences had to be the best of what was out there that I had seen of its kind. I drew a lot from my own experience looking at work and listening to my response and trusting it. Trying to learn how to trust it. My instinct is to do both this coming year. There are new trends that are coming up that I know are going to be important. I think I’ll continue to separate VR as an independent platform and work to really serve the community that is now coming to the festival that, basically, doesn’t even see films. I’m trying to figure out how to get them to see films as well as VR!

Many more VR festivals and events can be found online and with the help of VR-focused social media groups. Festivals are a great way of getting innovative and artistic VR creations seen by wide and diverse audiences.

Choosing the Right Platform/Monetization

Most of the current distribution platforms are listed in Chapter 2 (for live-action VR) and Chapter 3 (for game engine-based VR), but many more are likely to have been released since the publication of this book. Once the festival round is completed (if any), it is time to pick the right “home” for your VR project. Unfortunately, as noted above, most of the high-end VR content is produced and financed by the distribution platforms, such as Oculus or Within and its “Here Be Dragons” production department. Even if none of the established platforms is financing your project, you should have discussions with them as early as possible and try to put a distribution deal in place. If no distribution has been agreed on, you will likely have to publish and publicize your project yourself, either through non-curated platforms such as YouTube 360 (for non-game engine content) or by packaging it into a downloadable app. As of the first quarter of 2017, only 30 VR apps had made over $250,000 on Steam, according to Valve, out of more than 1,000 supported titles, and only a few VR titles have seen more than $1 million in revenue. However, Steam is a high-end gaming platform compatible only with expensive headsets such as the HTC Vive. When it comes to narrative live-action content, it is complicated to assemble precise numbers. Currently, most VR content is accessible for free to encourage audience growth. Even high-end and expensive content such as that from Felix & Paul is free. Once the market reaches its maturity and enough HMDs are sold, it is likely that these platforms will switch to a subscription-based or pay-per-view system. The Transport platform by Wevr has already made this transition: since 2017, users can have access to a number of free curated VR experiences, but can also pay a yearly fee to get access to premium, high-end content. The mobile version is offered at $8/year, and the PC+mobile version at $20/year. Time will tell if this system works and if the market is ready for the switch to a subscription model.

Ryan Horrigan, Chief Content Officer, Felix & Paul While some of the platforms/hardware makers are funding projects (Oculus, Google, HTC, PlayStation), they are doing so in exchange for often lengthy exclusivity periods and with the intent to make the content free for all of their users, avoiding monetization for the time being as they seek to grow. This has been a great way to kick-start the industry for content studios like ours; however, as user adoption continues to spike in the next one to two years, we’ll start to see content financing from other avenues become more readily available. Such as via China, from SVODs [subscription video on demand] like Netflix, Amazon, and Hulu, telecoms with marketing and SVOD dollars like AT&T and Verizon, as well as from the world of independent film and television finance. While some content studios are also distribution platforms that sit on top of Oculus, PlayStation, Daydream, and HTC Vive, we have deliberately remained platform agnostic so that we can be nimble and flexible on a project-to-project basis in terms of how we finance and distribute said projects, working more closely with the hardware makers/core VR platforms. We see a near-term opportunity for models to develop akin to those from independent film and television finance, whereby financing and distribution can be arranged across platforms via pre-sales of various windows and territories. It’s important for content studios to seek monetization (direct-to-consumer TVOD [transactional video on demand] in particular), in order to become self-sustaining entities that can ultimately fund their own productions for wide distribution across platforms, taking the onus off VR hardware makers/platforms such as Oculus and Google, who are surely not interested in funding VR content indefinitely once a self-sustaining market is established. After running many detailed financial models in regards to content financing and distribution, also taking into consideration the current number of active VR users across platforms, and projections for user growth in one year, two years, and beyond, we believe making premium VR series (scripted and nonfiction) will be a profitable business within one to two years, once said content is distributed across platforms in TVOD and SVOD windows.


Ryan Horrigan, Chief Content Officer, Felix & Paul Monetization is coming, but as an industry we must do more to differentiate for the consumer what is branded/marketing-driven content for free, versus high-quality original content worth paying for. The move to episodic content will help delineate between the two, however. Platforms must not wait too long to begin training audiences that premium quality is worth paying for. It’s easy to neglect this in the short term as we all seek user growth, but in the long term it’s the key to a self-sustaining business model. Direct-to-consumer pay-per-view (TVOD) is much more interesting to me than SVOD, as there is more upside perhaps for content creators, which will only help them self-fund their own content. SVOD incumbents such as Amazon, Netflix, and Hulu will play a big part in funding and licensing content, the three of which will also lead the way in terms of VR SVOD business. They already have millions upon millions of people paying $12/month or $99/year for their TV and film content, so as soon as there are enough VR users, we expect them to make and distribute plenty of original VR content. The barrier to entry is quite low for them as they don’t have to acquire new users like others that may attempt SVOD models as VR-only distribution platforms. In other words, these incumbents have an advantage and a pre-existing user base to pivot into VR with.

The Future of VR/Conclusion

Shari Frilot, Founder/Chief Curator of New Frontier at Sundance Film Festival I don’t think virtual reality is going to reduce into a gadget form, and I do hope it stays young for a while because it’s just absolutely energizing to view what’s happening on the creator scene of this medium. It’s almost as if the creators who are dying to tell stories that they couldn’t before, either because they were shut out or they didn’t have the right medium, met a medium that would provoke them to push themselves even further and it’s just exhilarating. It’s like being on an anti-gravity ride watching this stuff grow.

It’s only been five years since the “rebirth” of virtual reality. VR is a medium that is not just storytelling and not just gaming. We are not only talking about traditional ideas of storytelling, we are talking about completely opening up the box on storytelling. VR is headed toward new ways of computing and new ways of communicating with one another that take more from transmedia storytelling than they take from traditional cinema. Will virtual reality survive and become an established and respected medium for storytelling? Yes, but only if storytellers and content creators doing VR understand it and embrace it. There are lessons to be learned from the rise and fall of stereoscopic 3D, which is now reduced to enhancing blockbusters and missed the mark when trying to become a storytelling tool. 3D was sold as something it was not, with taglines such as, “With 3D, you’re in the movie!” The second mistake was the 3D ticket upcharge, which made it unaffordable for a big part of the audience, families in particular. Last but not least, 3D was often an afterthought, the cherry on top of the cake to help market tentpoles and make more money at the box office. We are already making the same mistakes with virtual reality. We publicize badly stitched content or 180° 2D films as “breakthrough virtual reality experiences.” The price tag of most VR HMDs is still extremely high. We shoot stories in VR just as we would a traditional film, cutting between different positions within the same room and moving the camera without any justification or consideration for motion sickness. The few gems published on YouTube 360 are lost among hundreds of questionable 360° videos. Let’s make great VR and let’s try to push the boundaries of this new medium in a thoughtful manner, always respectful of our nascent, yet enthusiastic audience.

Anthony Batt, Co-Founder and Executive Vice President, Wevr I think that as the hardware and time progresses, you’re going to find yourself living deeper and deeper in the simulations. That will eventually move from a 2D form into a 3D form, which is now what we call VR, and it will become more of a simulated life. You’re already spending hours in front of a screen now, be it a 2D screen in your hand and on your computer; then you move to this next level of simulation where you actually have to have a form in which it takes place, where it’s not actually constantly telling you stories. It’s reinforcing, it’s reaffirming what you thought would be important to you. Then, inside of that world, there’ll be other simulations that you can go into that actually are for entertainment, exercise, learning, social. That’s the trajectory we’re on. If you ask a person a hundred years ago, if you said, “Hey, do you think you’ll sit in a chair and look at a screen all day long as your primary job function?” They’ll probably be like, “Are you crazy?” But now, there are hundreds . . . millions, hundreds of millions of people that spend all day long just looking at a screen. That’s already happened. I think that that is just a fact of our lives today. I’m not judging it. I’m not saying that it’s good or it’s bad. Some might say it’s good, some might say it’s bad. I’m just saying it’s just what we do right now as humans. As we move further and further into the screen, it makes sense that the screen surrounds us as the new medium arrives. I think the transition already happened.

List of Interviewees (in alphabetical order)

Grant Anderson, VR Executive Producer Grant Anderson is an experienced entertainment executive and executive producer with decades in the industry. Throughout his career, Anderson has focused on melding imaginative stories with the latest advances in technology, including visual effects, 3D, and virtual reality. Most recently he helped form Jaunt Studios to create leading VR experiences leveraging Jaunt’s cutting-edge technology platform. Prior, he was executive director at Sony’s 3D Center, CG supervisor for Sony Imageworks, a senior producer at Stan Lee Media, and digital artist at Disney Feature Animation. He is a board member of the Advanced Imaging Society and chairman of its VR committee.

Anthony Batt, Co-Founder and Executive Vice President, Wevr Anthony Batt is co-founder and executive vice president of Wevr, the leading virtual reality studio and distributor. He is an expert storyteller who has built extensive audience-driven networks. Anthony was the president of digital at Ashton Kutcher’s Katalyst, a media studio creating programming for television, film, and the social web. Together with Ashton, he co-founded Thrash Lab, a media venture with YouTube. As founder and CEO of Buzznet (SpinMedia), Anthony built one of the most popular pop culture lifestyle companies from the ground up. Earlier in his career, Anthony co-founded and was chief technology officer of big data software company EMC|Greenplum (Pivotal) and co-founded Digital Threads, a web-consulting agency, with Craig Newmark where they created craigslist.org.

Melissa Bosworth and Lakshmi Sarah, VR Journalists and Filmmakers Melissa Bosworth and Lakshmi Sarah are multimedia journalists and 360° video producers. They co-founded Tiny World Productions, a company based in Berkeley, California, committed to innovative and immersive storytelling. They have covered energy, the environment, technology, and policy across the Americas and in Europe and South Asia, producing for Mic, Global Voices, AJ+, KQED, Fusion, and The New York Times. They are currently writing a book on VR documentary/journalism to be completed in 2018.

Jean-Pascal Beaudoin, Co-Founder, Headspace Studio Jean-Pascal is a director of sound design, supervising sound editor, re-recording mixer, and music supervisor based in Montreal. Along with long-time collaborators Felix & Paul Studios, a recognized creative and technology leader in cinematic VR, he co-founded Headspace Studio, the first sound studio entirely focused on virtual reality content. With Headspace Studio, Jean-Pascal has worked with important partners on projects such as the Daytime Emmy® Award-winning “Cirque du Soleil’s Inside the Box of Kurios,” “Jurassic World: Apatosaurus,” “Through the Ages: President Obama Celebrates America’s National Parks,” the White House’s first VR experience, and “Miyubi,” VR’s first long-form scripted comedy in co-production with Funny or Die and Oculus Studio.

Eve Cohen, Director of Photography Eve M. Cohen is a cinematographer whose work ranges from independent feature films to television series, documentaries, and live-action virtual reality. She entered the world of filmmaking through the study of fine art and subsequently cinematography, both at the University of California, Los Angeles. Her cinematography work for films and virtual reality projects has screened in festivals and theaters around the world as well as on many digital, broadcast, and cable platforms. She was the director of photography of the truly independent feature film “Like the Water” (dir. Caroline Von Kuhn), which was the catalyst for the creation of the crowdfunding and distribution platform Seed&Spark. Eve is also a co-founder at Seed&Spark, where they believe the art of storytelling is about expanding imagination – shining a light on a world inside and deepening empathy for the world outside.

Richard L. Davis, Screenwriter Richard L. Davis studied screenwriting and computer science at Loyola Marymount University. He has written scripts for comics and film, as well as 360° cinema and VR games. Instead of pursuing a place in film or television, he sought out opportunities for writers in the revival of VR. His first script written for VR was commissioned by director and stereographer Celine Tricart. Currently, he is working on a VR game adaptation of the novel “Nemo Rising.”

Shari Frilot, Founder/Chief Curator of New Frontier at Sundance Film Festival Shari joined the programming team in 1998 and currently focuses on US and world cinema dramatic features, as well as films that experiment and push the boundaries of conventional storytelling. She is also the chief curator and driving creative force behind New Frontier at Sundance, a program highlighting work that expands cinema culture through the convergence of film, art, and new media technology. As co-director of programming for Outfest from 1998 to 2001, she founded the Platinum section, which introduced cinematic installation and performance to the festival. From 1993 to 1996, Frilot served as festival director of MIX: The New York Experimental Lesbian & Gay Film Festival. During that time, she also co-founded the first gay Latin American film festivals, MIX BRASIL and MIX MÉXICO. Shari is a filmmaker and recipient of multiple grants from institutions including the Ford Foundation and the Rockefeller Media Arts Foundation. She is a graduate of Harvard/Radcliffe and the Whitney Museum Independent Study Program.

Tim Gedemer, Sound Supervisor, Owner, Source Sound, Inc. Tim Gedemer is president and CEO of Source Sound, Inc., a premier audio design and mixing group based out of Los Angeles. He is a 27-year veteran of the Hollywood feature film audio post-production community, with concentrations in music, sound design, mixing, sound supervision, and sound editorial. Tim has received numerous awards throughout his career, including an International Monitor Award and several Golden Reels, and was a member of the “Best Sound Editing” Academy Award-winning team for the film “U-571.” Under his direction, Source Sound has become the premier audio finishing provider to the Warner Brothers, Universal, and Fox feature film marketing groups. Partnering with game audio legend Charles Deenen in 2012, Tim created Source Sound Digital for the purpose of servicing the game and interactive media industry at large. Since that time, the company has been very active in the AAA game business, providing sound design and mixing for “Star Wars Battlefront,” “Call of Duty,” “Halo,” “The Crew,” “Need For Speed,” “Gears of War,” and many other top franchises. In 2014 he began working with Jaunt on their cinematic virtual reality projects, including the Paul McCartney and Jack White concert experiences, as well as the cinematic short film “Kaiju Fury” from New Deal Studios and Academy Award winner Ian Hunter. Also in 2014, he started Calliope Music Design, a boutique music production company servicing the feature film, television, game, and interactive media spheres. Since 2015 he has pushed deeper into virtual reality by working with Samsung, Nokia, Mirada, Wevr, Dolby, and many others, positioning Source Sound as one of the leading providers of VR-specific audio services worldwide.

Harry Hamlin, Actor, "Mad Men," "Defrost VR" Harry Hamlin is an American actor of stage, television, and film. He graduated from Yale University in 1974 with degrees in drama and psychology, and was later awarded a Master of Fine Arts in acting from the American Conservatory Theater in San Francisco. Though awarded an ITT-Fulbright scholarship in acting in 1977, he opted instead to make his feature film debut in Stanley Donen's comedy spoof "Movie Movie" opposite George C. Scott, for which he received his first Golden Globe nomination. Best known for his roles as Perseus in "Clash of the Titans" with Laurence Olivier, and as Michael Kuzak in the Emmy Award-winning TV series "LA Law," he is the recipient of four Golden Globe nominations and one primetime Emmy Award.

Ryan Horrigan, Chief Content Officer, Felix & Paul Studios Ryan is the chief content officer of Felix & Paul Studios, where he oversees content development, strategy, and partnerships. Powered by industry-leading proprietary technology, the Emmy Award-winning cinematic virtual reality studio has a multiyear content deal with Facebook’s Oculus to produce original fiction and non-fiction virtual reality experiences and series, as well as a partnership with 21st Century Fox to adapt their film and TV IP for VR experiences and series. Previous works by Felix & Paul Studios include VR’s first long-form scripted comedy, “Miyubi,” starring Jeff Goldblum; the “Nomads” series; “Strangers with Patrick Watson”; adaptations of Cirque du Soleil’s “Kurios,” “Ka,” and “O”; Universal Pictures’ “Jurassic World: Apatosaurus”; multiple collaborations with President Barack Obama, as well as VR experiences with President Bill Clinton, LeBron James, 20th Century Fox, and Reese Witherspoon.

Jessica Kantor, VR Director Jessica Kantor is an LA-based director, producer, and interactive storyteller. Jessica's credits include a short VR film called "The Archer," which was programmed in the Kaleidoscope VR Film Festival, Cucalorus Film Festival, and the Liege Webfest; and "Ashes," which premiered at the Tribeca Film Festival and went on to play at the Arles VR Festival and the Edinburgh Digital Festival. Jessica has created 360° videos for Miller Lite/Pride.com, RallyBound, Wiser Distillery, and Google/YouTube. Most recently, Jessica directed "Be a T-Rex in Virtual Reality," which is featured on YouTube's Best of 360 list; a 360° music video for Warner Music artist Hunter Hayes (which premiered in January 2017); and an interactive comedy of manners called "Dinner Party" (2017).

Ted Kenney, Director, Field and Technical Operations, Fox Sports Ted Kenney is the current director of field and technical operations at Fox Sports, where he played a role in executing the first live, multi-camera VR shoot at the 2015 US Open. Before Fox, Ted was the director of production at 3ality Technica from 2006 to 2012, advising clients on every aspect of the stereoscopic 3D (S3D) production process. His achievements have earned him a reputation as the industry's leading producer/director for S3D live broadcast. Ted's background encompasses a wide range of entertainment media, spanning TV, film, documentaries, sports, comedy, concerts, and S3D live broadcasts. Ted is a member of the Producers Guild of America and the Academy of Television Arts and Sciences, and received his Bachelor of Arts and Sciences degree from Florida State University. His live broadcast credits include Radio City's Halftime Show for Super Bowls XXXII and XXXIII, the Centennial Olympic Games Opening and Closing Ceremonies, ESPN's 50th Anniversary Special, President Clinton's Inaugural Gala, and directing FIFA World Cup Beach Soccer.

Randal Kleiser, Director, "Grease," "Defrost" Randal Kleiser wrote and directed the 360° virtual reality short "Defrost," featuring Carl Weathers and Bruce Davison, which premiered at the 2016 Sundance Film Festival. He is in post-production on 11 more five-minute episodes of this virtual reality series, also starring Harry Hamlin and Veronica Cartwright. Working in 70mm 3D, he directed "Honey, I Shrunk the Audience," which ran for over a decade at the Disney Parks in Anaheim, Orlando, Tokyo, and Paris. He serves on the Sci-Tech Council of the Academy of Motion Picture Arts and Sciences. At the Directors Guild of America, Randal chairs the annual Digital Day presentation and serves on the National Board. His directing credits include "Grease," "The Boy in the Plastic Bubble," "The Blue Lagoon," "Summer Lovers," "Flight of the Navigator," "White Fang," "Big Top Pee-wee," "Honey, I Blew Up the Kid," "North Shore," and the 1996 AIDS drama "It's My Party." With George Lucas, he produced the online course "USC School of Cinematic Arts presents the Nina Foch Course for Filmmakers and Actors." www.randalkleiser.com

Eric Kurland, CEO, 3-D SPACE Eric Kurland is an award-winning independent filmmaker, past president of the LA 3-D Club, director of the LA 3-D Movie Festival, and CEO of 3-D SPACE: The Center for Stereoscopic Photography, Art, Cinema, and Education. Operating out of a Secret Underground Lair in Los Angeles, he specializes in 3-DIY, and consults on every stage of 3D production and post-production, from development through exhibition. His 3D clients have included National Geographic, Nintendo, and NASA’s Jet Propulsion Laboratory. He has worked as 3D director on several music videos for the band OK Go, including the Grammy-nominated “All Is Not Lost.” He was the lead stereographer on the Academy Award-nominated 20th Century Fox theatrical short “Maggie Simpson in ‘The Longest Daycare’,” and served as the production lead on “The Simpsons VR” for Google Spotlight Stories. In 2014, he founded the non-profit organization, 3-D SPACE, which will operate a 3D museum and educational center in Los Angeles. He sometimes wears a gorilla suit and space helmet.

David Liu, Creative Director of Virtual Reality, Viacom NEXT As the creative director of virtual reality at Viacom NEXT, David steers the thought and vision for projects, bullish in the belief that VR will change lives in ways wonderful and strange. A champion of combinatorial creativity, he works across storytelling and experiential media. He has created AAA and indie games, directed television and film productions, and designed large-scale alternate reality games. His résumé includes work at Electronic Arts on "The Sims 4," the BBC World Service, the Singapore Arts Festival, Ford Motors, and SAP. Viacom NEXT creates and produces unique VR experiences for brands such as Paramount Pictures and MTV. Their projects have premiered across festivals and game platforms. Notable highlights include Tyler Hurd's "Chocolate" (South by Southwest, Sundance); "Smash Party VR" (launched December 2016 on Steam); and MTV's "Open Your Eyes Tilt Brush VR Art Show" (USA Presidential Debates, South by South Lawn). Viacom NEXT is a member company of the MIT Media Lab, NYC Media Lab, and NYU Game Center. David has worked in Singapore, Australia, and the United States. He holds a Master's degree from the Entertainment Technology Center at Carnegie Mellon University.

Alex Pearce, VR Director and Producer Alex Pearce holds a Bachelor of Fine Arts in Motion Pictures and Television from the Academy of Art University in San Francisco. After graduating, Alex started his own company, creating 360° video experiences from all around the world. As a producer at Jaunt, Alex developed and produced the "Every Kid in a Park VR Experience" in association with the White House and First Lady Michelle Obama. He is currently producing projects at Jaunt's Santa Monica office.

Maxwell Planck, Founder, Oculus Story Studio, Producer, "Dear Angelica," "Henry" After graduating from MIT with a degree in computer science, Maxwell joined Pixar as a technical director, solving the creative technical problems of making six animated feature films ("Cars," "WALL•E," "Up," "Brave," "Monsters University," and "The Good Dinosaur"). After ten years, as amazing as Pixar is and will continue to be, Maxwell felt that the technical problems of making a computer-animated film had been worked out. So he left to find his next adventure and help build something novel, awe-inspiring, and unsolved. He found that challenge in virtual reality storytelling – the next leap in how we tell stories and touch our audiences. There are enough technical and creative problems worthy of solving for the rest of his career and beyond. As the technology and feature animation production specialist, Max joined the Oculus team with his co-founders to build Oculus Story Studio, a small team of technical artists from film and games bent on inspiring and educating the next generation of storytellers. Since its founding, Oculus Story Studio has created "Lost" (technical supervision by Max), "Henry" (produced by Max), and "Dear Angelica" (produced by Max). The studio continues to develop and release stories across different genres, proving that this medium has the versatility to be an art form.

Paul Raphaël, Co-Founder/Creative Director, Felix & Paul Studios Paul is a filmmaker and visual artist based in Montreal. With a combined passion for cinematic storytelling and technology, he teamed up with collaborator Félix Lajeunesse and created award-winning stereoscopic 3D films, multimedia installations, and commercials presented across the world. The two directors developed proprietary 3D 360° recording technology and, with a team of key collaborators, founded Felix & Paul Studios, a complete creative, production, and technology studio dedicated to the exploration of the storytelling and experiential possibilities of the medium of cinematic virtual reality. Felix & Paul Studios is recognized as a pioneer and key player in the fast-growing industry of virtual reality content. The studio has content creation partnerships with Facebook/Oculus, the White House, Fox, and Cirque du Soleil. They recently premiered their first long-form VR narrative, "Miyubi."

Dylan Roberts and Christian Stephen, War Zone Journalists and VR Filmmakers At the age of 18, Dylan Roberts began traveling internationally as a journalist, photographer, and editor. Through media, Dylan has been able to travel all over the world to countries including Iraq, Lebanon, Paraguay, Uganda, Gaza, Somalia, and more. From these experiences, Dylan joined forces with Christian Stephen to establish Freelance Society in 2013. Christian Stephen is a journalist, filmmaker, and writer from London, United Kingdom. Over the past five years he has specialized in multimedia coverage of hostile environments and conflict zones, focusing on humanitarian stories. Freelance Society is a multimedia production and VR/360° company created to source, gather, package, and deliver immersive content that reaches the world. Freelance Society exists to identify, gather, immerse, and conjure emotion through the power of virtual reality. Propelling the medium forward through empathy and excellence, they are committed to utilizing the very best in new technology to educate themselves and others about the world we live in. www.freelancesociety.com

Steve Schklair, Founder, 3ality Technica and 3mersiv Steve Schklair is dedicated to producing innovative, immersive, and cost-effective native 3D productions. Steve served as a producer on the groundbreaking 2007 "U23D" movie. He has provided native S3D expertise and advice on motion pictures including "The Hobbit: An Unexpected Journey," "The Great Gatsby," and "The Amazing Spider-Man," and was the 3D producer for one of the highest-grossing Russian films ever produced, "Stalingrad." Formerly a vice president at Digital Domain, Steve is currently co-producing a number of films in China. In 2014, the International 3D and Advanced Imaging Society recognized Steve and 3ality with the prestigious Century Award, presented to those individuals and companies that have indelibly left their mark on 3D's first century. www.3alitytechnica.com

Duncan Shepherd, Editor, "Under the Canopy," "Paul McCartney VR" Duncan Shepherd started editing in the 1980s, splicing music videos and TV commercials in London, Europe, Africa, and the United States. Working with leading directors in these fields, Tony Kaye, Vaughan and Anthea, Sam Bayer, Daniel Barber, and Stephane Sednaoui, for clients as diverse as INXS, U2, George Michael, Lauryn Hill, Levi's, BMW, and Nina Ricci, taught him the toolset and instincts required to deliver artistic perfection for highly demanding people and organizations. Recently devoting himself to VR editorial and post-production, in the same commercial and musical arenas, he is endeavoring to bring all that he learned and developed in three decades of creative editing to reinvent the narrative and artistic language required in a completely new art form. Fades to black are not part of this paradigm.

Robyn Tong Gray, Chief Content Officer and Co-Founder, Otherworld Interactive Robyn Tong Gray is an interactive media designer who weaves new media together to explore narrative and empathy. She is the co-founder and chief creative officer of Otherworld Interactive, a studio focused on creating consumer-facing virtual reality experiences. Her projects have been featured at venues including the Independent Games Festival, Tokyo Game Show, and Sundance. She received a Bachelor's degree in computer science and fine art from Carnegie Mellon and a Master of Fine Arts from the University of Southern California's Interactive Media Program. She creates and collaborates on interactive experiences across a variety of digital media, with an emphasis on narrative and empathy. Her independent game, "a•part•ment: a separated place," was a finalist at the 2015 Independent Games Festival and a Digital Select at IndieCade 2015.

Alexander Vegh, Second Unit Director and Visual Effects Supervisor Alex has worked in the film industry creating groundbreaking sequences and dynamic visuals for over 19 years. Most recently, he directed the second unit for "SWAT," "Star Trek Beyond," and "True Detective." He also led development for the stunning VR short film "Help," where he served as both the visual effects supervisor and previsualization supervisor. He has provided his expertise to such films as "Fast and Furious 6," "R.I.P.D.," "Fast Five," "The Thing," "Wolverine," "Watchmen," "The Matrix" sequels, "Tropic Thunder," and "Panic Room," to name a few. Alex has collaborated with many prominent directors, including Justin Lin, the Wachowskis, David Fincher, Zack Snyder, and Oliver Stone. Having worked closely with these visionary storytellers, Alex has a unique perspective on the filmmaking process. Across the board, from Hollywood blockbuster movies to small independent films, his skills have helped define contemporary filmmaking. He earned a Bachelor's degree in art from the State University of New York at New Paltz and did his post-baccalaureate studies in digital animation at The Cyber Arts School in Rochester, NY.

About the Author Celine Tricart is a VR filmmaker and founder of Lucid Dreams Productions, a production company specializing in new technologies and the future of storytelling. Her work has been showcased at numerous Academy Award-qualifying festivals, including the Austin Film Festival, the Clermont-Ferrand Film Festival, and the Chicago Film Festival. Celine was the recipient of a Creative Award from the Advanced Imaging Society, among many other accolades. After graduating from the prestigious Ecole Nationale Supérieure Louis-Lumière film school in Paris in 2008, Celine became a world-renowned expert in stereoscopic 3D and has worked on numerous live broadcasts, commercials, and documentaries. Her recent credits include two films in the "Transformers" franchise. She wrote the book 3D Filmmaking, published by Focal Press in 2016. Celine's first narrative VR film as a director, "Marriage Equality," was shot with the Nokia Ozo in 2015 and showcased at the MUTEK festival and FIVARS. She then produced and directed numerous VR projects, including "Slices of Life," an interactive "choose your own adventure." Celine was also the director of photography on a Shinola VR commercial directed by Andrew and Luke Wilson, which won a Creative Award for Best Branded Content, and on the acclaimed Jaunt VR/Conservation International documentary "Under the Canopy." She also shot Maria Bello's "Sun Ladies," a VR documentary about the Yazidi women fighting ISIS in Iraq. www.celine-tricart.com www.luciddreamsprod.com Twitter: @CelineTricart

Index

2D: blur 135; conversion to 3D 35–36; films 143 3-D SPACE 7, 151 3ality Technica 20, 75, 108, 123, 137, 150; Century Award and 154; “Marriage Equality VR” 46, 101, 110, 111 3D Filmmaking (Tricart) 157 3D films 143 3D glasses 14–15 3DF Zephyr 59 3DSMAX 35 3mersiv 20, 75, 108, 123, 137, 154 4D theaters 66 5G networks 40 8i 60–61, 61 20th Century Fox 7, 149, 151 21st Century Fox 149 “52 Places to Go in 2017” 115 “360 tour of the Shinola factory with Luke Wilson” 100, 105, 107, 109, 110, 136, 137, 157 360° live action 19, 19–20; capture 19; nodal technique 19, 24–25, 123, 129; stitching process 19, 20 360° paintings 9 360° video 2 360° and VR diffusion platforms 47–48, 48 AAA games 87, 148, 151 Academy Awards 7, 148, 151, 157 accelerometers 69 access, immersive journalism and 115 accessories 75, 77–78; audio 77; eye tracking 78; haptic vests 65, 75, 77; motion accessories 77; untethered VR 77 active markers 70–71, 72 actors 125; clock position system and 105; POIs and 107; rehearsals, VR and 127; shooting VR and 128, 129; theater and 82, 107; VR and 82, 105, 106–108, 111, 126, 127 additional dialogue recorded (ADR) 44 Adobe 134 Adobe Premiere Pro 33

ADR see additional dialogue recorded Advanced Imaging Society 145, 154, 157 agency, audience and 45, 99, 112 Aladdin VR 13 “All Is Not Lost” 6, 151 all-purposes trackers 76 Amazon 140, 141 Amazon rainforest 131 ambisonic microphone 40, 41 ambisonics 40–41, 42, 45 ambiX format 42 American Civil War battles 9 Anderson, Grant 88, 145 animation 51, 59, 112–113, 123 “a·part·ment: a separated place” (game) 155 AR see augmented reality arcades 13, 48–49 “Archer, The” (VR film) 149 ARRI Alexa 22 “L’arrivée d’un train à La Ciotat” (film) 10 “Ashes” (VR film) 29, 58, 83–84, 95, 104, 122, 149 asides, theater and 97 Assassin’s Creed VR Experience 49 Atlanta Cyclorama 9 Atlanta Film Festival 139 audiences: 90° scenes and 93; 180° scenes and 93; 360° scenes and 93; agency and 45, 99; FOMO and 102; influence and 103; “Taro’s World” and 93 augmented reality (AR) 3, 13, 38, 115, 139; Fox Sports VR app and 39; Google Project Tango and 71; IMU and 68 augmented virtuality (AV) 3 “Austin(s), Texas VR” 101 Autodesk ReMake 58–59 AutoPano Giga 32, 34 AutoPano Video Pro 32, 34, 34 autostereoscopic screen 13 AV see augmented virtuality “Avatar” sequels 21

avatars 51, 89 "Back To The Future" (theme park ride) 14 backpacks 77 Barker, Robert 9 Batt, Anthony 51, 85, 119, 136, 143–144, 145–146 "Be a T-Rex in Virtual Reality" 149 Beaudoin, Jean-Pascal 43–44, 91, 113–114, 146 Behind The Scenes: Google ATAP 'HELP' 131, 133 Bello, Maria 157 Betts, Daisy 94 binaural sound 42, 43, 45, 112 Black Magic 40 blind spot zones 23, 24 blindness, "Notes on Blindness" 112–113 "Blocked In" (VR diorama) 94–95, 95 blocking 24, 104–105, 106, 107, 127, 128 Blouin, François 129 Bluetooth 69 Bolas, Mark 15 BOOM (Binocular Omni-Orientation Monitor) 15 Boston Public Schools 45 Bosworth, Melissa 115, 116, 146 Brewster, Sir David 8 Brillhart, Jessica 109–110, 110 budgeting/scheduling 122–123, 122; animated VR content and 123; gaming budgeting 123; stitching costs 123 "Building a better McDonald's, Just for You" 130 Burton, Tim 35 C# 51 cable TV 124 cable-cam systems 131, 133, 133, 135 "Café Âme" 90–91, 90 California Adventure Park 14 Calliope Music Design 148 camera shadow 130, 131 cameras 20–49, 116–117; 2D vs. 3D 26; blind-spot zone 23, 24; camera configurations 23, 24; compression 22; dynamic range 21–22; frame rate 20–21; list of 26, 27–28, 28; live view 29–31; minimum acceptable distance 23–24; nodal technique 24–25; resolution 21; rig design 22, 23; sensor size 21; stitch lines 24; stitching 23, 30, 32–34 Cameron, James 14, 20–21 Canon 25, 130 Cara VR 38, 135 Cartier-Bresson, Henri 96 cast/casting 125; theater experience and 125

cathode ray tubes (CRTs) 12 Cave Automatic Virtual Environment (CAVE) 14–15, 14 CAVE2 15 Century Award 154 CGI see computer-generated imagery chairs: motion-controlled 66; Positron Voyager 66; Roto VR 77 China 49 choreography 104, 106 chromatic aberration 73; compensation 73 cinema: 3D and 10, 85; development of 10; frame and 85, 86; vs virtual reality 85, 86 CinemaScope 119 cinematic VR 2–3 cinematography 130–32; camera shadow and 130, 131; night-time shots and 23, 131 Cinéorama 9, 10 Cinequest Film & VR Festival 139 Cinerama 11 Circarama 13 Circle-Vision 360° 13, 14 “Cirque du Soleil: Inside the Box of Kurios” 102, 108–109, 109, 129, 146, 149 classrooms, stereoscopes and 8 Clinton, Bill 149, 150 clock position system 104–105, 105, 120 close-ups 40, 105, 106, 106, 107, 110, 111; intercutting 108 “Clouds Over Sidra” 116 codecs 22, 47; H264 MP4 codec 47; H265 codec 47 coding languages 51 Cohen, Eve M. 103–104, 146–147; color grading and 135; “Memory Slave” 132; “Visitor VR, The” 23, 31, 83, 103, 126, 132 Colinart, A. 113 color 135–136; grading 135, 136, 136 color-match 33, 33 computer graphics 12, 13, 14–15, 62 computer-generated imagery (CGI) 15, 37 computers 14–15; tracking and 14–15 Constellation Tracking System 69 “Contempt” 119 controllers 75, 76 crew 129; clock position system and 105; hiring 125; rehearsals and 127; sizes, VR and 122, 123, 126, 128 crowdfunding 16 CRTs see cathode ray tubes Cruz, Carolina 14 cues 102, 104, 120 cycloramas 9

"Dactyl Nightmare" (video game) 13 Daguerre, Louis 85 dailies 133 dance 83–84 Dashwood 360VT Toolbox 34, 134 data gloves 14, 15, 76 DataGlove 15 DataSuit 15 Davis, Richard L. 127–128, 147 de la Peña, Nonny 17, 94 "Dear Angelica" 2, 52, 58, 63, 128, 152 DeFanti, Tom 14, 15 "Defrost" (VR series) 81, 98, 107, 129, 148, 150 degrees of freedom 55, 55, 56–57, 59, 61; six degrees of freedom 55, 56, 57, 59, 61, 62, 67; three degrees of freedom 55, 56, 63 Delaroche, Paul (artist) 85 depth maps 35, 36 depth of sound field 43, 44 depth-mapping 61 dialog layout 45 dialog preparation 45 diffusion platforms: 360° and VR 47–48, 48; game engine-based VR 64 diffusion standards and formats 45–47; game-engine based VR and 62–64 "Dinner Party" 149 dioramas 95, 95 directing virtual reality 102–108; blocking or choreography 104–105; directing actors 106–108; influence vs. control 103–104 "Discovering Gale Crater" 115 Discovery Channel 59 Disney, California Adventure Park 14 Disney Studios 36 Disney VR lab 13 Disneyland theme park 13 DisneyQuest 13 distance to camera 22, 23–24, 24, 104, 130 distortion: equirectangular format and 45; fisheye lens and 37 distribution 137, 139–140 documentaries 114–117, 130, 157 Dolby Atmos 43, 44; Panner Plug-in for Pro Tools 43 Dolby Digital Plus 43 dome screens 13, 14 Dubai International Film Festival 139 dummy heads 42, 42 dynamic range 21–22 earphones 77 edge-of-frame 135

editing 108–112; close-ups and 108; POIs and 109; post-production 134; software 34, 35; split-screen technique 110; transitions and 111 Electronic Visualization Laboratory 14 elevation 43, 133 Emblematic Group 115 embodiment 89–90, 91; one-to-one tracking and 89; sense of agency 89; sense of body ownership 89–90; sense of self-location 89 Emmy Awards 148, 149 empathy: presence and 92, 98; VR and 116 EPCOT 13, 14 equirectangular projection 46 equirectangular format 36, 37–38, 42, 43, 45–47; blur and 135; color grading and 136; distortions and 45–46 Ernst, Daniël 95 escape rooms 83 ethics 92, 116 Evans, David 13 Evans and Sutherland 13 experience, immersive journalism and 115–116 eye tracking 78 Facebook 2, 16, 40, 42, 43, 66 Facebook 360 43, 48, 137 fade to black 108, 111 Fakespace 15 Favreau, Jon 85 fear of missing out (FOMO) 95, 102, 104 feature films vs. virtual reality 45 Felix & Paul Studios 34, 42, 57, 92, 103, 106, 124; "Cirque du Soleil: Inside the Box of Kurios" 102, 108–109, 109, 129, 146, 149; content creation partnerships 153; content development 149; downloadable apps, player and 47; founding of 153; free content and 140; "Miyubi" 56, 57, 122, 126, 126, 129, 153; "Nomads" 43, 44, 86, 99, 99, 149; "Sea Gypsies" 129; top/bottom format 46; works by 149 festivals 138–139, 147, 149, 157 fiducial markers 70 field of view (FoV) 10, 12, 21, 22, 44, 74, 85; 180° 25, 38; 220° 25; blocking and 104; fisheye lens and 37; lenses and 37, 73; lighting and 69; occlusion and 69; StarVR headset and 72, 74; VR headsets and 72–73, 74 Final Cut Pro 34, 35, 134 final stitch 134 finances 124–125 first-person format, VR and 97–98, 108 first-person shooter (FPS) 51, 54 fisheye lenses 21, 22, 25, 30, 31, 37–38

flat film (flatties) 42, 102, 103 flight simulators 12, 13 FOMO see fear of missing out Foundry, Nuke 34, 38, 135 fourth wall, VR and 97 FoV see field of view FOV2GO 15, 16 FOVE 78 Fox Feature Film 148 Fox Sports 38–40, 48, 150; VR app 39, 39, 40, 57 FPS see first-person shooter fps see frames per second fragrance/scents, film and 12, 14 frame: cinema and 85, 86; edge-of-frame issue 135; FoV and 85; VR and 85, 86 frame rates 20–21, 47 frames per second (fps) 20, 21, 26, 47, 47 Freedom 360 116, 117 Freelance Society 153 Fresnel lenses 73 Frilot, Shari 1, 17, 59, 94, 120, 139, 143, 147 Gall-Peters maps 45, 46 game engines 51–52; coding languages and 51; functionalities 51; live capture and 51; rendering engine 51; VR compatibility 51–52 game-engine-based VR 1, 51–52; degrees of freedom 55, 55, 56–57, 57; diffusion platforms 64; diffusion standards and formats 62–64; light field technology 61–62; photogrammetry 58–59; versus live-action 55; volumetric capture 59–61 games 87; AAA titles 87; escape rooms 83; first-person shooter (FPS) and 51; live-action role-playing games 83 gaming vs virtual reality 87 Gance, Abel 10, 11 Garcia, Alexandra 96 Gedemer, Tim 44–45, 112–113, 148 Gernsback, Hugo 11–12 Gettysburg Cyclorama 9 gimmicks, sound design and 112–113 glasses 3, 12, 14–15; television eyeglasses 11, 12 gloves see data gloves "Gnomes & Goblins" 85 Godard, Jean-Luc 119 Golden Globe 148 golf, VR and 38 "Gone" (interactive VR series) 58; hotspots 58 Google 109; Project Tango 71, 72; "Tilt Brush" (VR app) 127, 128 Google ATAP 131, 133

Google Cardboard 15, 17, 67, 68, 92; Android 47, 74; iOS 47, 74; minimum mobile requirements 74 Google Daydream 48, 57, 64, 67, 68, 74 Google Glass 3 Google Odyssey 128 Google Spotlight Stories 7, 135, 151 GoPro 29, 31, 117 GoPro Omni 117 Grammy Awards 7, 151 gray boxing 128 Gray, Robyn Tong 55, 87, 90–91, 119–120, 155 “Grease” 81, 129, 150 Great Exhibition (1851) 8, 9 Grimoin-Sanson, Raoul 9, 10 Gruber, William 11 gyroscopes 68 H264 MP4 codec 47 H265 codec 47 “Hallelujah!” (music video) 62 Hamlin, Harry 107, 148 hand controllers 76 hand detection 76 hand tracking 75, 76 handheld VR viewers 15, 17 haptic vests 65, 75, 77 Harvard University 12 Hayes, Hunter 149 HD see high-definition head tracking 68 head-mounted displays (HMDs) 11–13, 16, 21, 55, 63, 66–78; accelerometer 68; accessories and 75; basic components 67–68; color and 135; controllers and 75, 76; degrees of freedom and 57; field of view and 74; gyroscope 68; high-end tethered headsets 67, 68; infrared light 69–70; inside-out/outside-in tracking 71–72; jitter 69; laser sensors 68–69; lenses and 73; magnetic tracking 70; magnetometers 68; minimum PC/mobile requirements 74, 74–75; mobile VR headsets 67–68, 67; optical tracking 70–71; OSVR Hacker Dev Kit 67; refresh rate and 73–74; resolution 72–73; Samsung Gear VR 14; stand-alone VR headsets 68; structured light 71; “Sword of Damocles” 12–13, 13; tracking 68–72; types of 47, 67; visible light 69; visual quality and 72–74 Headspace Studio 43, 91, 113, 146 Heilig, Morton 12 “Help” 30, 36, 37, 47; camera, movement and 133; cinematography 130–131; color and 135; VFX work and 135, 155

"Henry" 2, 52, 58, 98, 152; Gear VR and 63; Oculus Rift and 53; previz techniques 128; switch from observatory to participatory 98 Henry V (Shakespeare) 97 "Herders" 43, 86 HFR see high frame rate hieroglyphs 7, 81 high frame rate (HFR) 20, 21, 47 high-definition (HD): cameras 21; headsets 21; screen 67; signal 39 high-end tethered headsets 67, 68, 73; minimum PC requirements 75 HMDs see head-mounted displays "Hobbit, The: An Unexpected Journey" 154 "Hobbit, The" trilogy 20 Hoberman, Perry 15 Holmes, Oliver Wendell, Sr. 8, 17 holography 85 "Honey, I Shrunk the Audience" (immersive movie) 13, 150 Horrigan, Ryan 123, 124, 140–141, 149 horror genre 111, 113, 119–120 hotspots 58, 58 "House of Cards" (Netflix series) 97 HTC 16 HTC Vive 1, 48, 54, 55, 63, 67, 68; AGlass for 78; base station and laser emitter 69; FoV and 74; Fresnel lenses and 73; laser sensors and 68–69; minimum PC requirements 75; six degrees of freedom 56, 57; vs. PlayStation VR 72 HTC Viveport Arcade 48 Hugo Awards 11–12 Hull, John 112 Hulu 140, 141 "Hunger in Los Angeles" (VR film) 17, 94 Hunter, Ian 148 Hurd, Tyler 151 hybrids 88; hybrid 3D 36; hybrid lenses 73 HypeVR camera 61; volumetric capture 61 Illustrated London News 9 ILM Studios 35 IMAX 49, 57; dome screen 14; VR Experience Centre 49 immersion 89, 91, 115 immersive audio systems 44 immersive devices 16 immersive journalism 115–117; project categories 115–116 immersive presentations 9–11, 13 immersive theater 10, 12, 13, 82–83, 104 immersive VR rooms 15 IMU see inertial measuring unit

“In the Blink of a Mind – Attention” 102, 110 inertial measuring unit (IMU) 68, 68, 69, 71 influence vs. control 103–104 information persistence 112 infrared light 69–70 initial time delay (ITD) 43 inside-out tracking 71–72 intellectual property (IP) 124 interaction 57–58, 59 interactive immersive games 13; escape rooms 83; LARPs 83 interactive VR 2–3, 57–58; hotspots and 58 International 3D 154 “Introduction to Virtual Reality” 44 IP see intellectual property iPads 31 iPhone FOV2GO Instructions 16 ISIS 116, 157 ITD see initial time delay Jackson, Peter 20 Jaunt 42, 46, 48, 148, 152 Jaunt One 30, 41, 131, 133 Jaunt Studios 145 Jaunt VR 47 jitter 69, 71 Johansson, Scarlett 102 Jordan, “Clouds Over Sidra” 116 journalism 114–117; access 115; categories of immersive projects 115; ethics and 116; experience 115–116; immersive 115; location 115; perspective 116 “Jurassic World: Apatosaurus” 146, 149 Kaleidoscope VR 139 Kantor, Jessica 58, 149; “Ashes” 95; budgeting 122; cameras and 29; dance and 83–84; distance from camera 104 Kaye, Tony 111–112, 154 Kenney, Ted 38, 39–40, 150 Keystone View Company 8 Kickstarter 16, 66 Kleiser, Randal 81, 98, 129, 129, 150 KOR-FX gaming vest 77 Kossakover, Wojciech 9 Kreylos, Oliver (Doc-Ok) 69 Kurland, Eric 7, 151 Kutcher, Ashton 145 LA 3-D Club 7, 151 LA 3-D Movie Festival 7, 151 La Burthe, A. 113

"Lab, The" 59 Lajeunesse, Félix 91, 129, 153 Lang, Fritz 119 Lanier, Jaron 15 LARPs see live-action role-playing games laser sensors 68–69 lat-long see equirectangular format lavalier microphone 42 Leap Motion 76 LED light strips 132 LEDs see light-emitting diodes "lens dance" 37 lenses 73, 125, 130; actors' eyes and 111; array of micro-lenses 61; Canon 8-15 lens 130; foreshortening of 110; Fresnel 73; HMD lens comparison 73; overlap and 23; sensor size and 21; super-wide-angle 29; see also fisheye lenses LiDAR see light detection and ranging Liddiard, Gawain 30, 36, 37, 47, 130–131 light detection and ranging (LiDAR) 38, 61; depth-mapping 61 light field technology 61–62 light-emitting diodes (LEDs) 69; infrared 69; optical tracking and 70–71 lighting: documentaries and 130; immersive theater and 104; LED light strips 132; POIs and 131; sets and 132 Lin, Justin 30, 36, 135 "Lion King, The" 35, 36 liquid crystal displays (LCDs) 15 Littlstar 42, 48 Liu, David 56, 57, 97, 151 live broadcast in VR 38–40; bandwidth and 39; Super Bowl (2017) 40; technical challenges 39–40 live capture, game engines and 51 live-action 3D 57 live-action role-playing games (LARPs) 83 live-action VR 1, 19, 61; 360° footage 19; CGI and 37; degrees of freedom 55; equirectangular format and 45; interactive experiences and 57; optical flow technology and 34; versus game engine VR 55 location: immersive journalism and 115; scouting and 125; VR and 94–95 location-based VR arcades 48–49 London Stereoscopic Company 8 Lorimer, Eric 15 Los Angeles Times 115 "Lost" 128, 152 "Lost In Translation" (film) 102 Lucas, George 150 Lucid Dreams Productions 56, 101, 157 Luckey, Palmer 16, 17

Lumière, Auguste and Louis 10 Lytro 61, 62, 62, 63 Mac/iOS 29 Macbeth (Shakespeare) 83 McDowall, Ian 15 “Mad Men” 107, 148 “Maggie Simpson in ‘The Longest Daycare’” 7, 151 magic goggles 92–93 Magic Leap 3 magnetic tracking 70 magnetometers 68 Manus VR 76 maps: equirectangular format 45–46, 46; Gall-Peters 45, 46 Marinus of Tyre 45 marketing 140, 141; VR and 124–125, 137 “Marriage Equality VR” 101, 104, 110–111, 122, 157 Mars, “Discovering Gale Crater” 115 Massachusetts Institute of Technology (MIT) 12 “Matrix, The” 59 Mattel 15 “Max and Aimee” 34 Maya 35 “Memory Slave” 132, 132 MetaverseVR 94 “Metropolis” 119 Mettle Skybox 360/VR Tools 34, 134 microphones: ambisonic 40, 41, 41, 45; binaural 42, 45; Core-Sound TetraMic 41, 41; lavalier 42; Sennheiser Ambeo® VR 41; Soundfield 41, 41; stereo 40 Microsoft HoloLens 3 Middleton, P. 113 “Mill Stitch” (monitoring tool) 30, 31 Mill VR camera rig 25 Mindshow (software) 128 MIX film festivals 147 mixed reality (MR) 3, 115, 139 mixing 45 “Miyubi” 57, 106, 114, 122, 146, 149, 153; location 126, 126, 127; rehearsals 127; remote-controlled robot 129 mobile VR headsets 67–68, 67; minimum mobile requirements 74; refresh rate and 74 monetization 137–138, 139–140 Monitor Application 43 “Moon” 62 motion accessories 77 motion sickness 65–66; prevention 66; VR and 66, 131, 135, 143 MR see mixed reality multi-camera arrays 61 multi-camera sequence 33

multi-camera widescreen process 10 multi-camera/multi-projector system 11 Murray, Bill 102 "My Brother's Keeper" 124, 125 "Napoleon" (silent film) 10, 11 NASCAR events 39 National Football League (NFL) 40 National Geographic 59 nausea 35, 66, 122; see also motion sickness NBC Sports 38 negative space 45 "Nemo Rising" 147 Netflix 97, 138, 140, 141 Neuman, Robert 36 Neumann KU 100 Dummy Head 42 New Frontier, Sundance Film Festival 1, 17, 59, 94, 120, 138, 147 New York Times 17; travel section 115 New York Times VR 48, 59 Newmark, Craig 146 Newton, Katy 89, 92–93, 96, 102 NextVR 38 NFL see National Football League night-time shots 23, 131 "Nightmare Before Christmas, The" 35 Nikon 25 Nintendo: 3DS handheld games 13; Entertainment System 15; Virtual Boy 13, 13 no-parallax point 24 nodal technique 19, 19, 24–25, 123, 129 Nokia Ozo 29–30, 30, 39, 47 Nokia Presence player 47 "Nomads" (3D series) 43, 44, 86, 99, 99, 149 "Notes on Blindness" 112–113, 113 Nuke 33, 34, 38, 135 OB see outside broadcast Obama, Barack 146, 149 Obama, Michelle 152 occlusion: infrared light and 69–70; visible light and 69 Oculus 14; Connect 3 event 72; crowdfunding 66; inside-out tracking and 72; sale of 66 Oculus Rift 16, 17, 34, 36, 54, 63, 67, 68; CV1 21; DK1 94; DK2 17; earphones 77; FoV and 74; Fresnel lenses and 73; infrared LEDs 69, 70, 70; minimum PC requirements 75; six degrees of freedom 56, 57; specifications 47; tracking, external sensors and 68; wireless remote 76; XBox One controller 76 Oculus Store 2, 48, 64 Oculus Story Studio 2, 52, 58, 63, 98, 128, 152

Oculus Touch 76 ODEON & UCI Cinemas Group 49 OLED see organic light-emitting diode optical tracking 70–71; active markers and 70–71; fiducial markers and 70 organic light-emitting diode (OLED) 67, 135 OSVR Hacker Dev Kit (2015) 67 Otherworld Creative 90, 119; "Sisters" 119–120 Otherworld Interactive 55, 87, 90, 155 outside broadcast (OB) 40 outside-in tracking 71–72 paintings 7, 9, 85 Pajitnov, Alexey 95 Panasonic VR 74 panoramas 9–10 panoramic theater 13 Pape, David 14 parallax 25, 26, 62; differences 7, 11; motion parallax 43; no-parallax point 24 Paris Exposition (1900) 9, 10 "Paul McCartney VR" 35, 111–112, 148, 154 PBS Digital 124, 125 PCM see pulse-code modulation Pearce, Alex 29, 34, 152 Pearl Harbor 10 Pen Tile matrix screen 72, 73 "People's House, The" 114 persistence of information 112 perspective 7, 15, 58, 83, 84, 86; art and 96; immersive journalism and 116; interactive pieces and 115, 116; multiple perspectives 92 PhaseSpace 71 photogrammetry 58–59; software tools 58–59; texture on mesh 59 photography 85; director of 125, 126 PhotoScan 58 photosensors 69 Pinch Glove 15 Pixar 152 Pixel XL 73 pixels 72–73; human vision and 72; resolution and 72, 73; seeing the pixels problem 72, 73, 75; sub-pixels and 72 Planck, Maxwell 2, 52, 58, 63, 152; previz techniques 128; storyboards 128 planetariums 13 platforms 2, 47–48, 48, 137, 139–140; 360° and VR diffusion platforms 47–48; exclusivity and 140; funding projects and 140, 141; SVODs and 140, 141; TVODs and 140, 141

PlayStation Home 87 PlayStation VR 68, 125, 140; controllers 76; headset 1, 21, 67, 69; IMU 69; LEDs 69; lens and 73; minimum PC requirements 75; refresh rate 74; resolution and 72; visible light 69; vs. HTC Vive 72 plenoptic camera 61 plug-ins 34, 35; Facebook 360 and 42; spatialization 43 point of view (POV) 12, 23, 36, 114 points of interest (POIs) 99–102, 103, 106, 107; editing and 109, 111; lighting and 131; match on attention 109; POI-matching technique 110, 134; screenwriting and 120, 121; sculpture and 84; sound and 113 Polyvision 10 positional tracking 68; active markers and 71; infrared light and 69–70; laser sensors and 68–69 Positron Voyager chairs 66 post-Industrial Revolution 10 post-production 45, 133–138; color 135–136; distribution 137–38; editing 134; final stitch 134; pre-stitch 134; solutions 32; sound 113, 136–137; titles 137; VFX 135 POV see point of view PowerGlove 15 pre-production 125–128; cast and crew 125; rehearsals 107, 127; scouting locations 125, 126; storyboard and previz 127–128 pre-stitch 134 Premiere Pro CC 2017 34 presence 89, 91, 92; empathy and 92, 98 previsualization (previz) 105; storyboards and 127–128; techniques 128; Theta S and 29 Pro Tools 42, 43 production 128–133; cinematography 130–132; dailies and 133; a day on a VR set 128–130; see also post-production; pre-production "Project Syria" 115–116 PTGui 34 pulse-code modulation (PCM) 47 PunchDrunk 83 quick response (codes) (QR) 70 "Quill" (VR app) 128 Raby, Oscar 138 Racławice Panorama, Poland 9 Raphaël, Paul 34, 42, 86, 91, 99, 103, 153; casting 125; close-ups 106; financing and 124; "Miyubi" 106, 114, 122, 126, 126, 127, 129; "People's House, The" 114; "Strangers with Patrick Watson" 86, 107; viewer sitting down principle 114 RAW files 22, 36 Razer Hydra 70; controllers/base station 70

“Realities” app 59, 60 RealityCapture 58 “Recruit, The” 94, 94 rectilinear crop 38 RED Dragons 25 RED Epic 130 red, green, blue (RGB) 72 RED Weapon 22 Reese, Greg 135 refresh rate 73–74 refugee camps, “Clouds Over Sidra” 116 Regard3D 59 rehearsals 127 resolution 21, 72–73; Pen Tile matrix screen 72, 73; RGB 72, 73 RGB see red, green, blue Ricoh Theta 104, 125 Rio Olympics 38 Roberts, Dylan 116–117, 153 room-scale VR 26, 48–49, 55, 66, 70, 84, 88 Rosetta Stone 81 Roto VR 77 S3D see stereoscopic 3D Samsung Galaxy S8 73 Samsung Gear 360 29, 48, 104, 117, 125, 128, 129 Samsung Gear VR 2, 14, 21, 47, 67, 68; ambisonic format 42; minimum mobile requirements 74; Samsung Galaxy S8 and 73 Samsung Pen Tile matrix 72 Sandin, Daniel 14 Sarah, Lakshmi 115, 116, 146 Saturday Evening Post 16 Sawyers Company 11 Sayre Glove 14 Sayre, Rich 14 scene transitions 44 Schklair, Steve 20, 26, 46, 75, 101, 154; budgeting/ scheduling 123; close-ups and 108, 110; editing and 108; “Marriage Equality VR” 101, 104, 110–111, 111; monetization, VR and 137–138 Scott, Benjamin 130 screenwriting: commercial script formatting 122; POIs and 120, 121; previz and 127; traditional script formatting 121, 121; for VR 119–122 sculpture vs virtual reality 84 SDK see software development kit “Sea Gypsies” 129 “Second Life” 94 Seed&Spark 132, 147 Sega 13; VR-1 motion simulator arcade 13

SegaWorld 13 Sennheiser Ambeo VR 41 sensor size 21 Sensorama 12, 12 Seymour, Mike 25, 30, 36, 37, 47, 131, 133, 135 Shakespeare, William: Henry V 97; Macbeth 83 Shepherd, Duncan 35, 111–112, 154 "Simpsons, The" (theme park ride) 14 "Simpsons VR, The" 7, 151 simulations 37, 85, 119, 136, 143–144 "Sisters" 119–120 sitting height, viewer and 99, 108, 114 Six Flags amusement parks 14 Sketchpad 12 Skybound Entertainment 58 "Sleep No More" (immersive theater) 82–83, 82 "Slices of Life" (interactive VR) 56, 57, 157 smartphones 15, 29; gyroscopes/accelerometers and 68; mobile VR and 67–68 "Soarin' Over California" (theme park ride) 14 "Soarin' Over The World" (theme park ride) 14 software development kit (SDK) 63 Sony 16 Sony A7S DSLR 131 Soukup, Karin 89, 92–93, 96, 102 sound 136–137; cues 120, 137; immersive theater and 104; POIs and 137; spatial audio formats 40, 42 sound design 112–114; gimmicky audio track 112–113; POIs and 113; substitutions and 113 sound fields 42, 43 sound recording 40–42; ambisonic microphones 41; ambisonics and 40–41; binaural sound 42; lavalier microphone and 42 Soundfield (ambisonic mic) 41 Source Sound, Inc. 44, 45, 148 spatial audio 43, 91, 114; formats 40, 42 spatialization plug-ins 43 Spinney, J. 113 split-screen technique 110 Sproull, Bob 12 Starbreeze 49 StarCAVE 15 StarVR 57, 67, 70; active markers and 72; fiducial markers and 70; field of view 72, 74; FoV and 74; IMU and 71; minimum PC requirements 75 Steam 2, 59, 64, 87, 140 SteamVR, laser sensors and 69 Stephen, Christian 116–117, 117, 153 stereo surround sound, Cinerama and 11 stereocards 8, 11

stereoscopes 8 stereoscopic 3D (S3D) 143, 150, 154, 157; 180° cameras 38; polygon graphics 13 stereoscopic slides 11 stereo viewers 8 stitch lines 22, 23, 24, 24, 25, 26, 33 stitching 19, 20, 23, 30, 32–34, 129–130; 3D footage and 33; 3D stitching 26; distance from camera and 130; final stitch 134; imperfection 134; pre-stitch 134; price and 123; real-time stitching 40; software 34, 38; synchronization tool 32, 33; VFX tools and 135 storyboards 104, 105, 125; previz and 127–128 "Storyteller's Guide to the Virtual Reality Audience, The" 89, 93, 96, 102 storytelling: camera's position and 96; cues 102; directing virtual reality 102–108; embodiment 89–90, 91; fear of missing out 102; first-person format 97–98, 99; immersion 89, 91; location, importance of 94–95; main contributions of VR 91; participant-driven 98; points of interest 99–102; presence 89, 91, 92, 96; storyteller-driven 98; "Where Am I?" 94–96; "Who Am I?" 97–99; world building and 96–97 "Strain, The" 111 "Strangers with Patrick Watson" 86, 91, 92, 107, 149 stroboscopy 20 structured light 71; 3D scanning 71 Styka, Jan 9 SubPac 77 subscription video on demand (SVOD) 140, 141 Sundance Film Festival 1, 17, 59, 94, 120, 138, 147, 150 "Sun Ladies" (documentary) 157 Super Bowl (2017) 40 Sutherland, Ivan 12–13 SVOD see subscription video on demand "Sword of Damocles" (HMD) 12–13, 13 SXSW 66, 139 Syria, "Project Syria" 115–116 "T2 3-D: Battle Across Time" 14 "Taro's World" 92–93, 96 TASCAM DR-701D 41 Tele-Eyeglasses 11, 12 Telesphere Mask 12 television: cable and 123, 124; fps 20 television eyeglasses 11, 12 Teradek Sphere 31, 31, 38, 128 Tetris 94, 95 theater: actors and 82, 107; asides and 97; direction of attention 82; immersive 82–83; traditional 82; vs virtual reality 82–83 theme parks: VR and 13–14, 49; VR rollercoaster rides 14

Theta S 29 third dimension 7–8 "Through the Ages: President Obama celebrates America's National Parks" (VR film) 99, 146 "Tilt Brush" (VR app) 126, 128 time of flight (TOF) 61 Tiny World Productions 146 titles 137 TOF see time of flight top/bottom format 46–47 TPCast wireless adaptor for Vive 77 tracking 68–72; all-purposes trackers 76; computers and 14–15; eye tracking 78; hand tracking 75, 76; infrared light and 69–70; inside-out tracking 71–72; magnetic tracking 70; optical tracking 70–71; outside-in tracking 71–72; positional 68, 69–70; positional, laser sensors and 68–69; structured light and 71 tracking systems, BOOM 15 transactional video on demand (TVOD) 140, 141 transitions 44, 111, 134 Transport 48, 64 treadmills 75 Tribeca Film Festival 139, 149 Tricart, Celine 46, 101, 104, 105, 111, 130, 147; 3D Filmmaking 157 TVOD see transactional video on demand "U-571" 148 Uncorporeal 61 "Under the Canopy" 35, 111, 131, 133, 154, 157 Underwood & Underwood 8 United Kingdom (UK) 49 United Nations (UN) project 116 United States (US): arcade gaming market 49; military 10; US Open, VR and 38 Unity 52–53, 62–63, 128; first-person shooter character 54; interface 52, 53, 54; moving in the environment 53; skybox 53; terrain 53–54; VR and build 54, 55 Universal Studios theme parks 14 University of Illinois 14 University of Southern California (USC) 15, 115; Institute for Creative Technologies 15; School of Cinematic Arts 115 University of Utah 13 Unreal 52, 54, 63, 128 Unreal Engine 4 52 untethered VR accessories 77 Valve 140; Lighthouse 68–69 Vectrex system 13 Vegh, Alex 25, 36–37, 133, 155

Venus de Milo statue 84, 84 vestibular system 56, 65–66, 65 VFX see visual effects Viacom NEXT 56, 97, 151 Victoria, Queen 8 video on demand (VOD) 39, 140 video games 13, 87 Video Stitch 34 View-Master 11, 11 virtual reality (VR): 360° video and 2; definition of 1–2; use of the term 12, 15 Virtuality system 13 Virtuix Omni 77 VirZOOM 77 visible light 69 “Visitor VR, The” 23, 31, 83, 103, 126, 132 visual effects (VFX) 25, 26, 33, 36–38, 123, 135, 136 visual quality 72–74; field of view and 74; lenses and 73; refresh rate and 73–74; resolution 72–73 Vitarama 10 Vive controllers 76 Vive Deluxe Audio Strap 77 Vive Tracker 76 VOD see video on demand Void, The 49 volumetric capture 51, 59–61 Von Kuhn, Caroline 147 VPL Research 15 VR Renderer 43 Walker, Ralph (architect) 10 Waller Flexible Gunnery Trainer 10, 10 Waller, Fred 10–11, 12 war zones: journalists and 153; “Project Syria” 115–116; VR filming and 116–117; “Welcome in Aleppo” 117 “Welcome in Aleppo” 117 West Mosul 116–117 Wevr 51, 85, 119, 136, 143, 145; annual subscription and 140; “Memory Slave” 132 Wheatstone, Sir Charles 7–8 White House project 114, 146 Wikipedia 105 Wilson, Andrew 30, 100, 105, 107, 157 Wilson, Luke 99, 100, 101, 107, 109, 157 wireless solutions 75, 77 Within 48, 140 workflow 19–20 world building, VR and 96–97, 125 World Economic Forum 115 World Fair (1939) 10

World VR Forum 139 World War II 10 XBox One controller 76 YouTube 1, 40; Best of 360 List 149 YouTube 360 2, 47, 48, 137, 140, 143

Zaxcom system 42 Zermatt, Switzerland 115 Zero Latency 49 Zimmer, Hans 29, 29, 136 Zimmerman, Thomas 15
