Future Presence: How Virtual Reality Is Changing Human Connection, Intimacy, and the Limits of Ordinary Life
4. Empathy vs. Intimacy: Why Good Stories Need Someone Else

EVERY SPRING, a thousand curious people descend on Vancouver, British Columbia, to watch what might be the world’s most expensive live-staged PowerPoint presentations. The TED Conference is the annual flagship event of its nonprofit namesake, TED (which stands for Technology, Entertainment, Design). The short description is that it’s a celebration of ideas; the slightly longer one is that it’s a five-day parade of scientists, authors, ex-presidents, and other “thought leaders,” as well as Bono, delivering short speeches and lectures to a rapt audience of people who had a spare eighty-five hundred dollars to spend on tickets. “TED Talks” are known for being counterintuitive and charming. They also happen to be internet gold: thousands of TED Talks stream online, and they’ve been watched well more than a billion times—not as many as a Justin Bieber video (well, some Justin Bieber videos), but at least enough to make you feel like civilization isn’t completely unsalvageable. In the thirty-four years since TED began, it’s spawned an empire of mini-TEDs in every city and region you can think of. Admittedly, the so-called TEDx conferences aren’t always as rigorous as the original; a few years ago, someone in Portland gave a talk called “How to Use a Paper Towel,” which, like, come on. The flagship remains a hot ticket, though, and every year it gives rise to a handful of TED Talks that legitimately feel like brain food. In 2015, one of those talks came from a guy named Chris Milk. If you’re a music fan, you might be familiar with Milk’s work—he’s directed videos for Kanye West, Johnny Cash, Beck, and a bunch of other big names. He’s also always enjoyed playing with technology and has created a number of experimental interactive projects, like the time he made “The Wilderness Downtown,” an interactive video for Arcade Fire that featured your childhood home.
Yes, your childhood home; the video played in a web browser, and when you entered your address, it used satellite imagery and Google Street View to personalize the video for each viewer. With a creative streak like that, it’s little surprise that Milk was one of the first filmmakers to start working in virtual reality. In 2014, he filmed a 360-degree video of the Millions March NYC against police brutality, making it an early example of VR journalism. By the time Milk got onstage for his TED Talk in 2015, he’d taken VR pieces to the Sundance Film Festival, including a short documentary he’d made for the United Nations that chronicled the life of a young Syrian girl in a refugee camp. His talk was called “How Virtual Reality Can Create the Ultimate Empathy Machine,” and it drew on that documentary, as well as the “Wilderness Downtown” video, to evangelize about the power of VR presence. “It’s not a video game peripheral,” he said toward the end of his ten-minute talk. “It connects humans to other humans in a profound way that I’ve never seen before in any other form of media. And it can change people’s perception of each other . . . virtual reality has the potential to actually change the world.” Obviously, I think he’s right, or I wouldn’t have written this book. However—and obviously I also think there’s a “however,” or I wouldn’t have written this book—I also think that his TED Talk, as grand as it was, stopped short in a very important way. Empathy is a marvelous quality, and the ability to truly imagine another person’s life would make most of us better to be around in general. It’s a crucial ingredient of the recipe for human connection. It’s the ingredient that matters most for filmmaking, certainly. But it’s not the only one. The other one is just as transformative, and possibly even more immersive. It’s empathy’s fraternal twin, intimacy.

EMPATHY VS. INTIMACY: APPRECIATION VS. EMOTION

Both of these words are fuzzy, to say the least.
Both have decades of study behind them, but both have also appeared on more magazine covers than just about any word, other than possibly “abs” and “Oprah.” What they truly mean often says as much about the person using them as the words themselves, so let’s try to boil them down a bit so we’re starting with a shared definition. To that end, let’s see what the Oxford Dictionary of Sociology says about both terms. Empathy: The ability to identify with and understand others, particularly on an emotional level. It involves imagining yourself in the place of another and, therefore, appreciating how they feel. Intimacy: A complex sphere of “inmost” relationships with self and others that are not usually minor or incidental (though they may be transitory) and which usually touch the personal world very deeply. They are our closest relationships with friends, family, children, lovers, but they are also the deep and important experiences we have with self (which are never exactly solitary): our feelings, our bodies, our emotions, our identities. Immediately, you can see a few distinctions. Empathy necessarily needs to involve other people; intimacy doesn’t. Empathy involves emotional understanding; intimacy involves emotion itself. Empathy, at its base, is an act of getting outside yourself: you’re projecting yourself into someone else’s experience, which means that in some ways you’re leaving your own experience behind, other than as a reference point. Intimacy, on the other hand, is at its base an act of feeling: you might be connecting with someone or something else, but you’re doing so on the basis of the emotions you (or both of you) feel. 
In fact, you can already see that a lot of the VR we’ve examined up to this point doesn’t necessarily induce empathy, but it still falls squarely into the realm of “intimacy.” While meditation, visualization, and even (depending on how much of a monster you are) Henry the hedgehog’s birthday party may not push you to appreciate what others feel, they do trigger experiences—sometimes articulable, sometimes primal—that touch your personal world. And one type of VR experience perfectly illustrates the surprising gap between empathy and intimacy: live-action VR. Unlike CGI-based storytelling, which falls somewhere in between game and movie, live-action VR feels much more like the conventional video forms that we’re used to from television and movies. As in those media, people have been using VR to shoot everything from narrative fiction to documentary to sports; like everything else in VR, though, it places you at the center of a 360-degree sphere of action. Using it, I’ve experienced empathy without intimacy, and even intimacy without empathy. (I acknowledge that the last sentence makes me sound like a sociopath; it’s not that I couldn’t muster up empathy, only that some experiences might not actively induce empathy. And at the risk of this parenthetical aside going on far too long, let me just point out that we’ll explore this more and more as the book goes on. By “this,” I don’t mean long parenthetical asides—though, actually, probably those too. At this point, it’s just more to see what my editor lets me get away with. Still with me? Great. Let’s get back to the rest of the paragraph.) And a quick spin through some seminal examples of early live-action VR can illustrate some of the differences in—and overlap of—empathy and intimacy.

WE’LL DO IT LIVE

Well before the renaissance of consumer VR, people were using virtual environments not simply for entertainment, but for information.
The most prominent of these was Nonny de la Peña, a documentarian and journalist known as “the godmother of VR”; as a senior research fellow and then doctoral student at the University of Southern California, de la Peña devised prototypes and experiences that charted an early course for reported storytelling in VR. In 2007, she used the virtual community Second Life to re-create Guantanamo Bay prison. (In case you don’t remember Second Life, it was like the video game The Sims, minus the game part. Users could design their own characters, environments, and activities and then interact with other users through a computer interface. It got very popular very quickly, and then got less popular even more quickly. However, the company that maintains it claims that nearly a million people still use it every month.) At the Sundance Film Festival in 2012, the same year the phrase “Oculus Rift” was uttered for the first time, de la Peña screened a VR experience called Hunger in Los Angeles. It used CGI, along with archival audio, to reenact a heartbreaking episode that happened at a Los Angeles food bank on a hot summer day. In the piece, as in real life, an overlong line leads to delays, and a man collapses into a diabetic coma. “Okay, he’s having a seizure,” a volunteer says as the man spasms helplessly on the sidewalk; a bystander calls 911. The entire time, you’re able to walk throughout the crowd of people, seeing their faces, watching their struggle. It’s not light fare. This was many people’s first experience with virtual reality—Chris Milk himself among them—and the first glimpse at presence proved to be overwhelming. In one video viewable on YouTube, the actor Gina Rodriguez (Jane the Virgin) actively weeps while a volunteer helps her take her headset off. As emotionally affecting as these experiences were, though, they were still clumsy; the computer-generated graphics were simplistic by today’s standards. 
It wouldn’t be until early 2014 that live-action video managed to capture the three-dimensional 360-degree visuals that are virtual reality’s hallmark effect. A short documentary called Zero Point was ostensibly about VR, but it was shot for VR, using a ring of ultra-high-definition cameras pointed outward—so that the camera functioned as the viewer’s eye, able to see anywhere inside the 360-degree sphere. (The 3-D effect arose from using two lenses for each view, allowing for the same fool-your-brain trick that’s been around since the stereoscopes of the nineteenth century.) At nearly the same time, Chris Milk was developing his own VR filmmaking pipeline at a new company he’d founded, and that company—now called Within—came into its own with a documentary he created with the United Nations. Clouds Over Sidra takes viewers inside the life of a twelve-year-old girl living in Zaatari, a Syrian refugee camp in Jordan. When the experience begins, you’re in the desert, footsteps and tire tracks traversing the sand. “We walked for days crossing the desert to Jordan,” a woman says in a voice-over, translating young Sidra’s words. Over the next seven minutes, you glimpse daily life in Zaatari through a series of vignettes, narrated by Sidra’s words: you see her family bustling inside their small house, boys playing first-person shooters inside a small gaming café. As the film continues, the sense of normalcy mounts; whether you’re watching men stacking fresh hot flatbread inside a bakery or a cluster of young girls playing soccer, you find yourself lulled by the seemingly universal daily routines. (“Here in Zaatari, unlike home, girls can play football too,” Sidra says. “That makes us happy.”) Lulled, that is, until a shot fades in and you see Sidra and her family eating a meal—under the shelter of a UN Refugee Agency tent. “My teacher says the clouds moving over us also came here from Syria,” Sidra’s narration says. “Someday, the clouds and me are going to turn around . 
. . and go back home.” As a conventional documentary, Clouds Over Sidra might seem simple. The brief, static takes feel more like the type of short film you’d see at a museum, in one of those small bench-filled screening rooms you duck into to rest your legs. Yet, as a VR experience, it’s anything but simple. The children aren’t smiling and gesturing at a camera; they’re engaging with you, the visitor to their camp. The clouds rolling overhead—over your head—feel at once ominous and hopeful. You haven’t just watched a video postcard that Sidra made; you’ve spent time in Zaatari with her, seeing what she sees. In other words, you no longer have to imagine the emotions of a young girl living in a refugee camp. You’ve been there with her. And now, to paraphrase our working definition of empathy, you can appreciate how she feels.

THE MOMENT OF THE MOMENT

About a year before Chris Milk’s 2015 TED Talk—before even Clouds Over Sidra—I got my first firsthand taste of VR’s potential beyond both entertainment and empathy. Oddly enough, it was at yet another annual conference known by its initials: SXSW. The South by Southwest festival in Austin, Texas, has over time become home to three overlapping but distinct conferences dedicated to interactive technology, film, and music. And in 2014, SXSW saw its first influx of VR. In order to promote the upcoming season of Game of Thrones, HBO had brought along a VR installation that allowed people to step into what was essentially a vibrating phone booth—but once they put on a headset, everything they saw, heard, and felt made them think they were ascending the seven-hundred-foot-tall Wall from Game of Thrones. (During her time in the booth, a colleague of mine yelped and flung herself backward, falling out of the booth and into the arms of a quick-reflexed HBO rep.)
Not far from that, in a converted loft, I met Félix Lajeunesse and Paul Raphaël, a couple of French-Canadian commercial directors who had recently jumped into developing movies for VR. They sat me down, put an early version of a Rift on my head and headphones over my ears, and played for me their first project, Strangers. This wasn’t a computer-generated environment, like every other VR demonstration I’d seen up until that point. It was video. For the first time, I was inside a movie. I was sitting in an apartment, the floor littered with instruments and recording equipment; in front of me, a man at a piano smoked a cigarette. It was quiet in the apartment, other than the man idly plunking his way through some half-formed riffs, but it was also overwhelmingly tranquil. Soon, the man started to play in earnest, and to sing. He seemed to not mind my being there, or even to register my presence, so I didn’t think it would be rude to look around while he played. I couldn’t move—Lajeunesse and Raphaël had captured the scene by placing multiple cameras on a stationary rig—but I could spin around to see the rest of his apartment. Behind me, on the hardwood floor, a dog dozed in the glow of a nearby lamp. “Are there strangers who live in your head?” the man sang. “Are there strangers who walk in your heart?” I somehow felt both invisible and seen. I felt like there was nowhere else in the world I needed to be. And above all, I felt a sense of warmth like I’d never experienced outside of real life. I was with a stranger, and I felt like I’d just gotten a hug from someone I genuinely cared about. When the short film ended, I took off the headset and looked at Lajeunesse and Raphaël. “This is a weird word to use,” I said, “but it felt . . . intimate.” “That’s how we think about it,” Lajeunesse said. “We think about creating personal experiences for people to live an experience of presence. 
We don’t think of it as ‘he’s doing a performance’; we think about it as ‘he’s just with you.’ It really is a moment with that person before being anything else.” A moment with that person. Think about that for a second. When’s the last time you told someone a story about a terrible thing that had happened to you and you used the word “moment”? Moments are small, good things. They’re ephemeral, but have a lasting effect. To be warm and fuzzy about it, they’re tiny slices of human connection. You can interact with dozens of people in a day, from co-workers to store clerks to people on the bus, but the interactions that stay with you are the moments. They’re the bits of shared laughter or surprising sincerity, the displays of compassion. Moments are the building blocks of intimacy among people—even people you don’t yet have an intimate relationship with. And with VR, those moments can arise when the person you’re sharing it with isn’t actually there.

BRINGING FOCUS BACK TO FICTION

Let’s back up a second here. Every single story has only one goal at its base: to make you care. This holds true whether it’s a tale told around a campfire at night, one related through a sequence of panels in a comic book, or the dialogue-heavy narrative of a television show. The story might be trying to make you laugh, or to scare you, or to make you feel sad or happy on behalf of one of the characters, but those are all just forms of caring, right? Your emotional investment—the fact that what happens in this tale matters to you—is the fundamental aim of the storyteller. Storytelling, then, has evolved to find ways to draw you out of yourself, to make you forget that what you’re hearing or seeing or reading isn’t real. It’s only at that point, after all, that our natural capacity for empathy can kick in.
Yet, innovation in this area has been surprisingly rare; once we settled on the idea that simulations happen inside a frame, all we could really do was bring the frame closer to us: bigger screens, louder speakers, the gimmicky hints of immersion that swept movie theaters during the 1950s and 1960s—rumble seats and Smell-O-Vision, misters and hypnotists. Meanwhile, technology continues to evolve to detach us from those stories. For one, the frame itself continues to get smaller. From news to television to literature, more and more of our media consumption is happening on our phones—or at least something reasonably phone-size, like an e-reader. But the size of the window is only part of it. By nature of its portability, the phone isn’t something you lose yourself in, but a supplement to everything else. It’s increasingly common to watch a television show on a phone or tablet while in line at the bank or on the bus commuting to work. (And when we do use a traditional television, we’re often on our phones at the same time, scrolling through social media or news, or even YouTube.) Stories are fighting for our attention, and even when they get it, we’ve got one cerebral foot out the door. Stranger still, this distraction has happened while stories continue to become more and more complex. Narratively, at least, stories are more intricate than they’ve ever been. Long-arc prestige TV series pull scores of characters and numerous storylines along multiple seasons; cinematic universes encompass a decade’s worth of movies; fantasy epics can span a dozen books or more. Even sitcoms, once a simple twenty-two-minute dose of laughter, can rival dramas for structural intricacy, challenging viewers with knotted plots and multiple points of view. And all the while, the things we want to consume but haven’t yet continue to pile up. There’s more than we can ever read or watch, and we want to read or watch it all—but when we do, we binge, often with another frame close by.
Now, with VR storytelling, the distracting power of multiple screens has met its match. There’s no Twitter notification or calendar alert or lingering text conversation that follows us into an experience like Clouds Over Sidra or Lajeunesse and Raphaël’s Strangers. There’s just us, in there, with other people’s stories, giving them our full attention. Given what a rarity undivided attention is these days, that’s a miracle in itself. And by experiencing those stories inside the frame, we’re left with, as Chris Milk said in his TED Talk, an unparalleled capacity for empathy. That’s not to say that VR has transformed us into a nation of altruists. In fact, Paul Bloom, a psychology professor at Yale who in 2016 ruffled feathers with his book Against Empathy, has argued that VR at best offers a pale knockoff of empathy: “The problem is that these experiences aren’t fundamentally about the immediate physical environments,” he wrote in The Atlantic. “The awfulness of the refugee experience isn’t about the sights and sounds of a refugee camp; it has more to do with the fear and anxiety of having to escape your country and relocate yourself in a strange land.” He’s partly right; despite the 360-degree fullness of what the viewer experiences, VR documentary experiences can offer only a slice of a subject’s experience. However, there’s evidence that even now, VR’s immediacy and immersion do actually lead to increased philanthropy. When the UN screened Clouds Over Sidra for potential donors before a humanitarian conference in Kuwait, the organization raised nearly $4 billion—twice what was expected. The ability to understand another person’s experience is clearly a magical ingredient for documentary and narrative stories. The characters you meet take on new depth; it’s only natural to become more invested in people’s journeys and relationships when you’re present in their environments. When a story tries to scare or charm or arouse you, it’s all the easier when you’re there. 
But if those stories are simply about other people, then empathy is effectively the experiential ceiling. You can understand a person’s life, but you don’t share it. When you’re in Zaatari watching Sidra’s family eat, for example, you see them having a moment together, but you’re not included in that moment. You witness it, rather than being part of it. And intimacy—at least once it extends outside yourself—is by nature a shared phenomenon. So the ability to wring intimacy out of VR is going to depend on experiences that involve you, experiences that establish moments. Just how VR can and will establish those moments is still a matter of exploration and experimentation. Félix Lajeunesse and Paul Raphaël’s Strangers doesn’t impose a story on the experience, nor does it enlist you as a character; depending on how much you assume that the pianist is singing to you, it doesn’t even exactly acknowledge your presence. Instead, by inviting you into another person’s solitude, it lets you share that experience in a sustained way. There’s a with-ness to it, which is what makes it a moment. That’s just the beginning, of course. The first few years of live-action video have been hamstrung by a severe technical constraint—one that we’re just now starting to get past. And as we do, we will be able to unlock a new level of intimacy in storytelling.

THE FULLNESS OF YOU

For all the grandeur that we attach to the name, Silicon Valley is not a grand place. It’s certainly not a valley, at least not anymore; while the sobriquet once referred to San Jose and lands south, where chip manufacturers cluster, Silicon Valley has crept northward to colonize the exurban sprawl of the San Francisco Peninsula. Now, tech culture crams itself alongside residential communities and commercial arteries on a narrow swath of land that’s bounded by the San Francisco Bay on one side and winding hills on the other.
A handful of towns—Menlo Park, Cupertino, Palo Alto—host a matching handful of billion-dollar behemoths. In Mountain View that juggernaut is Google, but the city is also home to dozens of other companies, tucked into small offices along quiet, curving roads. Inside one of those, that of an imaging company called Lytro, CEO Jason Rosenthal hands me a headset. “You’ll hear about eight seconds of audio before you see anything,” Rosenthal says, “then the lights will come up.” When the lights come up, I’m watching an astronaut step off the ladder of his lunar lander, down onto the surface of the moon. The earth hovers over his shoulder in the jet-black sky. “That’s one small step for man,” he says, “one giant step for mankind.” One giant step? I think. I thought it was lea—and as if on cue, a voice behind me barks “CUT!” But then even more lights come up . . . and I realize I’m standing on a film set, watching conspiracy theorists’ favorite hobby horse come to life: Stanley Kubrick directing Neil Armstrong’s “fake” moon landing. The astronaut and lander are still there, but the jet-black sky overhead is gone, replaced by girders and rigging and the other trappings of a Hollywood soundstage. It’s a short video, just enough to sell the joke, but the joke isn’t the real point. The point is that this is a 360-degree video unlike any other. When I lean to the side to get a better look at the lunar lander, my perspective changes; when I crouch down to get out of Kubrick’s way, he looms higher in my field of vision. This is a video, but I’m moving within it more like it’s real life. This is “lightfield” technology, and it’s one of a handful of ways companies are trying to game VR video’s biggest limitation. Remember this? That’s the diagram of the six different axes on which your head can move, the so-called six degrees of freedom. 
The curved lines are all the ways your head can rotate—tilting, nodding, turning—and the straight lines are how it can change position within space. Physicists refer to these as “rotational” and “translational” movement; the VR industry prefers “rotational” and “positional.” (Personally, I prefer “rotational”/“locational,” because it rhymes so nicely, but I also lack a degree in physics, so you might want to side with the scientists on this one.)

[image: diagram of the six degrees of freedom]

As I mentioned in Chapter 1, high-end VR headsets are able to track both rotational and positional motion, but mobile headsets like the Samsung Gear VR and Google Daydream View can handle only rotational tracking. That’s because they rely on a smartphone’s internal accelerometer and gyroscope. If you load up Henry on your Oculus Rift, you can peek underneath Henry’s living-room table; not so if you watch it on your Gear VR. Because of that limitation, mobile VR tends to be a seated and stationary activity, while PC-driven headsets allow so-called room-scale VR in which you can roam around on foot IRL in order to explore a virtual space. However, we’re on the cusp of a big change here as well. In 2018, all-in-one “standalone” headsets are coming to market that are able to track your position in space, meaning they allow all six degrees of freedom. (The jargony phrase is “6DOF tracking,” which is what you can call it if you’re hanging out with VR enthusiasts. And yes, it’s pronounced “doff.”) Those may well end up becoming the future of everyday VR: they contain dedicated displays and computer processors, so you get all the 6DOF-tracking goodness of a PC-powered headset without the actual PC—or the cables that tether you to it. Better VR at lower prices is great news for everyone. Yet the fact that wireless headsets can be tracked positionally doesn’t change another fact: live-video VR still can’t take advantage of it.
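The rotational/positional split maps neatly onto how tracking software tends to represent a head pose: a rotation plus a translation, with rotation-only hardware simply ignoring the translational part. Here is a minimal sketch in Python; the names and structure are illustrative, not taken from any real headset SDK.

```python
from dataclasses import dataclass

# A head pose is a rotation (Euler angles in radians, for simplicity;
# real SDKs use quaternions to avoid gimbal lock) plus a position in meters.
@dataclass
class HeadPose:
    yaw: float = 0.0    # turning left/right
    pitch: float = 0.0  # nodding up/down
    roll: float = 0.0   # tilting ear-to-shoulder
    x: float = 0.0      # positional axes: only tracked on 6DOF hardware
    y: float = 0.0
    z: float = 0.0

def update_pose(pose: HeadPose, rotation, translation, six_dof: bool) -> HeadPose:
    """Apply one tracking sample. A rotation-only (3DOF) headset
    silently drops the translational component."""
    d_yaw, d_pitch, d_roll = rotation
    pose.yaw += d_yaw
    pose.pitch += d_pitch
    pose.roll += d_roll
    if six_dof:
        dx, dy, dz = translation
        pose.x += dx
        pose.y += dy
        pose.z += dz
    return pose

# The same sample: a slight head turn plus leaning 10 cm forward.
# A Gear VR-class device registers the turn but never sees the lean.
mobile = update_pose(HeadPose(), (0.1, 0.0, 0.0), (0.0, 0.0, -0.1), six_dof=False)
tethered = update_pose(HeadPose(), (0.1, 0.0, 0.0), (0.0, 0.0, -0.1), six_dof=True)
```

That dropped translation is exactly why you can't peek under Henry's table on a phone-based headset: the lean happens in the real world, but the virtual camera never moves.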
That’s always been a limitation of any photograph: you might be able to use two almost-identical images to trick your brain into seeing depth, but that depth isn’t real. The same thing holds true for a 360-degree photo or video. Clouds Over Sidra may induce a 3-D effect, yet in reality it’s about as deep as a 3-D movie—you can’t change your perspective within the spherical frame, or change your location within it. Wherever the camera rig was placed, that’s your vantage point, for better or for worse. Lytro’s moon-landing video, though, utilizes an entirely different kind of camera rig. Rather than arraying a bunch of outward-facing cameras around an axis that’s meant to be the viewer’s perspective, Lytro’s Immerge camera is a big flat hexagonal panel with sixty small cylindrical camera lenses embedded on it—like a fly’s eye, kind of. Here’s the fun part: each of those lenses is able to capture not just light, but the direction in which light is traveling. By shooting the same scene from multiple angles, the camera is able to record the scene in a way that knows exactly what you’d see from any point within that scene. Software then stitches it all together into a VR video that allows you to examine your surroundings from multiple angles. (It’s much much much much more complicated than I’m making it sound. Lytro’s founder wrote a doctoral dissertation on the topic of lightfield capture, and let’s just say that it’s beyond most of us. Also, don’t get your hopes up about picking one up for a quick shoot; the camera system in question is six-figures expensive.) The moon landing was an early demo, but Lytro is already seeing the fruits of its work. Chris Milk’s company, Within, used the system to create Hallelujah, a music video in which a singer performs Leonard Cohen’s stirring song. 
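To make the capture idea concrete: each lightfield sample is effectively a ray, an origin, a direction, and a color, and rendering a novel viewpoint means finding the recorded rays that best match what your eye would see from that spot. The toy Python sketch below uses a nearest-ray lookup over a handful of hand-made 2-D samples; a real pipeline interpolates among millions of rays, and every name here is purely illustrative.

```python
import math

# Toy lightfield: each sample records where a ray of light was captured,
# the direction it was traveling, and its color.
rays = [
    # ((origin_x, origin_y), (dir_x, dir_y), color)
    ((0.0, 0.0), (1.0, 0.0), "red"),
    ((0.0, 1.0), (1.0, 0.0), "green"),
    ((0.0, 0.5), (0.7, 0.7), "blue"),
]

def render_ray(eye, look_dir):
    """Return the color of the captured ray closest (in both origin and
    direction) to the ray the eye would see. Real renderers blend many
    nearby rays instead of picking a single winner."""
    def mismatch(sample):
        origin, direction, _color = sample
        return math.dist(eye, origin) + math.dist(look_dir, direction)
    return min(rays, key=mismatch)[2]

# Move the eye point and a different recorded ray wins the match —
# the crude version of why you can lean and crouch inside a lightfield video.
```

Because the lookup depends on the eye position, not just the gaze direction, shifting your head even a few centimeters selects different recorded rays, which is what conventional 360-degree video, locked to a single camera position, cannot do.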
The effect is extraordinary; while the singer doesn’t follow you with his eyes the way Henry the hedgehog does, the sense of with-ness is strong—and when the lights come up, as they did in the moon-landing demo, and you see for the first time the choir behind the singer and the splendor of the church you’re standing in, that with-ness is coupled with straight-up awe. Lytro’s lightfield method is just one way people are creating “volumetric” video—video that isn’t a flat plane, but a full space. Some companies are recording human performances on a green screen and then digitizing those performances and placing them into other video environments that have also been digitally re-created, allowing for movement within the environment. (Imagine motion-capture performances like Gollum in The Lord of the Rings, just without the need for a mo-cap suit.) Facebook has even developed two small cameras that promise video you can move within, for more indie-film-friendly prices. VR video will soon feel far freer, and more comfortable, than it does now. As welcome as the technical triumph is, though, the true impact may be what it means for storytelling’s ability to establish intimacy. Merely sitting in a musician’s studio, as I did in Félix Lajeunesse and Paul Raphaël’s Strangers, was enough to create a moment. But what if I had been able to get up, walk across the room, and sit on the floor with the napping dog? We’re starting to get answers to that question, as more and more volumetric VR video experiences are screening at film festivals. In one, you accompany a Holocaust survivor to the Polish concentration camp where he spent his childhood; standing with him in the barracks and crematorium, looking at the empty beds, is more personal and affecting than any documentary or tear-jerking feature film. In another, created by Nonny de la Peña’s studio Emblematic Group in collaboration with PBS series Frontline, you stand with an ex-convict in a solitary confinement cell. 
Despite some moments where you can tell that he’s been digitized—the edges of his clothes flicker at times—leaning toward him while he talks about the long-term effects of sensory deprivation is chilling. It’s sympathy, empathy, and intimacy, all wrapped up in one VR package. Perhaps most exciting, each of these experiences comes from a different studio, each using a different method to deliver new depth to video—and new depth to the connections we make within those videos.

EXPERIENCING OUR LIVES—TOGETHER

What video still can’t do, though, is bring more people together inside VR, the way Ray McClure’s singing-multicolored-blobs-at-each-other tag-team project VVVR does. That’s why even VR filmmaking powerhouses like Within are moving beyond mere documentary and narrative and trying to turn storytelling into a shared experience. Make no mistake: storytelling has always been a shared experience. But I don’t mean sitting in the dark with friends listening to a creepy tale or watching a movie together. I mean being conscripted into the story, or even being the story. And that’s exactly what I’m looking for when I walk into the Within office on a sunny afternoon in May. Like so many of the companies in Los Angeles’s burgeoning VR scene, Within is based in Culver City, a hybrid of hip and Hollywood that’s also home to more conventional media powerhouses; Sony Pictures’ massive studio lot is less than a mile away. Fittingly, Within’s director of original content also hails from the world of conventional filmmaking. Jess Engel got her start producing independent movies but landed at Within in 2016 to spearhead the company’s narrative efforts. Later in 2017, she’ll venture out to launch her own VR production company, but today she’s been nice enough to indulge my request to come see the company’s newest work. However, “see” doesn’t quite cover it.
When I visit Engel at Within, she’s just returned from the Tribeca Film Festival, where she helped bring festivalgoers through an interactive VR project called Life of Us. And now, we’re going to experience the title the way Within intends it—together. Once I have my HTC Vive headset on, and the controllers in my hands, Engel heads into a small office just down the hall, gets her own gear on, and launches us into the experience. My first reaction is surprise. I’ve seen just about everything Within has created, and the vast majority of it is video. Now, though, I’m not just a viewer, but a character—and not even a human one at that. I’m a brightly colored, polygonal, almost origami-like amoeba. And so is Engel! “Can you see me?” she asks in my headphones, the software distorting her voice into a burbling high-pitched coo. “There you are!” I say, laughing as my own pitch-shifted voice echoes in my ears, a split second after I say the words. We float around each other, twitching in what I can only assume is a totally amoebic fashion and laughing hysterically. Then, in a flash, the scene changes. Now we’re some sort of primordial marine tadpole things, blowing bubbles and swimming together through the sea while rays of light pierce the ocean’s surface above us. We’re moving as if on rails, free to move our head and hands—if we had hands, that is. But soon enough, we do: now we’re lizards racing two-legged across a desert, a T. rex chasing us. And again; we’re flying now, fire-breathing pterodactyls soaring together over volcanoes. Once more—this time we’re gorillas running in grasslands, besieged by baby monkeys trying to hitch a ride on our galloping simian bodies, swatting the cute pests off each other’s shoulders and arms as we race together toward the next evolutionary step. Then we’re humans, dark-suited office drones running in a sea of look-alikes through a cityscape, papers flying from our briefcases. So much for evolution, I think to myself. 
As we run, the city starts to become darker, more futuristic looking. I look at Engel’s human, and she’s wearing a headset; we both are, with digital devices strapped all over ourselves. Everything goes black. The music stops. Our bodies fall apart. And when a light blinks back on, we’re both female robots (“The future is female,” Engel reminds me later), dancing to a Pharrell Williams song that the artist recorded especially for Life of Us. All around us, we’re surrounded by the creatures we once were, from amoeba to hyper-technologized posthumans—until finally, everything goes black again, and big block letters appear in front of me: WAKE UP. Like so many VR experiences, Life of Us defies many of the ways we describe a story to each other. For one, it feels at once shorter and longer than its actual seven-minute runtime; although it seems to be over in a flash, that flash contains so many details that in retrospect it’s as full and vivid as a two-hour movie. There’s an almost paradoxical compression to the way I think about it now—one that I can only compare, oddly enough, to the one time I went skydiving. When I jumped out of that plane on a clear July morning, I was in free fall for less than two minutes. Yet, that short time has stayed with me in stunning clarity for years—not as a fluid sequence, but as a haphazard collection of micromemories, each one a gimlet snapshot: my feet flailing, the wind buffeting my cheeks, the plane shrinking away behind me. It’s not a holistic experience anymore; it’s a slideshow. Life of Us has stuck with me much in the same way: the volcanoes, the laughter, Jess–gorilla’s face looming near mine as she plucks a monkey from my arm. There’s another thing, though, that sets Life of Us apart from so many other stories—it’s the fact that not only was I in the story, but someone else was in there with me. 
And that someone wasn’t a filmed character talking to a camera that I somehow embodied, or a video game creature that was programmed to look in “my” direction, but a real person—a person who saw what I saw, a person who was present for each of those moments and who now is inextricably part of my odd, shard-like memory of them. I know Engel has gone through this journey dozens, maybe hundreds of times, so when we sit down to talk afterward, I ask her the question that I can’t stop thinking about: Is VR—and Life of Us in particular—still something that she can lose herself in? “It’s the moments when so many of my senses are engaged,” she says. “My voice. My eyes. My body. Even though I’ve done it, it’s still different, because we’re having a different experience together. It’s like a restaurant. You can go to the same restaurant over and over again. But every time you go to that restaurant it’s going to be different because the person you go with is different—even if you order the same thing, it’s going to be different. That’s what shared experiences can do.” “What’s interesting about this tool,” Engel says, pointing to a headset, “is that it has no meaning. It’s just some hardware. But the way you use it has a lot of meaning.” “It’s the restaurant,” I say. “The restaurant is the headset.” She smiles. “You go there, and it’s about who you’re with and how you engage with each other.” Who you’re with. How you engage with each other. For the first time in our lives, we’re faced with a technology that actually puts us somewhere else. Physically, mentally, and, yes, emotionally. It doesn’t merely demand our attention, or capture our imagination, or make us think about it afterward; it does away with sentences like I read a book about . . . or I played a game in which . . . and replaces them with I evolved from a microbe into a shimmering posthuman light. 
And best of all, it lets us experience that story with someone else—and, in doing so, find a new kind of intimacy with them. Acknowledgments Fun fact: the acknowledgments section is my favorite part of any book. Some are dry, some are disarming, some are pompous—okay, a lot are pompous—but they make a book feel alive, like some sort of complex organism that only exists thanks to this magical concert of disparate relationships. Writing one, though? Not as easy as you’d think. For one, I have no idea how far back I’m supposed to go. Obviously, any chance I had at stringing together this many words in a row starts with my parents. Being the kid of a research librarian (mom) and a college professor (dad) means that I grew up in a house where books and ideas and words mattered—wordplay too, thankfully—and while I absolutely despised writing until sometime in my late teens, there’s not a chance that I would have considered it without them. But after that, where do I go? The teachers who valued critical thought over rote curricula? Sally Harvey, Bob Courtney, Carla Gardner, Greg Mongold, Craig Wilder, Phyllis Garland? Sam Freedman, who finally told me to stop using so many words and just write? (And who also did me the favor of telling me I wasn’t ready for his book-writing course in grad school—because that would have been a disaster for everyone involved.) And once I actually figured out I wanted to write, who then? The writers and editors at Art Cooper’s early-’00s GQ, where I learned from a murderer’s row of incredible talents? Devin Friedman, Brandon Holley, Adam Sachs, Lucy Kaylin, Andrew Corsello, Chris Raymond, Jim Nelson, Adam Rapoport, Michael Hainey, Marty Beiser, Mark Healy—even writing those names out makes me feel like a twenty-four-year-old freelance fact-checker again, but every word I wrote there was at its root emulation, trying to mimic my way into some umbral approximation of their work. 
Later, at Complex, I got the chance to learn all over again, working alongside people who became family as we weathered the upheaval of a recession and learned how to pivot: Noah Callahan-Bever, Donnie Kwak, Justin Monroe, Anoma Ya Whittaker, Tim Leong, Jack Erwin, Damien Scott, Joe La Puma, Bradley Carbone, Mary H. K. Choi. And at WIRED, where after my first week I said the same thing everyone says there after their first week (“holy shit, this is the smartest group of people I’ve ever been around”), I learned not just how to be a better magazine editor and writer, but a better storyteller in all capacities. There are way, way too many people to thank after more than six years there, but I’ll try to limit it to the people that I’ve worked with on VR-related stuff: Scott Dadich, Rob Capps, Jason Tanz, Caitlin Roper, Jason Kehe, Angela Watercutter, Sarah Fallon, Jon Eilenberg, Adam Rogers. That’s just a start, though; there are so, so many other incredible editors and writers I’m lucky to call not just colleagues but friends. And to the research, copy, design, photography, production, social, video, and every other department, past and present: you’re all amazing. I wish I could thank you all by name. (Side note: Man, this really is hard. What’s taking me so long?) Formative experiences aside, this book wouldn’t exist without my inimitable editor Hilary Lawson, who not only emailed me out of the blue because she’d read an essay I’d written and thought that I might have a book in me, but who managed to keep me sane while I took way too much time writing this book. (“But the longer I wait, the more stuff happens!” I told her no fewer than three times over coffee, while she managed to make her patient grimace look like a smile.) To the unflappable champions at HarperOne who made this book better in innumerable ways—Sydney Rogers, Lisa Zuniga, Jessie Dolch, Ann Edwards, Melinda Mullin, Courtney Nobile, and everyone else—thank you. 
And Tiffany Kelly, who managed to fact-check this book in record time: I’m still looking for a third confirmation of how to spell “gratitude,” but believe me, you’ve got mine. Thank you, Scrivener, for being an amazing writing tool that somehow harnessed a zillion interview transcripts, research PDFs, pages of handwritten notes, and assorted disjointed ramblings into something resembling coherence. Thank you to everyone in the VR world who was willing to share their expertise and genius with me, both on and off the record. Thank you to the users—the passionate, curious, invested early adopters who support VR’s promise not just with their purchasing dollars but with their hearts and minds. The future of presence has you to thank. But most of all, I wouldn’t have a future without my wife, partner, and best friend—so thank you, Kelli. Thank you for believing in me, thank you for trusting me, and thank you for talking me off the writer-anxiety ledge more times than I can count. Sitting here on the couch writing this right now, I’ve got you on one side and a tiny sleeping panda-bat-looking dog on the other (hi, Crosby!). Life outside the headset really couldn’t be any better. Wait, though; there’s still one left. So thank you, dear person who read all the way to this point without throwing the book away. There wouldn’t be an acknowledgments section without you. I’ll see you in the Metaverse, I hope. 3 Hedgehog Love Engineering Feelings with Social Presence IN JANUARY OF 2015, consumer headsets were more than a year away—yet, the Sundance Film Festival was experiencing the onset of VR fever. The festival’s New Frontier program, which celebrates “the convergence of film, art, media, live performance, music and technology,” featured thirteen installations, eight of which were virtual reality. They ranged from a short film about refugee children (Project Syria) to a short campy ode to Japanese monster movies (Kaiju Fury!) 
to a short thought-provoking piece about date rape (Perspective, Chapter 1) to a short documentary about VR itself (Zero Point). All were, yes, short. But more important, each was different from all the rest and thus took small steps in its own direction toward figuring out the new rules of whatever VR filmmaking would look like. That year would prove to be an inflection point; the festival would soon establish a stand-alone VR program to allow for the crush of submissions it was getting. Looking back with the benefit of hindsight, though, the most important piece of VR at Sundance wasn’t a festival selection. It wasn’t even a completed work that you could actually watch. It was a title and premise only, mentioned in passing as part of an announcement that Oculus would be producing animated shorts. Henry, the company said, would be a comedy about a hedgehog who loves balloons. It was that, but despite being a mere twelve minutes long, it was also much, much more. THE HEDGEHOG WHO COULDN’T HUG When Henry begins, you’re sitting in an apartment that’s basically the classic six of cartoon-animal real estate: part Ewok village, part artist’s cottage. (Really, what’s an Ewok but a hedgehog with a spear?) Picture frames line the walls, and a slice of a stout tree branch in the middle of the floor serves as a coffee table. Behind you, an easy chair sits next to a wood-burning stove. There’s a teakettle on the stove, and a stack of newspapers next to the chair. It’s no ordinary day in Henry’s home, though. Above you, leaves strung together next to a cluster of floating balloon animals have HAPPY BIRTHDAY written on them. Noises off to your left alert you that someone is puttering around in the kitchen. If you crane forward to get a better view, you see Henry, absentmindedly talking to himself in adorably high-pitched hedgehog-ese. Finally, Henry walks out, holding a feast: a single strawberry on a tray, capped with a dollop of whipped cream. 
It’s all unbearably endearing, like you walked through a movie screen and woke up in a Pixar film. If this is a Pixar movie, though, it’s WALL-E. Because from the moment Henry puts a single candle in the strawberry, you’re not entertained as much as you’re devastated. It’s his birthday, you realize, and he has no one to celebrate with. He puts on a brave face, tossing some confetti in the air and blowing a tiny noisemaker, but not even the ladybug crawling across the table wants to stick around. And when his eyes slide in your direction, his giggles giving way to a sad sigh, all you want to do is to take him home with you. But why? Some of it is because of smart character design, clearly informed by decades of Disney movies: his eyes are huge and his birthday wish universally heartbreaking (“I want a fwwiiiieeeeend!” he whimper-whispers fervently). Some of it is irony-rich writing: when his wish comes true, and the balloon animals come to life, they’re terrified of his spines and flee from the hugs he so desperately wants to give them. But above all, it’s the fact that despite your invisibility, you’re there in a very important way. You’re not just a witness. You’re an attendant. [image: image] When Henry’s eyes meet yours, you’re no longer an audience member—you’re there with him. Oculus If that doesn’t quite make sense, think about what Henry would look like if it were a regular, screen-contained, see-it-in-the-theater movie. (Also, don’t get too sad worrying about Henry. No spoilers, but everything works out in a properly heartwarming way.) You’d see his apartment, but you’d see only the things that a screenwriter and director had deemed necessary to the story, and only in the order that an editor had decided made the most sense. You’d watch Henry put the candle on his strawberry and blow his tiny noisemaker, but when he felt a pang of loneliness, he wouldn’t look at the camera—he’d just stare wistfully out the window. 
And when the balloon animals came to life, instead of watching Henry’s reaction devolve from wonder to excitement to despondence, you’d instead get a slapstick barrage of quick cuts as he chased them around his apartment. Every emotional beat, in other words, would be choreographed by someone else, and presented to you. But in VR, you’re there. You’re there the whole time, in sequence, without interruption. You see everything Henry sees and feel everything he feels. And, perhaps most important, he sees you. His eyes lock on yours; he acknowledges your existence. He’d be breaking the fourth wall—if you hadn’t already clambered over it to get into his apartment with him. Remember presence? This is the beginning of social presence. Mindfulness is cool, but making eye contact with Henry is the first step into the future. SOCIAL PRESENCE: THE SEED OF SHARED EXPERIENCE Back in 1992, our friend Carrie Heeter posited that presence—the sensation that you’re really there in VR—had three dimensions. There was personal presence, environmental presence, and social presence, which she basically defined as being around other people who register your existence: [I]f other people are in the virtual world, that is more evidence that the world exists. If they ignore you, you begin to question your own existence. The Hollywood fantasy theme of a human who becomes invisible to the rest of the world, and is able to move freely around (and through) people, exemplifies the experience of reduced presence in that kind of hypothetical virtual world. However, if the others recognize you as being in the virtual world with them and interact with you, that offers further evidence that you exist. Just as being able to move in a virtual world makes it more real, so does being acknowledged in a virtual world. And when Henry looks at you, that’s exactly what happens. 
Of course, when people were thinking big thoughts about virtual reality twenty-five years ago, they weren’t necessarily considering the possibility of a hyper-cute hedgehog, but Heeter left room for presence involving nonhumans: “Social presence can also be created through computer generated beings.” The fact that this happens during Henry is all the more amazing because you’re not exactly there. I don’t mean you’re in VR and so there is no “there”; I mean that if you look down at yourself, you don’t see a body—no hands, no legs, no evidence that you exist in the world of the story. Nor do you have a way to interact with Henry; even if you talk, he won’t hear you. You’re basically the shyest party guest of all time. (But possibly also the best party guest of all time.) There’s nothing social about Henry, other than the fact that his eyes meet yours. And even that isn’t unique. The entire concept of breaking the fourth wall hinges on a fictional character crossing the divide between fiction and reality and addressing the viewer directly. Since Oliver Hardy first shot an exasperated look at the camera in the 1920s, characters in movies and TV have cribbed the move. Think back to Ferris Bueller in Ferris Bueller’s Day Off. Amélie in Amélie. Deadpool in Deadpool. Notice a trend? But the phenomenon isn’t restricted to titular characters. In Annie Hall, for instance, Woody Allen treated the fourth wall like the mound of cocaine his character sneezed into oblivion. Again, though, consider how these acts play in a conventional 2-D movie. Sometimes they’re surprising; often, they’re funny. Seldom do they provoke a serious emotion or create a connection between the character and the viewer. In Henry, however, those moments of eye contact have an impact that far outpaces any vaudevillian mug or superhero metawisecrack. So what exactly happens in those moments? It depends. Shared eye contact, or what social scientists call “mutual gaze,” can have some pretty stark effects. 
In 1989, three psychologists decided to test a premise that had been around since Charles Darwin: the idea that emotion can be not a cause of behavior, as we so often assume, but a result of it. To do this, they rounded up almost one hundred college undergraduates and then randomly matched forty-eight pairs of male and female subjects—first establishing that they didn’t know each other—and put them in a test room. They gave each person one of three instructions to follow for two minutes: Look at the other person’s hands. Count the other person’s eye blinks. Look into the other person’s eyes. After the two minutes were up, the man and woman were taken to different rooms, and each filled out a questionnaire based on something called the Rubin Love Scale (no relation, unfortunately). Given the instructions, there were five ways things could have played out, only one of which resulted in mutual gaze. Just as the psychologists hoped, though, the pairs who had gazed into each other’s eyes expressed significantly more affection and respect for each other than any of the other volunteers did for their own counterparts. All well and good—but the experimenters thought that Rubin’s Love Scale was a little limited (hey!), so they ran a second study. The Rubin scale, they wrote, measured “dispositional love”—whether someone was likely to forgive their partner, for example. It measured the probability of certain behavior rather than emotion itself. The researchers wanted to see whether mutual gaze between strangers could give rise to “passionate love,” one based more on signs of physiological arousal. So this time, they put together thirty-six pairs of male and female strangers and told them they were taking part in an experiment about extrasensory perception. 
Before it began, each volunteer filled out a questionnaire that contained not just questions based on Rubin’s scale, but also some drawn from clinical discussions of passionate love, as well as some based on in-depth interviews with actual couples. For example, they had to rate how strongly they agreed or disagreed with the statement, “When I see _________, I feel excited.” For the “ESP test”—this was 1989, Ghostbusters II was in theaters—the volunteers were again brought into a room and asked to look either at the other person’s hands or into their eyes. This time, though, the room could be either lit normally or with the lights turned low and jazz piano playing. After two minutes of that, the volunteers were asked to perform a second ESP test, one in which they performed exaggerated smiles and frowns while their counterpart tried to describe the “symbol” the person was sending. Then they filled out new questionnaires. As expected, mutual gaze resulted in increased passionate love, as did jazz piano and dim lighting. However, there was a bit of a surprise as well: the “romantic setting” had a significant effect only on people who had proved to be emotionally susceptible to their own facial expressions. In other words, the people who felt happier after smiling or angrier after frowning were more affected by the lighting and music. For them, a number of different behaviors could give rise to emotion. But even with the others, prolonged eye contact stirred the pot of emotional connection. So what does all this mean? Well, it means that not everyone is going to get turned on by a smooth jazz album, like they did in the study. It means that people have different buttons; some are simply more suggestible to outside cues. But it also means that the eyes aren’t just the windows to the soul, but to the heart as well. And even to the libido. Before you start composing that angry email: I’m not suggesting that you should be cultivating an amorous attachment to a hedgehog. 
I’m just saying that something as incidental as eye contact can have a real effect. When presence is a factor, like in an immersive VR environment, that effect can go from real to profound. Feelings, it seems, can be engineered—even for a CGI animal. And if VR can forge an emotional connection between you and Erinaceus concolor, imagine what it can do with a CGI person. Or even a real person. That whole windows-to-the-soul thing takes on even more weight in VR, where you’re already primed to connect. FROM PASSIVE TO MASSIVE[LY IMPORTANT] Henry’s impact wasn’t exclusively inside the headset, either: in 2016 the short became the first piece of original VR content to win an Emmy. But while its statuette says “Outstanding Creative Achievement in Interactive Media—Original Interactive Program,” its interactivity is somewhat limited. When you visit Henry’s apartment, you’re there to watch, not to participate. Regardless, VR had won a television award for something that was part cartoon, part video game, and completely unprecedented. The industry was growing fast, with old Hollywood studios and VR-first creative companies converging on this undefined terrain, and the need to agree on some common terminology became a concern. Simply “VR,” after all, was too fuzzy; it could mean anything happening in a headset, from a video game to a meditation environment. “Filmmaking,” for its part, is a term born of a now-archaic process; what do you call it when there’s not only no film, but no frame? In the search for a just-right term, creators defaulted to the most general term possible that could still mean something—“storytelling.” And shorts like Henry came to be known as “experiences.” It made sense: you kinda watched them, and you kinda played them, but you definitely experienced them. As VR storytelling grew and VR experiences proliferated, Henry was joined by other CGI shorts. 
The Rose and I, a Little Prince–inspired experience in which a young boy finds an unexpected friend, came out of Penrose Studios, a storytelling company founded by an Oculus alum named Eugene Chung. That same year, Baobab Studios—a VR start-up cofounded by the director/screenwriter of the Madagascar animated films—released Invasion!, a charming comedy short about a huge-eyed bunny defending Earth against alien interlopers. Both played at film festivals, both received critical acclaim, and both were just the beginning. By now Oculus Story Studio, Penrose, and Baobab have all released multiple experiences, some of which have taken steps in entirely new directions. (However, Oculus Story Studio is no more, having ceased operations in 2017. “Now that a large community of filmmakers and developers are committed to the narrative VR art form, we’re going to focus on funding and supporting their content,” wrote Jason Rubin, an Oculus executive.) VR experiences are no longer Pixar clones eliciting “awwwww” reactions. Some are meditations on grief and love: in Oculus Story Studio’s breathtaking Dear Angelica, a young girl reads letters from her late mother, an actor, while the woman’s movie performances thunder through the virtual space. Others, like Penrose’s Allumette, present you with a floating city, one you can peek into like a VR diorama to follow the melancholy story playing out. (Just as The Rose and I has a Little Prince feel, “The Little Match Girl” is Allumette’s spiritual ancestor.) We’ve seen this experimentation before—when film first emerged more than a century ago. Just as the Lumière brothers created the illusion of a train rushing at the screen in 1895’s Arrival of a Train at La Ciotat, or 1902’s A Trip to the Moon explored how to tell a story through multiple scenes, so too does each early VR experience pioneer new storytelling techniques. 
It took decades for film to create a visual grammar: cuts, reverse angles, and montages may all be familiar to moviegoers now, but once upon a time each was just a director’s wild stab at conveying information inside the constraints of a new medium. Now that VR brings you inside the frame, though, those constraints are gone—and it’s time for a new generation of storytellers to try a new generation of narrative techniques. Many of those techniques are going to grapple with directing your attention; VR experiences don’t have the benefit of a rectangle to bound your focus, so creators need to find ways to make you notice the things that matter in the 360-degree sphere that is the VR “screen.” But the most interesting new techniques will continue Henry’s tradition of forging a connection between you and the characters. In Invasion!, for example, if you look down at your own body, you realize that you’re not just watching a bunny—you’re a bunny too. That changes the dynamic between you and the character. At one point in the experience, the bunny does a tiny dance; when Invasion! was first screened at film festivals, people wearing headsets tried to dance with the bunny, caught in a rare moment of species kinship. However, one common thread persists through the early VR experiences: you exist within the frame, and you might even be acknowledged by characters, but you have no agency. The story can interact with you—not the other way around. So the question becomes: Is there a way to bring you into a virtual experience to increase that social presence, so that you feel even more a part of the fictional world? There might be—and it’s already in our hands. But it’s going to take a bit of explanation first. MANUAL OVERRIDE: THE ROLE OF HAND PRESENCE In Chapter 1, we explained the difference between mobile VR and PC-driven VR. The former is cheaper and easier; all you do is drop your smartphone into a headset, and it provides just about everything you need. 
Dedicated VR headsets rely on the stronger processors of desktop PCs and game consoles, so they can provide a more robust sense of presence—usually at the cost of being tethered to your computer with cables. (And also at the cost of actual money: dedicated headset systems run hundreds of dollars, while mobile headsets like Samsung’s Gear VR or Google’s Daydream View can be had for mere tens of dollars.) There’s one other fundamental distinction between mobile VR and high-end VR, though, and that’s what you do with your hands—how you input your desires. In the world of video games, input happens via controllers, which can range from simple joysticks to Xbox gamepads to ultra-complicated keyboards. Even plastic steering wheels and guitars can be input devices, if you’re playing a driving game or Rock Band. When VR reemerged in the early 2010s, however, the question of input was open to debate. Actually, more than one debate. Part of the problem was practical. If you had a headset on, you wouldn’t be able to see what you were using with your hands, so you needed a device that was intuitive. In other words, a keyboard and mouse wouldn’t work. Even a conventional game controller might be too complicated. But the more interesting part of the input conversation revolved around extending the sense of presence. When you put on a VR headset, your head effectively becomes the center of a virtual space—but was there a way to create hand presence as well? In other words, could you bring your hands into VR? Video game controllers are basically metaphors. Some, like steering wheels or pilot flight sticks, might look like the thing they’re supposed to be, but at their essence they’re all just collections of buttons. 
When you press those buttons, that input is translated into some sort of in-game action, whether it’s “grab” or “honk” or “open.” But if you could somehow do away with that metaphorical layer, or at least create a new one that felt more natural, you could make someone feel as though the hands they see in VR are their own. The building blocks of hand presence already existed in various forms. The Nintendo Wii video game console had become a worldwide phenomenon in part because it used not a complicated gamepad, but handheld wands that were tracked in space: the console knew how you were holding the “Wiimote” and what you were doing with it, which allowed you to use it as a sword or a bowling ball or anything else that a game developer could imagine. If you weren’t careful, you could even use it to destroy your flatscreen TV in real life, as plenty of people found out. An ocean away from Japan, Microsoft had created an infrared camera called the Kinect that worked with Xboxes. A Kinect could scan the room you were standing in, identify you as a person, and track your body and hands so that they could work as controllers, or even be rendered in a game. (It didn’t always work perfectly, but that didn’t stop companies from making lackluster games for it.) And third, a company called Leap Motion had created a sensor that was essentially a more focused version of the Kinect. It didn’t try to track your entire skeleton, but by just concentrating on your hands, it was able to track them with remarkable precision, down to the smallest finger waggle. The dream use for Leap Motion was being able to use your hands, instead of a mouse, to control your computer; a few computer companies licensed Leap Motion’s technology, but no one was re-creating any scenes from Minority Report. Thanks to those kinds of conceptual predecessors, creating a very basic motion-trackable controller was actually easy. 
Now, both of the major mobile VR headsets come with a small remote-control device that’s essentially a tiny version of a Nintendo Wii controller. When you’re in VR, you can use it as a laser pointer to select and navigate experiences; as a game controller, it can become a fishing rod or flashlight. But although those remote-style controllers let you use your hands, they don’t let you have your hands. For that, you need to use controllers that work with more powerful desktop VR systems. These do a few very important things: 1. They render a simulation of your hand in VR that is based on the controllers’ position and orientation. The rendering isn’t photographic, so your virtual hands won’t be wearing your rings or even have the same shape or skin tone as your own hands—but your brain is pretty easily fooled in this respect. Numerous studies have shown that your sense of “body ownership” can easily transfer to a virtual version of a hand, even if that hand looks markedly different from yours. In fact, your perception of your own virtual body can affect your real-world behavior: in one study, volunteers who were given an overweight virtual body moved their head more slowly than those given an underweight virtual body. (We’ll discuss these phenomena later in the book.) 2. These controllers are completely intuitive—you can give them to someone who has never played video games, and after a brief tutorial that person will know how to handle them. The foundation of this simplicity is having buttons that are placed on the controller to match up with how you might use your hand in real life. For example, in order to pick up an object in VR using the HTC Vive’s wand-shaped controller, you use a trigger-like button on the rear of the controller; the very action of pulling the trigger closes your hand, so the real-life motion matches what your hand is doing in VR. [image: image] A controller for the HTC Vive. Olly Curtis/Future Publishing via Getty Images 3.
They bring your real-life hand movements into VR. The buttons on Oculus’s Touch controller, for example, are capacitive: like a touch screen, they know when your fingers are contacting them. If you execute simple motions—waving, pointing at something, or giving a thumbs-up—the controller translates those into a similar gesture for your virtual hands. [image: image] Oculus Touch controllers. David Paul Morris/Bloomberg via Getty Images When all these things come together—flawless tracking, intuitive “user interface,” an ergonomic design that allows the hand to assume a natural position, and gesture translation—hand presence becomes possible. Now it’s not just your head that’s in VR, but your hands as well. Before we move on, let me point one thing out: this is just the so-called first generation of VR controllers. The next iteration of the HTC Vive’s controller won’t be actively held at all but will strap around your hand, allowing you to hold and drop objects by closing and opening your hands. Last year, Leap Motion shrank its finger-tracking sensor down to something so small it could be embedded in any smartphone or VR headset. Trying it for the first time, I couldn’t stop moving my fingers around, pinching small virtual boxes to stack them in a tower. Playing with blocks has never been so riveting. In other words, any VR system could conceivably do away with controllers completely for certain tasks, finally delivering on that Minority Report promise. Hand presence isn’t even necessarily limited to hands: HTC sells small wearable trackers that you can affix to any object, or any body part, to bring it into the Vive’s VR. People have attached them to Nerf guns, tennis rackets, even to toy baseball bats to take virtual batting practice. They’ve attached trackers to their cat so that they don’t accidentally step on poor Sir Flufferson during a spirited VR session.
And by attaching enough to their hands, feet, and joints, people have had surprisingly lifelike dance parties in VR. None of this is easy—they’re all just experiments by developers—but it gives you a glimpse of how you can “virtualize” just about any real-world object. (Later in the book, we’ll visit a company that’s using a similar idea to create a next-level version of laser tag.) In 2017, Mark Zuckerberg posted to Facebook a photo album of his visit to Oculus’s super-secret research lab outside Seattle. This was a tiny bit upsetting, if only because I’ve been trying to get in there for years, begging their gatekeepers to no avail—but fine, I get it, he’s the CEO. One of the pictures showed him wearing a headset and a huge grin on his face, and some thin white gloves on his hands, one of which was making a gesture immediately familiar to any comic-book fan. “We’re working on new ways to bring your hands in virtual and augmented reality,” he wrote in the caption. “Wearing these gloves, you can draw, type on a virtual keyboard, and even shoot webs like Spider Man.” First off, Mark, it’s Spider-Man. Nerd demerits for you. Second, being a webslinger isn’t on everyone’s wish list. But being able to use your hands freely in VR opens up a huge new set of possibilities—possibilities that will turn VR from something you use for entertainment to something you use for just about any communication you can imagine. It sweeps away more than just the clumsy metaphors of game controllers; it removes every obstacle between your body and the virtual world. And when your body is there, it’s that much easier for your emotions to be there as well. [image: image] Mark Zuckerberg taking hand presence to the next level. Mark Zuckerberg/Facebook WAIT A SECOND—YOU WERE TALKING ABOUT STORYTELLING Great point! That’s exactly what I was doing. But now you understand what hands can do in VR, and what they might soon be able to do in VR. 
That second part matters because Henry and all those other VR experiences were created to be consumed and navigated with no controller at all. Any social presence that you experience while watching them comes in small doses. That’s changing fast, though. But before we look ahead, let’s take one more look around at what’s happening with VR storytelling. Every Hollywood studio you can imagine—21st Century Fox, Paramount, Warner Bros.—has already invested in virtual reality. They’ve made VR experiences based on their own movies, like Interstellar or Ghost in the Shell, and they’ve invested in other VR companies. Hollywood directors like Doug Liman (Edge of Tomorrow) and Robert Stromberg (Maleficent) have taken on VR projects. And the progress is exhilarating. Alejandro González Iñárritu, a four-time Oscar winner for Best Director whose 2014 movie Birdman won Best Picture, received a Special Achievement Academy Award in 2017 for a VR short he made. Yet Carne y Arena, which puts viewers inside a harrowing journey from Mexico to the United States, is nothing like a movie, or even a video game. When it premiered at the Cannes Film Festival in early 2017, it was housed in an airplane hangar; viewers were ushered, barefoot, into a room with a sand-covered floor, where they could watch and interact with other people trying to make it over the border. Arrests, detention centers, dehydration—the extremity of the human condition, happening all around you. In the announcement, the Academy of Motion Picture Arts and Sciences called the piece “deeply emotional and physically immersive.” (The last time a movie received a similar special-achievement award from the Academy? That’d be 1996, for a little something called Toy Story.) With work like that, it’s little wonder that film schools are filled with aspiring writers and directors who don’t want to limit themselves to “film,” but instead want to push their creativity to the edge in a narrative environment where there is no edge.
VR storytelling’s pipeline may have begun with a few documentarians and animation veterans, but it’s growing every day. Okay. So. A few years ago, I was interviewing then–Oculus CEO Brendan Iribe. (Being owned by Facebook, Oculus no longer has a CEO, so Iribe is now heading a major division in the company.) By that point he and I had spent hours together as part of my reporting on the company for a magazine story. Iribe tended to get so excited about VR stuff that Oculus was working on that he’d really want to talk about it—but he couldn’t talk about it candidly with me. Instead of asking to go off the record so that he could give me details, though, he had a habit of communicating his excitement in more coded ways. And this time, he did it with a movie. “I just watched The LEGO Movie in 3-D,” he said. “I went with my girlfriend’s six-year-old. And I’m like, imagine when he’s looking at the movie and it’s right here”—he motioned around his face—“and he’s able to look around and get close. And when you get close, the little Lego guy looks and turns and says, ‘Hey back up a little bit. We have a movie going on here, back up.’ He actually looks you right in the eye. Kids are going to think the Lego is real.” We’ve seen part of that, thanks to Henry. And we know a little bit about the power of eye contact. But what about when everything else comes together? It’s starting to. In Asteroids!, Baobab’s follow-up to Invasion!, you play a service robot on an alien ship; there’s also a doglike robotic creature there, who gives you a ball so that you’ll play fetch with it—which you do via your hand controllers, your robotic tentacles snaking out as you toss the ball around the ship. It’s a small moment, but an effective one, combining as it does embodiment and direct interaction with a character. Does it change everything we thought we knew about film?
Not yet, but it’s less frivolous than it seems. The future of VR storytelling is going to leverage our own ability, our own willingness, to believe that we’re someone else. That changes the nature of fantasy and escapism in some fundamental ways. Reading a book, we naturally imagine ourselves in the protagonist’s role; in movies, that gets more difficult because we’re looking at them. But to truly become a character—to see through their eyes, to take on their properties, to exist within their world—that’s an out-of-body experience like nothing else we’ve been able to accomplish without the use of some powerful psychedelics. Like those psychedelics, though, walking in those shoes and sharing those emotional connections promise to change the way we live our lives, long after the VR experience is over. That’s not happening now because, well, baby steps. Storytellers don’t want VR experiences to be too flexible while they’re still learning the rudiments of the form. “If you give people too much ability to interact with things, it’s often harder for us to tell a story,” Penrose founder Eugene Chung said when Allumette was first unveiled. As hand controllers get lighter and as obstacles to hand and social presence start to fall away, creators will be devising new ways to take advantage of our newfound freedom. The visual language of VR storytelling won’t evolve slowly the way that of film did; it will do so at light speed. And if we’re already experiencing tenderness and joy and grief just by virtue of being in the same space watching other characters, imagine how much more powerful those emotions will be when we start interacting more with characters. Or with real people—but that’s an idea for another chapter. 7 Rec Room Confidential The Anatomy and Evolution of VR Friendships IT WASN’T THE DRIVE that made Ben nervous. Driving from his home in Cincinnati to Birmingham, Alabama, takes just over seven hours, and he’d driven longer than that before.
Besides, the twenty-four-year-old hadn’t taken a vacation in a while, so he was looking forward to the time on the road. His trepidation stemmed more from what was waiting for him at the other end of the trip: his new friend. In 2016, Ben had decided that he wanted a VR headset for Christmas—a nice one. He already had a desktop computer he used for gaming, so he asked his entire family to try something new that holiday. Instead of each person getting him a present, he wanted everyone to chip in a little bit to help defray the cost of the HTC Vive he wanted. Since Ben was kicking in $250 of his own money, it didn’t leave that much for each of them to contribute. The Vive plan worked, and by Christmas Day Ben was ready to enter the Metaverse. What he would do in there, though, was a question he hadn’t yet answered. So, like so many computer-savvy twenty-somethings, he consulted Reddit. A subreddit was already devoted to the Vive, and there Ben saw a post about a free game called Rec Room (see here). What he didn’t know then, of course, was that he was about to encounter one of VR’s most notable new worlds: a place that helped people overcome social anxiety, facilitated close-knit friendships, and would eventually inspire him to take a seven-hour road trip. WELCOME TO THE CLUB The Rec Room developers, a tiny Seattle company named Against Gravity, call it a “virtual reality social club,” which seems as good a way as any to describe a social VR app built around a gym-class metaphor. When you launch the game, you’re in your personal dorm room, with a loft-style bed against one wall and a dresser and mirror against another. Pieces of paper pinned to a bulletin board show you how to use your hand controllers to grab and use items or to teleport around. (Although you can physically walk around in Rec Room, you’re bounded by the headset’s cable and tracking system.
For covering bigger distances, most VR experiences default to a locomotion system in which people use their controllers to specify a point they then “teleport” to. Earlier solutions, which relied on video game conventions like holding a thumbstick to “walk” in a given direction, were much more likely to result in motion sickness.) You can also select your clothes and customize your avatar, looking in the mirror to admire your getup. Rec Room gives users daily quests, some of which unlock new gear like a snazzy sea captain’s cap or a mod-style shift dress straight out of Mad Men. The options are remarkably broad, as well as inclusive—you can wear Pippi Longstocking–style braids and a goatee if you want, and having a beard doesn’t mean you can’t wear a miniskirt. However, there’s nothing photorealistic about it. Leaving your dorm takes you to the locker room, essentially a gathering place for all the other avatars of players who are online in Rec Room. A woman’s British-accented disembodied voice fills your ears, cheerily telling you about various things you can do: “when you’re ready to go play, follow the . . . arrows to the back of the locker room and teleport through an activity door.” Those activities range from P.E. staples like dodge ball and soccer to intricate themed games in which you and other players fight your way through waves of killer robots or medieval baddies. But even if you don’t choose an activity, the locker room is littered with points of casual connection: a basketball hoop, ping-pong tables, lounges with chairs. When Ben first walked into the locker room, he was dressed like the first person who had come to mind when he was designing his avatar: Donald Trump. Blondish nest of hair, suit and tie, the whole thing. “I trolled people a little bit,” he says. 
“I didn’t do it too bad, and I only did that for a few days.” He didn’t make any friends while he was doing that—but he didn’t get kicked out either, which is proof that he probably wasn’t being that annoying. Avatar Design You’ve probably noticed by now that there’s been a lot of talk about “cartoonish” avatars, but not a lot about realistic ones. First off, making a computer-generated face that moves realistically is tough. Like, really tough. Even video games with hundred-million-dollar budgets struggle with it. The way your lips and tongue constantly rearrange themselves according to the words you’re saying, the way your eyes crinkle at the corners when you smile, the way light bounces off your skin, even the way your hair moves—each of these physiological elements poses brutally difficult technological challenges for software developers. But even if all of these things were unerringly perfect, there’s still the Uncanny Valley. The what? Great name, right? In 1970 a Japanese roboticist named Masahiro Mori wrote an essay stipulating that the more lifelike a robot is—having eyes that blink, for example, or a metal skeleton covered with something that looks like skin—the greater affinity humans feel toward it. However, there comes a point at which a robot’s humanness becomes deeply unsettling. As Mori wrote: Recently, owing to great advances in fabrication technology, we cannot distinguish at a glance a prosthetic hand from a real one. Some models simulate wrinkles, veins, fingernails, and even fingerprints. Though similar to a real hand, the prosthetic hand’s color is pinker, as if it had just come out of the bath. . . . However, when we realize the hand, which at first sight looked real, is in fact artificial, we experience an eerie sensation. For example, we could be startled during a handshake by its limp boneless grip together with its texture and coldness. When this happens, we lose our sense of affinity, and the hand becomes uncanny.
And when something humanlike is moving, rather than being still, the Uncanny Valley gets even deeper. A fake hand making a grasping motion is eerier than a fake hand at rest—and a lurching zombie is far more off-putting than a corpse. For VR avatars, the Uncanny Valley is incredibly difficult to cross: an otherwise photorealistic 3-D rendering of a human face could be ruined if the eyes aren’t absolutely perfect. Most software companies creating social VR, then, opt to steer clear of the valley altogether. That’s not to say that today’s avatars aren’t accurate or expressive, even in these early days of VR. When Facebook’s social VR team was designing how people would look in Spaces, it used a two-part golden rule: you should be comfortable with the way you look, and your close friends should recognize you at a glance. To that end, Spaces examines your real-life photos and creates a stylized abstraction of your facial features, hair, skin tone, and accessories, such as glasses. My avatar in Rec Room looks like me only in that it has a shaved head and a beard; on the plus side, though, with its dozens of options, its wardrobe is way better than my Facebook avatar’s, which can choose only what color T-shirt to wear. When it comes to talking, things get a bit more complicated. Trying to make lips move accurately can “get really uncanny,” Facebook’s Mike Booth has said. Instead, Spaces uses a discrete number of mouth shapes and snaps from one to the other as it hears you talk. Similarly, if you turn your head toward another person, Spaces will automatically shift your avatar’s gaze and create the illusion of eye contact. (In all likelihood, the next generation of headsets will actually track your pupils, which will make that eye contact much more responsive and lifelike.)
Rec Room, for its part, does away with features entirely: avatars are noseless, and they all have the same eyes and mouth, though those can change to fit the expression that you sound like you would have. If you’re laughing or talking loudly, your eyes will shut and your mouth will look like a big, open smile. If you’re silent, you’ll have a small, pleasant grin on your face. But your voice isn’t the only thing that can change your avatar’s expression. Some social platforms also use a limited set of expressions that are triggered by your natural hand actions. In Spaces, if you shake a fist (using a controller, of course), your avatar will look angry; if you raise both hands to your face, your avatar’s mouth will open and it’ll look terrified. The emotions being conveyed aren’t nuanced, but they’re more effective than you might think—after all, we’re already used to communicating in broad visual strokes. Just look at your phone. Current thinking holds that there are only about twenty fundamental facial expressions. As Elli Shapiro, who helped develop social applications for Sony’s PlayStation VR, has pointed out, those expressions map astonishingly well to emoji; it’s easy to imagine the tiny texting-friendly version of “fearful” or “happily surprised” or “angrily disgusted.” (For those keeping score at home, those would be [image: image], [image: image], and [image: image].) Emoji are the perfect way to think about the current state of avatar expressions, in fact: there’s a finite number of them, and they translate pretty much universally. Currently, social VR in general hews to this cartoonish aesthetic: mouths that move in response to hearing your voice, eyes that periodically blink and look around, features that are charmingly, approachably stylized.
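For the technically curious, the two tricks just described boil down to simple lookups: loudness picks one of a handful of mouth shapes, and a recognized hand gesture triggers a canned expression. Here is a minimal, purely illustrative Python sketch of that idea—every function name, state name, and threshold below is invented for the sake of the example, and none of it comes from Rec Room's or Spaces' actual code:

```python
# Illustrative sketch of cartoon-avatar expression logic:
# a few discrete states, snapped between quickly.
# All names and thresholds are invented for illustration.

def mouth_shape(rms_amplitude: float) -> str:
    """Map a voice sample's loudness (0.0 to 1.0) to a mouth shape,
    the way an avatar might snap between a grin and a big open smile."""
    if rms_amplitude < 0.05:      # effectively silent
        return "small_grin"
    if rms_amplitude < 0.4:       # normal speech
        return "open_mouth"
    return "big_open_smile"       # laughing or talking loudly

def expression_from_gesture(gesture: str) -> str:
    """Map a tracked hand gesture to a triggered expression,
    like the fist-shake-means-angry mapping described above."""
    triggers = {
        "shake_fist": "angry",
        "hands_to_face": "terrified",
        "thumbs_up": "happy",
    }
    return triggers.get(gesture, "neutral")
```

Crude as it looks, this is exactly why the approach works: a small set of discrete, universally readable states—much like emoji—feels expressive without ever venturing near the Uncanny Valley.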
As one recent study found, people communicating with VR avatars wearing an “enhanced” smile not only felt stronger social presence than those communicating with “normal”-smiling avatars, but used more positive words afterward to describe the experience. (Findings like these make Altspace seem all the more interesting for its unchangingly stoic avatars.) As with everything else in VR, though, all of this is changing, and quickly. A number of companies have developed technologies that allow your real-time facial expressions to drive those of your VR avatar. They might not all cross the Uncanny Valley yet, but they’re at least starting to build the bridge. [image: image] See, Rec Room prides itself on its positivity. Those touches I mentioned before—being showered with confetti for high-fiving each other, or fist-bumping to invite someone to your game? According to Nick Fajt, Against Gravity’s CEO and cofounder, such touches are the whole point. The games themselves are simply lubricant. “I don’t think we really think of ourselves as a games company,” says Fajt. “The area we focus on is creating a community for people from all walks of life, and games are a great way to help a community form.” When you’re trying to create a community, though, trolls don’t help matters. Even mild ones can drive new users away. So all Rec Room users have a suite of anti-harassment tools at their fingertips—or, rather, at their wrists. If you look at your watch in Rec Room, a little menu pops up that lets you see which of your friends are online or take a picture of yourself, among many other things. You can also report any other player for acting up. As soon as you do, that player gets automatically muted in your headphones, and their avatar fades away to be almost invisible. Other players in the room are prompted to vote whether the bad actor should be removed from the room.
Regardless of the outcome of the vote, your report gets logged by the Rec Room system; players who rack up enough reports can be banned. Taken in conjunction with Rec Room’s goofy, self-deprecating aesthetic, the message is clear: be nice. “If you look at the broader internet and social platforms,” Fajt says, “these are problems that all of those still deal with, and we’re in Year 25 of the consumer internet. If you’re not working on them now and you’re not taking them seriously now, they’re only going to grow.” Rec Room’s heavy users, whether through Against Gravity’s efforts or merely through self-selection, seem to have taken the suggestion to heart. One of them, Jon Ludwig, lives in Japan with his wife; the twenty-nine-year-old is a lapsed gamer, but it’s not uncommon for him to spend four or five hours in Rec Room on a weekend. “I can be shy in real life, but in Rec Room, I want to be the absolute best version of myself,” he tells me over Skype. “And the best version of myself is a little bit more outgoing, and a little bit more willing to talk to people and make sure people are having a good time—if they’re a little new, even complimenting them. Every time. I just don’t even think about it.” The feel-good vibe worked on Ben too. Soon enough, he stopped dressing his avatar as Donald Trump, and he started noticing the same people each time he was in Rec Room. The last time Ben had a real social circle of friends was high school. In college, he was a commuter student, driving thirty or forty minutes each way to get to his classes; for someone who describes himself as a “loner,” that made a social life next to impossible. “If I was going to a party,” he says, “I’d have to go home after class to get a shower, get dressed, then go back to campus, where I’m going to have to figure out where to stay for the night—it was just a giant pain in the ass doing stuff like that.” So he mostly kept to himself. Keeping to yourself isn’t really an option in social VR, though.
I mean, technically speaking, it is, and in any lobby of a social app you’ll see people milling around by themselves, but Rec Room all but guarantees that you’re going to be talking to people. That’s exactly why most of the games are team-based, with cooperative elements; at Against Gravity, Fajt and his colleagues call the guiding principle “structured social.” Eventually, Ben met a young woman named Priscilla. Priscilla had started out in Rec Room even more awkwardly than Ben had; she didn’t talk much, and she’d often leave, crying, because she thought people didn’t like her. But by the time she met Ben, the twenty-seven-year-old was outgoing enough to tell him to add her as a friend. In her real life, Priscilla is a successful sports artist, creating incredibly detailed pencil drawings of University of Alabama football players. She’s also, in her own words, a “hermit.” She grew up in a tiny town in Alabama, but for the past four years, she’s worked from home and hasn’t ventured out much beyond that. “I just go to the post office,” she says, half joking, when describing her non-virtual social life. “As far as, like, having actual people that I talk to almost every day? No.” But now that Priscilla and Ben were friends in Rec Room, they could invite each other online and into specific games, and before long the two were hanging out virtually five nights a week. There were others too. All told, a group of anywhere from two to fifteen people began congregating almost daily. At home, they might have a drink or some weed nearby, while inside their headsets, they’d play paintball or one of the cooperative quests, then reconvene in a private lounge for some drunken 3-D charades and soul-baring. As the year wore on, the group’s closeness spilled over the edges of VR.
Regulars congregated in any number of online spaces, from a Rec Room–devoted subreddit to a dedicated space on the chat platform Discord; many of them followed each other on Instagram using Rec Room–specific accounts, some of which never show the people behind the avatars. “A lot of the people in VR have a similar social anxiety,” Priscilla says. “I think that’s why we connect so well—and why it’s, like, way more special than anything else.” Ben agrees. “Everyone who has VR—at least people who actually spend a lot of time in a game like this—they’re trying to find something in the virtual world,” he says. “I’m like that as well. It’s great that we have a medium that we can actually go in, have something as intimate as this.” Priscilla and Ben took things one step further and started texting each other. Then they hit on an idea. Priscilla had been wanting someone from Rec Room to visit her in Alabama, and Ben was more than willing to tackle the drive. “I figured, hey, I can do that for a friend like I haven’t had in years,” he says. So he did. WHEN AVATARS MEET Taking online relationships offline is nothing new; between 2005 and 2013, one large study found that more than a third of all marriages began online. Thanks to social presence, though, VR represents an entirely novel variation. There’s simply no other communications medium—texting, emails, chat rooms, instant messaging, social media, even FaceTime—that can place two people in the same space in such an embodied fashion. In the case of Rec Room, you can get to know someone’s mannerisms and unique quirks and become real friends with them—even if all you’ve seen of them is an avatar with no nose. By the time Ben knocked on Priscilla’s door, he’d seen her nose, and she his: they’d traded photos, which had forced Ben to take his first-ever selfie. “That picture you sent me was just not working,” she says to him now, laughing about his awkward pic. 
It’s the next-to-last night of Ben’s trip to Birmingham, and we’re on a Skype video call. While I’d spent time with both of them in Rec Room—generally getting my ass kicked in paintball—before this I knew them only as their avatars: Priscilla as a dark-haired woman who was partial to a police officer’s cap, and Ben (or Blitzkrieg, his Rec Room username) as someone who wore an old-timey sheriff getup and had a big blond beard . . . along with a red cape and a top hat, because why the hell not? In person, though, they’re perfectly run-of-the-mill twenty-somethings. Priscilla’s pretty, with big dark expressive eyes to match her hair, and Ben is tall and broad, with wavy blond hair and a mild demeanor. Talking to them as flesh-and-blood humans feels more “real” than VR by most conventional measures—but, to my surprise, it’s also very much not. Much of that is due to that weird alienating gazelessness so endemic to video call options like Skype and FaceTime. Instead of looking at your phone or laptop’s camera, you look at the person you’re talking to, and they do the same—the net effect being that everyone’s eyes are slightly averted. In VR, our avatar’s eyes are the camera; to look at someone’s face when you’re talking to them is to give them not just your undivided attention, but the appearance of your attention. There’s a sense of distance that pervades our video call, despite the fact that their personal mannerisms persist. Ben still has a tendency to rephrase my questions when he’s talking, like a beauty pageant contestant, and Priscilla still tends to pause to gather her thoughts, and “I mean” is still her favorite phrase—but the social conventions of the real world make the conversation feel like an interview, instead of just organic conversation. 
When we’ve hung out in VR, the ease and intimacy between them has been palpable; once, in Rec Room, while Ben was talking to me about his job, Priscilla grabbed a marker and some yellow sticky notes, drew a bikini top and cleavage, and affixed them to Ben’s chest. It was funny, but I was struck more than anything by how comfortable the two seemed around each other. That’s not on display right now; they’re sitting in two chairs, and there’s little physical interaction between them. Of course, some of that might be attributable to the will-they-or-won’t-they vibe they’re giving off. When I first learned that Ben was heading to visit Priscilla, I assumed that his trip had an ulterior, if obvious, motive; the way he so carefully avoided nonplatonic references in our VR conversations only bolstered that idea. (At some point, when unsure of someone else’s feelings, we’ve all opted for self-preservation through plausible deniability.) And our Skype conversation now has steered clear of any mention of romance. When Ben first got to Priscilla’s place, he tells me, they hugged each other and she showed him around; then they watched comedy movies and fell asleep on the couch. (Priscilla wanted to watch The Notebook, but it wasn’t streaming.) They went to Buffalo Wild Wings, and Ben persuaded Priscilla to go hiking. Neither says anything about . . . anything. But the real world betrays them: they’re both blushing more than their avatars ever did. They’re clearly not going to tell me anything, so I ask whether they had talked about the possibility of romance. “I mean, it would be nice,” Ben says, laughing. “But I didn’t set expectations in my mind for really anything on this trip. I was just taking a vacation, hanging out with a really nice friend.” So . . . have things moved in that direction? Ben looks at Priscilla: “I’ll let you handle that one.” “Can we tell you later?” Priscilla says. 
As soon as the Skype call ends, she sends me a flurry of texts:

ok so here’s the thing about that question you asked about the romance

Blitz had feelings for me for some of the time before coming here that he ended up telling me

I had a crush i was getting over so i didn’t actually reciprocate those feelings until his arrival here

so long story short, we kissed :p

It’s sweet, and it’s innocent, and it’s just like real life. It’s also temporary, just like so many real-life relationships can be: after keeping things going long-distance for a little while, the two decided they were better off as just friends.

And that’s where things might have ended for our Rec Room friends. In fact, it was where things ended by the time I’d finished the first draft of this book. But it turned out that something a little more significant was in the making. (All together now: cut to me, hunched over my laptop in despair.)

The “crush” Priscilla was referring to? That was also a Rec Room user—a guy named Mark. Mark’s been self-employed for the past dozen years, running a collection of search-based websites. After his mother retired recently, he helped her relocate to a small town outside Seattle. The town was great for seniors, but it wasn’t so great for thirty-somethings—especially thirty-somethings who work from home. Cue Rec Room. “This is pretty much my outlet,” he tells me the first time we meet in VR, “unless I want to drive two and a half hours to Seattle and then two and a half hours back just to go to a club.” Plus, it’s exercise: two or three hours a day ducking and jumping in VR paintball beats getting on the rowing machine.

Mark was part of the same extended group of friends as Priscilla and Ben. On weekend nights, they’d all hang out, drink, play games, and tell embarrassing stories about themselves—“like you’re ten years younger than you actually are,” Mark jokes. He’s easygoing but has enough gravitas to counterbalance his silliness; it’s an appealing combination.
Priscilla developed some feelings for Mark, but then her friendship with Ben blossomed. Months later—long after the Ben saga—I got a text from Priscilla. Well, a series of texts:

Hey

So I married someone from Rec room

And it wasn’t blitz

That someone, in case you hadn’t guessed, was Mark. After some time away from Rec Room, he had come around again, and over the course of the next few weeks his and Priscilla’s conversations took on a different tenor than they had before. They recognized themselves in each other; they confided in one another. Finally, unable to wait any longer, they decided to meet, and Mark flew to Alabama. Just in case, he brought an engagement ring with him. That was a Tuesday; on Thursday, they were engaged; on Friday, they were married in a gazebo high on a hill outside Birmingham. They broke the news to their Rec Room friends on Discord; the following week, with Mark back in Washington, they held a follow-up ceremony (and drunken reception) in VR.

The Rec Room wedding was everything you’d imagine about a ceremony populated by cartoony avatars. The platform had recently introduced a “maker pen” that let users create 3-D objects, and it was put to great use: a virtual bouquet of flowers; a pink, three-tiered virtual cake. Priscilla wore a pale yellow wrap dress and a garland of purple daisies, Mark a dark suit and a top hat, red cuffs peeking out from under his suit jacket. The bridesmaids were in blue, the groomsmen in tuxedos, and they all gathered in a gazebo at dusk. Later, they danced—each of them in their own headset, standing in front of their computers, 2,600 miles apart.

Ben was there, too. After all, when you’ve spent that much time with another person, and your friendship goes that deep, you go to their wedding. Besides, it’s not like you’re shelling out for a plane ticket.

This isn’t the first time people have started a relationship because of VR; this isn’t even the first time people have gotten married in VR.
(The first time was back in 1994, when a San Francisco couple got married at Cybermind, a VR arcade where the bride worked.) Altspace hosted a VR wedding last year. But in both of those examples, the couple getting married had met in real life; they just tied the knot in VR because it was something new. Priscilla and Mark, though? Theirs just might be the first time people got married in VR after meeting in VR.

It won’t be the last, though. Not as long as our hands and heads are in there, helping our hearts connect with other people as we never have before.

FUTURE PRESENCE. Copyright © 2018 by Peter Rubin. All rights reserved under International and Pan-American Copyright Conventions.