Extreme Measures

Last year, officials from the Food and Drug Administration (FDA) and the Defense Advanced Research Projects Agency (DARPA) met in Arlington, Va., to discuss the future of biological feedback. The sort of machines they were talking about—machines that could quantify physiological functions (blood pressure, skin temperature, muscle tension) and compute useful data from them—were first introduced in the late 1960s by the mathematician Norbert Wiener at MIT. Soon the machines were in universities’ labs and hospitals. By 1980, the Biofeedback Certification Institute of America oversaw best practices in the burgeoning field; the book Biofeedback: Clinical Applications in Behavioral Medicine came out the same year. By the 1990s, nearly every hospital in America had means for electronically measuring a patient’s response to external stimuli, and so did many doctors’ offices.

Biofeedback has been around for nearly 50 years, but it wasn’t until the last decade, in particular the past few years, that the machines used to measure physiology became mass market. In 2006 Apple and Nike teamed up to release the Nike+ platform, a combination shoe-embedded sensor and iPod adapter that tracks your run and delivers the report to Nike’s website or your smartphone. When DARPA’s scientists and engineers met with the FDA, they were thinking beyond shoe sensors and apps. The meeting last year was about putting biosensors inside people.

The scientists and engineers at DARPA are asked to think about the future, imagine what might be possible, and work to create it. Around the same time Wiener coined the term biofeedback, DARPA helped create ARPANET, a computer network that grew to become the global internet. In the last year it helped invent a very small robot that looks and flies just like a hummingbird.

Soon, we’ll be able to quantify our opponents’ frustration. We’ll be able to see them literally sweating it out.

At the meeting with the FDA last year, Daniel J. Wattendorf, a DARPA program manager, said that his goal was to measure biomarkers “on-person” in real time. A continuous monitoring device of the sort Wattendorf went on to describe—able to last a lifetime with little to no effect on its wearer—is not so far out. In fact, speeding up both the development and approval of such a device is the FDA’s role, and why the administration was there.

When such a device is developed, the FDA will have to gauge the efficacy of implanting sensors with no immediate curative properties in people. To work well, the sensors have to function for a very long time, which raises some interesting questions: What if someone wants to opt out soon after a biofeedback device is implanted or ingested or tattooed? Can someone decide exactly what they want monitored, à la carte? Who will host that data? And even more basic, what if a person’s body doesn’t react well to having a chip implanted in it? Despite these hurdles, there is currently a very large portion of the population willingly participating in an ongoing experiment that isn’t all that far removed from the kind of thing DARPA is working toward. They do not have chips implanted in them, but they do have a computer or smartphone—some even have wristbands and shoes that talk to one another. All are gaming.

* * *

Outside Seattle, across Lake Washington, an experimental psychologist named Mike Ambinder is measuring the heart rates, skin conductance, facial tics, and eye movements of videogame players to deepen his understanding of what happens when we play—using biofeedback to inform game design. “Measuring sentiment,” he calls it. Ambinder works for Valve Software, the company that created the zombie shooter Left 4 Dead. Using heart rate and sweat monitors, he traces a Left 4 Dead player’s stress levels.

Left 4 Dead is dark and moody, and the thing that stands out about the game is how unsettling it is, how it’s designed to keep players off-balance. The zombie horde comes in waves, and these waves are diabolically unpredictable, even after playing the same level over and over. The unevenness makes the calm in-between moments just as stressful as the times dozens of undead are ripping at your face. Ambinder is interested in measuring how we feel—both during the face-ripping moments and those in between. The ultimate goal is to allow the zombie horde to respond to our stress and, at the right moment, up it. In-game response to player stress is pretty basic, says Ambinder. “It’s actually fairly trivial to create an accurate device that measures SCL, or skin conductance level—a correlate of physiological arousal,” he wrote to me. “In theory, a mass-market controller incorporating this technology could be done cheaply and with minimal disruption to existing form-factors … measuring facial expressions with webcams [is another] example.” In other words, what Ambinder is measuring in his lab could be repeated in a living room.
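To make the idea concrete, here is a toy sketch of the kind of processing Ambinder describes—turning raw skin-conductance-level (SCL) samples into a crude “arousal” score a game could react to. This is not Valve’s system; the sensor readings, window size, and scoring rule are all invented for illustration.

```python
def arousal_scores(scl_samples, window=5):
    """Score each sample by how far it rises above a rolling baseline.

    SCL tends to drift slowly, so what matters for arousal is not the
    absolute level but sudden upward swings relative to recent history.
    """
    scores = []
    for i, value in enumerate(scl_samples):
        recent = scl_samples[max(0, i - window):i] or [value]
        baseline = sum(recent) / len(recent)
        scores.append(max(0.0, value - baseline))  # only upward swings count
    return scores

# Simulated microsiemens readings: calm play, then a spike when the
# horde attacks, then recovery. (Values are made up.)
readings = [2.0, 2.1, 2.0, 2.1, 2.2, 3.5, 4.0, 3.8, 2.3, 2.1]
scores = arousal_scores(readings)
horde_moment = scores.index(max(scores))  # index where arousal peaks
```

A game loop could poll a score like this each frame and, as Ambinder suggests, time the next wave of zombies to land just as the player’s arousal bottoms out—or peaks.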

Tools for biofeedback are improving so quickly, Ambinder explained, that in a decade or so we’ll have come from knowing the difference between binary emotions (happy and sad) to real nuance—the difference between boredom, frustration, and bliss. He described how “particular emotions can become part of the gameplay—imagine a lie-detection game where a player needs to maintain a calm level of arousal and neutral facial expression to advance, or a competitive multiplayer game where your score depends in part (or in whole) on the level of arousal you’re able to incite in your opponents.”

In one of his tests, Ambinder found that the simple SCL biofeedback systems he had rigged were especially popular in multiplayer competition. This isn’t surprising. The fun in sports often comes from knowing you are frustrating your opponent, and what better way to know that someone is truly frustrated than to see data tracking their physiological reactions?

We already infer this—we read people’s expressions on the court or the field—but soon, we’ll be able to quantify their frustration. We’ll be able to see our opponents, wherever they may be, literally sweating it out.

For game designers, the opportunity to measure a player’s pulse or smile presents something of a Pandora’s box. Real-time biofeedback is something game designers have never had. Once the game is out in the world, it can keep being tested and tweaked, adapting to the bodies of its players in real time. Designing our emotional reactions to a game is no longer simply an art but a quantifiable science.

* * *

About 15 years ago, Rosalind Picard began to explore what she calls affective computing, which, she says, is a way to “teach computers to recognize emotion.” Picard is director of both the Affective Computing Research Group and Autism & Communication Technology Initiative at the MIT Media Lab. A professor of Media Arts and Sciences at MIT, she’s also the cofounder and chief scientist at a company called Affectiva. One sensor Affectiva makes picks up electrodermal activity—small changes in conductance sensed from the surface of your skin that can tell a machine that you are about to sweat before you do. Affectiva also sells facial recognition software developed by Picard that identifies 25 points on a user’s face and tracks those movements, identifying the likelihood that when two corners of a user’s mouth rise up and his eyes crease, he is smiling and probably happy. The Affectiva software works with a standard webcam.

One of Affectiva’s clients is using its facial recognition system to track the emotions of online shoppers as they move through the decision process. Another—a university—is using the software as a noninvasive method of testing how autistic children respond to external stimuli. Affectiva’s literature describes how the company’s software might be used in videogames: to “measure gamers’ actual emotional response to your game over the web” and “gain a scene-by-scene understanding of areas that are most engaging or not.”

Above: Rosalind Picard

There’s one gaming company that’s taken a particular interest in Affectiva’s research: Zynga. The pairing makes sense. Gamers, particularly in a Zynga game like FarmVille, are consumers, and Affectiva’s feedback systems have proven successful for companies like Boston Market and a market research group called Shopper Science. Zynga, which claims that some 60 million people play its games every day, supposedly employs more economists and analysts than game designers. Theirs is a calculated art, and their profit structure more closely aligns with a casino’s than a game company’s. Most of Zynga’s earnings come from a small number of players willing to spend a lot of money (in one instance $75,000 in a year on a single game, Bloomberg Businessweek reported) on its virtual items and special features. After OMGPOP’s game Draw Something became so popular that Zynga bought the studio (for $180 million), one of OMGPOP’s engineers, Shay Pierce, quit. Pierce went on to announce why he was quitting on the games industry-focused site Gamasutra: “An evil game company isn’t really interested in making games, it’s too busy playing a game—a game with the stock market, usually. It views players as weak-minded cash cows and it views its developers as expendable, replaceable tools to create the machines that milk those cows.” Even so, by sheer profit Zynga seems to be winning.

“There’s no reason to think that entertainment consumers would be any different than any other consumer,” says Dmitri Williams, an associate professor at the University of Southern California’s Annenberg School for Communication. His research focuses on online games and their social and economic impact. “There are two goals,” he says of his work. “One is to understand what’s going on inside of these worlds. The second is to figure out if we can use these spaces as petri dishes for social sciences.” One of Williams’s former students, Dr. Rabindra Ratan, is using biofeedback to learn about our emotional response to our avatars in games. The studies can be as basic as watching for an uptick in skin conductivity while one’s avatar is under attack. More sweat means a deeper connection. After Ratan’s participants filled out a questionnaire about their experience, he had them watch a video of their character being beaten up. They still had heart monitors on. Time after time, even though they weren’t playing, their heart rates quickened.

This empathy toward our avatars, even after the fact, has enormous implications for how games are built. But it also points to how we might use games for things we are only beginning to imagine. Williams told me about how videogames and virtual worlds are now being used as a form of therapy. He gives, as an example, groups of veterans using virtual-reality goggles and Second Life to help treat post-traumatic stress disorder (though this is coupled with traditional, real-world therapy sessions). Online, a therapist can slowly dial up stimuli, which helps vets desensitize in a safer environment. Once biofeedback systems exist on controllers and screens, once everyone outside of labs is allowed to tap this flood of extremely personal data, “You’d begin to see it used in all sorts of creative ways,” Williams says.

* * *

Not long ago I walked across Los Angeles, from LAX to downtown to Los Feliz, then back west on Sunset until I again reached the Pacific. When Rockstar released Team Bondi’s game L.A. Noire—with its sprawling and cinematic vision of late-1940s Los Angeles—and before I worked the Black Dahlia case and drove and drove and drove through the city, I walked. It’s not the best way through the game, but moving across the landscape any faster misses the immense amount of work that went into rendering the city. The game’s central action—finding clues at crime scenes and interrogating everyone in your path—is far less gripping. Yet it was impossible not to consider the possibilities a biofeedback system running through the game might allow.

The facial capture Team Bondi used on actors to build lifelike models in the game could be reversed, though crudely, using Picard’s facial recognition software and a Kinect. I imagined myself during moments that were meant to be dramatic, that were supposed to command my close attention: on the couch, sitting up at attention, my brow slightly furrowed in a look of concern, or at least interest. What if SCL sensors on the controller read that my palms remained dry, and gave me a slight edge for remaining calm during questioning or a car chase? What if, as I started sweating more, steering became more difficult, or the questions I could ask in the interview less cool-headed? What if, when I stood up during an interrogation, my character did too? And what if that made my threats all the more threatening? Imagine that there was no controller, no chip under my skin, yet the game still knew my position on the couch, my mood, how much my hands were sweating, and my heart rate—all things that are, or soon will be, entirely within the realm of possibility. I still might prefer walking through the landscape of postwar Los Angeles to the investigations and interrogations. But I might not.

There is also the ultimate question, the double-edged sword of it all: the data itself. Who owns it? What might it be used for?

One possible approach to seeing a heartbeat could be through blood, which is remarkably good at absorbing light, even through skin. Our skin’s reflectivity changes slightly, concurrent with our heartbeat, so it’s possible to register a pulse not with sound or feeling but by sight. We can’t see it. But Ming-Zher Poh, a graduate student at MIT, invented a mirror that can. A fine-tuned camera capable of measuring slight variations in light reflectivity is surprisingly cheap. Poh used a simple, off-the-shelf webcam and altered the software so that it filters out ambient light and measures only the ebbs and flows from the face of the person staring into it. “Mirror mirror,” the user might say, and the camera behind the reflective glass captures the light bouncing off her face and turns it into ones and zeros. An algorithm Poh created translates the ones and zeros into a number of heartbeats per minute.
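The core idea can be sketched in a few lines. This is a back-of-the-envelope version, not Poh’s actual algorithm (his system used more sophisticated signal separation on webcam color channels): average face brightness fluctuates faintly with each heartbeat, so the dominant frequency of that fluctuation, within the range of plausible human heart rates, is the pulse. The brightness trace below is synthetic.

```python
import math

def pulse_bpm(brightness, fps):
    """Estimate beats per minute from per-frame mean face brightness."""
    n = len(brightness)
    mean = sum(brightness) / n
    centered = [b - mean for b in brightness]  # drop the ambient light level
    best_bpm, best_power = 0, 0.0
    # Scan plausible heart rates (40-180 bpm) with a naive one-bin DFT
    # and keep the frequency carrying the most energy.
    for bpm in range(40, 181):
        f = bpm / 60.0  # beats per second
        re = sum(c * math.cos(2 * math.pi * f * i / fps) for i, c in enumerate(centered))
        im = sum(c * math.sin(2 * math.pi * f * i / fps) for i, c in enumerate(centered))
        power = re * re + im * im
        if power > best_power:
            best_bpm, best_power = bpm, power
    return best_bpm

# Fake 10 seconds of 30 fps video: steady brightness plus a faint
# 72-bpm (1.2 Hz) ripple standing in for blood flow under the skin.
fps, seconds, true_bpm = 30, 10, 72
frames = [100.0 + 0.5 * math.sin(2 * math.pi * (true_bpm / 60.0) * i / fps)
          for i in range(fps * seconds)]
estimate = pulse_bpm(frames, fps)
```

In a real mirror the input would come from a camera behind the glass rather than a synthetic sine wave, and the signal would be buried in noise from lighting and head movement—which is where the real engineering lives.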

A camera-mirror like Poh’s is a marvelous tool, but for game designers it could destroy what they (and we) love about games. Williams put it most succinctly: “The great irony of feedback is that if you rely on it too much you get in the way of the art.”

Immediately following her TED talk this year, Regina Dugan, DARPA’s outgoing director, had a brief Q&A with TED curator Chris Anderson. Near the end of her talk Dugan had a DARPA engineer fly the remote-controlled hummingbird, and the crowd gasped and applauded. Then Anderson asked Dugan a few questions about the “scramjet,” a Mach 20 glider that was illustrative of the sort of audacious dreaming and repeated failure necessary for the breakthroughs that make DARPA famous.

“What do you picture that glider being used for?” Anderson said.

Dugan: Our responsibility is to develop the technology for this. How it’s ultimately used will be determined by the military… The purpose of the technology is to reach anywhere in the world in less than 60 minutes.

Anderson: What’s the payload it could carry?

Dugan: We don’t ultimately know. We have to fly it first.

Anderson: But not necessarily just a camera? [A pause, followed by nervous laughter from the crowd]

Dugan: No. Not necessarily just a camera.

What might the datastream delivered by biofeedback’s payload be? What is its purpose? The question is especially relevant because, not long after appearing on the TED stage, Dugan announced she was leaving DARPA to join Google—another place, like DARPA or MIT, where a camera is not necessarily just a camera.