When Robots See Us Laugh

For years we’ve asked what benefits we get from computers. The answers have sometimes been unclear, but you could cover most of it with the crypto-colloquialism “a lot.” There is a corollary question, however: How do computers benefit from having us? It’s an open question whose spectrum of answers will be no less varied than its correlate, and the search for those answers will likely be accelerated by videogames. 

Computers have been watching us for years, trying in crude ways to anticipate our needs. Microsoft invented a talking paper clip to escort people through Word and Excel features (e.g., “It seems like you’re entering an address!”). Online games began keeping track of statistics to better match players of similar skill. The Mario Kart games tied item spawning to a player’s position in the race, giving more powerful items to players further back in the pack. The Elder Scrolls IV: Oblivion tied enemy strength to the player’s overall experience level. God of War tracked the number of consecutive deaths a player suffered in a certain area and, past a threshold, offered the option to lower the game’s difficulty. This quickly evolved into invisible systems of difficulty adjustment managed by computer intelligence in Left 4 Dead, Resident Evil 5, and Madden NFL.
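The death-tracking trigger described above can be sketched in a few lines. This is a toy illustration of the general idea, not the logic of any shipped game; the threshold and names are invented.

```python
class DifficultyMonitor:
    """Offers an easier mode after repeated deaths in the same area.

    A minimal sketch of the kind of consecutive-death tracking the
    article describes; the threshold of 3 is an arbitrary example.
    """

    def __init__(self, death_threshold=3):
        self.death_threshold = death_threshold
        self.deaths_in_area = {}

    def record_death(self, area):
        """Count a death; return True if the game should offer easy mode."""
        self.deaths_in_area[area] = self.deaths_in_area.get(area, 0) + 1
        return self.deaths_in_area[area] >= self.death_threshold

    def record_progress(self, area):
        """The player moved on, so reset the counter for that area."""
        self.deaths_in_area.pop(area, None)
```

The later, “invisible” systems work on the same principle, except the response is a silent adjustment to enemy behavior rather than an on-screen prompt.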

In the last few years, computers have gained many new tools to monitor player behavior and adapt to it. Microsoft’s Kinect can both watch and listen to players as they play; Apple has Siri; Google has built facial recognition into Android; and multipurpose devices like tablets and mobile phones can track a person’s habits in both productive and playful software, with the potential to use one to inform the other. Though there is some trepidation about what information computers collect from us and how, the most fruitful debate for the future is not whether they should be allowed to store memories of our behavior, but how we can better teach them what to do with the information.

Heather Knight is a social roboticist and PhD candidate at Carnegie Mellon’s Robotics Institute who has spent much of her career so far trying to enhance the partnership between humans and computers. Knight’s most recognizable work has been with robot comedians, the subject of a 2010 TED presentation. The project relies on a small robot equipped with a camera and built-in microphone to serve as crude eyes and ears. It’s programmed to deliver a series of short stand-up comedy routines on a number of different subjects. 

When a card with a specific subject written on it is held up, the robot will access the particular sequence of jokes, and deliver them with accompanying physical animations for emphasis. Crucially, the robot can interpret information from its audience mid-performance and adjust its routine. The jokes have been systematized—given category ratings according to topic, length, interactivity, movement level, appropriateness, and hilarity. As the robot takes in audience laughter after each punchline, it can use these ratings to adjust the routine. If jokes high on hilarity and low on appropriateness are causing discomfort, for instance, it might shift to jokes with higher appropriateness ratings. As a system of adjustable outcomes it’s relatively simple, but it points toward a massive and largely unexplored subject: teaching computers to evaluate aspects of their human partners with an approximation of the senses that another human would have to rely on.
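The adjustment loop can be made concrete with a small sketch. The attribute names (hilarity, appropriateness) come from the categories described above, but the scoring rule, the numbers, and the jokes themselves are invented for illustration; Knight’s actual system is not public in this form.

```python
# Toy joke pool. Ratings are made-up values on a 0-1 scale.
JOKES = [
    {"text": "edgy one-liner",    "hilarity": 0.9, "appropriateness": 0.2},
    {"text": "gentle pun",        "hilarity": 0.5, "appropriateness": 0.9},
    {"text": "observational bit", "hilarity": 0.7, "appropriateness": 0.7},
]

def pick_next_joke(jokes, audience_comfort):
    """Pick the next joke given a comfort reading from the room.

    audience_comfort: 0.0 (squirming) to 1.0 (delighted), e.g. derived
    from laughter volume after the last punchline.
    """
    def score(joke):
        # An uncomfortable audience makes appropriateness dominate the
        # score; a comfortable one lets hilarity dominate instead.
        return (audience_comfort * joke["hilarity"]
                + (1 - audience_comfort) * joke["appropriateness"])
    return max(jokes, key=score)
```

With a low comfort reading the function falls back to the safest material; with a high one it reaches for the riskier, funnier jokes, which is the shift the routine-adjustment described above performs.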


Knight calls these “charismatic machines.” It’s a charming idea: a future improved by computers able to relate to us with sympathy and sensitivity. It’s also a subversive shift from our older assumptions about computers, which presumed that, in the best cases, they would make us smarter, more productive, and more social. Charismatic machinery doesn’t disprove that idea, but it provides an opposite end to the spectrum of thought about computer intelligence. On one end, computers make us better; on the other, we make computers better by challenging them with our irrationality.

This evolution will be revolutionary for videogames. We think of games as interactive, but the interaction is heavily skewed toward the player. The creator of a game cannot respond to player inputs in real time, so they must instead spend months or years of development anticipating what players might do and pre-scripting consequences for those actions. To make games work, the systems and the inputs they accept have been kept narrow: shooting, jumping, punching, throwing.

With tools to measure player interaction beyond binary button presses and single gestures, in combination with an array of secondary information about the player’s behaviors and histories in non-game applications, and a dynamic intelligence capable of responding differently to the same basic input when its background context has changed—this is the basis on which game design could break free from the didactic structure of old art and flourish along a new axis, empowered by a more equal exchange between creator and player. 

Before we arrive in that era, we will have to develop a new lexicon for the parts of ourselves that make us feel most vulnerable. The idea that machines are watching us, storing our habits and information in an imperishable database, assembling a body of evidence with which to betray us at some future public undressing, is a psychic block. There is another side to the fear of being intruded upon, in which we instead embrace them without fear of their capacity to turn on us later. We need not recoil from the discarnate intimacy of machines. History reminds us we can turn on one another easily enough without them. But with them, we could hope to find ourselves charmed anew, having found another way to leave a piece of ourselves in a medium that did not previously have the spark of life.

Illustration by Sarah Jacoby