The screen blinks. Instructions arrive. “Plug in your headphones, put away your device, and listen up…if you hear static, you’ve gone too far.”
HATFinder (2015) is a mobile-based spy game that’s not played on the screen. The initial instruction screen is the one and only time you’ll see anything on your phone; the rest of the time it’s played with your ears. Against the backdrop of a missing agent, it’s your role as a spy to investigate what happened. In short, the game is a sound-based scavenger hunt. While navigating a physical space—the game is specifically designed to be played on the UCLA campus—players try to find audio clips known as “historical auditory traces” (HATs) to reveal the game’s narrative and uncover the mystery.
Originally made for last year’s Inertia Conference, a sound and media conference in Los Angeles, the game is the brainchild of Lauren Burr, David Jensenius, and Mark Prier. It’s been described as an augmented aurality game. These games fall under a larger umbrella known as locative media—works that rely on players navigating physical spaces to activate game events. In games like HATFinder, those events are delivered primarily as audio.
I caught up with Burr after her talk on “Overcoming the Visual Interface” at the Technoculture, Art, and Games research center in Montreal. She cites Janet Cardiff, a sound artist known for her location-specific “audio walks,” as one of the inspirations for the game. What sets HATFinder apart, however, is the presence of two different types of sounds: some play only within certain geographic zones, while others are “sound trails” that act as audio breadcrumbs leading players toward those zones.
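One way to picture a “sound trail” is as a clip whose volume rises as you close in on its zone. The sketch below is purely illustrative—the function name, the attenuation curve, and the audible radius are all assumptions, not HATFinder’s actual implementation:

```python
import math

def trail_volume(player, target, audible_radius_m=120.0):
    """Hypothetical 'sound trail' attenuation: return a volume in [0, 1]
    that is silent beyond audible_radius_m and full at the target.
    player and target are (lat, lon) pairs in degrees."""
    lat1, lon1 = player
    lat2, lon2 = target
    # Equirectangular approximation: adequate at campus scale
    # (distances of tens to hundreds of metres).
    mean_lat = math.radians((lat1 + lat2) / 2)
    dx = math.radians(lon2 - lon1) * math.cos(mean_lat)
    dy = math.radians(lat2 - lat1)
    dist_m = 6371000 * math.hypot(dx, dy)
    # Linear falloff from 1.0 at the target to 0.0 at the edge of audibility.
    return max(0.0, 1.0 - dist_m / audible_radius_m)
```

Standing at the target yields full volume, and the clip fades to silence as the player walks away—exactly the breadcrumb effect Burr describes, several trails tugging the listener in different directions at once.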
“When you put on your headphones, you’re hearing all these different sounds at once and they’re pulling you in different directions,” Burr explains. “It sounds really pretty but it takes a lot of concentrated listening to actually figure out how to manipulate the thing so you can get the information and the clues that you need.”
Burr and her team developed the game as a proof of concept for HATengine, a Twine-like tool they built to lower the technical barriers to creating augmented aurality games and sound installations. With the engine, artists and game makers simply draw polygons on a map and upload the associated audio clips—turning any physical space into a playable, interactive environment. These works can then be released as stand-alone apps or, by publishing them through HATengine itself, played immediately in the HATengine app.
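Under the hood, the draw-a-polygon-attach-a-clip workflow amounts to geofencing: test whether the player’s GPS position falls inside any zone, and play that zone’s audio if so. Here is a minimal sketch of that idea using the standard ray-casting test—the zone table, field names, and clip filename are hypothetical, not HATengine’s real data format:

```python
# Hypothetical zone table: polygon vertices (lat, lon) mapped to an audio clip.
ZONES = [
    {"clip": "hat_01.mp3",
     "polygon": [(34.0712, -118.4441), (34.0712, -118.4427),
                 (34.0703, -118.4427), (34.0703, -118.4441)]},
]

def point_in_polygon(point, polygon):
    """Ray-casting test: is (lat, lon) inside the polygon?
    Casts a ray east from the point and counts edge crossings."""
    lat, lon = point
    inside = False
    for i in range(len(polygon)):
        lat1, lon1 = polygon[i]
        lat2, lon2 = polygon[(i + 1) % len(polygon)]
        # The edge can only be crossed if it spans the point's latitude.
        if (lat1 > lat) != (lat2 > lat):
            crossing_lon = lon1 + (lat - lat1) * (lon2 - lon1) / (lat2 - lat1)
            if lon < crossing_lon:
                inside = not inside  # odd number of crossings = inside
    return inside

def clips_at(position):
    """All audio clips whose zone contains the player's position."""
    return [z["clip"] for z in ZONES if point_in_polygon(position, z["polygon"])]
```

Step inside the rectangle and `clips_at` returns the zone’s clip; step out and the list is empty. A real engine would layer the sound trails, fades, and narrative state on top of this basic containment check.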
The engine is still in development, and the team hopes to expand it with more ways to shape the listening experience—from the time of day and the weather to whether or not you’ve played the game before. While it isn’t publicly available yet (though Burr says to contact her if you’re interested in trying it), her team hopes to eventually open it up so anyone can make their own HATFinder-like game.
HATFinder is available free on iOS. The engine behind the game, HATengine, is currently in development but open to anyone who wants to build similar sound/location-based works. For more information, contact Lauren Burr or visit the official website.