Mission

Versions is the essential guide to virtual reality and beyond. It investigates the rapidly deteriorating boundary between the real world and the one behind the screen. Versions launched in 2016 at the eponymous conference dedicated to creativity and VR with the New Museum’s incubator NEW INC.

Pitches, questions, and concerns can be directed to info@killscreen.com

Virtual reality’s answer to the Terminator is already here, sorta

Sitting at a bar, listening to a pointlessly dreamier remix of a Twin Sister song, Xavier Snelgrove took his time answering a very important question. In their present state, which of the following fields is funnier: VR technology or image detection algorithms? After 20 seconds of hard thinking, he crowned VR the winner, saying it only got the edge because people look incredibly goofy using it. It was hard to argue with that. He brought up the recent photo of Mark Zuckerberg walking down a gangway, flanked on both sides by people wearing Oculus Rift headsets, uncannily resembling that 1984 Apple ad. In turn, I brought up the picture of Palmer Luckey wading through a hallway of photographers with that open-mouthed expression RoboCop makes when his memories start coming back.

The Stupid Hackathon in New York, which Snelgrove participated in, ended with a piñata. An Oculus Rift was used as the blindfold. “It makes a great sleep mask,” said Snelgrove.

The poster for the Stupid Shit No One Needs & Terrible Ideas Hackathon

Both fields, VR and image detection, had the piss taken out of them by Snelgrove at the annual Stupid Hackathon (full name: Stupid Shit No One Needs & Terrible Ideas Hackathon). His favorite pieces from the event included a manufactured dick that dispensed Soylent portions in exchange for hyperbolic compliments about the food substitute, and The Glass Mattnagerie, a program made by a guy named Matt in which virtual models of his nude, 3D-scanned body could be shaken around like a box of Glosettes.

Xavier’s piece runs parallel to his own work. Professionally, he works with emojis and algorithms, programming a way to automatically detect, as we type, when our favorite emoticons could best be used, based on the words we’re using. For the Stupid Hackathon, he built something like the opposite. Signifier is like having the computer-powered vision of the Terminator, but only if Schwarzenegger were missing a few screws.
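A toy sketch of that kind of word-to-emoji matching, assuming nothing about Snelgrove’s actual system; the keyword table and function name here are invented for illustration:

```python
# A hand-made keyword table standing in for whatever a real
# suggestion system would learn from data.
EMOJI_KEYWORDS = {
    "pizza": "🍕",
    "burrito": "🌯",
    "dog": "🐶",
    "party": "🎉",
    "love": "❤️",
}

def suggest_emoji(text: str) -> list[str]:
    """Return emoji whose trigger words appear in the typed text."""
    words = set(text.lower().split())
    return [emoji for word, emoji in EMOJI_KEYWORDS.items() if word in words]

# suggest_emoji("burrito party at my place")  ->  ["🌯", "🎉"]
```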

“What is technology’s role other than to take the most menial task and do it for us?” asked Snelgrove. “What is the task that we do the most, day in, day out? The answer is: take visual stimulus and turn it into signifiers in our mind. We do this all day and it’s exhausting. The idea behind Signifier was to automate that task for you.”

The gist is that, using Google Cardboard and Google’s own ImageNet-trained algorithms, you plop the visor over your eyes and the device shows you its interpretation of the world around you. It’s all words, sorted by topic class, hovering on a pleasing, goopy red background, leaving you to play the saddest game of reverse-Pictionary with an idiot-machine to get a basic idea of your surroundings.
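For a sense of what that pipeline involves, here is a minimal sketch that runs a single frame through an off-the-shelf ImageNet-trained classifier and reads off the top labels. The model choice (torchvision’s MobileNetV2) and every name in it are assumptions for illustration, not details of Signifier itself:

```python
import torch
from torchvision.models import MobileNet_V2_Weights, mobilenet_v2
from PIL import Image

# An off-the-shelf classifier pretrained on ImageNet.
weights = MobileNet_V2_Weights.IMAGENET1K_V1
model = mobilenet_v2(weights=weights).eval()
preprocess = weights.transforms()  # resize, crop, normalize for ImageNet

def label_frame(frame: Image.Image, top_k: int = 5) -> list[str]:
    """Return the classifier's best guesses for what is in the frame."""
    batch = preprocess(frame).unsqueeze(0)      # [1, 3, 224, 224]
    with torch.no_grad():
        probs = model(batch).softmax(dim=1)[0]  # 1000 ImageNet classes
    top = probs.topk(top_k)
    return [weights.meta["categories"][int(i)] for i in top.indices]

# e.g. label_frame(Image.open("bar.jpg")) might come back with
# ["restaurant", "barbershop", "burrito", ...] -- hence the jokes below.
```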

preferring to call any plate it sees “burritos”

I started pointing madly around the bar for examples, and Snelgrove said it’d do alright for the most part. It would know the decorative wheels and spokes on the wall came from a bicycle, and fairly assess that his backpack is a “bag” or “purse.” I thought it was a long shot, but Snelgrove even felt it could identify the hefty machine in the corner as a pinball machine. All that said, where it does slip up is a hoot.

In case you’re concerned about the robot revolution, you can relax. With these infant algorithms, the machines (and Signifier) would most often mistake you for a “barbershop” or “restaurant,” because they have been trained on the contexts of certain photos rather than on actual human faces. The software similarly struggles with food, preferring to call any plate it sees, whether it hosts a meal or not, “burritos.” If only we were so lucky.

Xavier Snelgrove, creator of Signifier

“I don’t think we need to have a robot interface blocking us from interacting with the world around us,” said Snelgrove, “but I guess the commentary is that it’s happening to some extent anyway. Huge parts of our reality are getting filtered through someone else’s algorithm: Facebook’s news feeds, its new reaction function. It’s not a conspiracy exactly, our reality is just being abstracted, mediated.”

Though dopey, gags like these do start conversations. The point of the Stupid Hackathon is to poke fun at the self-fellating tech world, which constantly hails each wearable as Prometheus’s next gift. Snelgrove cited self-driving cars as an example: a keen idea, but one clearly invested in by someone whose biggest problem in the world is sitting in gridlock wishing their hands were free to livestream their commute.

Snelgrove likes emojis, whose semiotics have been developed by the culture that uses them rather than by the producers who would attempt to copy-edit. In this light, image algorithms and computer detection rub him the wrong way: they are bound to be empirically executed, damned to the interests of the tech circle that makes them instead of the diverse masses who will be subjected to this mad science.

Silicon Valley can train its machines to categorize their West Coast surroundings, sure, but what of the poorer areas, the alienated and overlooked communities, or the cities on the other side of the planet? For this vision to succeed, we must join hands and share all of our burritos with each other, for a better, more accurately categorized future.

Versions is brought to you by Nod Labs, precision wireless controllers for your virtual, augmented, and actual reality.