Mission

Versions is the essential guide to virtual reality and beyond. It investigates the rapidly deteriorating boundary between the real world and the one behind the screen. Versions launched in 2016 at the eponymous conference dedicated to creativity and VR with the New Museum’s incubator NEW INC.

Pitches, questions, and concerns can be directed to info@killscreen.com

We're always hiring and looking for new writers!

Driverless cars are now playing Grand Theft Auto

Autonomous cars are omnivorous, which is probably not an exciting thing to say about our future robotic overlords. Is it any better if I clarify that they are omnivorous in their desire for data? Only marginally so? So it goes.

One of the problems facing our future driverless cars is that they need to understand a staggering multitude of scenarios. Some—but not all—of these situations can be foreseen and programmed for. Others can be established by testing these cars on the road. But think of how many road tests would have to be completed before you felt confident in the sample size serving as your chauffeur's brain trust. That would take eons, which is why training in virtual worlds is so appealing.

Here, then, is a story from the MIT Technology Review about driverless cars learning from the venerable Grand Theft Auto franchise (yes, that one):

Several research groups are now using the hugely popular game, which features fast cars and various nefarious activities, to train algorithms that might enable a self-driving car to navigate a real road.

There’s little chance of a computer learning bad behavior by playing violent computer games. But the stunningly realistic scenery found in Grand Theft Auto and other virtual worlds could help a machine perceive elements of the real world correctly.

This may seem a little extreme, but researchers are turning towards all sorts of information in their attempts to create smarter driverless cars. Previously, MIT Media Lab's Moral Machine asked human participants to work through a series of puzzles that made implicit moral judgments explicit, so that robots could implement those decisions. Even though the GTA tests don't require human input in the same manner, they are best understood in the shared context of broader efforts to collect all available information in machine-friendly formats.

This story illustrates how our digital activities—including ones seemingly unrelated to other contexts—produce large swaths of information that can be used to teach robots to perform stereotypically human tasks. Facebook and Google have huge datasets that can be used to understand human speech patterns. Consequently, Facebook's wit.ai can process basic phrases for uses like chatbots. Everything can be turned into a dataset—or rather, everything will be turned into a dataset.
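The underlying pipeline the article describes—train a perception model on labeled frames from a rendered world, then apply it to frames from the real one—can be sketched in miniature. Everything below is invented for illustration: the toy feature vectors stand in for rendered game frames, and a trivial nearest-centroid classifier stands in for the far more complex models actual research groups use.

```python
# Hypothetical sketch: fit a classifier on labeled "synthetic" frames
# (toy 2-D feature vectors standing in for rendered GTA imagery), then
# classify an unlabeled "real-world" frame. All data is invented.

def centroid(vectors):
    """Average a list of equal-length feature vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def train(synthetic_frames):
    """Compute one centroid per label from (features, label) pairs."""
    by_label = {}
    for features, label in synthetic_frames:
        by_label.setdefault(label, []).append(features)
    return {label: centroid(vs) for label, vs in by_label.items()}

def classify(model, features):
    """Assign the label whose centroid is nearest (squared distance)."""
    def dist(c):
        return sum((a - b) ** 2 for a, b in zip(features, c))
    return min(model, key=lambda label: dist(model[label]))

# Toy "synthetic" training data: features for two road elements.
synthetic = [
    ([0.9, 0.1], "pedestrian"),
    ([0.8, 0.2], "pedestrian"),
    ([0.1, 0.9], "stop_sign"),
    ([0.2, 0.8], "stop_sign"),
]
model = train(synthetic)

# A "real-world" frame the model has never seen.
print(classify(model, [0.85, 0.15]))  # pedestrian
```

The appeal, as the researchers quoted above suggest, is that the synthetic side of this loop is cheap: a game engine can label every pixel of every frame for free, where real-road data demands eons of driving.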

Versions is brought to you by Nod Labs,
Precision wireless controllers for your virtual, augmented and actual reality.