Mission

Versions is the essential guide to virtual reality and beyond. It investigates the rapidly deteriorating boundary between the real world and the one behind the screen. Versions launched in 2016 at the eponymous conference dedicated to creativity and VR with the New Museum’s incubator NEW INC.

Pitches, questions, and concerns can be directed to info@killscreen.com



Inception and the art of recursion

Christopher Nolan’s dream-within-a-dream concept for Inception (2010) offers a revelatory way of understanding the spectrum between dreams and reality. The film is a nightmarish continuum of false awakenings that lead into layers of imagined worlds, created by the collective subconscious of protagonist Dominick Cobb and his team of thought-architects.

In the film, Cobb’s heist-like strategy involves creating three levels of dreams within a collective subconscious that gradually penetrates the mind of his target, Robert Fischer. The team’s potent dream technology lets them plant an idea into the most vulnerable level of Fischer’s psyche, completely transforming his future motivations regarding his inherited business empire. The essence of Inception is itself inherently recursive: all action is paused on one dream level as Cobb’s team attacks deeper levels of Fischer’s mind in the second and third dream layers. The action within the deepest dream level must then be terminated to escape toward the final awakening and the first layer of the dream sequence, prompting a sudden emergence from that shared subconscious and a return to reality.

In terms of coding, the push would be the sedation needed to delve into a dream layer, while the pop would be the kick needed to awaken from one of the lower levels of the collective subconscious. The stack would be the variables forming Fischer’s psyche, which Cobb and his team exhaustively manipulate within each dream level (essentially the computer-science concept of the push-down stack, as described in Douglas Hofstadter’s 1979 book Gödel, Escher, Bach).
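To make the analogy concrete, here is a minimal sketch in Python; the class and method names (DreamStack, descend, kick) are invented for illustration, not taken from the film or any real library:

```python
# A minimal sketch: Inception's dream levels as a push-down stack.
class DreamStack:
    def __init__(self):
        self.levels = []               # the stack itself

    def descend(self, level_name):
        """Push: sedation takes the team one dream level deeper."""
        self.levels.append(level_name)
        print(f"Descend -> {self.levels}")

    def kick(self):
        """Pop: a kick wakes the team out of the current level."""
        level = self.levels.pop()
        print(f"Kick from {level!r} -> {self.levels}")

heist = DreamStack()
heist.descend("rainy city")        # first dream layer
heist.descend("hotel corridor")    # second layer
heist.descend("snow fortress")     # third, deepest layer
while heist.levels:                # synchronized kicks back to waking life
    heist.kick()
```

The last level pushed is the first one escaped, which is exactly the film’s rule: the team must wake from the deepest dream first.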

And all this analysis stems from exploring recursion as a phenomenon that can be cinematic, musical, artistic, linguistic, psychological, mathematical, or embedded in creative code. But what exactly is recursion?


In the artistic or photographic sense, it is sometimes known as the Droste effect or, in French, mise en abyme: an image-within-an-image-within-an-image that should theoretically contain an infinite number of that same nested image. Pink Floyd paid psychedelic tribute to recursive phenomena with the album cover for Ummagumma (1969), and M.C. Escher had a natural affinity for recursive patterns, expressed perfectly in his fascinating artwork. Then there are geometric fractals, visual representations of scaled mathematical patterns that are recursive in their repetition. The Fibonacci sequence is another classic example, a numerical sequence where recursive thinking is absolutely necessary: the next number in the sequence is always found by adding the previous two numbers, and those numbers were determined by adding the numbers before them, creating layers or clusters of additive groups which form recursively. In coding, recursion involves a function calling itself in order to delve deeper into the layers of a problem or sequence.
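That definition can be seen directly in code. Here is the Fibonacci sequence as a short recursive Python function, a minimal sketch in which each call delves one layer deeper until it reaches the base cases:

```python
def fib(n):
    """Return the nth Fibonacci number (fib(0) = 0, fib(1) = 1)."""
    if n < 2:                        # base cases stop the descent
        return n
    return fib(n - 1) + fib(n - 2)   # each call opens two deeper layers

print([fib(n) for n in range(10)])   # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
```

Like Cobb’s team, each call is suspended, waiting, until the calls beneath it resolve and the results bubble back up to the surface.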

In the linguistic sense, recursion at its simplest could be thought of as a sentence within a sentence. And in a very simple sense, recursion can be deconstructed as a smaller set of tasks or images or patterns that make up an even bigger set of tasks or images or patterns. From this perspective, it becomes easy to break apart real-world problems and situations and place them in the context of recursion, which is exactly how creative, recursive coding led to the beginnings of artificial intelligence in videogames.

“skin-of-an-onion”

Consider a game of chess: a human player would adopt an empathy-driven approach, naturally and intuitively interpreting the capabilities of her rival before making the best move possible. A computer-generated chess player, however, determines the optimal move by first generating a tree data structure of possible moves and countermoves, then recursively searching it for the best move. In this sense, the computer ends up analyzing many permutations and combinations of moves the rival could make in future layers of the game before deciding on a final move. The computer has essentially broken down the opponent’s thought process step by step, understanding recursively what the opponent might do in the next move and the move after that and the move after that, and so on (note that this describes a simple chess program built on the minimax algorithm, as opposed to more sophisticated computer chess systems that also use heuristics to beat the best human professionals).

“Expert chess players use intuition and complex rules, but computers do not,” says David Kosbie, computer science professor at Carnegie Mellon University. “Instead, they have simpler rules but then use recursion—minimax, in particular (with some optimizations)—to search very deeply, looking many moves ahead. The bottom line: computers use recursion and deep search to compensate for a lack of intuition, so they appear to be intelligent, while not using a human-like algorithm. Hence, this is artificial intelligence.”
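Here is the skeleton of that idea in Python, a minimal sketch over a toy two-ply game tree rather than real chess (the tree, its scores, and the move labels are invented for illustration). Minimax recursively scores every branch of moves and countermoves, assuming the opponent always picks the reply that is worst for us:

```python
# Minimax over a toy game tree (not real chess).
# Leaves are scores from the maximizing player's point of view;
# inner nodes are lists of possible continuations.

def minimax(node, maximizing):
    if isinstance(node, (int, float)):   # leaf: a scored final position
        return node
    scores = [minimax(child, not maximizing) for child in node]
    return max(scores) if maximizing else min(scores)

# Two plies of lookahead: our move, then the opponent's best countermove.
game_tree = [
    [3, 12],   # after our move A, the opponent can force a 3
    [8, 2],    # after our move B, the opponent can force a 2
    [14, 1],   # after our move C, the opponent can force a 1
]
best = max(range(len(game_tree)),
           key=lambda i: minimax(game_tree[i], maximizing=False))
print(f"Best move: {'ABC'[best]}")       # Best move: A
```

A real chess engine applies the same recursion to trees with millions of positions, plus heuristics to prune branches not worth exploring.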


And this leads to the crucial question: can a form of “artificial” intelligence generated from “original” intelligence be considered an independent system rather than a mere derivative, one capable of its own unique functions? The very core of AI is to code computers to develop the same pattern-finding capabilities that humans already possess. And perhaps this code, or the pattern-finding algorithm the computer generates, could eventually modify itself, developing a learning behavior that remains independent of its human creators. This is not a new concept in science fiction: two notable examples are Blade Runner (1982) and Ex Machina (2015), in which naive protagonists fall in love with lethal female robots. But that’s another story.

The conception of artificial intelligence can be traced back to Alan Turing’s famous article for Mind, “Computing Machinery and Intelligence” (1950), where he introduces the imitation game as a test to distinguish between human intelligence and artificial intelligence. This is a three-player game consisting of a man, a woman, and a judge. The three are separated into different rooms and can only communicate via typewritten messages. The judge must determine who is the man and who is the woman by asking each a series of questions. The man’s goal is to trick the judge, while the woman strives to help the judge make the right call in this game of clues and guessing. Now, what if the man is replaced by a computer? Would the judge be able to distinguish between a human and a machine if the computer were trained to effectively imitate humans? This is the central point of the imitation game, also known as the Turing test.
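The structure of the test is simple enough to sketch in a few lines of Python. Everything here is an invented toy, including both respondents and their canned replies; it only shows the protocol of anonymous, typed exchanges followed by a verdict:

```python
import random

# Toy stand-ins for the hidden respondents.
def human(question):
    return "Honestly, I'd have to think about that one."

def machine(question):
    # A well-trained imitator would be indistinguishable from the human;
    # this canned reply is a deliberately weak impostor.
    return "QUERY UNRECOGNIZED. PLEASE RESTATE."

def imitation_game(questions):
    # The judge sees only anonymous channels A and B, in random order.
    players = [("human", human), ("machine", machine)]
    random.shuffle(players)
    for q in questions:
        for label, (_, respond) in zip("AB", players):
            print(f"Judge: {q}\n  {label}: {respond(q)}")
    guess = input("Which channel is the machine, A or B? ")
    truth = "A" if players[0][0] == "machine" else "B"
    print("Correct!" if guess.strip().upper() == truth else "Fooled.")

imitation_game(["Do you ever dream?", "What is 7 times 8?"])
```

The machine passes not by being verifiably intelligent, but by making the judge’s guess no better than chance.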

who is to judge that one reality is truer than the other?

It’s interesting to note that Turing makes a case for a machine’s ability to think in the conclusion to his paper: “One could say that a man can ‘inject’ an idea into the machine, and that it will respond to a certain extent and then drop into quiescence, like a piano string struck by a hammer.” And this is exactly how Cobb and his team succeed in Inception, infiltrating a human mind with their idea rather than attacking a computer-generated psyche. Turing also uses a “skin-of-an-onion” analogy in his exploration of the mind’s architecture: “In considering the functions of the mind or the brain we find certain operations which we can explain in purely mechanical terms. This we say does not correspond to the real mind: it is a sort of skin which we must strip off if we are to find the real mind. But then in what remains we find a further skin to be stripped off, and so on. Proceeding in this way do we ever come to the ‘real’ mind, or do we eventually come to the skin which has nothing in it?”

This is a particularly interesting concept to grapple with, as Turing is describing the mind recursively, with infinite layers of skin concealing some true reality within its depths (or nothing at all; we might just be hollow). Consider the same idea from the Inception perspective: the complexity that comes with delving into deeper dream layers creates greater dangers and obstacles for Cobb and his team. The greatest danger of all is death within a dream sequence, which plunges the dreamer into a state of limbo where they are forced to create a new world from the depths of their subconscious, incapable of escape as they are submerged in a never-ending sleep. In this sense, limbo becomes a new reality, quite distinct from the individual’s original one. But who is to judge that one reality is truer than the other? It might not come as a surprise to learn that Nolan was particularly inspired by The Matrix (1999) when he concocted his dream worlds for Inception.


It could be that the robots of the future have a recursively generated psyche, a coded ego, and a fitted soul developed through artificial intelligence and machine learning techniques. Perhaps they will walk among humans as equals, capable of emotion and love. More importantly, though, we should ask whether humans understand the mind well enough to build a mechanical replication that imitates the firing of neurons and the intricacies of human neural networks. Neuroscientists and computer scientists are already collaborating in attempts to discover how the mind truly works, and then to imitate those brain mechanisms in code. Today’s machine learning programs typically rely on large databases of stored information to help the computer find patterns or develop deep-learning capabilities, quite unlike the way humans learn and come to understand the world as they grow and develop.
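A toy illustration of that data-driven style of learning, as a minimal Python sketch rather than any particular production system: a nearest-neighbor classifier “learns” only by storing labeled examples and matching each new input against its memory of them.

```python
import math

# The stored "database" of labeled examples: (feature vector, label).
examples = [
    ((1.0, 1.0), "cat"),
    ((1.2, 0.9), "cat"),
    ((5.0, 5.5), "dog"),
    ((5.2, 4.8), "dog"),
]

def classify(point):
    """Label a new point by its single nearest stored example."""
    _, label = min(
        ((math.dist(point, features), label) for features, label in examples),
        key=lambda pair: pair[0],
    )
    return label

print(classify((1.1, 1.0)))   # cat
print(classify((4.9, 5.1)))   # dog
```

Everything the classifier will ever “know” sits in that table; a child, by contrast, learns cats from a handful of encounters and no feature vectors at all.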

While machine learning programs might be able to advise us on future stock options or track sex traffickers using public data, they are incapable of anything outside the realm of their initial inputs. And since nobody yet knows completely how the mind works, it would be impossible to teach a computer to think like a human using present-day technologies. There are still unknown variables, missing knowledge, and misunderstood data. Until that changes, it seems robots are doomed to play a looped version of the imitation game with humans and nothing more, trapped by their own incompleteness.

Versions is brought to you by Nod Labs,
Precision wireless controllers for your virtual, augmented and actual reality.