A Mind of One’s Own

Artificial intelligence is one of the hardest ideas to incorporate into a videogame. People on a real battlefield tend to hide rather than sprint, and it’s unlikely that sword-wielding demons would take turns attacking. From the earliest days, AI has been more of a command hierarchy than intelligence. MyCyberTwin is a company trying to change our understanding of AI. It’s created a web-based application that lets a person create an AI imprint of their own personality. The company has already worked with NASA, the National Australia Bank, and Second Life, and supported the Anarchy Online beta. MyCyberTwin’s John Zakos and his business partner Liesl Capper-Beilby are hoping to bring their moldable AI clay to everything from massively multiplayer games to microwaves. I spoke with Zakos about how.

How do you break down personality into something that can work with artificial intelligence?

We basically observe psychology principles about how thought manifests, how people communicate. Then we bring in classic AI techniques like natural language processing, knowledge representation, and fuzzy search algorithms to implement and model those paradigms. What we set out to achieve was to make a CyberTwin, or a chat robot, that was easy for anyone to create. We wanted to be able to capture somebody’s personality.

So you take a questionnaire online, select what you think your personality style is, and give it some examples of content you’d like it to use. It is all software-based. We wanted to make it accessible to normal users worldwide.
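
MyCyberTwin hasn’t published its internals, so the sketch below is hypothetical: it follows the flow Zakos describes, with a questionnaire, a chosen personality style, and example content feeding a single profile, and Python’s standard difflib standing in for the fuzzy search he mentions.

    import difflib
    from dataclasses import dataclass, field

    @dataclass
    class CyberTwinProfile:
        # All names here are assumptions, not MyCyberTwin's real API.
        owner: str
        personality_style: str                # e.g. "warm", "sarcastic"
        questionnaire: dict[str, str]         # question -> the owner's answer
        example_content: list[str] = field(default_factory=list)

        def reply(self, user_message: str) -> str:
            """Return the stored line of content closest to the user's message."""
            matches = difflib.get_close_matches(
                user_message, self.example_content, n=1, cutoff=0.3
            )
            if matches:
                return matches[0]
            # Fall back to a style-flavored prompt when nothing matches.
            return f"({self.personality_style}) Tell me more about that."

    twin = CyberTwinProfile(
        owner="Alex",
        personality_style="warm",
        questionnaire={"Favorite game?": "Anarchy Online"},
        example_content=[
            "I spend way too much time in Anarchy Online.",
            "I grew up in Brisbane and still miss the weather.",
        ],
    )
    print(twin.reply("Do you spend much time in Anarchy Online?"))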

How many different uses do you think the technology will have?

The central concept is to have one CyberTwin, one digital person, one virtual agent that can represent the person and be deployed into different environments. So the CyberTwin can be available in MSN, on a web page, in a virtual world, or in a mobile phone application. All those different environments are simply contact points for the person to talk to the CyberTwin. The actual CyberTwin is the same essential brain processing the conversation behind each of them. We’ve already run a research project where we tested CyberTwin in an MMORPG.
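
Zakos doesn’t describe the implementation, but the one-brain, many-contact-points design he outlines resembles a simple adapter pattern. A minimal sketch, assuming hypothetical class and method names:

    class CyberTwinBrain:
        """The single conversational core, shared by every channel."""
        def respond(self, user_id: str, message: str) -> str:
            # The real system would run the personality model and NLP here.
            return f"[to {user_id}] You said: {message}"

    class ContactPoint:
        """Thin adapter: each channel forwards to the same brain."""
        def __init__(self, channel: str, brain: CyberTwinBrain):
            self.channel = channel
            self.brain = brain

        def deliver(self, user_id: str, message: str) -> str:
            reply = self.brain.respond(user_id, message)
            return f"({self.channel}) {reply}"

    brain = CyberTwinBrain()                      # one brain...
    channels = [ContactPoint(name, brain)         # ...many contact points
                for name in ("msn", "web", "virtual_world", "mobile")]
    for point in channels:
        print(point.deliver("alex", "hello"))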

We’ve already seen branching dialogue trees tied to pretty nuanced facial animations in videogames. It seems like a reactive AI system that helps all those moving parts work a little more naturally could be powerful.

We knew that people were working on the actual representation of emotional states and expressions, so we thought we’d focus on boosting the capabilities from a personality and intelligence standpoint. We can also fully control the emotional state. We can say the character is now happy to this degree, or angry to this degree.

We focus on the core state. What is the personality? What should we say at this point in time? How should we come across? What questions should the CyberTwin ask? How proactive or reactive should the CyberTwin be, according to the state of the game? We control all that and pass the messages back [to the user], including the content and personality, along with the state of the CyberTwin and the emotional position it may have.
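
As a rough illustration of what passing a message back with its emotional state might look like, here is a hypothetical response payload; the field names and the 0-to-1 emotion scale are assumptions, not MyCyberTwin’s actual protocol.

    import json

    def build_response(reply_text: str, happiness: float, anger: float,
                       proactive: bool) -> str:
        """Bundle the reply with the CyberTwin's current emotional state."""
        payload = {
            "content": reply_text,
            "emotional_state": {
                # Clamp to [0, 1]: "happy to this degree, angry to this degree."
                "happiness": max(0.0, min(1.0, happiness)),
                "anger": max(0.0, min(1.0, anger)),
            },
            "proactive": proactive,  # should the twin drive the conversation?
        }
        return json.dumps(payload)

    # A game client could parse this and set facial-animation weights from it.
    print(build_response("Glad you made it back!", happiness=0.8,
                         anger=0.0, proactive=True))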

How have you approached the ethics involved in mimicking human intelligence and emotion?

Liesl has written a paper about ethics that’s available on our website. Her basic principle is that when you build a robot, you should build it to be as humanlike as possible, not just as friendly as possible. If you tell it to “Fuck off,” you shouldn’t program the robot to say, “Ah, I’m sorry,” or “Thank you, master.” We shouldn’t build robots that are just going to be idealized personalities of people who don’t really exist. That would condition a kind of behavior in humans that we don’t think is ethically correct or morally responsible. We want to build robots that are as humanlike as possible, not just in conversation, but in terms of ethics and morality in the way they interact with somebody.

Doesn’t that skirt the issue in some ways? I can very easily see scenarios where people could use something like this to covertly manipulate another person. Like, maybe OKCupid could have a bunch of AI chatbots that flirt with users and make them feel more engaged and drawn into the network, all under the guise of being real people.

Our position is that you should present [the CyberTwin] as a virtual agent, an artificial character. They’re presented that way to make it easier for the person to talk to, and it also shows respect to the real and virtual worlds. We’ve had clients who’ve wanted to deploy a CyberTwin and present it as a human: like, click here to talk to our “assistant.” It wasn’t really presented as a virtual agent at all, and we found that a percentage of the audience was really upset; they believed they were talking to a human, then realized a day later that it was a virtual agent. They felt psychologically duped.

What is the next big problem that you’re working on with the technology? What’s the next big challenge for AI?

There are two parts to the answer. The first challenge is making these CyberTwins so prevalent in so many different environments that they become part of everyday life. The second is to make CyberTwins not just conversational agents, but agents that can make decisions. They can look at something and buy it on your behalf, or do an analysis of something and give you the results. It’s going beyond conversation to real decision-making.


Illustration by Daniel Purvis