Timoni West

Director of Augmented, Virtual, and Mixed Reality at Unity Labs

July 30, 2019

Even as computers get smaller, it’s hard to imagine a future in which they aren’t made of metal and plastic. Beyond that, can you conceive of a computer without a screen? That’s what the future of spatial computing looks like. As the director of augmented, virtual, and mixed reality in Unity Labs, Timoni West is the ideal person to ask about a future of computers without screens. 

She spends her time thinking about how she can enable developers, artists, and programmers today to build for these experiences in the future, allowing you to interact with the computer in any way you want. Years from now, you might not even have a device to carry around—you could be wearing your computer, according to West. 

Want a peek into what she makes? She open-sourced EditorVR, a VR extension for Unity, available on GitHub for you to download and customize. We spoke to West about digital art, the democratization of design tools, and how Iron Man makes explaining spatial computing much easier.


How did you end up in this particular field? 

I started off making ASCII art on my parents’ old Brother word processor. I grew up on a farm and we didn’t have computers for a little while. Then I moved into making a lot of icons on my old Windows 95 computer.

After I graduated from college, I basically went straight into web design and development. I did that for a very long time and moved over into mobile when smartphones started becoming more popular.

A few years ago I was working in consumer and social digital product design. I decided that I really wanted to work on making creation tools and start digging into the harder problems around how we enable people to make good experiences. 

Then this opportunity came up at Unity in 2015. That was just before all of the more robust VR dev kits were about to hit the market. We knew that it was gonna be a huge step towards the future. I joined and immediately started trying to figure out how we can make use of the new medium for 3D creation.

“If we wanna take spatial computing seriously, if we really think that this is the future of how people will be interacting with computers, we need better inputs so that people can make better choices.”

How do you explain spatial computing to, say, your family back in Nebraska? What are the entry points? How do you describe what the field is and how different it is from other forms of computing?

Well, it can be hard when I get into the nitty-gritty. But it’s really easy to explain spatial computing because so many movies have it. Everybody gets it. Imagine a world where I have an Iron Man piece of armor in front of me and it’s all digital and I can take it apart and people are like, “Yeah, yeah. I’ve seen 300 movies that all have this.” 

That part is not so hard. Then it gets fun because everyone’s got this sort of cultural point that you can always start with. You can say: Okay, so imagine you are Tony Stark, and in your office you’ve got virtual windows everywhere. Now imagine that window is your Spotify player. How do you want it to look? Do you want it to be against the wall? Do you want it to be floating in space? Do you want it to follow you around? Do you want it to go in your pocket? If you pull up from behind your ear, do you want the Spotify application to show up?

I think people get that pretty quickly. Then they start thinking, “If I could do anything that I wanted with a computer and it doesn’t require any hardware, what would I do?” That’s where it gets hairy, and that’s kind of the job of me, my team, and everyone who works in the field now.

Is there a north star for you? Is the goal to someday be able to create something that’s so simple that it knows your every thought and desire?

Yes, absolutely. I grew up in a really cool era of computing where computers were so ubiquitous that I was lucky enough to have one, even on my farm in Nebraska. They were cheap enough that even my parents could afford to get one. But they were also really kind of raw. You could explore, poke around, and really get into the inner workings of the computer. 

Nowadays, computers are much more robust, but they’re also a little bit more of a black box. What I would love to do is build up tooling that gives a lot of that control and ability to see behind the scenes back to the users. 

I also think that everyone has a dream world. Everyone has a thing that they’d like to make if only they had the ability to express it or do it. So I also want to make sure that we have tooling that allows people to build the worlds of their dreams.

I see it all the time now: There’s Harry Potter the movies, Harry Potter the books, Harry Potter the PotterCon. People are getting invested in these vast worlds that span many different types of media and I think that there’s something to that, that people really want to be in these places. With Unity, you can build the world you want. I don’t think a lot of people know that. My goal is to democratize that ability so that everyone can build and be in these worlds.

Game designers have been using Unity for quite some time, and now, as you mentioned, other types of experiences are being built with Unity, including feature films.

 

Is there something special about game design’s approach to a tool like Unity: something about the way that game makers think that differs from other creative fields, and that has perhaps encouraged Unity to grow in a way it wouldn’t have if it had taken root with a different creative profession?

Unity, like every computer program, inherently understands things in a precise way. If you’re creating a game, you’re creating a world where every object has a known position. Whatever game engine you’re using will know exactly where everything is. In a sense, it’s a constrained environment.

Game designers create robust world systems that make sense. They are the ones who decide what happens when you open a door, why you open the door, whether you can open the door, or whether the door opens when you’re in proximity to it. 

Fundamentally, they come up with a system that has to be consistent enough that the player can learn and then master the game. That makes sense for Unity and other game engines, because inherently, the programs value precision. Computers are precise.

Now people are making experiences for the real world, which is incredibly imprecise — or at least it doesn’t update its values all the time. Let’s say you want a ghost to walk into a room and sit on a chair. This means that your device needs to know what a door is and what a chair is. There has to be a fallback if there are no chairs in the room. 

Lastly, the ghost needs to be able to find the chair no matter where it is. The chair could be moved by an inch, it could be moved by two feet, it could be all the way across the room, and we still need to have that experience play out the same way.
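The fallback logic West describes could be sketched roughly like this. This is a hypothetical illustration, not Unity or any real AR API: in a real app, the `SceneObject` list would come from the platform’s scene-understanding layer, and `find_seat` re-queries the latest detections each time so the experience still works if the chair has moved or is missing.

```python
from dataclasses import dataclass

@dataclass
class SceneObject:
    label: str        # semantic label from scene understanding, e.g. "chair"
    position: tuple   # world-space (x, y, z) coordinates

def dist(a, b):
    """Euclidean distance between two points."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def find_seat(scene, ghost_position):
    """Pick the nearest detected chair for the ghost to sit on.

    Fallback: if no chair is detected in the room, the ghost simply
    stays where it is, so the experience still plays out.
    """
    chairs = [obj for obj in scene if obj.label == "chair"]
    if not chairs:
        return ghost_position
    # Always query the latest detections rather than caching a fixed
    # position -- the chair may have moved an inch or across the room.
    nearest = min(chairs, key=lambda obj: dist(obj.position, ghost_position))
    return nearest.position
```

The key design point is that the placement is resolved against live semantic labels at runtime, with an explicit fallback branch, rather than against fixed coordinates the way a purely virtual game world would allow.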

These are all the things that we have to consider since computers are bad with imprecision. They’re not that good at the fuzzy vagaries of the real world.


That’s really interesting. We spoke to Paola Antonelli, who’s the head of R&D at MoMA and has been its senior curator of architecture and design. She thinks about these things in terms of interaction design and behavior, and she’s used this analogy: “When you design a chair, you might be designing for a handful of considerations: height, comfort, location. But when you’re designing around intent or behavior, that’s where things become incredibly complex.”

I imagine it’s hard because a user’s benchmark is the real world and they have expectations about how things would work in a virtual space. It just seems so profoundly difficult for an interaction designer working in 3D space to think through all the potential things that could fail, especially since the user would notice them immediately.

It’s interesting that you mentioned a museum. We think about that as a good example; it’s kind of a midway point, because it is a real-world environment, and therefore it’ll have some amount of fuzziness. An IKEA store is another great example, where there’s a lot of autonomy for the user once they’re in the space to move within certain constraints. 

But you really have already created the entire flow. You know where people can be. So you have to dictate what the user can do within these constrained spaces. It’s not the same as a park, where you can go anywhere you want. Museums have a flow, stores often have a flow, and certain types of theatrical experiences also have a very architectural flow for the space they fill.

That’s much more analogous to game design. I think of game designers as making their own world systems. I think of product designers, on the flip side, as designers who have to get their app’s system to work in the real world’s system.

 

You mentioned your background as an artist. What role do you think artists will play in immersive reality going forward? 

I think it’s really the job of the artist in this medium to push people’s perception of what is possible. So, for example, I think street art is probably the best analogy we have today: people take a new environment, think about what it’s like, and then make it better. They put their stamp on it, or make a statement.

But when people think about it, they go back to Tony Stark. It’s just a bunch of UIs. There’s no art there. Nothing interesting. He doesn’t have a little fake digital bonsai tree or something. And that, I think, will be the job of the artist. Let’s be real here: at least in America, we don’t tend to do architecture very well. We have tons of boring offices, and I think it’s the artists’ responsibility, and hopefully their great opportunity, to take these spaces and make them magical. Make them art. Make the world a better place. We didn’t do it physically; we can do it digitally.

I would love for them to just take that and run with it.