Lana Z Porter: Interaction revolutionizes connection
Interview by Alex Westfall | Photography by David Evan McDowell

Lana Z Porter is always thinking about interaction. As the Creative Director of Research & Development (R&D) at The New York Times, she helps oversee the experimentation with and articulation of emerging technology in service of journalism. From recreating Manhattan landmarks inside Unreal Engine to training machines to answer snap questions about the COVID-19 vaccine, the nucleus of R&D’s work lies in strengthening the connection between readers and stories. For Lana, it’s “one of the most meaningful parts of the work we’re doing.”

Here, we speak with Lana about her prismatic background spanning Theater, Anthropology, and Design, her fascination with the interplay of memory and imagination, and how her thinking about interaction has evolved to foreground human connection.

What was your environment growing up like? Do you remember your first creative impulse? 

I grew up immersed in the academic world of Cambridge, Massachusetts. My dad’s an academic, and my mom is involved in the literary world there. From a young age, the most elemental childhood playthings became a medium for me. I was occupied for hours and hours building landscapes out of blocks. I’m also told that I gathered building materials from the area outside my house.

Each school project became an opportunity to do something creative. I made board games, created a scale model of the Battle of Gettysburg, and built a model of a bridge. Any opportunity I had to make something tangible, I took.

My parents were incredibly encouraging of my artistic side. My mom went back to school to get a Master’s in children’s literature, and the books we read were a huge influence and inspiration. I also remember going to the Museum of Fine Arts with my parents and being blown away by Georgia O’Keeffe’s work. I was mesmerized by artistic expression in general.

You studied Anthropology, Science and Technology Studies, and Theater. Together, these fields seem to feed into the critical world-building that one does as a Creative Director. 

In the moment, none of these things felt connected to each other. Theater is a space of world-building and suspension of disbelief; it’s a particular skillset and way of thinking sequestered in one part of my brain. The other part was more academic—a practice of getting into the literature, doing research.

I went to college with the idea that I would study Archeology. Past worlds were fascinating to me. The Introductory Archeology course wasn’t offered in the fall, so I took Cultural Anthropology instead. I was totally hooked. Anthropology was a framework for understanding how people make sense of the world around them. It’s a practice built on careful observation, on describing and inscribing what you see and experience as a participant-observer. It’s also a non-judgmental practice, which spoke to me.

Anthropology, and certainly Archeology, is a past-facing discipline. I became interested in orienting this practice towards things that are happening in the present or even in the future. Many of the things that interested me—design, emerging science, and technology—were forward-looking. How do you apply that lens of the past and present to disciplines oriented towards the future?

Leaving school, I knew I wanted to do something creative, but I didn’t know what skills I could apply besides research and writing. I had been doing theater throughout school, especially set construction and design, but it felt like a separate part of me. I ended up working as a writer at a creative production company. Working there alongside folks with a wide range of capabilities—from programming to product design—I realized writing is a medium. But I also learned that there are so many other ways to express ideas. I learned that I wanted to be an active participant in the framing of these projects and ideas.

You received a graduate degree in Design Interactions. What exactly did this program entail?

When I was applying to grad schools, I heard about this program at the Royal College of Art from several people who said, “This program will blow your mind.” Its guiding philosophy is about how to use design to create tangible encounters with possible futures.

The process and methodology that go into that are radically different from an anthropological approach, which is more past/present-oriented. Anthropology is about taking something that is already there and making sense of it in some way. But this program asked us to imagine something entirely new—to imagine that anything is possible, suspend our disbelief, and envision what future scenarios we might expect to see, or might want to see.

As someone who had been super academic, I found it challenging. I watched my classmates look at these briefs that were like, “Use a microbe to design a new experience.” I spent a lot of time grappling with why it seemed so easy for my classmates, many of whom had backgrounds in art and design, to wield their imagination so freely. I felt stuck, so naturally, I did what I learned to do in anthropology, which was to ask questions and try to understand. What is the nature of imagination? How does imagination work? In understanding that, maybe I could help myself get unstuck.

Most of my time at the RCA was spent grappling with, “What is imagination? How does it work? Why is it so important?” I talked to academics across disciplines, practitioners, cognitive neuroscientists, psychologists, computer scientists, trying to unpack the relationship between imagination and memory. It turns out that that relationship became the crux of my research.

There is a lot of rigor to the methodology of anthropology, and the design world felt a lot more radical and open. Design is a discipline built on imagination and speculation. That felt overwhelming, but at the same time, incredibly exciting. Those two things have combined and intertwined throughout my career.

Lana testing sunset lighting effects at her desk in the Design Interactions studio at the Royal College of Art. Image by Lana Z Porter.

A risograph print publication containing research findings, short stories, and outcomes from Lana's MA thesis, "Ethnographies of the Imagination," which explored the relationship between memory and imagination by tracing how the beach came to represent the concept of paradise in the western imagination. Image by Marcel Kaczmarek.

In your personal practice, how does your creative process begin? 

The start of a project is a head-down, research-driven process. At first, I cast a super wide net, starting from some hunch or brief. I do a lot of research: collect as many references as possible, let myself be drawn down different rabbit holes, and do a lot of note-taking and highlighting.

My creative practice is driven by this process of connecting dots, pulling salient pieces of information from potentially disparate places, synthesizing them, then thinking about why these things feel connected to me. The RCA taught me to start making as fast as possible, which I’ve struggled with. It’s easy for me to get pulled into the research and the planning part of a project and kick the can down the road in terms of the idea’s actual expression.

Your research interest in this compounding of the past, the present, and the future into one space feels applicable to how time unfolds in many immersive and emerging mediums. Do you bring this temporal thinking into your work at The New York Times?

When you apply XR or 3D modeling to a journalism context, it’s a different set of objectives. If we’re trying to realistically render a space, we need to capture the moment in time when the reporter was there. There’s a sense of ground truth, of how that space was at that time, that needs to carry over into the storytelling.

At The Times, we are operating in a narrower band from a temporal perspective because it is so much about accurately representing a moment in time. There’s less of an ability to play around. That is the hat that I’m wearing now, and a lot of the work we’re doing is to try to explore new ways for journalists to bring readers along with them, to bring them closer to the story. That type of immersive experience can do that effectively, but there’s still the charge of journalistic integrity that we have to apply as well.

Generally, how does a project come to fruition within the R&D division? How does the team work together to see a project through?

The R&D team sits within the tech organization of The Times. It’s a multifaceted team; we have engineers and creative technologists who work on projects alongside producers and program managers. We have a strategy arm that looks at the potential use cases for various technologies that we think might be important in the near future, helping to scope out what those experiments can be. I have a slightly hybrid role as well. 

R&D focuses on solving existing problems and developing new capabilities in two categories. The first is reporter tools: technologies that allow reporters to gather news more efficiently or accurately. The second is news delivery tools: new ways of harnessing technology to tell stories.

Our team explores technologies that might be applicable in 18 to 24 months, so a relatively short time horizon. We are constantly experimenting and iterating and thinking about, “What are the tools and technologies that are going to add value for the newsroom that we can apply in service of journalism?” 

My role is twofold. On the one hand, it’s to help articulate the work that we do, both to Times readers and journalists within The Times. A lot of the work that I do is storytelling—using the techniques we’re developing to show how technological breakthroughs happen on our team and to demonstrate the potential of that technology. The other side of my role is designing the ways that people actually will interact with these tools, so a more UX/UI set of responsibilities around things like browser-based 3D experiences, or conversational UX for news Q&A. It’s both articulation and design.

We don’t want to be academic about the way that we describe our work. It’s applied work, but it’s not remedial. We’re trying to find that sweet spot for how best to express the work in a way that doesn’t assume too much but is also accessible.

There are certainly times when we get excited about the possibility of a technology, and we’ll try a couple of different experiments. Maybe it doesn’t work, or it doesn’t reach the quality benchmark that we think is required for something to become a viable newsroom tool. Failure is a natural part of our process.

We are inspired by the ideas that come from the newsroom. The story should come first. For us, that means that the tools we’re developing need to be good enough to be seamlessly integrated into newsroom processes with minimal friction. Our work should never get in the way.

I can imagine that it’s an incredibly time-bound, fast-paced production schedule. What are some current projects you are excited about?

R&D operates by running experiments. There’s a whole body of work we’ve been doing around 3D storytelling. We’ve been building a library of components that allow us to tell 3D stories in a much more efficient way, keeping in mind device and bandwidth constraints. These kinds of stories are typically not accessible, so this library gives us advanced building blocks for quickly assembling them.

In the 3D vein, we’ve been looking at how gaming engines like Unreal might unlock new capabilities for recreating and experiencing journalistic scenes and information. Interactive 3D content is becoming more and more accessible. Unreal offers a lot of exciting possibilities for 3D storytelling—whether it’s creating super precise, realistic simulations that put readers in the shoes of a reporter at an exact moment in time, visualizing data in more dynamic ways, or even enabling multiplayer experiences around big news events like elections or the Olympics.

This is a whole new world for journalistic storytelling, and with that come a lot of unknowns and ethical questions. We want to explore these questions now, while the capability is somewhat nascent, so that when limiting factors like computational power become less limiting, we’re ready.

In our experiments with Unreal, we’ve been testing what we can achieve in terms of realism with effects like dynamic lighting, as well as what we can achieve in terms of interaction with things like navigation and points of interest. We want to develop repeatable, reusable UX components so that it’s easier and quicker to build these kinds of simulations in the future.
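
To make the idea of a repeatable point-of-interest component a bit more concrete, here is a minimal sketch in Python. The schema, field names, and file path are hypothetical illustrations for this example only, not R&D’s actual components or Unreal workflow.

```python
from dataclasses import dataclass, asdict
import json


@dataclass
class PointOfInterest:
    """One navigable 'point of interest' in a 3D news scene (illustrative schema)."""
    poi_id: str
    label: str
    position: tuple[float, float, float]       # marker location in scene coordinates
    camera_target: tuple[float, float, float]  # where the camera looks when selected
    caption: str                               # short editorial text shown to the reader


# A hypothetical POI for a scanned storefront scene.
pois = [
    PointOfInterest(
        poi_id="entrance",
        label="Bookstore entrance",
        position=(12.0, 0.0, -3.5),
        camera_target=(12.0, 1.6, -5.0),
        caption="Start of the walkthrough.",
    ),
]

# Serializing POIs to a shared format keeps the interaction pattern reusable
# across scenes, whether they load in a game engine or a browser-based viewer.
with open("pois.json", "w") as f:
    json.dump([asdict(p) for p in pois], f, indent=2)
```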

R&D uses Unreal Engine to create detailed interactive simulations of journalistic scenes in 3D. The team used a 3D scan of the Strand Bookstore, captured in December, to test effects like ray-traced shadows, global illumination, reflections, and ambient occlusion, and to establish repeatable interaction patterns around navigation and points of interest.

Left, building a pipeline to bring photogrammetry models from Reality Capture into Unreal and optimize R&D's workflow in Houdini. The team automated optimization in Houdini to retain as much detail from the raw model as possible, including geometry silhouettes and texture resolution, while reducing the original triangle count by 98%. Right, creating a POI (‘point of interest’) in the 3D scene using Unreal.

Floor collision in Unreal. The green area represents what is walkable/navigable by the user in the scene. All images in this cluster by R&D in collaboration with Composition X.

One of the challenges of Unreal (and large-scale 3D environments in general) has been the size and computational load required to make these experiences accessible to Times readers. Technologies like pixel streaming might make it possible to render simulations in the browser, but we still have a ways to go in terms of performance and cost. But we’ve been able to do a lot of the groundwork to understand the optimal settings and workflows to maximize model quality and limit file size and poly count.
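
As a rough illustration of the kind of optimization described in the captions above (which the team does inside Houdini), here is a minimal sketch in Python using the open-source Open3D library to decimate a photogrammetry mesh to roughly 2% of its original triangle count. The file names are placeholders, and this is not the team’s actual pipeline.

```python
import open3d as o3d

# Load a raw photogrammetry mesh exported from Reality Capture (path is a placeholder).
mesh = o3d.io.read_triangle_mesh("raw_scan.obj")
mesh.compute_vertex_normals()

original_tris = len(mesh.triangles)

# Aim for roughly a 98% reduction in triangle count, in the spirit of the
# optimization described in the caption above.
target_tris = max(1, int(original_tris * 0.02))

# Quadric edge-collapse decimation removes triangles while trying to preserve
# the overall silhouette of the geometry.
simplified = mesh.simplify_quadric_decimation(target_number_of_triangles=target_tris)

print(f"{original_tris} -> {len(simplified.triangles)} triangles")
o3d.io.write_triangle_mesh("optimized_scan.obj", simplified)
```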

We do a lot of work with natural language processing, looking at how we can use ML (machine learning) to help connect readers to reporters. In August, we launched a Q&A experience on the Coronavirus FAQ page that’s powered by a tool we developed called Switchboard, which allows us to cluster incoming reader questions and match them with reporter-written answers. We have been iterating on that technology for the last six months.
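
Switchboard is an internal Times tool, so the sketch below is not its actual implementation; it only illustrates one common way to cluster similar questions and match them to pre-written answers using sentence embeddings. The model name, threshold, and sample data are assumptions for the example, written in Python with the open-source sentence-transformers and scikit-learn libraries.

```python
from sentence_transformers import SentenceTransformer
from sklearn.cluster import AgglomerativeClustering
from sklearn.metrics.pairwise import cosine_similarity

model = SentenceTransformer("all-MiniLM-L6-v2")  # illustrative embedding model

# Hypothetical incoming reader questions and reporter-written FAQ entries.
reader_questions = [
    "Can I get the vaccine if I'm pregnant?",
    "Is the vaccine safe during pregnancy?",
    "How long does immunity last after the second dose?",
]
faq_entries = {
    "Vaccination during pregnancy": "Reporter-written answer about pregnancy...",
    "How long immunity lasts": "Reporter-written answer about immunity...",
}

# Embed the questions and group near-duplicates into clusters.
q_vecs = model.encode(reader_questions)
clusters = AgglomerativeClustering(
    n_clusters=None,
    distance_threshold=0.3,  # hand-tuned here; purely illustrative
    metric="cosine",         # scikit-learn >= 1.2 (older versions used `affinity`)
    linkage="average",
).fit_predict(q_vecs)

# Match each question to the closest reporter-written entry by cosine similarity.
entry_titles = list(faq_entries)
e_vecs = model.encode(entry_titles)
scores = cosine_similarity(q_vecs, e_vecs)

for question, cluster_id, row in zip(reader_questions, clusters, scores):
    best_match = entry_titles[row.argmax()]
    print(f"[cluster {cluster_id}] {question!r} -> {best_match}")
```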

I am constantly in awe of the work that we’re doing. For someone who struggled to use my imagination in the space of total possibility, the breadth of work that’s happening on our team is the best evidence I have that anything is possible. The tools that we’re developing, the technologies that we’re becoming fluent in, have so much potential.

The Coronavirus FAQ in the Editor’s Picks section on The New York Times homepage. Still captured by Jimmy Chion.
A visualization of reader questions mapped on a 3D graph. This helps R&D understand how the algorithm is grouping related questions.
The Q&A module on the Coronavirus FAQ page is powered by Switchboard, a tool created by R&D to group incoming reader questions and match them with reporter-written answers in real-time.

You mentioned the importance of demonstrating the creative potential of a particular technology to journalists. Have there been instances where a journalist has used one of the tools you’ve developed in a surprising way? 

It’s a constant feedback loop. We’ve done a lot of work with photogrammetry—capturing large-scale environments and then allowing readers to experience them in the browser. We released a piece on our website to demonstrate how we think photogrammetry could work; it explained the process of streaming these models on the web. From there, we saw some amazing work come out of the newsroom.

Michael Kimmelman, who writes about architecture and the city, did two wonderful pieces—one on Jackson Heights, the other on Doyers Street in Chinatown. The Doyers Street model integrated archival imagery to allow readers to see the before and after. To your earlier question, that was a nice way of cementing multiple temporalities in a single experience. 

We are excited when we develop a capability that we feel is ready for newsroom use, and it goes out into the wild, and new things happen with it. The archival imagery is a totally new use case that we had not initially designed for. But it works well. That kind of feedback and iteration then informs our thinking about how to make these interactions and experiences more robust and more seamless.

Architecture critic Michael Kimmelman’s virtual walking tour of Chinatown, “Chinatown, Resilient and Proud,” published December 2, 2020. R&D used photogrammetry to convert over 4,000 photographs of Doyers Street in Chinatown into a 3D model. In this frame, an archival photograph of the street from 1974 is aligned in 3D space. Archival photo by Michael Evans/The New York Times. Photogrammetry by Sukanya Aneja, Mint Boonyapanachoti, Jon Cohrs, Niko Koppel, Guilherme Rambelli and Benjamin Wilhelm. Lidar capture by Dallas Bennett. Designed by Umi Syam. Edited by Sia Michel. Produced by Alicia DeSantis, Jolie Ruben and Josephine Sedgwick.
A point cloud of the Doyers Street model in Reality Capture. Image by R&D Creative Technologist Mint Boonyapanachoti.
Here, Boonyapanachoti adjusts the camera path through the scene, in which archival photos are placed in 3D space. Archival photo by Irving Browning/The New York Historical Society. Photogrammetry by Sukanya Aneja, Mint Boonyapanachoti, Jon Cohrs, Niko Koppel, Guilherme Rambelli and Benjamin Wilhelm.

Since you’ve been at The New York Times, how has your conception of interaction and interactivity shifted?

As is often the case in the sandbox that is school, I thought about interaction as an artistic expression with a lot of creative potential. In school, the utility of the work that you do is not usually at the forefront. At least it wasn’t for me. It was about exploration and about my own understanding of the world. The mission of The Times is to seek the truth and help people understand the world. Part of being in this role and being at this institution has been a process of realizing that now more than ever is when we need to help people understand.

For me, interaction has become a way to create novel experiences and new forms of connection. Understanding the world around us has never been more important. The problems that we face require more than explanation. I think we need to find new ways of connecting with ideas and connecting with each other. A huge benefit of R&D is the ability to work outside of the constraints of the day-to-day news cycle and think about how new forms of interaction can strengthen the connections between readers and the information that we’re presenting. Bringing readers closer to the story is, for me, one of the most meaningful parts of the work that we’re doing. Interaction design is integral to how we facilitate those experiences.  


Credits

Edited and condensed for clarity. Interview conducted in February 2021. Photography by David Evan McDowell.

Lana’s website

Follow Lana on Instagram

Follow Lana on Twitter