Rebecca Evanhoe
Rebecca Evanhoe practices, teaches, and writes about conversation design, a key UX practice that is taking on fresh importance in the age of chat-based AI applications.
Since the publication of her book Conversations with Things (co-authored with Diana Deibel) three years ago, the tech and media worlds have fundamentally transformed, but the conversation-design principles that she teaches remain as relevant as ever.
We talked about:
the conversation design and UX writing courses she teaches
reflections on the book she co-wrote several years ago, "Conversations with Things," and the changes in the conversation-design world since its publication
how the book's focus on principles has helped it stay relevant as the technology changes
a framework set out in their book that helps designers decide whether and how to ascribe personality to a chat agent
her identification as a UX designer
how she's incorporating LLMs into her course curricula
her take on how the terms "prompting" and "prompt engineering" misappropriate the word "prompt" and diverge from its traditional use in the conversation design field
the differences in the conversation designer role in the LLM world compared with NLP
the linguistic concept of "conversation repair" and how it manifests in "bot land"
how to adjust confidence level in conversation design
how intent classification in NLU works
her preference for humans and human conversation
the importance of including people with a humanities background in conversation design
the ongoing importance of humans in the content and conversation design process, and of their ability to think strategically about how to maximize the success of conversational technology
Rebecca's bio
Rebecca Evanhoe is an author, teacher, and conversation designer. With degrees in chemistry and fiction writing, she's passionate about how interdisciplinary thinking can combine arts, humanities, sciences, and tech. She teaches conversational UX design as a visiting assistant professor at Pratt Institute, and co-authored Conversations with Things: UX Design for Chat and Voice (Rosenfeld Media, 2021).
Connect with Rebecca online
Video
Here’s the video version of our conversation:
https://youtu.be/xJkB03uH8ek
Podcast intro transcript
This is the Content and AI podcast, episode number 19. We're all talking to computers a lot more these days - telling Alexa to set a timer, asking Midjourney to create an image for a party invitation, or prompting ChatGPT to draft an outline for a slide deck. Rebecca Evanhoe is an expert on the interaction design practices that guide these conversations. Three years ago, her book "Conversations with Things" set out a principles-based approach to conversation design that remains super-relevant in the age of large language models.
Interview transcript
Larry:
Hi everyone. Welcome to episode number 19 of the Content and AI podcast. I am really happy today to welcome to the show Rebecca Evanhoe. Rebecca is really well known in the conversation design world. She's a conversation designer. She's the co-author of the really excellent book Conversations with Things that came out a few years ago, and she teaches conversation design and other kinds of design work at Pratt Institute in New York. So welcome to the show, Rebecca. Tell the folks a little bit more about what you're up to these days.
Rebecca:
Yeah, hi Larry, it's nice to be back. Yeah, these days I am teaching, I think you said conversation design, and specifically this semester I'm teaching a class in UX writing, which I love because it doesn't matter what kind of writing I'm teaching, it's like a chance to think about language and celebrate how cool language is with my students. And yeah, I've been teaching, I am doing some work at a cool place that I won't get into here. But yeah, it's been a really interesting couple of years.
Larry:
Yeah, because we last talked right before your book came out, I think it was maybe a few months before the book came out. And since then, I mean, conversation design had been a thing. I had talked to Phillip Hunter and several other content designers before I had you and Diana on the show, but it seems like, I'm going to guess, that more has happened in the last four years than in the four years before you wrote the book. Is that accurate?
Rebecca:
I think that's definitely accurate. Yeah, our book came out in April of 2021, and I think that ChatGPT became publicly available in November of 2022. So our book has been amazingly well received, tons of enthusiasm. It really seems to be sticking around and people are finding it useful. But if you control F and search our book, there is not one mention of the term large language model. And I think there's only one mention of natural language generation.
Rebecca:
It's been interesting to look at our book through the lens of the fact that technology keeps changing. And I think, and other readers think as well, that it's based enough in principles that it really applies to any conversational technology, or at least that's the hope. And when I think back about the things that have happened in the last few years, when the book came out, I remember people kind of wanting us to put a couple things in the book that we didn't.
Rebecca:
People really wanted more information about how you build an Alexa Skill or a Google Action. Those were very visible at the time. People also wanted us to put a list of prototyping tools for conversations into the book. And we didn't, and I think things like that future-proofed it a little bit, because Alexa Skills and Google Actions... Like Google Actions aren't around anymore, Alexa Skills are very much de-emphasized. And a lot of the prototyping tools that we had a few years ago were acquired or were kind of sunset. So yeah, I think we made some lucky decisions to future-proof it. But certainly it doesn't have a mention of LLMs, which is-
Larry:
Well that's really... I got to say, this is super interesting because I remember from, we talked on the Content Strategy Insights podcast about this, that you and Diana both emphasized principles. And I don't know that you specifically stated that, but in retrospect it's like, yeah, that's way better than focusing on any specific technology or practice. Can you talk... I remember you covered those really well in the book. But is it possible to do a quick overview of some of the guiding principles? And maybe more to the point, how are they helping you through the arrival of LLMs and generative pre-trained transformers and all that stuff?
Rebecca:
Absolutely. I think that one of the concepts from the book that has become even more important today is the idea... And in our book we call it level of personification, and it's in the personality chapter. So I think a lot of people are thinking more about personality design, but also specifically how much of a character, how much of a mind the AI is sort of presenting itself as.
Rebecca:
So is it presenting itself as a fully realized character that's your friend and refers to itself as I, or is it behaving more like a machine? The example that I always give is, if you have a remote control where your voice is the input, it doesn't need to be named Sandy, and it doesn't need Thanksgiving to be its favorite holiday. It doesn't need to be a character or a mind. So thinking through that spectrum for any AI experience you're creating, I think, is really important. How much of a person should it present itself as? I think that becomes a lot more visible.
Rebecca:
And an example that I would give for the LLM world: if you talk to ChatGPT or Claude, those bots use I. And you can ask them a little bit about themselves, and they'll generally clarify, like, oh, I'm an AI so I don't have feelings, but I can describe feelings or talk to you about feelings, stuff like that. But then there are other LLMs that don't have any personification at all.
Rebecca:
So for example, Perplexity AI is a platform that is an LLM, and you can talk to it and it does all the LLM-ey stuff, meaning you could ask it to summarize, you can ask it to give you bulleted lists, you can ask it to imitate a turn-taking conversation with you, but it doesn't really present itself as a character at all. And I think those kinds of decisions are still very much ones that conversation designers should be involved in, because that level of personification really impacts user expectations, how they're going to behave toward it, and then how successful those interactions are going to be.
Larry:
Yeah, that's really interesting. How do you make that decision? Because I can picture making the wrong decision for good reasons. Oh, we like our customers, we want to be close to them, so we're going to act like their friend, where it would probably be more appropriate in a business setting to not be that way. Are there sort of guidelines around that, how you decide the kind of personality?
Rebecca:
Yeah, I mean in our book there's sort of a framework that walks through a lot of the facets of it. But generally I would say I think people over-personify these interactions. They think that having a character must be more interesting and fun, and they forget that people want their thing fixed, they want their task completed, they want their problem solved. And people also forget that lots of people are very happily solving these problems already through an app, through a website. People do like to solve their own problems, and conversations are not necessarily easier and more efficient unless they're designed to be so.
Rebecca:
So yeah, I think one of the things that we think through in the framework is first defining interaction goals that are independent of the conversation. So an example that I always use is, if you're making a voice bot that takes orders for a drive-through,