Description

Wouter Sligter

Figuring out how best to adopt new technology is difficult at any time for any organization. AI tech ratchets up this challenge to new heights.

Wouter Sligter helps companies understand the capabilities and limitations of LLMs and related technologies to create trustworthy experience-delivery platforms.

Transparency is a key element in implementing solutions that evoke and support the authentic human experiences that underlie these systems.

We talked about:

his background as a UX-focused designer and his shift to conversation and AI design
the growing number of business use cases that his work supports as well as the growing palette of tech tools that he has to work with
how he creates authentic and trustworthy experiences with LLMs and adjacent tech
the benefits of RAG (Retrieval Augmented Generation)
the growing number of platforms that support building AI experiences
the huge failure rate of conversational AI implementations, and how better design might improve the success rate
the importance of being genuinely customer-centric when implementing AI projects
how his background in language and music helps his AI design work, in particular the benefits of "being comfortable with the uncomfortable"
the importance of companies being transparent about their AI implementations
how localization manifests in the AI world
the growing acceptance of chatbots by consumers
his advice to jump into AI now, beginning with due diligence about how you'll implement it in your organization

Wouter's bio
Wouter Sligter is a Senior Conversation Designer and Generative AI Engineer. He has been a committed team lead and has consulted on a large number of Conversational AI implementations, most notably in Finance, Healthcare, and Logistics. He has an innovative mindset and a sharp sense for understanding user needs. Wouter always looks to improve the conversational user experience by following iterative design patterns and verifying outcomes through data analysis and user research. Both predictive NLU and generative LLMs and SLMs are part of Wouter's toolkit.

Wouter has a background in ESL and IELTS teaching at language centres and universities in Vietnam. He has developed a strong awareness of language and cultural peculiarities, with native fluency in English and Dutch and good conversational skills in Vietnamese, German, and French.

Connect with Wouter online

LinkedIn
YouandAI.global

Video
Here’s the video version of our conversation:

https://youtu.be/Ak0liSLR8_0

Podcast intro transcript
This is the Content and AI podcast, episode number 25. One of the main reasons that people have taken so quickly to AI tools like ChatGPT is their conversational nature. People like talking to each other - and to computers. In human conversation, we've developed skills and instincts that help us determine the trustworthiness of the person we're talking with. In tech-driven conversations, we often have reason to mistrust. Wouter Sligter helps companies build conversational systems that express the authentic humanity of their creators.
Interview transcript

Larry:
Hi everyone, welcome to episode number 25 of the Content and AI Podcast. I'm really delighted today to welcome to the show Wouter Sligter. I met him in Utrecht in the Netherlands, in the co-working space we both work out of. He is a conversational AI consultant. He does conversation design and he's a generative AI engineer. He has his own company called You and AI. Welcome, Wouter. Tell the folks a little bit more about what you're up to these days.

Wouter:
Hi Larry. Very good to be here. Thank you for inviting me. What am I up to? I think you mentioned the three things that I like doing most and that I do most often. I come from being a self-employed freelance designer, really. In 2018, Facebook started with their chatbots on Messenger. I jumped in, quickly caught on, and got a lot of clients worldwide, really building chatbots for them. At that time, I was mostly working on the content side with what-you-see-is-what-you-get kind of flow builders, and slowly got pulled into the tech side as well.

Wouter:
I worked for enterprise as a consultant for a few years in the Netherlands, and then I decided last year to go back to being a freelancer, and that eventually culminated in now having my own company, You and AI, with which I'm doing all kinds of outsourcing work from Vietnam. Of course, lately a lot of the work involves generative AI and LLMs, like RAG implementations and fine-tuning. In my bones I'm still a UXer, so I'm always looking to build stuff that actually works for people rather than only playing around with tech that no one uses. That's really my strong point, I think.

Larry:
I love the way you say that. I have many engineer friends, but they're really prone to just building stuff because they can. We're both designers, and I love human-centered design and human-driven design decision making. One of the things you said in there kind of reminded me of your heritage, because you come out of UX, and conversation design specifically within that. That field has evolved. All these new generative AI tools have a conversational or chatty kind of interface, and you've been working with that kind of interface, but the bones underneath these interfaces are way different now. Five years ago, it was all NLP and kind of flow-building tooling. Can you talk a little bit about that transition, your skillset, and the demand for your kind of talent over the last five years?

Wouter:
Right. Yeah, so I think in the beginning, because most of my work involved Facebook, there was a lot of demand for the marketing use case, the sales use case, like getting cold leads to convert, and to some extent also customer service. Then when the enterprise-level companies jumped in, the customer service field became much bigger. I think today that's still the major use case for most conversational AI. But now with the LLMs and all the generative AI functionality that we have, the possibilities have become so much bigger. There are so many more use cases that can be implemented successfully, or let's say at an acceptable level of quality.

Wouter:
Right now, I'm getting all kinds of stuff in, so it can be fine-tuning for reading Excel sheets, fine-tuning for creating posts on LinkedIn, but also still, let's say, the fallbacks on the traditional NLP bots, where traditionally it would say, "Oh, sorry, I don't know that." We now often use LLMs to fill in those gaps and pull from the company website or company knowledge base to answer even those questions better than they ever could before.
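
[Editor's note: to make that fallback pattern concrete, here is a minimal sketch of how an NLU-first bot might hand low-confidence queries to a retrieval-augmented LLM over a company knowledge base. The confidence threshold, model name, helper functions, and in-memory knowledge base are illustrative assumptions, not a description of Wouter's actual stack.]

```python
# Minimal sketch of the NLU-first / RAG-fallback pattern described above.
# Threshold, model, and helpers are illustrative assumptions.
from openai import OpenAI  # official OpenAI Python SDK (v1+)

client = OpenAI()  # expects OPENAI_API_KEY in the environment

CONFIDENCE_THRESHOLD = 0.7  # tune per use case and risk appetite

# Stand-in for a real company knowledge base (normally a vector store).
KNOWLEDGE_BASE = [
    "Orders can be returned within 30 days of delivery.",
    "Support is available Monday to Friday, 9:00-17:00 CET.",
]


def classify_intent(message: str) -> tuple[str, float]:
    """Stand-in for the existing NLU engine: returns (intent, confidence)."""
    return ("unknown", 0.2)


def retrieve(query: str, top_k: int = 2) -> list[str]:
    """Naive keyword retrieval; a real system would use embeddings and a vector index."""
    hits = [doc for doc in KNOWLEDGE_BASE
            if any(word in doc.lower() for word in query.lower().split())]
    return hits[:top_k] or KNOWLEDGE_BASE[:top_k]


def answer(message: str) -> str:
    intent, confidence = classify_intent(message)
    if confidence >= CONFIDENCE_THRESHOLD:
        return f"[scripted flow for intent: {intent}]"  # the traditional rule-based path

    # Fallback: ground an LLM answer in retrieved company content
    # instead of replying "Sorry, I don't know that."
    context = "\n".join(retrieve(message))
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": "Answer only from the provided context. "
                        "If the answer is not in it, say you don't know."},
            {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {message}"},
        ],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    print(answer("When can I return my order?"))
```

The key design choice is that the scripted flows stay in place for high-confidence intents; the LLM only handles what the NLU layer would otherwise drop.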

Larry:
That's right. You're just reminding me, I kind of phrased it as an either/or, an evolution in development, but we haven't left the old stuff behind. Like you just said, you're still the fallback if an LLM or another agent fails. You just have a bigger palette of conversational tools to work with, it sounds like. Is that accurate?

Wouter:
Yeah, definitely. Definitely. And that makes my job so interesting, because we started with the rule-based stuff, and then NLP came in, and then we thought, well, now it's getting really interesting and a little bit difficult. Now we're at a stage where we have these LLMs that produce, or don't produce, the output that we expect, with all kinds of hallucinations and technical challenges. I think that makes my job so much more interesting, but also more challenging in a way, because you need to explain to everyone what every bit of tech does and make sure that the clients who are actually using it understand why we're using that tech, so that they can also explain to their stakeholders why things work or why they don't work.

Larry:
When we talked a couple of weeks ago... Oh, I'm sorry. You were going to say something?

Wouter:
Yeah, yeah, no, go ahead. I can keep talking for ages about this stuff. I'm actually trying-

Larry:
I'd love to circle back to something we talked about a couple of weeks ago when we were preparing for this. One of the implications of what you just said, this evolution of the tooling, is that you go from rules-based, where it's all guardrails all the time, to NLP, with its intent understanding and all that utterance magic, to these crazy hallucinating LLMs. I mean, I'm exaggerating of course, but there's been an evolution in that practice. One of the big things that comes up in every conversation and at every conference and event I go to is the importance of trustworthiness and authenticity, because these things are conversational.

Larry:
They sound like a human, but it's not always authentic sounding. So there's this sort of combination of things, at least I conflated them in my mind, this notion of authenticity and trustworthiness. Can you talk about how you instill those kinds of... How do you help people trust these experiences as you're navigating them through?

Wouter:
There are a lot of levels to that question. Let me just pick one first. I think that when a business, when an organization chooses to work with the kind of AI that we have now, then they need to decide if they're comfortable with the level of risk that they're allowing in their applications, because we know that LLMs are not perfect. They do hallucinate, even if we put the guardrails in place. Actually, you have to decide for each use case and each implementation which level of risk you are comfortable with as an organization. For an internal use case, it might be okay if 85% of the answers are correct, but for a customer-facing use case, you might want to see 90, 95, or even 100%, depending on the context. I think that's one important thing to note.

Wouter:
With that extra level of quality really, of output quality,