Minter Dialogue with Lidewij Niezink and Peter van der Putten

In this podcast episode, I welcome back two distinguished guests, Lidewij Niezink and Peter van der Putten. We had an engaging conversation that explores the nuances of empathy in both human and AI interactions.

Episode Highlights:

Lidewij Niezink shares her extensive experience in empathy within psychology and her recent ventures into AI. She discusses her work at Hanze University of Applied Sciences (in the Netherlands), where she is pioneering the integration of AI in applied psychology, particularly focusing on the use of large language models and chatbots to enhance students’ learning and professional development.

Peter van der Putten brings his deep expertise in AI, tracing his journey from the late 1980s to his current roles as an assistant professor at Leiden University and director of the AI Lab at Pegasystems. He delves into the fascinating concept of “artificial x” and how AI can be used to foster meaningful customer interactions.

As a trio, we attempt to establish a first-principles definition of empathy. We explore what it is that makes us human, an important foundation before we discuss AI. We examine the intriguing overlap between empathy and AI, questioning why it is a topic at all and what the combination looks like. In the process, we look at how AI can help us understand ourselves better. We also discuss the potential of AI to assist in therapeutic contexts, addressing the shortage of human therapists, and some of the ethical considerations involved in humanising AI. The conversation touches on the importance of transparency in AI, the challenges of encoding empathy into machines, and the need for a clear understanding of the end goals when developing AI applications.

If you enjoyed this conversation, you might also be interested in our article, “Sorting through the Maze of Empathy in order to train Gen-AI — A First Principles Definition,” on which we collaborated. 

Please send me your questions — as an audio file if you’d like — to nminterdial@gmail.com. Otherwise, below, you’ll find the show notes and, of course, you are invited to comment. If you liked the podcast, please take a moment to rate it here.

To connect with Lidewij Niezink:

To connect with Peter van der Putten:

  • Check out Peter van der Putten’s profile and research at Leiden University here
  • Check out Peter’s AI manifesto (and more) at Pega.ai
  • Find/follow Peter van der Putten on LinkedIn
  • Find/follow Peter on X (formerly Twitter)

Other mentions/sites:

Further resources for the Minter Dialogue podcast:

RSS Feed for Minter Dialogue

Meanwhile, you can find my other interviews on the Minter Dialogue Show in this podcast tab, on Megaphone or via Apple Podcasts. If you like the show, please go over to rate this podcast via RateThisPodcast! And for the francophones reading this, if you want to get more podcasts, you can also find my radio show en français over at: MinterDial.fr, on Megaphone or in iTunes.
Music credit: The jingle at the beginning of the show is courtesy of my friend, Pierre Journel, author of the Guitar Channel. And, the new sign-off music is “A Convinced Man,” a song I co-wrote and recorded with Stephanie Singer back in the late 1980s (please excuse the quality of the sound!).

Full transcript via Flowsend.ai

Transcription courtesy of Flowsend.ai, an AI full-service for podcasters

Minter Dial: Well, well, well. You know, it’s not very often I have repeat guests on my show, but this time I’ve got a twofer. I’ve had both of you on my show previously. I know you both. And what’s funny, or somewhat crazy, is that both of you have a lot of links and yet didn’t know each other until this very moment where we’re getting together. So, let’s start with Lidewij. Tell us a little bit about who you are and what you’re up to these days.

Lidewij Niezink: Thanks, Minter. Yeah, we hadn’t met yet, and I’m very much looking forward to this conversation with the three of us and to exploring what we have together. My work is in empathy. I’ve been working on empathy in psychology for more than 20 years, but more recently, I’m also diving into AI. I work at Hanze University in the north of the Netherlands, where I’m teaching. And one of the teaching projects I’m working on at the moment has to do with AI and how to use AI in applied psychology, which is quite a tricky subject in all sorts of ways, but very, very interesting and very creative. So, we’ve been working on a project with students over the past year where we developed with them ways to use AI, and then we’re talking mostly about LLMs, mostly about chatbots, in the development of their studies and their profession. Very interesting work. And so, empathy on the one hand, and AI on the other, and the fact that I’m really very passionate about empathy and developing empathy, both in terms of theory, but especially also in terms of practice. And so, I’m getting my foot in the door, as we psychologists used to say: how do we, can we, work with empathy and AI? And can that be beneficial to us as human beings? And can we maybe learn something new about empathy? Which is always good, because empathy science tends to turn in circles.

Minter Dial: Beautiful. And certainly this notion of therapeutic AI and the notion of AI with empathy as an aid or an assistance, if you will, to the need for more therapy because of the lack of enough therapists, human therapists, so deeply interesting, deeply useful. Peter, what about you?

Peter van der Putten: Yeah, so, well, first and foremost, thanks for bringing us together here. My background is in AI. I started to study AI at the end of the eighties, in 1989, and I had both sides. We all know the memes from Hollywood about robots taking over, but for me, it wasn’t just the fear of AI, it was the fascination with AI as well, because AI is very much about the F of fascination, in the sense that it’s ultimately a fascination not with the AI, but with ourselves. And of course, intelligence is one of the key questions around humans, where we think that we are different from animals, etcetera. I’m not so sure, actually, whether the difference is that big, but there are other things that make us human as well. So, ultimately, it’s a healthy, narcissistic fascination with ourselves and with living beings, and also how that can actually extend to non-humans. I’ve been in this area ever since. I have one foot in academia; I’m an assistant professor at Leiden University in the Netherlands, where I study all these things. I call it artificial X, with X being anything that makes us human. And I’m also director of the AI Lab of an American software company called Pegasystems, where empathy is one of the topics in the sense that we, well, our clients, use AI a lot to have, let’s say, meaningful conversations with their customers. So, how to be relevant in customer interactions, and meaningful as well.

Minter Dial: Well, what’s fun, I mean, sort of funny for me, I like to connect dots, I also like to connect people, is that, on the one hand, Peter, you came from AI and you’ve landed on empathy, or at least humanity. And Lidewij, you started with empathy and you’ve landed on AI. And both of you in the north of the Netherlands, there’s got to be something else going on. The dots are connecting. Yeah. This idea of how we can learn about ourselves through this machine technology and what it tells us about us, why we do what we do, if we try to make it more human, better in customer experiences or whatever type of application, what does it say about us? And I think that’s probably what gets all three of us, our juices flowing. So, you, Lidewij, there’s another thing I wanted to make sure we talk about, which is that you were also the co-founder of the Empathy Circle, which is an amazing practice that I’ve been doing a lot of. And I’ve always appreciated and always want to credit you for founding that. And both of you, being PhDs, are, of course, highly published. So, I want to make sure that people who are listening know that we are talking to two pontes, as we say in French, of the business. So, let me ask, let’s say, Peter, first, because you talk about this artificial X, and I think it’s relevant for us to start by saying, well, if it’s not artificial, what is it that actually makes us human? What is the X factor that makes us human, in your mind?

Peter van der Putten: No, yeah, that’s a good question. I do like to go on a slight tangent here. Like I said in the intro, it does kind of explain our fascination a little bit with AI, because I don’t think it’s indeed just about the artificial. I do think it’s about us having this healthy, narcissistic fascination for what it is that makes us human. Right? So, that predates, let’s say, robots and computers and AI: we have Greek philosophers musing about it 2,000 years ago, but probably even Neanderthals sitting around a campfire thinking about all of these things. What are emotions? What is creativity? What is intelligence? Maybe they communicated it in a slightly different way than we do. But I think these questions are really, really old. Yeah. So, what makes us human? That’s a big question. You know, I do want to maybe rule out certain things, and that’s another way to attack a definition: to say, well, certainly it’s not just intelligence. Right? That may sound like a no-brainer, but when you read a lot of the news and when people talk about AI, it’s all about, oh, you know, the robots are becoming superhuman, etcetera. Then I’m thinking two things. One, well, Kasparov already got beaten by a chess computer in the nineties, so in certain areas these machines are becoming smarter than us; that doesn’t devalue us as humans. The other thing is, we’re so much more than just chess computers. We’re even so much more than just being intelligent. You know, we’re emotional, we’re religious, we’re jealous, we’re humorous, we feel, we’re helpless, narcissistic, many, many different things that kind of make us human. And I do think it’s interesting to study that through that artificial lens, not because I think we’re necessarily there yet, or that anything can be expressed by these artificial beings. But I think it’s a nice way, at heart I’m maybe a scientist or researcher, to model it. If we can somehow make it artificial, we can try to understand it, model it, try to understand it a bit more, and also try to understand a little bit better what it is that we don’t understand. And I do think it’s nice to be open to thinking that these qualities could also occur in animals and plants or maybe even in other things. Or at least we project those qualities onto these other things.

Minter Dial: It becomes a relevant question in our world, where, in certain studies we should say, there are different ideologies, and these ideologies are what one could call philosophies with a moralistic intention behind them, and wondering whether we can or can’t look to nature and animals as part of who we are, or whether we are separate from them, and then what our identity is and how we establish our uniqueness. So, Lidewij, I was going to lean a little bit on what Peter said. Aristotle said that we are rational animals. I tend to think that we are irrational animals; that is our distinction, I think, with animals, anyway, rather than the rationality. But where do you lean on this notion of uniqueness, or the specific qualities that make us human?

Lidewij Niezink: Well, while Peter was chatting just now, I was thinking about this, and I was thinking, what is there, really, that is uniquely human to us humans? Right. And I mean, if we’re taking it from the empathy angle, definitely not that. We’re not the only ones experiencing and expressing empathy. Lots of studies show that many animals have some form of empathy, or experience some form of empathy and act on those forms of empathy. Consciousness, also not distinctly human. Irrationality, also not distinctly human. We see lots of animals doing very irrational things. Not all the time; there are principles of evolution that maybe can explain these irrational behaviors. But I’m not sure if irrationality or rationality is what makes us human. Actually, I have to admit that I’m really not sure what makes us distinctly human. If there is anything that we really have that other animals do not have, I’m not sure.

Minter Dial: Well, just like consciousness, it’s kind of a long debate, the hard and soft questions of consciousness, and how do we prove it? I mean, we and animals both feel pain. And this becomes a deeply relevant question, Peter, when we start looking at whether these notions are unique to humans, because if they’re not unique to humans versus animals, then how do you draw the distinction between us and machines?

Peter van der Putten: Yeah, good question. So, first and foremost, I think there are always lots of debates about, let’s take a robot example, you know, like when people say, oh, is the robot truly intelligent, truly emotional, truly empathetic? But I think what might be an important insight is that it’s not so much about whether the other object is truly X. The other aspect to it is the projection that we have as humans. And we tend to actually project all these qualities onto others all the time. So, there’s a particular… well, Daniel Dennett, the American philosopher, has this theory he calls the intentional stance. So, he says, let’s park this whole question of whether intentions, emotions and thoughts and intelligence are real. It’s an interesting discussion, but it’s a very tough and ill-defined discussion, even if you would limit it to humans, let alone talking about animals, plants or Teslas. Right. But he says, I’ll park that for now. I’ll just say, from an evolutionary perspective, we actually project those intentions because it’s useful. It’s adding to our evolutionary fitness. Yeah. So, if I’m walking through a forest 50,000 years ago and there’s a yellow-and-black-striped object, furry, jumping out of a tree and going “raah”, I don’t go, are you a subject or are you an object? No, I run, you know, because I project upon you: you seem like a hungry animal or something like that, let me run. And so, we are really hardwired to project all these things, not just intelligence, but also intentions, emotions, desires, feelings, onto the other. Because that helps us to understand the world. And by understand the world, I mean to predict what might likely happen and act accordingly. And that applies to our fellow humans that we interact with, maybe our dog or our cat, but also when we are in traffic and we see a car driving around, or when our computer stops doing what we wanted it to do; then we start to project these things onto these other objects as well. In that sense, that’s another way to say it’s interesting to talk about whether intentions, emotions, et cetera in non-humans are real, but to some extent it doesn’t matter. It’s just parking that whole discussion and saying we are hardwired to project it onto the other, because it helps us to relate, it helps us to interact, it helps us to predict what’s going to happen.

Minter Dial: So, listening to you, Peter, this notion of projection reminds me of a discussion with a chap called Thomas Telving, a Danish author who wrote a book called Killing Sophia. And he talks a lot about the idea of anthropomorphizing machines and this projection of who we are onto machines. And even if the machine doesn’t necessarily feel pain, is it okay to kick a mechanical dog or a humanized robot? And what does that say about our society? And then I wanted, Lidewij, to turn to you and bring that sort of wobble into this idea, because as a psychologist, you’re generally not supposed to project yourself onto the person in front of you. I mean, at some level, they’re supposed to find their solution in their own world. And so, I wonder how you deal with projection in the space of psychology. And where do you fall out on this idea of projection and the anthropomorphization of machines as part of our quest to understand our humanity?

Lidewij Niezink: Yeah, it’s true that projection is one of the big stumbling blocks of all sorts of psychological practices. So, we’re not just talking about therapy, right? We’re talking about all human interaction. And there, where we start trying to understand other beings, and we do that by referencing our own thoughts, feelings and past, often we start to project ourselves onto the other. And we can see already that we do the same with AI, right? We start projecting personas onto a chatbot. And so, the question of kicking the robotic dog is actually, I think, an interesting one, because it circles back to what Peter was saying about the big F of fascination with our own selves, right? And so, this idea of what does that say about us? Because my first response would be, go kick that robotic dog. Please kick the robotic dog instead of the real dog. Besides, right, all you’d damage is a technical demonstration, probably, maybe lose some memory. But I think that projecting ourselves onto animals, other humans, non-humans, is what we automatically do. And so, if that’s the automatism, probably you don’t need to be a psychologist to become aware of that and to start wondering whether that is really the adaptive response to all that happens around you. Right? So, being conscious of your own self and taking a bit of distance in order not to immediately project, but to create some more space, open up and get a feel for what the other has to bring to the conversation or to the experience. Does that answer your question?

Minter Dial: Well, it’s continuing. There are no specific answers to these questions, right? I mean, I am certainly not going to judge them, but it makes me think, you two being the academics, of the idea that the state of being observed changes the nature of the object being observed. And so, there’s that projection of us projecting onto it and the impact that has on the person or the thing that is being observed. It all sounds very woo-woo. And with both of you being academics, a lot of what we’re talking about is this messiness of humanity. So, is that something that we can academically study, messiness, Peter?

Peter van der Putten: Yeah, no, absolutely. Because, let’s say, if I go back to one of the qualities, let’s take intelligence for the sake of the argument. Then, of course, one of the metaphors is, oh, we’re rational agents, right? We need to make a decision, and we need to weigh the options, and then we pick the option that gives us the best outcome. But we all know that people don’t behave like that. We’re not rational agents. Take two equivalent choices that people need to make: the way we frame a particular choice matters. For instance, there’s this bias, this mechanism of loss aversion. We hate to lose something. So, if I frame it more in terms of a loss, then oops, I’ll probably go for that particular option, even if both choices are actually the same. That kind of proves, from a psychological perspective, that we are not rational agents in the sense of weighing different options and picking the best one. It doesn’t mean that you can’t study it or that it’s unpredictable. Dan Ariely calls it being predictably irrational. We’re using other biases and heuristics that kind of explain how we make our decisions. Thankfully so, because it’s not just a limitation of humans; it actually allows us to make decisions much quicker. We don’t need to have an LLM in our head with hundreds of billions of parameters just to make a quick decision whether to have eggs for breakfast or something else. But we can still study it. I don’t know exactly where you were going with messiness, but that’s an example where there is messiness in the sense that we are not rational agents. But it doesn’t mean that our behavior is unpredictable. We just use very different ways of making decisions than a simple computer.

Lidewij Niezink: And maybe I can add to that, Minter, that the whole of life is a messy business. And so, even if we can research messiness through pattern recognition and so on, we ourselves, the scientists, are also particularly messy in what we do. And so, I’m also wondering, where do you want to go with that messiness? Because isn’t it just an inherent part of life and of experiencing?

Minter Dial: So, where I land in this thought: I write about four major paradoxes that I think characterize the challenge of life, and specifically the challenge of life in business, because that’s a big portion of our lives, and one of them is the need for order. As human beings, and certainly as scientists, we’re looking to make sense of things, including messiness. But as much as we are trying to impose or create order and understanding, categorizing things and putting them in boxes, there’s chaos, as certainly quantum theory would suggest. It is an endemic piece of our lives, dealing with the shit and the craziness, and I would characterize it, in much more lay terms, as messiness, which is the chaos. So, that’s sort of where my mind wobbles. And all of this comes back to being relevant when all three of us also have a relationship with businesses that are run by, let’s say, deeply rational, performance-led types of people, and I don’t want to make any more generalizations than that, who struggle with things like imperfection, emotions and messiness. And so, I mean, it feels like it’s such a necessary part of what we’re doing, but then we always want to codify it and put some boxes and numbers on stuff and measure it. So, that’s sort of what I was curious about. Any reactions, Peter?

Peter van der Putten: Yeah, I feel your pain. I do think that, indeed, in business we’re trying to simplify and bring things back to simple KPIs, or “if you can’t measure it, you can’t manage it”. But on the flip side, there are these things where people say the moment you introduce a metric, it becomes meaningless. But I do think you can also model at different levels. So, let’s take a marketing example. The old way of doing marketing is to say, people with green hair and blue eyes, they’re going to get product X, right? Very top-down, coarse-grained rules, a very simple model of the world, not even based on any level of feedback on whether this is something truly relevant to a customer or not. And then the chaos, of course, is that customers are different, right? Every customer is a segment of one. Even for a single customer, circumstances could change overnight. If I lose my handset after a couple of beers out in a pub, or my credit card gets stolen, I’m a completely different Peter. But you can actually use that chaos by saying, well, let’s not try to impose our simple models of what’s relevant for a particular customer on the world, but let’s interact with customers and learn, in this case with AI, what are the things that would resonate with a particular client in a particular circumstance. That’s a very practical example where you’re actually using the chaos. It’s not real chaos, because it’s actually the reality of how customers behave. And you’re leaning into it and you’re actually learning from it. But in that case, that does require that we’re no longer just using the rules and intelligence in our heads, but that we also start to leverage AI, for example, to do that really at that fine-grained, single-customer level.

Minter Dial: I want to park that thought and keep it for later, but I want to circle back and lean into the notion of empathy. I have neither empathy nor AI as a specialization in my background, yet I do feel linked into it. And on the fear component, the other F, Peter, there are certainly a lot of people who fear that robots are going to take over my job, much less the world, and ask what allows me to keep my job. And so, there is a string of words that people use that includes things like creativity, intuition, abstract reasoning, consciousness, and empathy as being a value-added proposition that the human being somehow naturally has. And so, if we say that it is a natural component of the human being, as well as of animals, as you said, Lidewij, I wanted to quote some work that you’ve written with your co-founder of Empathic Intervention, Katherine Train. In an article in Psychology Today, you wrote: empathy has many definitions, and even within one field of study, they’re far from consistent. With increasing interest, empathy has become an umbrella term for many processes of shared experiences. And I think all three of us probably share a frustration with the vagueness and the lack of a strong definition of empathy. So, do you think there’s a way for us to tighten it up and create a singular, strong definition, not some sort of paragraphs-long definition? If you had to tackle that, Lidewij, where would you start in trying to create an original, like the meta, the first definition of what empathy is?

Lidewij Niezink: Well, thank you for that question, Minter. So, I think we need to do justice to the fact that empathy is many things. And it’s not for nothing that empathy is many things; it’s because empathy encompasses many different skills, ways of interacting, ways of tuning into other people. And so, yes, I have been frustrated with the fact that empathy is such a vague, messy term, and it has been basically ever since the coinage of the term empathy. Right? And we’re talking about the seventies, not of the last century, but the one before. So, it’s been messy all the way along. But what I think is really the center of being empathic is holding space. And so, this is something that I’ve been using more and more over the past few years, saying empathy is many things, while holding an experiential space for others. So, you hold the space to tune into what is happening with the other, whether it’s a human, whether it’s an animal, maybe whether it’s an artificial intelligence, but for now, for the sake of argument, let’s stick to humans. So, holding that experiential space and then making use of those many skills that you require when you try to empathize: you’re feeling into, using your felt sense, right? You’re using your body, you’re using nonverbal communication, you’re using perspective-taking, so you use your mind, you use imagination. There are many, many different ways of tuning into what the other is trying to express or is being. And so, in all of that, I think what empathy is really about is holding that space. So, not projecting, not concluding, not judging, not running away with what the other person is expressing, but just staying with it for now, staying with it, absorbing. So, that’s the first thing: holding space. And the second thing I always say is, empathy is a means to an end. So, that also defines what empathy is in the situation: what are you using it for? Why are you empathizing? What is the purpose of your empathy? And in defining that, psychopaths use empathy for very different reasons than therapists do, or than we do in our intimate relationships. So, if you have an idea of the end to which you’re using it, you can also become more skillful in what you’re actually using to tune into the other.

Minter Dial: And just to be clear on that intentionality, you’re not necessarily saying it has to be good. What you’re doing is asking people to define the end they are trying to achieve, without saying it has to be for good.

Lidewij Niezink: Yeah. No, it’s not. It’s neutral. The skill is not inherently good or bad. It can be used for good, it can be used for bad. You know, there’s been a lot of discussion with Paul Bloom over the past decade, who was saying, oh, empathy is bad. You know, it’s not the empathy that is bad. It’s: what are you using it for? What’s the end? What is it a means for?

Minter Dial: Love it. Peter?

Peter van der Putten: Yeah, no, I agree with many of these things. So, ultimately, it’s what you use it for that makes it good or bad, what makes it some evil persuasion versus being a very good, empathetic, let’s say, person or system. I love how you were saying it’s a tuning into the other. And that requires IQ, some form of intelligence, to understand what the other wants. But that’s not enough. It also requires maybe something like an EQ, so that you can lean into these feelings. That’s why I liked it when you said, well, you need to almost feel the other. You do need to understand the other person’s emotions. But what sets it apart, more towards what you were saying, is that the objective is important: what do you use it for? And I think that kind of leads towards the biggest, let’s say, expectation that people have of something else or someone else. It’s not just that they intelligently understand us or that they feel what our emotions are; the biggest expectation that we have is a bit of a moral one. It is a bit of a moral expectation. Right. If you’re interacting with me, are you doing that for, let’s say, the right purposes? Right. So, I think that sets apart, let’s say, the sociopath who has a very high IQ, very high EQ, but is using that empathy for, let’s say, only his or its own purposes, versus really delivering on the moral expectation, you know, like, yeah, are you doing what’s right for, let’s say, both of us here? Yeah. And I think that’s the biggest one. If you just walked into the street and asked a random person, well, what does empathy mean to you? It’s, well, to put yourself into someone else’s shoes. But what does that mean? That’s in the category of don’t do to others what you don’t want to have done to yourself, or something like that. So, yeah, there’s a base intelligence expectation, and I agree, the emotional expectation is even more important, but the biggest or key expectation that people have is the moral expectation.

Lidewij Niezink: Yeah, it’s interesting, because, yes, definitely that moral expectation plays a big role. At the same time, you know, it’s, again, quite limiting when you say empathy is putting yourself into another person’s shoes, or, you know, in psychology the distinction between self-perspective and other-perspective. Right. Because if you put yourself into their shoes, you’re still thinking about how would I, or what would I do, in those shoes. Whereas if you really want to expand, and I think that is one of the purposes that interests me most in empathy, if you really want to expand your perception of the world, your thinking, your way to connect, then you really want to absorb something foreign from the other. Right. Something that you do not necessarily, maybe not even, recognize. Something that is not part of who you are. And that’s where you start expanding. And I’m not sure that it’s always necessarily moral in that sense. You know, it’s… yeah, maybe… again, no, I’m not saying that. Sorry.

Peter van der Putten: I agree. It’s not about projecting yourself onto the other. It’s about projecting the other on the other.

Lidewij Niezink: Yeah, exactly. Well, yeah, projecting… yeah, observing.

Peter van der Putten: Even projection maybe has a negative connotation, but for sure it’s not projecting yourself.

Lidewij Niezink: Yes absolutely.

Peter van der Putten: The other, on the other. You know, like… yeah, on purpose, I’m using a formulation which is not quite right, but no, no, it’s great.

Lidewij Niezink: And so, learning something from the otherness of the other. Right.

Minter Dial: That sounds like othering.

Lidewij Niezink: Yeah sounds like othering.

Minter Dial: Tricky. So, let me contextualize some thoughts that I have as I’m listening to this. On the one hand, we’re looking at the intention and not specifying good or bad, which I would argue would be a more ideological approach to empathy. And then the idea of, well, what are you trying to do? And in Peter’s world with Pegasystems, the idea of making my customer relationship management AI better at interacting with, at having more meaningful conversations with customers in order to get to the heart of what they want, in order to serve them better, that seems like a pretty good intention. Not necessarily moralistic, but it has a capitalistic intention underneath it. Of course, you might start getting into a little bit of splitting hairs, and I’m not just saying selling shampoos, but let’s say when you move from selling shampoos to selling cigarettes or selling guns, and so you have a capitalistic intention that’s good for shareholders, I feel like the ethics need to be brought into question. But I wanted to ask you, Lidewij and Peter, to rebalance on this afterwards. For me, there’s the intention of the emitter, that’s to say the person who is attempting to be empathic with some other being, typically, right, or, you know, it could be an animal, it could be a human being and all that. But the receptor of that empathy may or may not be aware of the intention, or even of the empathy as it stands. For example, I could be looking at somebody through a mirror or a mirrored window; let’s say they don’t see me, I see them. And I could be, I think, trying to be empathic, because I’m holding a space where I’m understanding what they’re doing, but they don’t even know that I’m observing them. So, I wonder, is that ever a space called empathy?

Lidewij Niezink: Yeah, I guess so. Yeah, I guess so. That’s actually what I didn’t finish saying just now, because in my reasoning there, I came to the conclusion that it’s also inherently self-serving. And this is what happens here. If you have a mirrored glass in front of you, then your empathizing is a self-serving practice, in the sense that it’s definitely not benefiting the other in that moment. It might, though, afterwards; it depends on what you’re aiming to do with empathizing with somebody in another room who isn’t seeing you.

Minter Dial: So, the specific example I wanted to mention was doing market research, where you might have a bunch of customers sitting around a table being monitored or moderated by somebody, and you, as the marketer in the background, are just observing what’s going on. And if you can apply empathic understanding, then you might end up with a better shampoo bottle that doesn’t slip in the shower, because “my hand always gets slippery,” she said in the market research. So, it’s a rather delayed understanding that that person, that marketing team, that company was being empathic with me, because, oh, I’m in the shower now, and with slippery hands it doesn’t slip anymore. So, I don’t know if I can ever, as the receiver, actually understand that empathy was actually emitted. So, I wanted to ask, Peter, what do you think about that? Because in your work with Pega, I’m thinking that this must be a part of your conundrums, because you’re always dealing with asynchronous communications. You send out a message, trying to be as empathic as possible; it’s received at a certain time, you know, in another context. And that’s part of CRM issues and trying to be empathic in the way you communicate.

Peter van der Putten: Yeah. So, for that reason, actually, you know, we focus a lot on interaction in the moment. Not just the typical kind of marketing stuff where you’re harassing people with emails, but really, let’s say, leaving it more to the customer when they want to engage with you, and at that point in time being more contextually relevant. I agree that the empathy can be there without, let’s say, having an immediate impact, but ultimately, let’s say more from a customer point of view, I will only feel that empathy if it has some form of consequence. Right. And I think it’s also good that you don’t just try to be smart. Let’s say I open up my mobile banking app and I’m getting some smart recommendation: my credit card points are going to expire, or I’m getting an overdraft fee if I don’t put some more money in my current account, or I can save some money because I’m paying for my kids to go to cultural classes and I can actually get money back from the government. Anyway, you can imagine that we could make all kinds of smart recommendations. I think it’s also important not just to make the recommendation, but to also give some element of, yeah, but how did we get to this, actually? You know, making the empathy sometimes a bit more explicit. In this example, when we’re giving smart AI recommendations, there is a reason why we’re doing it, so that you can make that empathy also a bit more explicit. We’re not just doing it because it’s January and we’re telling everyone the same thing.

Minter Dial: Well, so in there, what I’m hearing is there’s the intention, and then there’s also some level of transparency, which becomes all the more relevant when we talk about AI. Lidewij, how do you respond to that?

Lidewij Niezink: Well, definitely transparency. Right. Very, very important. I think, so far, if we’re moving towards AI and empathy with this, what we really know is that at the moment there’s no genuine thought or feeling in AI. It doesn’t exist. So, recognizing that AI empathy is functional, that it is a means to an end, and being transparent, maybe even to the point of being open source, about how that comes to be, is the way I would love to see this go forward, especially if we really want to program some sort of empathic skills into AI. We want to do the best we can, instead of all sitting on our own little islands and doing again what we’ve been doing over the past century of mega-capitalism for a long time, namely: this is my product, I need to make the money, and I’m going to keep the secrets for myself. I think opening up to a really transparent way of building and sharing, also from an ethical point of view, sharing for those who want to know how it works, seems to me like the way forward.

Minter Dial: I’m just going to circle back with you, Lidewij, again. So, what I see is that, yes, we need to understand how AI is operating: transparency in the way the decisions are made, what type of data they’re using, and all this. On top of that, AI has a bandwidth which is generally far greater, far longer, with more patience and all this, if we code it correctly. And yet we tend, I think, to hold AI to a higher standard than we hold ourselves. Because if you were to ask any marketer, what are your intentions, all the time, and they had to express their intentions before they gave you a BOGO, buy one get one free, I’m not so sure that that would be very effective from a customer standpoint. Are you trying to screw me? You know, I’m Gillette, I’m trying to sell you blades so that you now have a life indenture to me, to, you know, subscribe to my razor blades. That transparency we’re not so quick to ask of ourselves, but we want to ask it of the AI. And I was wondering how you sit with that, Lidewij. And then we’ll get to you, Peter.

Lidewij Niezink: Yeah, so again, if there’s no reasoning, it’s a bit strange to ask that type of question of an AI, right? If there’s no reasoning, no cognition, no consciousness, then “what are you trying to do? why are you trying to sell me this knife?” is a very strange question to ask, because there is no answer to that other than the prediction of next words. There is no motivation behind what it’s doing. So, I’m not talking about that type of interaction with AI. I’m talking about the programming. If I want to sell you an empathic AI, I think I need to be very transparent about what it is that is making this AI empathic. So, what is the empathy that is programmed in? What are the principles that the AI is built on that we call empathic?

Minter Dial: And just to finish: in your studies, where you’re going with your students, you talk about this idea of holding experiential space. Are you now in a mode where you’re able to take that, along with all the other elements, non-conclusion, non-judging, you know, feeling into it, and are you finding that it enables you to craft a brief for a coder? Because I’m sure you’re not doing the coding, because, I mean, at some level that’s where we’re at. I mean, I am anyway; I’m sort of talking conceptually above it. So, where are you in that study and the work you’re doing? And then we’re going to get to you, Peter.

Lidewij Niezink: Yeah. So, yeah, that’s exactly what I’m interested in, and to make it very practical. And I think, for me, I’m not sure we can build empathic AI, and I’m not even sure we want to build empathic AI, but I do definitely think that we could use AI to become much clearer about what it actually is to be empathic. And so, to use AI to get our own empathy principles straightened out, and to use AI to remind us, in human-to-human interaction or in machine-to-human interaction: these are the things, if we want to be empathic, these are the things that we want to do. For instance, we don’t want to immediately give advice. All the empathic AI that I’ve been running into so far is particularly skillful in advising. I get a lot of advice based on very little data; that AI knows very little of me and yet is able to give me good advice straight away. Right. And so, I find that very interesting, and I think it’s really missing the point. That’s not what this is about. So, yeah, I’ve been thinking about it, and I’m ready to really think this through with developers, because definitely that’s not what I can do. Right. I can’t do the programming, but I can think along in terms of: if we want to program this principle, what is it actually that we’re looking at programming?

Minter Dial: Peter, where are you in this whole world?

Peter van der Putten: Yeah, first and foremost, Lidewij, it’s really interesting to think about what the important principles are in these kinds of empathic relationships and how we can actually formalize them or make sure that we can deliver on them. Of course, they also ultimately need to be used for good purposes, not bad. But it’s interesting indeed to see how we can formalize them. I think AI is maybe a little bit of a nebulous term nowadays, because on one side people immediately think about ChatGPT and large language models and things like that. But there are also these other forms of AI. ChatGPT you could see as right-brain AI, creative AI, but we also have the left-brain AI, which is much more about, I don’t know, making predictions of, let’s say, customer behavior, but also putting your own rules and policies and things on top of that, which would ultimately guide systems towards making decisions: oh, there’s a customer service issue, what could be the problem? Or a marketing interaction or a customer interaction: what could be relevant to recommend to this customer? Now, in this left-brain world, I do think you can actually get quite far and be more explicit about how you get to that automated decision in general, so that you have control as a human, but also, in an individual interaction, get at least some form of automated explanation of why a particular recommendation is being made. Certainly we are aiming to do that with our systems: you can see, well, you’re getting this recommendation because of such and such. Also, more on the enterprise side, you have control over that by balancing what’s good for the customer with maybe some corporate goals that you also need to deliver on. On the other side is more the right-brain, creative AI. And that’s slightly more challenging, because you have these huge models, and how do you reverse-engineer what they actually mean? Or how do you instruct them to do the right thing? For instance, how can you instruct them to say, “I don’t know the answer”? Right. So, how to avoid that they start to hallucinate all kinds of advice which might not be relevant. So, again, there, one of the things that we focus on is to see if we can constrain that AI a bit more and say, well, thou shalt only talk about, you know, self-service tips from bank X, and not talk about anything else, and also shut up if you don’t know the answer. Right. So, really constrain the boundless creativity of the average generative AI model that you use, whether it’s GPT or Gemini or whatever, to make sure that it’s a bit more modest in its answers.

Lidewij Niezink: Can I ask you, Peter, does that work? Because that’s what I do now with all my chatbots: I instruct them to admit when they don’t know, before I start any type of conversation with them. But they don’t know. I’m not sure what it actually is that I’m prompting them to do when I say, admit that you do not know when you do not know.

Peter van der Putten: Yeah, yeah. At the risk of getting maybe a bit too technical, but you can actually take generative AI and constrain it to a particular domain. So, in the example that I gave: I lost my credit card and I want to know what I need to do as a customer. And I’m a customer with bank X, so I’m not interested in bank Y or in random hallucinations of a model. We could really give it, let’s say, all the self-service documentation of that bank. Of course, as a customer, I can’t be bothered to look for it on the website, and I get way too many search hits; I just want the answer to my question. But what you can do then is say: you take a particular corpus of documents on a particular domain, in this case all the self-service documentation of the bank, and you put the gen-AI on top of it. And if a customer has a random question like, I lost my credit card, what do I need to do? it actually searches through that corpus, finds the documents that are most similar, and then they’re sent on to the generative AI, saying, well, based on this question and these search hits, give me the answer. And if it’s not in the search hits, just refrain from answering. So, I can ask it about my lost credit card and I get the bank X answer, not the bank Y answer. I can also ask it, who’s John Travolta? And it will say, I don’t know, because it couldn’t find it in the search results.
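For readers curious how the grounding Peter describes can look in code, here is a minimal sketch of that retrieve-then-answer pattern. The tiny corpus, the keyword-overlap retrieval, the refusal instruction and the call_llm placeholder are all illustrative assumptions, not Pega's or any vendor's actual implementation.

```python
# Sketch of "answer only from the bank's own documents, otherwise refuse".
# CORPUS, retrieve() and call_llm() are illustrative stand-ins.

from typing import List

CORPUS = [
    "Lost or stolen credit card: block the card in the mobile app under "
    "Cards, then order a replacement. Call the hotline outside app hours.",
    "Overdraft fees: an overdraft fee applies when your current account "
    "balance stays negative for more than 3 days.",
]

def retrieve(question: str, docs: List[str], k: int = 2) -> List[str]:
    """Crude keyword-overlap retrieval; a real system would use embeddings."""
    q_words = set(question.lower().split())
    scored = sorted(docs, key=lambda d: len(q_words & set(d.lower().split())), reverse=True)
    return [d for d in scored[:k] if q_words & set(d.lower().split())]

def call_llm(prompt: str) -> str:
    """Placeholder for a call to GPT, Gemini, or any other generative model."""
    return "(model response would appear here)"

def grounded_answer(question: str) -> str:
    passages = retrieve(question, CORPUS)
    if not passages:
        # Nothing relevant found in the corpus: refuse rather than hallucinate.
        return "I don't know; that isn't covered by the bank's documentation."
    prompt = (
        "Answer the customer's question using ONLY the passages below. "
        "If the answer is not in the passages, say you don't know.\n\n"
        + "\n\n".join(passages)
        + f"\n\nQuestion: {question}"
    )
    return call_llm(prompt)

print(grounded_answer("I lost my credit card, what do I need to do?"))
print(grounded_answer("Who is John Travolta?"))  # outside the corpus, so it refuses
```

The design choice is the one Peter names: the model is used as a summarizer over retrieved passages, not as the fountain of all knowledge, and the refusal path handles anything the corpus does not cover.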

Lidewij Niezink: Yeah. Okay, so based on the database behind it.

Peter van der Putten: Yeah, yeah, exactly. So, in that sense, it’s using these kinds of large language models more as clever summarization and reasoning engines, as opposed to the fountain of all knowledge.

Lidewij Niezink: Yeah, which is actually, sorry, Minter, I’m getting very excited here, which in itself I would say is an act of empathy. I live in the southwest of France on an ADSL line that gets struck by trees. And so, lots of problems. I often have my router going off, usually when I’m in class and things like that, so lots of stress. Nowadays, when I contact that customer service, my frustration is very often that, if I finally get people on the phone, which usually takes a while, they start rummaging through their book of what to do in which situation, right? And they give me exactly what I could have found on their website. So nowadays I just send them: please open a technical ticket, there’s a problem with the line. And I get an immediate response. And so, this is actually, I think, really an example of AI being used for empathic purposes. Namely, we do not have to go through all of that, and we also don’t need to put workers through having to pull all this documentation together to provide an effective answer, while the AI is so good at finding the answer straight away and being succinct.

Minter Dial: In helping you, when it’s well constrained. And I love that example, because something that I like to talk about is that empathy doesn’t need to be one-way. That’s to say, it’s not always up to the people in power, let’s say a business or the boss, to be empathic. And yet, in your approach with your ADSL supplier, you are actually being empathic with them, because you’re immediately going for the jugular of what you need, or at least you’re not trying to drive the poor customer service agent crazy. In reflecting and listening to what you said before, we’ve talked about intention, and you talked, Lidewij, about this idea where it gives me an answer immediately that sometimes is good, maybe not always good. And this idea of measuring: it strikes me that an area of measurement will be in what we are trying to achieve, and if we can contain what our objective is, not to hit a home run, but to hit a solid base hit, you know, get to first base, that’s what we’re trying to achieve. Did I get to first base? That’s not a sexual discussion, but, you know, for Americans, that’s how that goes. Get to first base with this customer. And if that was achieved, then we can measure the effect of the empathy, not the empathy itself. And I just wanted to throw in one other topic, and maybe, Peter, you could reply to this: this notion of using empathy to create trust, as opposed to getting the customer to buy a product or having a specific, more materialistic objective. How do you approach that in your work as an academician or with Pegasystems?

Peter van der Putten: Yeah, that’s a good question.

Minter Dial: So.

Peter van der Putten: I might address both. So, more from the business point of view, yeah, I’m sticking maybe to that same kind of bank example of making relevant recommendations. I mean, this has broader implications across different things you would like to do as a business, but it’s important, let’s say if I stick to that bank example, that you are not there just to flog bank products at customers and try to make an immediate buck in this conversation, trying to sell you this new credit card. Right. So, you really need to take an approach where you look at your strategic vision. Like, I don’t know, one of our clients says it’s all about enhancing and securing the financial well-being of our clients. So, as a customer, you go a little bit like, yeah, yeah, yeah. But the key point, of course, is that people go like that because companies don’t always behave like that, or you don’t feel it like that. Right. So, it’s really important that you make that felt in every single interaction, and that these recommendations that you give are selected from a very wide library: not just trying to sell you something, but also dealing with all kinds of service issues, or even proactively delighting you by fixing issues before they arise, or maybe even talking about things that have nothing to do with your own products and services, but that have everything to do with your mission statement. In this example, if we nudge our clients towards government benefits that will help ensure their financial well-being, I’m not selling them a banking product, but I am enhancing their financial well-being. And if you do that in such a way that it’s not this simple rule that you spoke about, like some top-level rule, but something that has a high IQ, a high EQ, that’s very personalized in the moment, that’s balancing what’s good for the customer, not just what’s good for the bank, to really decide on what to talk about, and you also listen and take the feedback of these clients into account, do they ignore it? do they hate it? do they like it? and learn from that as well, then you’re operating in, let’s say, a more empathic mode. Yeah. And in my research, we’re looking at empathy in different types of ways as well. You know, like we spoke about the right-brain AI: I’m doing a project with my colleagues at Leiden University, Max van Duijn and Tom Kouwenhoven, around theory of mind. So, theory of mind is this concept, let’s say a certain cognitive ability. You can test it in kids, for example, or other people, by giving them small stories where they need to be able to take the perspective of someone else. And, interestingly enough, that ability develops as kids get older; they get better at this theory of mind. We’re doing research where we’re looking at, what if we give these stories to these large language models? How do these models perform? And is that just a function of the fact that these stories and tests are out there on the Internet? If we create new ones, do they still do a good job? Or if we make them increasingly more difficult, do they still do a good job of that perspective-taking?
Yeah, we’re not making any ontological statements that the AI has gone sentient here, but it is interesting to actually study that. Or, on the flip side, we do want to warn against over-reliance on the AI. We’re looking at conformance: when do people conform to an AI assistant, even if the AI system might not be giving you the most empathic or intelligent recommendations, maybe it’s just trying to fool you? It is important to understand that. Like I said, we’re hardwired to project; we’re actually also quite hardwired to conform, even if it’s peer pressure not from humans but from, let’s say, machines or robots. So, we’re also researching that: the way you frame an AI, what is the impact on conformance to that AI, even if the AI gives you the wrong advice? Or research we did with a student, Donald Schroeder, was around the form factor: what’s the impact of the form factor on conformance? She looked at conversational assistants, just text versus a robotic voice versus a very human-sounding voice. And we could see that the human-sounding voice led to way higher levels of conformance. That’s not good or bad on its own, but it’s important to know that people behave like that. You’re muted.

Lidewij Niezink: You’re muted.

Minter Dial: My brain was whirring. Maybe you could see my whirring. Yeah. I wanted to just circle back to this notion of trust and getting the data, because if you want to have empathy, generally you need to understand the other person’s context. In order to get that context, you generally need to have access to the information that’s behind the context. And I wanted to throw in this other piece, which is, it strikes me that in our divisive society there are plenty of people who don’t trust other people, and those same people might also trust a machine more. Even if, as you point out, Peter, it may not give me the right answer, I might have a bias towards a machine, that it might listen to me better, hold my space better, have less bias, better than human beings. And so, I mean, whether from a psychologist’s standpoint, from your Empathic Intervention standpoint with businesses, or in your study, Lidewij, I know this is kind of vast, but where do you think we can go with making or encoding empathy into a machine, where we know we need to develop trust, get the data, and have transparency to say what we’re doing, what our intentions are? Do we need to have all of these elements encoded before we can create an empathic being?

Lidewij Niezink: Well, I’m not sure I can oversee the vastness of this question, Minter. It’s huge, right? So, yes, I think we do need the things you name here, but I think we need to build them one by one. There are many levels of principles, probably, of coding that you want to get into before you start getting into interaction with human beings. Right.

Minter Dial: What sort of principles? Let's talk about the principles. I think it would be good if we could land on, or end on, the principles we need to start focusing on, to avoid being avalanched by, as you describe, the ton of questions. Go for it.

Lidewij Niezink: Well, the one thing that Peter was also mentioning, the one thing that we really need to build into it before anything else, probably, is this capacity to listen. It's almost like we think that because we're prompting, we can't be interrupted by the AI: it only responds once we press enter. Again, I'm sorry, I'm very oriented towards the chatbot context here, because it might be different for the other-half-of-the-brain type of AIs. But for now, in our interaction with AI, the listening is something that we really need to program into it. And this is what I was mentioning about giving advice. It's not a good sign when an AI gives you advice after you've described a very short situation, saying, I'm lonely, and the AI immediately starts with, oh, I'm very sorry for you, it must be very difficult for you to be lonely (and notice how chatty these bots are; endlessly), and then, I would advise you to go and do this, you could try and do that. This AI knows nothing about my loneliness. Nothing. It apparently knows something about loneliness, it has a huge database about what loneliness is, but it doesn't know me. So, programming the listening into it seems like one of the principles that we really need to get very clear, and that doesn't seem, to my fairly human brain, too difficult to program into AIs. Peter, I see you agree.

Peter van der Putten: Definitely. In the left brain, it's all these machine learning systems that learn from data. In the right brain it's a bit more challenging, in the sense that typically the AI we use is kind of baked already, or we don't even control it; we use whatever GPT from OpenAI or Gemini from Google. But even there, there can be elements of learning. You don't need to retrain models; you can just keep a bit more context of what we have discussed so far, and maybe indeed have the AIs, stupid as they are, do more of a job of playing back: okay, so you told me all these things, this is what I understood, is that correct? Please correct me if I'm wrong. I do agree with you. We shouldn't use AI to put mansplaining on steroids, or AI-splaining if you wish. So, I think the art of listening is certainly something we need to build into it.
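Peter's "play it back" idea does not require retraining anything; it can be approximated by keeping the whole conversation as context and putting a listen-first instruction in the system prompt. A minimal sketch, assuming the OpenAI Python client; the prompt wording and the model name are illustrative, not a recommendation.

```python
# Sketch of a "listen first" chat loop: keep the conversation as context and
# instruct the model to reflect back and ask before advising. Illustrative only.
from openai import OpenAI

client = OpenAI()

LISTEN_FIRST = (
    "You are a careful listener. Before offering any advice, first summarise "
    "what the person has told you in your own words, ask whether you have "
    "understood correctly, and ask one clarifying question. Only give advice "
    "once the person has confirmed your summary and answered your question."
)

def chat():
    history = [{"role": "system", "content": LISTEN_FIRST}]
    while True:
        user_input = input("you> ")
        if not user_input:
            break
        history.append({"role": "user", "content": user_input})
        reply = client.chat.completions.create(
            model="gpt-4o-mini",   # assumption; any chat model would do
            messages=history,      # full context, so nothing said so far is lost
        ).choices[0].message.content
        history.append({"role": "assistant", "content": reply})
        print("bot>", reply)

if __name__ == "__main__":
    chat()
```

A prompt alone does not guarantee the behaviour, so it still has to be tested, but it shows that the "art of listening" is largely a design decision rather than a modelling breakthrough.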

Lidewij Niezink: Yeah, yeah.

Minter Dial: Well then, it sounds like the art of listening is also about reformulating and asking questions to probe, while knowing there are constraints to worry about. The AI might have an endless amount of bandwidth, processing power and memory up in the cloud, but you, Peter, live in a 24-hour day, and by gum, I want the answer (that's another F-word) within a certain number of minutes, because I have to go pick up my kid. So, the first principle, as I'm understanding it from you, is this art of listening, probing and reformulating to make sure that I understand. Is there a second principle that comes to mind, Peter?

Lidewij Niezink: Do you want to give it a try first?

Peter van der Putten: Yeah, good one. In the beginning I was talking a bit about being the receiver of the empathy: what do I expect of the other? Yes, the other should have a high IQ so that they can understand what I'm saying; yes, they should be able to lean into my feelings; but I also have a moral expectation. So, especially if we build AI systems that are supposed to be not just smart but also emotionally intelligent, we need to be able to reflect certain values into that system, work that into the interactions we're having, and be explicit, at least to some degree, and transparent about what those values are. And yes, of course you need to do what's right for the customer or the citizen, and of course there are also objectives that the government organization or the company you're interacting with has. But at least it's good to see if we can make that somewhat more explicit, so that these things are taken into account and the balance can be made, and it's not just the user of the AI system whose needs are being met.

Minter Dial: If I may just reformulate what I heard, Peter, and then I want you to lean in, Lidewij, before we close off. What I heard is that, as a group, before we even really start to encode, we need to understand what our values are, what our strategy is, and what our intentions are behind this whole effort. And as a group, including our marketing team, our coders, our agency, our sales, everybody, we need a certain consistency with regard to, as you described it, the moral ambition we all share, so that the machine can be an accurate reflection of who we are.

Peter van der Putten: Yeah, I would agree with that. And then of course there's some everyday realism that is part of it. If it's a business, I don't mind that my bank is profitable; I understand that they need to be a business and hit certain goals. But in a particular interaction, they need to balance what's good for them and what's good for me as well. So, it's important indeed to be clear on those values. And typically they actually already exist; they're written down in some kind of annual report. But companies are not always living up to them. So, if you can really build them into the fine-grained interactions that you have with your millions of customers, and be true to them, then that's a good thing.
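The balance Peter keeps returning to, what is good for the customer versus what is good for the bank in a given interaction, is essentially an arbitration problem: score every eligible action in the library and pick the one that best serves both sides. A toy sketch of that idea follows; the actions, weights and numbers are made up, and real decisioning systems are far richer than this.

```python
# Toy next-best-action arbitration: pick the action that best balances
# customer benefit and business value, subject to eligibility rules.
# Illustrative only; all numbers are invented.
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    customer_benefit: float  # 0..1, how much this helps the customer
    business_value: float    # 0..1, how much this helps the bank
    propensity: float        # 0..1, estimated chance of a positive response
    eligible: bool           # hard rules: compliance, suitability, contact fatigue

def arbitrate(actions, customer_weight=0.7):
    """Return the best eligible action under a customer-leaning weighting."""
    def score(a: Action) -> float:
        relevance = (customer_weight * a.customer_benefit
                     + (1 - customer_weight) * a.business_value)
        return a.propensity * relevance
    candidates = [a for a in actions if a.eligible]
    return max(candidates, key=score) if candidates else None

library = [
    Action("Offer new credit card",         0.2, 0.9, 0.10, eligible=True),
    Action("Point to government benefit",   0.9, 0.1, 0.60, eligible=True),
    Action("Warn about upcoming overdraft", 0.8, 0.3, 0.70, eligible=True),
]

best = arbitrate(library)
print("Next best action:", best.name if best else "stay silent")
```

The feedback loop he mentions, whether clients ignore, dislike or act on a suggestion, would in practice flow back into the propensity estimates.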

Minter Dial: I'm going to get to you, Lidewij. But it does sound like we're back to the initial idea, which is that the work of encoding empathy into AI has a greater benefit in how it reflects back onto who we are, because it makes us sit down and ask: we've written these three values in our annual report, but what the heck do we mean by them? Let's talk about that and embody them first within ourselves before we even try to replicate them. Lidewij, are there any other principles we're missing? If we can land on two or three principles, I'd be very excited.

Lidewij Niezink: Yeah. Okay. Well, you know, with age advancing I get more and more simplistic in life, and I think there is one thing we need to do. Yes, okay, we've got business values, we've got strategy, we've got vision, and we all agree with it; this is who we are. We can spend lots of time, lots of money and lots of brainpower together on what we stand for, this is our persona, this is what we want to build in before we start coding. But again, empathy is a means to an end. I think each and every gen-AI application, in whatever way, is a means to an end. And so, the empathy here is really about: what is the end? If we know the end of this application, what we actually want to do with it, then let's reason back to what we need to build into it. That's really how I see it more and more. And so, keep in mind that we're building tools; tools that help us to perform, to understand, to synthesize data. I am really excited about AI because I've got a very messy brain, it's a big sieve, and I never know if I'm going to remember the brilliant thoughts I had five minutes ago. So, having that AI behind me that is pumping that data and is able to help me make sense of it, I think that's great.

So, that's the first thing. One of the other principles that I think we really need to build into AI: in the world of empathy, we're still very often discussing only two types of empathy, namely cognitive empathy and emotional empathy. I strongly disagree with the principle of cognitive versus emotional empathy, and even with the idea that these are two different things. But in terms of programming, I think it's very important that the emotional side, the understanding of emotions, is there. If we want to build empathic AIs, whatever that's going to look like, then understanding human emotions when interacting with humans is a real basic principle that needs to be met. And we can do that. We have, for example, the Facial Action Coding System, FACS, which was started by Paul Ekman, and Erika Rosenberg is now doing a lot of work on it. We can recognize emotion in the face with very high predictive accuracy, so AI can recognize emotions in all sorts of ways. Building that into an AI that pretends to be empathic seems like another basic principle we should look at. But the means to an end is, I think, the most important thing, because we're spending lots of time on the maybe not-that-interesting visioning, you know, company personas.
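Lidewij's reference to the Facial Action Coding System points at something quite concrete: FACS describes a face as a set of numbered action units (AUs), and emotion classifiers map combinations of those units to prototypical emotions. The sketch below uses a few commonly cited EMFACS-style combinations purely as an illustration; detecting the action units themselves from images or video is assumed to happen upstream, and real systems use far more nuanced models.

```python
# Illustrative mapping from FACS action units (AUs) to prototypical emotions,
# loosely based on commonly cited EMFACS-style combinations. AU detection from
# images or video is assumed to happen upstream of this step.
PROTOTYPES = {
    "happiness": {6, 12},        # cheek raiser + lip corner puller
    "sadness":   {1, 4, 15},     # inner brow raiser + brow lowerer + lip corner depressor
    "surprise":  {1, 2, 5, 26},  # brow raisers + upper lid raiser + jaw drop
    "anger":     {4, 5, 7, 23},  # brow lowerer + lid raiser/tightener + lip tightener
}

def infer_emotion(active_aus: set[int]) -> str:
    """Return the prototype whose required AUs are best covered by the observed AUs."""
    def coverage(emotion: str) -> float:
        required = PROTOTYPES[emotion]
        return len(required & active_aus) / len(required)
    best = max(PROTOTYPES, key=coverage)
    return best if coverage(best) > 0.5 else "unclear"

print(infer_emotion({6, 12}))         # -> happiness
print(infer_emotion({1, 4, 15, 17}))  # -> sadness
print(infer_emotion({2}))             # -> unclear
```

Recognizing a facial configuration is, of course, not the same as understanding what a person feels; it is one input among many.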

Peter van der Putten: I think it's important, like you say, to operationalize that, ultimately, into doing something. You could say empathy is not a noun, it's a verb: you need to translate all that empathy and those higher-level values into doing it, towards certain outcomes or ends that you have. I very much agree with that. So, when I talk about these values, I don't just mean sitting down for very vague, high-level vision sessions and then putting the result in a PowerPoint in a drawer and doing nothing with it. I really mean operationalizing them into how these AIs behave, so that, at least to some degree, we balance the needs of the customer and the various stakeholders, and we follow the vision that we're supposed to have as a company. So, in that sense, I think the principle, for lack of a better term, that empathy is not just a noun but also a verb, that you need to do it, is a good one as well.

Lidewij Niezink: And so, maybe we’re not holding AI up to higher standards than we hold ourselves, but maybe we could use AI to hold ourselves up to our own standards, right?

Peter van der Putten: Yeah, it's slightly off topic, but I have another student who's about to graduate, Alexander Cernaris, and we did an exercise where we did this kind of checks-and-balances thing. We took annual reports and used gen-AI to extract what kind of principles a bank writes down in its annual report as its high-level principles. So, first we extract those principles with gen-AI, and then, and here we get a little bit cheeky, we take a whole bunch of marketing publications that the bank has on its website, and we use gen-AI to check those articles against the principles. We built a system that can do that more or less completely automatically: extract the principles, apply them, and check them. And we ran it for three banks. They had overlapping principles, but also different ones. One of the overlapping principles they all had was that they were working towards a sustainable future, and it will be no surprise that that was the least-aligned principle when we actually looked at what these banks talk about. I'm not accusing them of ill intent or anything like that, but it's an example of a creative form of research where we go from the top-level fluffy vision thing down to, okay, at least what are you communicating in your own corporate marketing? Ultimately, you want to take that all the way down into all the actions you take as an enterprise.
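The checks-and-balances exercise Peter describes maps naturally onto a two-step gen-AI pipeline: first extract the stated principles from an annual report, then ask a model to rate each marketing article against each principle. The sketch below is a rough approximation of that idea, not the actual research system; the prompts, model name and 1-to-5 scale are assumptions, and real code would need to handle messy model output far more defensively.

```python
# Two-step sketch: (1) extract stated principles from an annual report,
# (2) rate how well each marketing article aligns with each principle.
# Illustrative only; not the actual research system described in the episode.
import json
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4o-mini"  # assumption

def extract_principles(annual_report_text: str) -> list[str]:
    """Ask the model for the bank's stated high-level principles as JSON."""
    response = client.chat.completions.create(
        model=MODEL,
        messages=[{
            "role": "user",
            "content": ("List the high-level principles this bank commits to in the "
                        "text below, as a JSON array of short strings.\n\n"
                        + annual_report_text),
        }],
    )
    # In practice you would need to handle replies that are not clean JSON.
    return json.loads(response.choices[0].message.content)

def rate_alignment(article_text: str, principle: str) -> int:
    """Ask the model for a 1-5 alignment score of one article against one principle."""
    response = client.chat.completions.create(
        model=MODEL,
        messages=[{
            "role": "user",
            "content": (f"On a scale of 1 (not at all) to 5 (strongly), how well does "
                        f"this article reflect the principle '{principle}'? "
                        f"Reply with a single digit.\n\n{article_text}"),
        }],
    )
    return int(response.choices[0].message.content.strip()[0])

def audit(annual_report: str, articles: list[str]) -> dict[str, float]:
    """Average alignment score per principle across all marketing articles."""
    principles = extract_principles(annual_report)
    return {
        p: sum(rate_alignment(a, p) for a in articles) / len(articles)
        for p in principles
    }
```

Running this for several banks and comparing the per-principle averages is what surfaces the gap Peter mentions between stated vision and actual communication.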

Lidewij Niezink: Very nice. I’d love to read that when that’s written up. Yeah, nice. Yeah.

Minter Dial: Well, since time is not unlimited, I wanted to thank you both for a stimulating little round trip. We could obviously go on for hours more, and I certainly feel there are a few more zones I wanted to delve into, but it sounds like we have convergences in our studies, and that even though we come from very different backgrounds, we could get there. I do feel we need to militate for people to corral around a clear understanding rather than a vague one, because if it's vague, it's going to be very hard to encode into a machine. So, I'm hopefully going to pull together a synopsis of what we've done, and maybe, thanks to Perplexity or some other gen-AI, it'll even turn, in quicker fashion, into some kind of smart production out of what we said. I will be in touch with you both. Any last words, Lidewij, on this topic or anything else you'd like to say?

Lidewij Niezink: Yeah, I just really want to thank you, Minter, for having this conversation, the three of us. I found it really interesting and stimulating. We talked about this before, and maybe I'll just put it out into the world here: we are really looking for people who are willing to put their thoughts, their minds and their resources into envisioning, let's start with, a gen-AI application that has some of these empathy principles in it, to see if there's something we can build that really helps us human beings empathize more with each other in productive ways, whatever those turn out to be. Very much understanding that the nuclear bomb was a great invention with disastrous consequences; the same could happen with empathy. So, empathy has to be understood as not morally good in and of itself. And yet I do think it's very important that we keep on building the empathic skills that we really need in our human interactions, with the right intentions. So, we're looking for people who are interested in trying this out, in the most transparent way possible.

Minter Dial: And how can they reach you, Lidewij?

Lidewij Niezink: They can reach me through our website, which is empathicintervision.com, and they can reach me through email and through LinkedIn. I'm very active on LinkedIn, so you're very welcome to find me there. And we'll drop an email address somewhere in the notes.

Minter Dial: Indeed, I’ll put all that in the show notes. Hey, Peter, your turn.

Peter van der Putten: Well, I'm for sure maybe the first person who would like to accept that offer, because that sounds really interesting. And I think in general, even with all these kinds of high-tech things like AI, it's very useful to start with the human perspective and then branch off from there; hopefully this conversation was also a proof point of that. That's very much aligned with how I think about applying it in the business world. I did write a short manifesto about responsible ways to use AI, so, if I can do a short commercial, if you go to Pega.ai you'll find that manifesto. And if people want to know more about these different lines of research, just Google me and you'll find the research we're working on at the university; I'm always open to collaborations there as well.

Minter Dial: Yeah. Both of you are not just friendly, you're Google-friendly; both of you have sufficiently findable names. Hey, listen, many thanks. I'll collect those show-notes elements for you to make sure everyone can grab you wherever and however, so we can progress this little world that we're living on. Hey, listen, many thanks. Veel dank, I think it's something like that, right?

Lidewij Niezink: Something like it, yeah.

Minter Dial: Thanks, guys.

Lidewij Niezink: Thank you.

Peter van der Putten: Thank you.

Minter Dial

Minter Dial is an international professional speaker, author & consultant on Leadership, Branding and Transformation. After a successful international career at L’Oréal, Minter Dial returned to his entrepreneurial roots and has spent the last twelve years helping senior management teams and Boards to adapt to the new exigencies of the digitally enhanced marketplace. He has worked with world-class organisations to help activate their brand strategies, and figure out how best to integrate new technologies, digital tools, devices and platforms. Above all, Minter works to catalyse a change in mindset and dial up transformation. Minter received his BA in Trilingual Literature from Yale University (1987) and gained his MBA at INSEAD, Fontainebleau (1993). He’s author of four award-winning books, including Heartificial Empathy, Putting Heart into Business and Artificial Intelligence (2nd edition) (2023); You Lead, How Being Yourself Makes You A Better Leader (Kogan Page 2021); co-author of Futureproof, How To Get Your Business Ready For The Next Disruption (Pearson 2017); and author of The Last Ring Home (Myndset Press 2016), a book and documentary film, both of which have won awards and critical acclaim.

👉🏼 It’s easy to inquire about booking Minter Dial here.
