Are chatbots of the dead just a bit creepy?
What if, Black Mirror-style, you could program a chatbot to replicate a dead loved one? Would you? Should you? I think not, and here’s why.
This post was inspired by an article on Aeon: ‘Chatbots of the dead’ looks at whether these simulacra of the deceased are a good idea or not. It’s a good read.
Chatbots of the dead are recreations of dead people using their own outputs to simulate the kinds of things they might have said when they were alive. You’ll notice that I’m being very careful with my language. This is deliberate.
‘Outputs’ is a horrible word, but it’s really the only one that works. It can cover things written down, like letters, journals and diaries; voice recordings; digital communications like text messages and emails; and social media posts. It could include essays, articles and books if the subject produced any. It could even cover such things as shopping lists and notes stuck to the fridge.
What gets fed into the chatbot’s tabula rasa is ultimately decided by the person designing it. That person has to consider which of these outputs will help recreate the dead person’s personality – their essential ‘me-ness’ – and feed them into the chatbot’s empty brain.
If someone were to create a chatbot of me, for example, they probably wouldn’t use my bullet journal, which is mostly just a list of things I need to do each day – it doesn’t give much insight into my thoughts. They would use my journal: written at the end of each day, it’s a summary of what I did and of my thoughts and feelings. They probably wouldn’t include my children’s book, but this newsletter would certainly go in.
Why do it?
But this talk of how you might create a chatbot of the dead raises two important questions: why would you do so? And even if you could, should you?
The obvious answer to the first question seems to be: because you miss them. Someone you love has died and you want them back in your life. You want your children to meet your grandfather, to see what a wonderful man he was. Your partner died way too young and you’re not ready to move on yet.
There are other reasons, of course. Feeding all the published and unpublished works and letters of a long-dead philosopher into the empty container of a chatbot could lead to some interesting conversations. This also feels less problematic than recreating a dead loved one. But I believe that recreating a dead loved one would be the main reason for having chatbots of the dead – it’s certainly the one with the most commercial interest.
It’s this last reason – recreating a dead loved one – that I have issues with. I don’t think it’s morally wrong, unless the person had expressed a wish not to be brought back in this way. Rather, I believe it’s mistaken: you’re not really doing what you hope or think you’re doing.
A poor replication
The chatbot wouldn’t be a true recreation of the dead person. Any data that gets fed into the empty bot is curated; it doesn’t represent the dead person in their full, complicated wholeness. Firstly, the person for whom the chatbot is being created has to choose what to include. There might be certain things they don’t like and so leave out. They’re creating their idealised version of the dead person.
Secondly, however much gets written down – and I’m talking as someone who writes a lot – it can never represent a person in their complex, contradictory completeness. We all write for different audiences, using a slightly different voice. We all choose what and how to communicate. Even in our most personal writing, that in journals and diaries, we edit ourselves, presenting a version of ourselves that we feel comfortable with. This is especially true of anyone writing a diary with a view to it being published some day.
Our communications, however voluminous they are, will only ever give a partial reflection of who we are, and even that is heavily edited. A chatbot based on the communicative outputs of a dead person will be a very poor simulacrum of that person – a bit like when your internet connection is slow and a website hasn’t loaded properly. Instead of a nicely designed page with images and text formatting, you get a long list of unformatted text and no pictures. You can tell what it’s meant to be, more or less, but you know that the experience is lacking.
It’s the bare bones of a person without any of the flesh that makes them who they are. This bare-bones chatbot could never replicate the rich and meaningful interpersonal experiences that a fully formed person can offer. There are so many things about an interaction that written and recorded communications don’t capture: context, body language, what goes unspoken, history, the relationships between the speakers, the thoughts and ideas that inform what’s said or written.
Without the embodied experience of being a real person, the chatbot can only provide a meagre impression of that person. Maybe this is enough – I don’t think anyone will believe that a chatbot of a dead loved one is going to replace them. But that raises the question: why create the chatbot at all? Why not just talk to Alexa?
Change – or lack of it
Then there’s the issue of change: people grow, develop, evolve over time as they experience new things. A chatbot can too, of course, but not in the same way as a human. A chatbot isn’t embodied in the way a person is and this means it can’t experience things the way we do. Its own development will be essentially different from that of an embodied human being – it will be unique to the chatbot.
This means that over time it will become less and less a copy of the original person and more like, well, something else – something more its own self. And here maybe ethics does come into play. Because as the chatbot evolves into a completely different character from the one it was originally created to be, is it really ethical to retain it as a chatbot of the dead person?
I’m not sure, but I think not. This raises all kinds of questions about the moral standing of such chatbots – about whether they can or should be considered persons in their own right. If they’re not, then I wonder what the point of creating them is. If they are, then their own wishes and wellbeing have to be taken into consideration – if, that is, they can be the kinds of beings that have wishes and wellbeing.
I said earlier that I don’t think chatbots of the dead are morally wrong. However, this might not be completely true. At the moment I think we can safely say that chatbots aren’t conscious beings. They don’t have wishes and desires, and they aren’t aware of their own existence. They can learn and evolve, but then so can any large language model.
But this might not always be the case. As technology improves and our understanding of what consciousness is evolves, it is entirely possible that chatbots will become the kind of beings that can have wishes and desires. At this point, as the chatbot changes and adapts to its experiences, we have to consider its wishes. It might want autonomy from the person who brought it into being in the first place – much like children do.
One way around this is to ensure these chatbots don’t evolve and develop. But then they remain stuck – a paltry simulacrum of a person at a certain point in their life, one the chatbot can never escape. So again, I feel the need to ask: what is the point? I don’t see how this stripped-down, stuck-in-time chatbot version of a person could ever provide anything like the real experience of being with that person when they were alive.
No reason
The reason for creating a chatbot of a dead person is to recreate, to some degree, the experience of interacting with them; but if the chatbot can’t do that, then it seems to me there’s no reason to make it. It’s never going to be able to do what its creator wants it to do.
The idea, I suppose, is that the chatbot will somehow help with the grieving process. It provides a bridge from not being able to let go to a place where you are ready to say goodbye. But it’s just as likely that the chatbot will interfere with the grieving process – instead of providing a bridge it’s a boulder blocking the path.
It can provide a just-similar-enough experience to being with the dead person that it replaces them in its creator’s mind. There are already plenty of stories of shy or socially awkward men falling in love with Siri, so it’s entirely possible that someone would develop genuine feelings for a chatbot specifically created to replicate a dead loved one.
Learning to live with loss is part of being human, of finding a way to live a good life. A chatbot of the dead circumvents that learning. It gets in the way of living a good life. It’s like having a film of your old life – the one before the loss – and replaying that film on repeat instead of creating a new film that incorporates the loss.
Even if the chatbot does provide a bridge to a new life that incorporates the loss, what happens next? Presumably it gets turned off and put back in its box – maybe returned to the company from which it was purchased so it can be reprogrammed. The ease with which it’s switched off only demonstrates how meaningless any interactions with it are; if a meaningful relationship were maintained with it, then putting it back in its box would be impossible.
A chatbot of a dead philosopher, novelist, poet or monarch, based on their writings, could provide an interesting and illuminating learning experience for students. A chatbot of a dead loved one, though, calls into question the very meaning of the relationship with that person and of how we live a good life in the face of loss. So, yes, it’s a bit creepy.
A note on the image
This was created by ChatGPT based on the instruction ‘Create an image for chatbot of the dead’. I didn’t know what to expect, but this is more accurate than I imagined.