There is an ongoing societal debate about AIs and their level of functioning. It takes many forms, depending on whether we are talking about current AIs or hypothetical ones, and on which particular facet of the mind we are interested in at the moment. The most prominent recent example is the question of whether Google's LaMDA is sentient, and the consensus in that particular case seems to be that it lacks the properties that would get us to call it sentient.
These discussions all seem to me to have the same form, which in this blog post I argue is misguided. They seem to be about trying to establish the capabilities of a particular AI in order to determine whether it meets the criteria for intelligence / consciousness / sentience / emotions / etc. We have entire ontologies of capabilities we might look at, such as the vibrant debate about whether we should focus on intensional vs. extensional attributes.
Do submarines swim
But this seems pretty wrong to me. There is a common thought-provoking question of the form "do submarines swim?". That question gets at something very different: it's not difficult because we are unsure of the capabilities of submarines, but because we realize we are unsure what we want the word "swim" to refer to. We could easily define "swim" to include submarines or not, and it becomes a question of what our goal is in making such a determination.
I believe this is a better way to think about the "Do AIs think" debate. It's clear that even if we had a perfect understanding of how AIs operate, there would still be the open question of how we want to define the word "think" (or "sentience", etc.). We treat these questions like we have a clear idea of what we are even asking, but I don't think there's anything close to agreement about what "consciousness" is, or even whether there is such a thing as a "self" or free will.
And again, it seems pretty straightforward to define consciousness in a way that includes sufficiently powerful AIs, and just as simple to define it in a way that doesn't. It comes down to which definition better answers other questions, and in this case I believe the underlying question we are trying to answer is loosely "at what point do beings have enough similarity to humans that they deserve to be treated like a human?"
But we are very much in a heated societal battle on another front of this topic: at what point a fetus becomes a human. Like submarines swimming, this doesn't seem to be a debate about the exact capabilities of fetuses at a certain age, but about what qualities we want to hold as sufficient to be human. If we don't know when humans become "human", what are we hoping to get from arguing the same question about AIs? We don't even know whether we should treat animals as having human-like subjective experiences, and that has been debated probably for as long as we have been able to debate anything.
What is blue
And it's not as if we have answers to these questions in far more prosaic cases, even when limited to humans.
Famously, we have no idea if humans perceive colors in the same way. We can scientifically define the color blue, and we can measure if humans can distinguish the color blue versus other colors, but we have no way of knowing if our subjective experiences of blue are the same. I’m not sure this is even a question that could be answered with more advanced technology, since we currently have no idea how to bridge the gap between mental activity and subjective experience (the hard problem of consciousness).
If we have no idea if humans see blue the same way, how can we possibly answer whether AIs and humans feel pain in the same way?
Conclusion
All of this leads me to believe that asking whether AIs think / are conscious / are sentient / have emotions / etc. is a bit misguided: I think we can give up now on answering those questions as posed. Instead, I think this whole thing is a debate about which beings deserve to be given rights, and how tightly we want to define "humanity", since we can always draw that definition to include or exclude whichever groups we choose.
This seems like more of a philosophical or even religious question than a question about AI. My own views are influenced by Buddhism, which leads me to think that no particular line can be justified, so we should try to treat all beings well. But this is a deep question about values, not a question about the capabilities of particular neural network architectures.