The wrong question about AI

The progress of AI is interesting and important to follow, but current discussions are too focused on rising benchmark scores and on whether machines can ever become smarter than humans (superintelligence). Counter-arguments to the promise of so-called "superintelligent" computer programs typically take the form of "humans are still better than AI at this" or "AI can never replace humans at that", but these miss the point. Asking whether AI can outperform humans is the wrong question. The real question is: what are appropriate uses of AI? Or simply, what do we actually want to automate?

One of the main reasons for developing AI technology is to increase efficiency through automation: we want the computer to solve certain tasks on its own. The problem we now face is that automation technology is so advanced that there seems to be no limit to what we can automate. Given enough time, money, and resources, we can build machines that are, or at least appear to be, more efficient than humans at virtually any task, especially if we evaluate them only on task efficiency and ignore any ripple effects. Of course, what counts as efficient depends entirely on the metrics we choose, but any concretely defined metric can, in principle, be optimised for when training an AI model.

From a societal point of view, it is pointless to argue about whether AI or humans are better at a certain task without first asking whether the task is suitable for AI at all. Joseph Weizenbaum raised this issue as early as 1976 in his book Computer Power and Human Reason, and it has only become more relevant since. Generative AI models' ability to process and generate text leaves practically no area of society out of reach. This is exactly why we need to worry not only about the performance and reliability of AI systems, but also about how we actually want to use them.

Consider, for example, AI chatbots for keeping lonely elderly people company. This suggestion comes not only from outside the target group but also from elders themselves. With limited personnel and resources for elderly care, why not offer an AI companion that can easily be personalised and is always available? There is no doubt that current AI chatbots can generate human-like, empathic conversation, but do we really want to use computers as company for our elders?

The question is hard because it forces us to examine our shared values. Does it matter if we replace caregivers with machines? If studies show that it works (according to certain metrics) and it is difficult to recruit enough people, then why not?

While a chatbot might be better than nothing, we can certainly do much better. Interpersonal relationships are a cornerstone of both society and our individual lives, and of all the things we might automate, they should be far down the list. Human loneliness is one area where political solutions, such as investing in care workers and community programs, must take precedence over technological ones. Similar dilemmas arise when AI systems are used as therapists, companions, or other direct replacements for human relationships.

While Weizenbaum had the luxury of discussing hypothetical scenarios, we face concrete choices. When new technologies appear, we can always choose whether or not to use them. That choice depends not only on the capabilities of the technology, but also on what we value as a society. And once made, the choice is often hard to reverse.

Just because we can automate human connection doesn’t mean we should. Let’s use machine intelligence to automate machines, not humans.