“The hand mill gives you society with the feudal lord; the steam mill society with the industrial capitalist,” Karl Marx once said. And he was right. We have seen over and over again throughout history how technological inventions determine the dominant mode of production and with it the type of political authority present in a society.
So what will artificial intelligence give us? Who will capitalise on this new technology, which is not only becoming a dominant productive force in our societies (just like the hand mill and the steam mill once were) but, as we keep reading in the news, also appears to be “fast escaping our control”?
Could AI take on a life of its own, like so many seem to believe it will, and single-handedly decide the course of our history? Or will it end up as yet another technological invention that serves a particular agenda and benefits a certain subset of humans?
Recently, examples of hyperrealistic, AI-generated content, such as an “interview” with former Formula One world champion Michael Schumacher, who has not been able to talk to the press since a devastating ski accident in 2013; “photographs” showing former President Donald Trump being arrested in New York; and seemingly authentic student essays “written” by OpenAI’s famous chatbot ChatGPT, have raised serious concerns among intellectuals, politicians and academics about the dangers this new technology may pose to our societies.
In March, such concerns led Apple co-founder Steve Wozniak, AI heavyweight Yoshua Bengio and Tesla/Twitter CEO Elon Musk, among many others, to sign an open letter accusing AI labs of being “locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control” and calling on AI developers to pause their work. More recently, Geoffrey Hinton – known as one of the three “godfathers of AI” – quit Google “to speak freely about the dangers of AI” and said he, at least in part, regrets his contributions to the field.
We accept that AI – like all era-defining technology – comes with considerable downsides and dangers, but contrary to Wozniak, Bengio, Hinton and others, we do not believe that it could determine the course of history on its own, without any input or guidance from humanity. We do not share such concerns because we know that, just as is the case with all our other technological devices and systems, our political, social and cultural agendas are also built into AI technologies. As philosopher Donna Haraway explained, “Technology is not neutral. We’re inside of what we make, and it’s inside of us.”
What is being insistently communicated to the public today is that the conscious machine is (almost) here, that our everyday world will soon resemble the ones depicted in movies like 2001: A Space Odyssey, Blade Runner and The Matrix.
This is a false narrative. While we are undoubtedly building ever more capable computers and calculators, there is no indication that we have created – or are anywhere close to creating – a digital mind that can actually “think”.
The danger of AI is not that it is an impossible-to-control digital intelligence that could destroy our sense of self and truth through the “fake” images, essays, news and histories it generates. The danger is that this undeniably monumental invention appears to be basing all its decisions and actions on the same destructive and dangerous values that drive predatory capitalism.