On artificial intelligence’s revolution: learning but not as we know it


Bosses don’t often play down their products. Sam Altman, the CEO of artificial intelligence company OpenAI, did just that when people went gaga over his company’s latest software: the Generative Pre-trained Transformer 3 (GPT-3). For some, GPT-3 represented a moment in which one scientific era ends and another is born. Mr Altman rightly lowered expectations. “The GPT-3 hype is way too much,” he tweeted last month. “It’s impressive … but it still has serious weaknesses and sometimes makes very silly mistakes.”

OpenAI’s software is spookily good at playing human, which explains the hoopla. Whether penning poetry, dabbling in philosophy or knocking out comedy scripts, the general agreement is that GPT-3 is probably the best non-human writer ever. Given a sentence and asked to write another like it, the software performs the task flawlessly. But this is, at heart, a souped-up version of the auto-complete function that most email users are familiar with.

GPT-3 stands out because it has been trained on more information – about 45TB worth – than anything else. Because the software can remember every combination of words it has read, it can work out – through lightning-fast trial and error across its 175bn settings – where thoughts are likely to go. Remarkably, it can transfer its skills: trained as a language translator, GPT-3 worked out it could convert English to JavaScript as easily as it does English to French. It’s learning, but not as we know it.
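
To make the “souped-up auto-complete” idea concrete, here is a minimal sketch in Python of the same job at toy scale: count which word tends to follow which in a small corpus, then extend a prompt one most-likely word at a time. The tiny corpus and the greedy one-word-at-a-time completion are illustrative assumptions; GPT-3 replaces the raw counts with 175bn learned settings, but the underlying task, predicting the next word from the words so far, is the same.

```python
from collections import Counter, defaultdict

# Toy "auto-complete": learn, from a tiny corpus, which word most often
# follows which, then greedily extend a prompt one word at a time.
# GPT-3 performs the same next-word task, but with 175bn learned
# settings instead of raw bigram counts.

corpus = (
    "the cat sat on the mat . the cat ate the fish . "
    "the dog sat on the rug ."
).split()

# For every word, count the words seen immediately after it.
successors = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    successors[word][nxt] += 1

def complete(prompt: str, max_words: int = 5) -> str:
    """Extend the prompt with the most likely next word, repeatedly."""
    words = prompt.split()
    for _ in range(max_words):
        options = successors.get(words[-1])
        if not options:  # nothing ever followed this word in the corpus
            break
        words.append(options.most_common(1)[0][0])
    return " ".join(words)

print(complete("the dog"))  # continues with whatever the counts favour
```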


But this is not intelligence or creativity. GPT-3 doesn’t know what it is doing; it cannot say how or why it has decided to complete sentences; it has no grasp of human experience; and it cannot tell whether it is making sense or nonsense. What GPT-3 represents is the triumph of one scientific paradigm over another. Once, machines were taught to think like humans; they struggled to beat chess grandmasters. Then they began to be trained with data to, as one observer put it, “discover like we can” rather than “contain what we have discovered”. Grandmasters started getting beaten. These days they cannot win.

The reason is Moore’s law: the exponentially falling cost of number-crunching. AI’s “bitter lesson” is that the more data a model can consume, and the further it can be scaled up, the better a machine can emulate or surpass humans in quantitative terms. If scale truly is the solution to human-like intelligence, then GPT-3 is still about 1,000 times smaller than the brain’s 100 trillion-plus synapses. Human beings can learn a new task by being shown how to do it only a few times. That ability to learn complex tasks from a handful of examples, or none at all, has so far eluded machines. GPT-3 is no exception.
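
It is worth spelling out what “being shown a task a few times” means for a model like GPT-3: the examples are simply pasted into the text prompt, and the model is asked to continue the pattern; no retraining happens. The sketch below only builds and prints such a few-shot prompt (the task and wording are illustrative assumptions, not OpenAI’s), since the point is the format rather than any particular API.

```python
# Build a "few-shot" prompt: a task description plus a handful of worked
# examples, ending where the model is expected to continue the pattern.
# Nothing is learned in the training sense; the model just completes text.

examples = [
    ("cheese", "fromage"),
    ("good morning", "bonjour"),
    ("thank you", "merci"),
]

prompt = "Translate English to French.\n\n"
for english, french in examples:
    prompt += f"English: {english}\nFrench: {french}\n\n"
prompt += "English: see you tomorrow\nFrench:"  # the model fills this in

print(prompt)
```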

All this raises big questions that seldom get answered. Training GPT-3’s neural nets is costly: a $1bn investment by Microsoft last year was doubtless needed to run and cool GPT-3’s massive “server farms”. The bill for the carbon footprint – training a single large neural net can emit as much carbon as five cars do over their lifetimes – is also coming due.


More fundamental is the question of how a for-profit OpenAI should be regulated. The company initially delayed the release of its earlier GPT-2, with a mere 1.5bn parameters, because it fretted over the implications. It had every reason to be concerned: such AI will emulate the racist and sexist biases of the data it swallows. In an era of deepfakes and fake news, GPT-style devices could become weapons of mass destruction: engaging and swamping political opponents with divisive disinformation. Worried?
