ChatGPT, the chat-based artificial intelligence tool developed by the San Francisco company OpenAI, wowed the world when it was released in November 2022.
In the intervening months, schoolchildren and students have used it to cheat on essays and exams, while shrewd businesspeople have overhauled the way they work to take advantage of its ability to generate content in response to precise prompts. Its use has gone well beyond the average tinkerer who, say, tested how far an AI facsimile of Shakespeare could get in writing an episode of EastEnders.
But that AI revolution pales in comparison with OpenAI’s latest release, GPT-4, which was unleashed on the world on Tuesday (14 March).
GPT-4 supercharges ChatGPT: it can handle and understand images it is shown, as well as text. It is just the “latest milestone”, OpenAI says, in its development, but it is a powerful one. GPT-4 “exhibits human-level performance on various professional and academic benchmarks”, the company claims.
The headline stats are impressive. Its results on the bar exam, which grants entry into the American legal profession, are in the top 10 per cent of test-takers. Its capabilities are far beyond those of ChatGPT – which already astounded the general public. It’s even able to create one-click lawsuits.
While that’s interesting for would-be lawyers, of more interest to you and me are GPT-4’s softer skills. It shows prowess, when combined with other AI tools, in generating music that wouldn’t be out of place on a Spotify playlist. It transformed a simple sketch into a working website in an instant. It can even create a version of the video game Pong within 60 seconds. Others are testing it to try to develop a matchmaking service more powerful than Tinder. If there was any doubt before, GPT-4 blows it away: we’re deep into the AI era.
“It’s exciting and worrying how swiftly generative AI is being folded into search engines, apps and other systems – systems that reach billions of people every day,” says Mark Surman, executive director of the Mozilla Foundation. “The release of GPT-4 adds to the excitement – and the worry. We need to start asking: can we trust what we’re building? Who will create the guardrails?”
Indeed, one major concern around GPT-4 is contained in the company’s academic paper accompanying the model’s release. Because of “the competitive landscape and the safety implications of large-scale models like GPT-4”, OpenAI won’t be sharing details of how GPT-4 works – its architecture, its size, or the data it was trained on.
Our lives can be made easier with the rise of artificial intelligence tools, but a lack of transparency into how GPT-4 works means we’re unable to identify the ways in which we’re being played by the system. AI models are only ever as good as the information that trains them. That data is gleaned from the internet, which often reflects the worst elements of our society – our foibles and our bigotry, our political fringes and cultural divides. Minority voices are played down; minority viewpoints are quashed.
This was a minor concern when ChatGPT was a fun toy to use. It became more worrying when an updated version of GPT-3.5 was integrated into Bing, Microsoft’s search engine. And now that GPT-4 is being unleashed on the world, and presented as the solution to many of the difficulties in our lives, it’s a pressing issue that needs to be fixed.