Science is supposed to be the pursuit of truth, but there is something decidedly unscientific, and possibly even dangerous, about the commercialization of artificial intelligence (AI) over the past several months.
The era of ‘move fast and break things’, the longtime mantra of Silicon Valley giants, now faces a severe challenge from AI technology.
Last year’s launch of OpenAI’s ChatGPT, which became the fastest-growing app in history when it hit 100 million users in only two months, showcased the technology’s lucrative potential and spurred companies into action.
However, leading AI experts have urged companies to take a cautious approach and warned about the risks and dangers posed by this ground-breaking technology.
Tech firms, including Google and Microsoft, are pouring billions into AI research, with Alphabet adding $115 billion in value after unveiling new AI tools. Amazon has announced the launch of its own in-house AI model, known as Titan.
But where is this race leading?
Elon Musk, the Tesla CEO and former chief of Twitter, lamented that he had made mistakes in forming the company that became OpenAI, the creator of the game-changing chatbot ChatGPT.
He also expressed regret over ChatGPT, calling himself a ‘huge idiot’ for letting go of OpenAI.
Musk thinks the world is woefully unprepared for the impact of AI. The technology will hit people “like an asteroid”, he said, revealing that he had used his only one-on-one meeting with then-President Barack Obama to push for AI regulation. He has proposed a six-month pause on artificial intelligence development to allow for better planning and management.
Even though Bill Gates has said he is “scared” about the technology falling into the wrong hands, he rejected the Musk-backed plan to pause AI research, saying the technology may already be on a runaway train.
On May 16, OpenAI CEO Sam Altman, during a Senate panel hearing, urged US lawmakers to regulate AI, describing the technology’s current boom as a potential “printing press moment” but one that requires safeguards. “We think that regulatory intervention by governments will be critical to mitigate the risks of increasingly powerful models,” he said in his opening remarks before a Senate Judiciary subcommittee.
He said the potential for AI to be used to manipulate voters and target disinformation represents his “areas of greatest concern,” especially because “we’re going to face an election next year and these models are getting better.”
Sen. Richard Blumenthal, who opened the hearing with an AI-generated recording mimicking his own voice, explained that the software could just as easily have produced “an endorsement of Ukraine’s surrendering or Vladimir Putin’s leadership.” That, he said, “would’ve been really frightening.”
The new AI tools, developed by several tech firms in recent months, have met with backlash from critics for their potential to disrupt millions of jobs, spread misinformation and perpetuate biases.
Former diplomat Henry Kissinger, 99, says he wants to call attention to the dangers of AI the same way he did for nuclear weapons, warning that it is a ‘totally new problem’.
Author Yuval Noah Harari argues society needs time to get artificial intelligence right.
Geoffrey Hinton, known as the “godfather of artificial intelligence”, has decided to blow the whistle on the technology, raising concerns over its use. The 75-year-old is particularly concerned that these tools could be trained to sway elections and even to wage wars. He recently quit a high-profile job at Google specifically so he could share his concern that unchecked AI development could pose a danger to humanity.
Hinton has highlighted four possible dangers in the coming years: military applications, misinformation and disinformation, job losses and the rise of dictators. His concerns are shared by the Center for AI Safety, an organization dedicated to reducing societal-scale risks from artificial intelligence.
What causes alarm
Our human brains can solve calculus equations, drive cars and keep track of the characters in ‘Succession’, thanks to their native talent for organizing and storing information and reasoning out solutions to thorny problems. The roughly 86 billion neurons packed into our skulls — and, more importantly, the 100 trillion connections those neurons forge among themselves — make that possible.
By contrast, the technology underlying ChatGPT features between 500 billion and one trillion connections. GPT-4, the latest AI model from OpenAI, knows “hundreds of times more” than any single human.
Hinton says it may have a “much better learning algorithm” than we do, making it more efficient at cognitive tasks.
He suggests that a global agreement similar to the 1997 Chemical Weapons Convention might be a good first step toward establishing international rules against weaponized AI.
In March, more than 1,000 researchers and technologists signed a letter calling for a six-month pause on AI development because, they said, it poses “profound risks to society and humanity.”
What would smarter-than-human AI systems do? Malicious individuals, groups or nation-states might simply co-opt them to further their own ends. What is not clear is how anyone would stop a power like Russia from using AI technology to dominate its neighbors or its own citizens. Hinton says AI chatbots, for instance, could become the future version of the election misinformation once spread via Facebook and other social media platforms.
And that might just be the beginning, Hinton has said. “Don’t think for a moment that Putin wouldn’t make hyper-intelligent robots with the goal of killing Ukrainians.”
“Humans are more important than money,” says Yoshua Bengio, one of the pioneers of AI technology. He says he feels “lost” over the direction AI is headed in.
Humanity is now at the mercy of a vast and uncaring universe. As I write this, I am reminded of Byron’s terrible tale of apocalypse and despair in his poem “Darkness”.