Elon Musk wants to pause 'dangerous' A.I. development. Bill Gates disagrees—and he's not the only one
An open letter calling for a development pause on A.I. systems like ChatGPT has more than 13,500 signatures — but Bill Gates and A.I. developers are pushing back.
If you've heard a lot of pro-A.I. chatter in recent days, you're probably not alone.
A.I. developers, prominent A.I. ethicists and even Microsoft co-founder Bill Gates have spent the past week defending their work. That's in response to an open letter published last week by the Future of Life Institute, signed by Tesla CEO Elon Musk and Apple co-founder Steve Wozniak, calling for a six-month halt to work on A.I. systems that can compete with human-level intelligence.
The letter, which now has more than 13,500 signatures, expressed fear that the "dangerous race" to develop programs like OpenAI's ChatGPT, Microsoft's Bing AI chatbot and Alphabet's Bard could have negative consequences if left unchecked, from widespread disinformation to the ceding of human jobs to machines.
But large swaths of the tech industry, including at least one of its biggest luminaries, are pushing back.
"I don't think asking one particular group to pause solves the challenges," Gates told Reuters on Monday. A pause would be difficult to enforce across a global industry, Gates added — though he agreed that the industry needs more research to "identify the tricky areas."
That's what makes the debate interesting, experts say: The open letter may cite some legitimate concerns, but its proposed solution seems impossible to achieve.
Here's why, and what could happen next — from government regulations to any potential robot uprising.
What are Musk and Wozniak concerned about?
The open letter's concerns are relatively straightforward: "Recent months have seen A.I. labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one — not even their creators — can understand, predict, or reliably control."
A.I. systems often come with programming biases and potential privacy issues. They can widely spread misinformation, especially when used maliciously.
And it's easy to imagine companies trying to save money by replacing human jobs — from personal assistants to customer service representatives — with A.I. language systems.
Italy has already temporarily banned ChatGPT over privacy issues stemming from an OpenAI data breach. The U.K. government published regulation recommendations last week, and the European Consumer Organisation called on lawmakers across Europe to ramp up regulations, too.
In the U.S., some members of Congress have called for new laws to regulate A.I. technology. Last month, the Federal Trade Commission issued guidance for businesses developing such chatbots, implying that the federal government is keeping a close eye on A.I. systems that can be used by fraudsters.
And multiple state privacy laws passed last year aim to force companies to disclose when and how their A.I. products work, and give customers a chance to opt out of providing personal data for A.I.-automated decisions.
Those laws are currently active in California, Connecticut, Colorado, Utah and Virginia.
What do A.I. developers say?
At least one A.I. safety and research company isn't worried yet: Current technologies don't "pose an imminent concern," San Francisco-based Anthropic wrote in a blog post last month.
Anthropic, which received a $400 million investment from Alphabet in February, does have its own A.I. chatbot. It noted in its blog post that future A.I. systems could become "much more powerful" over the next decade, and building guardrails now could "help reduce risks" down the road.
The problem: Nobody's quite sure what those guardrails could or should look like, Anthropic wrote.
The open letter's ability to prompt conversation around the topic is useful, a company spokesperson tells CNBC Make It. The spokesperson didn't specify whether Anthropic would support a six-month pause.
In a Wednesday tweet, OpenAI CEO Sam Altman acknowledged that "an effective global regulatory framework including democratic governance" and "sufficient coordination" among leading artificial general intelligence (AGI) companies could help.
But Altman, whose Microsoft-funded company makes ChatGPT and helped develop Bing's A.I. chatbot, didn't specify what those policies might entail, or respond to CNBC Make It's request for comment on the open letter.
Some researchers raise another issue: Pausing research could stifle progress in a fast-moving industry, and allow authoritarian countries developing their own A.I. systems to get ahead.
Highlighting A.I.'s potential threats could encourage bad actors to embrace the technology for nefarious purposes, says Richard Socher, an A.I. researcher and CEO of A.I.-powered search engine startup You.com.
Exaggerating the immediacy of those threats also feeds unnecessary hysteria around the topic, Socher says. The open letter's proposals are "impossible to enforce, and it tackles the problem on the wrong level," he adds.
What happens now?
The muted response to the open letter from A.I. developers suggests that tech giants and startups alike are unlikely to voluntarily halt their work.
The letter's call for increased government regulation appears more likely, especially since lawmakers in the U.S. and Europe are already pushing for transparency from A.I. developers.
In the U.S., the FTC could also establish rules requiring A.I. developers to only train new systems with data sets that don't include misinformation or implicit bias, and to increase testing of those products before and after they're released to the public, according to a December advisory from law firm Alston & Bird.
Such efforts need to be in place before the tech advances any further, says Stuart Russell, a University of California, Berkeley computer scientist and leading A.I. researcher who co-signed the open letter.
A pause could also give tech companies more time to prove that their advanced A.I. systems don't "present an undue risk," Russell told CNN on Saturday.
Both sides do seem to agree on one thing: The worst-case scenarios of rapid A.I. development are worth preventing. In the short term, that means providing A.I. product users with transparency, and protecting them from scammers.
In the long term, that could mean keeping A.I. systems from surpassing human-level intelligence, and maintaining the ability to control them effectively.
"Once you start to make machines that are rivaling and surpassing humans with intelligence, it's going to be very difficult for us to survive," Gates told the BBC back in 2015. "It's just an inevitability."