AI’s Role in the 2024 U.S. elections: benefits and concerns
As the 2024 U.S. election season draws closer, apprehension grows about the possible influence of artificial intelligence (AI) on the democratic process. Some worry that AI might inundate the internet with deepfakes and misinformation, leading to a bleak future where public opinion can be easily manipulated. Nevertheless, two veteran political campaign operatives believe the worst AI-related fears may not materialize and that the technology could potentially benefit democracy by engaging more voters. Used positively, they argue, AI could support targeted and efficient political messaging that encourages more citizens to participate in the electoral process. AI-driven tools could also play a vital role in combating misinformation, analyzing trends, and detecting manipulative content to preserve the credibility of political discourse.
Deepfake fears and concerns around upcoming elections
Back in 2018, director Jordan Peele teamed up with BuzzFeed to create a video featuring a false speech by then-President Barack Obama to demonstrate the potential risks of using technology to manipulate public opinion. The BuzzFeed News piece collects some of the best-known examples of deepfakes in one place. As the 2024 election season approaches, some pessimists think such situations might become a reality. Experts warn that advancements in deepfake technology may make it increasingly difficult for the public to discern real information from fabrications. As a result, there is growing concern that malicious actors could use this technology to manipulate public discourse and even influence the outcome of elections.
Experts weigh in on the potential dangers of AI-generated content
Oren Etzioni, an AI expert and professor emeritus at the University of Washington, shared his concerns about the possibility of AI-generated content depicting fictional events, such as President Joe Biden being taken to the hospital or bank runs. These AI-generated scenarios could lead to widespread misinformation and severe societal consequences, as people may believe and act upon these false narratives. To address this issue, policymakers and technology developers should collaborate on guidelines and safeguards that can prevent the malicious use of AI to spread fabricated content.
Former Google chairman discusses AI-generated misinformation
Eric Schmidt, former Google chairman, also expressed concerns about AI-generated misinformation during the upcoming election, describing it as one of the most significant short-term threats posed by the technology. “The 2024 elections are going to be a mess because social media is not protecting us from false generated AI,” Schmidt stated. To address these growing concerns, experts and researchers urge tech companies and governments to work together on regulations and safeguards against AI-generated fake news. Proposed strategies include investing in AI tools that detect and remove such content and promoting digital literacy among users, to help ensure a more credible and reliable flow of information.
Featured Image Credit: Produtora Midtrack; Pexels
Deanna Ritchie
Managing Editor at ReadWrite
Deanna is the Managing Editor at ReadWrite. Previously she worked as the Editor in Chief for Startup Grind and has more than 20 years of experience in content management and content development.