Screenwriters, Authors, and Journalists Respond To Threat Of AI

The rapid development of artificial intelligence is presenting new challenges for writers of all kinds, leading screenwriters, authors, and journalists to respond to the threat of AI.

Although generative AI tools are not yet advanced enough to replace humans completely, their unregulated use could negatively affect writers’ livelihoods.

Hollywood Writers Strike Over AI-Generated Content

The Writers Guild of America (WGA) has been in a heated battle with Hollywood studios over using AI to generate content.

Writers felt they were not sharing in studios’ profits, and that AI directly threatened their compensation and working conditions.

The WGA argued that AI should not be used to rewrite or generate literary material without compensating human writers. The guild wanted regulations to protect writers’ livelihoods as AI becomes more advanced.

However, studios have resisted these demands, only offering to discuss technology changes through annual meetings.

Executives appeared to be content to let the strike, which began in May, continue until union members could no longer pay their bills.

The WGGB Drafts Recommendations For AI Developers

The Writers Guild of Great Britain (WGGB) conducted a survey in which 65% of respondents said increased AI use would reduce their income, and 61% said they were worried about being replaced.

In response, the union published recommendations for AI developers. In addition to only using a writer’s work with express permission, AI companies should maintain transparent training logs, label AI content, credit authors, and establish independent AI regulation.

The Guild argued that while AI is not yet sophisticated enough to match human creativity, it still poses risks, such as fewer opportunities and lower pay, that need to be addressed through new regulations.

The WGGB also pointed to a study from OpenAI indicating that writers face more risk from advancing AI technology than most other career paths.

Bestselling Authors Send Open Letter To AI Companies

The Authors Guild has also raised concerns about AI systems like ChatGPT being trained on books without permission or compensation.

As writers, we know the profound impact our work has had on the development of AI models. We are calling on AI leaders to stop using our works without our consent, compensation, or credit.

Please join us in signing our open letter. https://t.co/sVF42eZGJt pic.twitter.com/EigVsGRafn

— The Authors Guild (@AuthorsGuild) July 14, 2023

Over 9,000 authors, including bestselling names like Dan Brown, James Patterson, Margaret Atwood, Suzanne Collins, and Michael Chabon, have signed an open letter asking AI companies to compensate writers fairly for their contributions to AI training data.

In the letter, the Guild argued that AI output trained on copyrighted work is derivative, and that writers should be paid royalties.

We understand that many of the books used to develop AI systems originated from notorious piracy websites. Not only does the recent Supreme Court decision in Warhol v. Goldsmith make clear that the high commerciality of your use argues against fair use, but no court would excuse copying illegally sourced works as fair use.

The overarching fear is that cheap AI-generated books could flood the market, making it difficult for human authors to earn a living.

The introduction of AI threatens to tip the scale to make it even more difficult, if not impossible, for writers—especially young writers and voices from under-represented communities—to earn a living from their profession.

Journalists Uneasy About AI “Assistants”

Finally, Google pitched AI tools to news organizations like The New York Times and The Washington Post to help generate draft news stories.

While such tools could save journalists time, the pitch raised unease about undermining the craft of quality journalism.

There were also worries that AI could spread misinformation if not carefully edited. News organizations wanted to explore these uses responsibly, but tensions remained around protecting the integrity of their reporting.

Google offered a formal statement to address journalists’ concerns about their proposed AI technology.

The statement was not met with enthusiasm.

very on-brand for comms to push a tool for replacing writers with AI during the SAG-AFTRA strike #WGAStrong

when reached for comment, @GoogleAI comms replied they were too busy in celebrity yoga class

— ellafannn (@pwais) July 20, 2023

AI’s Potential For Disruption

The emergence of AI poses complex questions about how to balance innovation and ethics.

While advancements promise to transform industries, concerns remain about fair compensation and training practices.

With livelihoods at stake, writers in various industries could face difficulties adapting, even those who chronicle AI’s rapid advances.

It falls to stakeholders on all sides to navigate a collaborative way forward. There are opportunities to enhance society if the human values of equity and attribution remain integral to AI’s evolution.

By joining forces, understanding diverse views, and mitigating risks, the writing community hopes to retain its integral role while responsibly shaping the future.


Featured image: KieferPix/Shutterstock