Medium welcomes posts written with AI as long as they’re ‘clearly labeled’

Image: Medium

Online publisher Medium says it’s happy for users to post articles written with the help of AI writing tools like ChatGPT — as long as their use of the technology is disclosed.

The blogging platform turned publisher, which lets anyone create their own blog and sells membership access to some exclusive pieces, is the latest organization to establish ground rules for AI-generated text. Medium’s VP of content, Scott Lamb, shared the new policy in a blog post, which states:

“We welcome the responsible use of AI-assistive technology on Medium. To promote transparency, and help set reader expectations, we require that any story created with AI assistance be clearly labeled as such.”

Lamb stresses that this is just an “initial approach” and that the company’s policy may change as the technology does.

“We believe that creating a culture of disclosure, where the shared expectation of good citizenship is that AI-generated content is disclosed, empowers readers,” writes Lamb. “It allows them to choose their own reaction to, and engagement with, this kind of work, and clearly understand whether a story is machine- or human-written.”

The company says that when it finds posts it believes are AI-generated but not disclosed as such, it won’t remove them, but it also won’t distribute them via Medium’s various recommendation features (these include the “For You” section on the homepage, the “Medium Digest” newsletter, and so on).

A number of writers and publications that use Medium have already banned the use of AI writing tools. For example, the site’s second-most-followed blog, Towards Data Science, says it’s “committed to publishing work by human authors only” and will allow AI-generated text only when it’s clearly labeled and used as an example.

Medium does not say it will proactively scan published pieces for AI-generated text. A number of AI-text detectors do exist, but while they can reliably spot unedited AI-generated text, users can customize this output to avoid detection.

AI writing tools pose other problems for publishers, including how easily they can be used to plagiarize. Alex Kantrowitz, a technology journalist who publishes on Substack, wrote in a recent post that another Substack user had used AI to copy, paraphrase, and republish one of his posts. The copied post went viral but has since been deleted; its author admitted to using AI tools to “improve the readability” of the piece.

“Imagine AI remixing the Financial Times’ ten most-read stories of the day — or The Information’s VC coverage — and making the reporting available sans paywall,” writes Kantrowitz.

A number of digital publishers have begun experimenting with AI writing tools. BuzzFeed announced yesterday that it will use the technology to “personalize” and “enhance” content for readers, while CNET recently stopped using AI to generate stories after complaints that the technology was not properly disclosed and after numerous errors were found in the articles.