How to Tell If That Song Was Made With AI

AI music isn't on its way: It's already here.

[Illustration of a drum set with pixelated music notes dancing around it. Credit: Stacey Zhu]


This post is part of Lifehacker’s “Exposing AI” series. We’re exploring six different types of AI-generated media, and highlighting the common quirks, byproducts, and hallmarks that help you tell the difference between artificial and human-created content.

Of all the AI-generated content out there, AI music might be the weirdest. It doesn't feel like it should be possible to ask a computer to produce a full song from nothing, the same way you ask ChatGPT to write you an essay, but it is: Apps like Suno can generate a song for you from a simple prompt, complete with vocals, instrumentals, melodies, and rhythm, some of which are way too convincing. The better this technology gets, the harder it's going to be to spot AI music when you stumble across it.

In fact, it's already pretty hard. Sure, there are examples that are obvious (as good as they are, nobody thinks Plankton is really singing all these covers), but there are plenty of AI-generated songs out there that are all but guaranteed to trick casual listeners. Instrumental electronic music, which already sounds digital, is particularly hard to identify, and it raises a lot of ethical questions, as well as concerns about the future of the music industry.

Let's put that aside, however, and focus on the task at hand: spotting AI music when you hear it in the wild.

How AI music generation works

It sort of seems like magic that you could describe a song in text and have an AI tool generate the full track, vocals and all. But really, it's the product of machine learning.

Like all AI generators, AI music generators are based on models that are trained on enormous amounts of data. These particular models are trained on music samples, learning the relationships between the sounds of different instruments, vocals, and rhythms. Programs that produce AI covers, for example, are trained on a specific artist's voice: You provide enough samples of that artist's voice, and the program will map it onto the vocal track you're trying to replicate. If the model is well trained and you give it enough vocal data, you might just create a convincing AI cover.
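To make that a little more concrete, here's a minimal sketch of the kind of numerical representation these models actually learn from. It uses Python with the librosa audio library and a hypothetical file name; the real tools don't publish their internals, so treat this as an illustration of the general idea, not how Suno works under the hood.

```python
# A toy look at how audio becomes training data. Models don't "hear"
# songs; they learn patterns in numerical representations like mel
# spectrograms. Requires: pip install librosa
import librosa
import numpy as np

# Load 30 seconds of a (hypothetical) example file as a waveform:
# a plain 1-D array of amplitude samples.
y, sr = librosa.load("example_song.mp3", duration=30.0)

# Convert the waveform into a mel spectrogram: a 2-D grid of
# time frames vs. pitch-like frequency bands, with loudness as the value.
mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=128)
mel_db = librosa.power_to_db(mel, ref=np.max)

print(f"Waveform: {y.shape[0]} samples at {sr} Hz")
print(f"Spectrogram grid: {mel_db.shape[0]} bands x {mel_db.shape[1]} frames")
```

A generator learns statistical relationships across millions of grids like this one, which is also why its mistakes tend to be statistical: plausible overall textures, wobbly fine details.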

This is an overly simplified explanation, but it's important to remember that these "new" songs are made possible by a huge dataset of other sounds and songs. Whether the entire song was generated with AI, or just the vocals, the models powering the tech are outputting products based on their previous training. While many of these outputs are impressive, there are consistent quirks you can pick up on, if you're listening for them:

Audio glitches and hiccups

Most generative AI products have some artifacts or inconsistencies that hint at their digital origins. AI music is no different: The audio that AI models generate can sometimes sound very convincing, but if you listen closely, you may hear some oddities here and there.

Take this Suno song, "Ain't Got a Nickel Ain't Got a Dime." It's the kind of AI output that, rightly so, should scare you, as it would likely fool many people into believing it's real. But zero in on the vocals: The entire time, the "singer's" voice is shaky, but not in a way you'd expect from a human. It's modulating, almost like it's being auto-tuned, but it sounds more robotic than digitally altered. Once you get the hang of listening for this sound, you'll hear it pop up in a lot of AI songs. (Though, I begrudgingly admit, this chorus is pretty damn catchy.)

Here's another example, "Stone," which is perhaps even scarier than the last: There are moments in this song, particularly the line "I know it but what am I to do," that sound very realistic. But just after that line, you can hear some of the same modulation issues as above, starting with "oh, my love." Shortly after, there's a weird glitch, where it sounds like the singer and the band all sing and play the wrong note.

Perhaps even more telling, the second "chorus" falls apart. It has the same lyrics up until "I know it but what am I to do," but transitions halfway through to "I know it, me one day," morphing into the lyrics of another verse. In addition, the AI doesn't seem to remember how the original chorus went, so it makes up a new tune. This second attempt is nowhere near as lifelike as the first.
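There's no reliable automated detector for these glitches, but if you'd rather see the wobble than just hear it, one crude trick is to plot a vocal's pitch contour: human singers drift and wobble continuously, while hard-tuned or synthetic vocals often sit on flat, stair-stepped plateaus. Here's a sketch, assuming Python with librosa and a vocal-heavy clip (the file name is hypothetical).

```python
# Rough heuristic only: estimate the vocal pitch over time and see
# how much it moves frame to frame. Requires: pip install librosa
import librosa
import numpy as np

# Hypothetical vocal-heavy clip.
y, sr = librosa.load("suspect_vocal.wav", duration=15.0)

# pYIN estimates the fundamental frequency (pitch) per frame;
# unvoiced frames come back as NaN.
f0, voiced_flag, voiced_prob = librosa.pyin(
    y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C6"), sr=sr
)

# Convert to cents (a perceptual pitch scale) and measure frame-to-frame
# movement. Very little movement can suggest hard pitch correction;
# fast, regular modulation can suggest the robotic warble described above.
voiced = f0[~np.isnan(f0)]
cents = 1200 * np.log2(voiced / np.median(voiced))
print(f"Median frame-to-frame pitch movement: {np.median(np.abs(np.diff(cents))):.1f} cents")
```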

This is one to trust your gut on: There are so many human vocals edited with digital tools that it can be tricky to differentiate these glitches and modulations from real voices. But if something sounds a bit too uncanny valley, it might be a robot singing.

Low quality audio

If you have a modern streaming service and a good pair of headphones, you might be used to extremely high-quality music playback. AI-generated music, on the other hand, frequently has a classic low-bitrate MP3 sound. It's not crisp; instead, it's often fuzzy, tinny, and flat.

You can hear what I mean in most of the samples offered by Soundful: Click through the options, and while you might not think twice about hearing any of them in the background of a YouTube video, notice how none is particularly crisp. Loudly's samples are a bit higher quality, but still suffer from the same effect, as if each track was compressed into a low-quality format. Even many tracks from Suno, which arguably makes the best all-around AI songs right now, sound like they were downloaded over Napster. (Although it seems to be figuring out the bass drop.)

Obviously, there is a genuine lo-fi genre of music, which intentionally aims for a "low-quality" sound. But this is just one clue to look out for when determining whether a track was generated with AI or not.
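If you want a rough, objective read on that "compressed" quality, you can check where a track's high frequencies drop off. Low-bitrate MP3s famously cut most energy above roughly 15 to 16 kHz, and a lot of AI output shows a similar ceiling. A sketch, again assuming Python with librosa and a hypothetical file:

```python
# Estimate the track's effective frequency ceiling via spectral rolloff:
# the frequency below which 99% of the energy sits.
# Requires: pip install librosa
import librosa
import numpy as np

# sr=None keeps the file's native sample rate.
y, sr = librosa.load("suspect_track.mp3", sr=None, duration=30.0)

rolloff = librosa.feature.spectral_rolloff(y=y, sr=sr, roll_percent=0.99)

# A crisp modern master keeps energy near sr/2 (often ~22 kHz);
# a ceiling far below that is consistent with fuzzy, tinny audio.
print(f"Sample rate: {sr} Hz (highest possible frequency: {sr / 2:.0f} Hz)")
print(f"Median 99% energy rolloff: {np.median(rolloff):.0f} Hz")
```

Treat the number as one clue among many: a genuine lo-fi track will "fail" this check on purpose, and a cleanly mastered AI track may pass it.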

A lack of passion

AI might be able to generate vocals, even relatively realistic vocals, but they still aren't perfect. The tech still struggles to produce vocals with realistic variance. You could call it a lack of passion.

Check out this song, "Back To The Start." The voice has a general robotic sound to it, but it also doesn't really go anywhere. Most of the words are sung in the same tone: poppy and light, sure, but a bit subdued, almost bored.

This is one area where AI outputs are improving, however: Suno is producing some vocals with lifelike variance (though not always). Even Plankton has some passion in his voice when belting Chappell Roan.

Another thing to listen for is the singer sounding "out of breath" in AI songs, where many of the words sound like they're not quite fully realized. I'm not sure what causes this phenomenon, but it's something I've noticed from many an AI singer. Just listen to poor Frank Sinatra struggling with every word while covering Dua Lipa.
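There's no formula for passion, but a crude proxy is how much a vocal's loudness actually moves around over the course of a performance. One final sketch, same assumptions as above (Python, librosa, a hypothetical vocal-forward clip):

```python
# Measure frame-by-frame loudness (RMS energy) and how much it varies.
# Requires: pip install librosa
import librosa
import numpy as np

# Hypothetical vocal-forward clip.
y, sr = librosa.load("suspect_vocal.wav", duration=30.0)

# RMS energy per frame, converted to decibels.
rms = librosa.feature.rms(y=y)[0]
rms_db = librosa.amplitude_to_db(rms, ref=np.max)

# Expressive singing swells and pulls back; a flat, "bored" delivery
# shows a narrow loudness range. A heuristic, not a detector.
print(f"Loudness range: {rms_db.max() - rms_db.min():.1f} dB")
print(f"Loudness standard deviation: {rms_db.std():.1f} dB")
```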

Does the song actually make any sense?

As I write about AI, I find myself repeating one particular point: AI doesn't actually "know" anything. These generative models are trained to look for relationships, and their outputs are the results of those relationships they've learned.

As such, these songs are not evidence that AI actually knows how to make music, or how music is supposed to work. The training doesn't make these models good lyricists, or experts at writing melodies. Rather, they produce content based on their previous training, without any critical abilities. These days, that results in an end product that is often convincing on first listen, but if you listen again, or with a discerning ear, things might fall apart. When presented with a song you think might have been made by AI, think about its different elements: Do the lyrics actually make any sense? Does the music flow in a logical way?

You don't have to be a music expert to pick up on these things. Consider the "Stone" example above: Suno seems to have "forgotten" how the initial chorus was supposed to go, and, in fact, ended up messing up the lyrics it established early on. That first verse is also a melodic mess, especially the bizarre "without thinking of you" line. Not to mention, the verse is short, moving to the chorus almost immediately. It's striking how "good" the output is for AI, but that doesn't make it a "good" song.

Who's "singing"?

AI celebrity covers can be impressive, and often sound just like the singer they're impersonating. But the very fact that a song uses a famous voice can be a clue in and of itself: If Taylor Swift were covering Sabrina Carpenter, that would be news, not something confined to a YouTube video or an Instagram reel. If a major artist puts out real music, you'll likely find it on a streaming platform like Apple Music or Spotify, or at least see some verification from the artist that they did indeed record the cover.