Spotify has reinforced its policies against the misuse of generative artificial intelligence (AI) in music with a new spam-filtering system and stricter rules against impersonation, the streaming platform announced Thursday in a statement.
To strengthen protection against impersonation, vocal imitation will be permitted in music on Spotify only when the imitated artist has expressly authorized it.
The platform will also invest more resources in combating cases in which content, whether AI-generated or not, is fraudulently uploaded to streaming services through another artist's profile.
It will also improve its content-mismatch review process so that artists can report potential mismatches even before an album's official release.
A new music spam filter will also be applied: a system that will identify the uploaders and tracks using these tactics, tag them, and stop recommending them.
Spotify is also involved in developing a new industry standard for AI usage disclosures in music credits. As this information is submitted by labels, distributors, and music partners, it will begin to appear in the Spotify app.
This way, artists and rights holders will have a clear way to indicate "where and how AI intervened in the creation of a track, whether in AI-generated vocals, instrumentation, or post-production."
"It's not about punishing artists who use AI responsibly," the platform says, adding that the disclosures will not affect how content is prioritized or promoted on Spotify.
The goal is to "fight against impersonation, spam, and deception," which have grown in step with total payouts for music on Spotify, which rose from $1 billion in 2014 to $10 billion in 2024. Such high payouts "attract malicious actors."
According to Spotify, the music industry "needs a nuanced approach to transparency in the use of AI, without the obligation to classify each song as 'is AI' or 'is not AI.'"