Spotify is stepping up efforts to tackle impersonation, spam, and deceptive AI use on its platform, as generative technology continues to reshape the music industry. The company announced a set of new policies and tools designed to protect artists, songwriters, and producers while giving listeners more transparency about the music they hear.
The rise of AI in music has brought both creative opportunities and new risks. While some artists are experimenting with AI as part of their workflow, the same technology has been misused for spam uploads, mass content generation, and even deepfake vocals that mimic well-known performers. Spotify argues that these practices not only harm listener trust but also siphon royalties away from legitimate artists.
To address the growing problem of AI-generated impersonation, Spotify has introduced a stricter policy against unauthorized voice cloning. Under the new rules, vocal impersonations are only allowed when the artist being mimicked has given explicit permission. The company is also increasing its investment in preventing another form of impersonation: fraudulent uploads where music, AI-generated or not, is incorrectly delivered to an artist’s profile. Spotify says it is working with distributors to curb these attacks before they reach the platform and is improving its internal processes to speed up mismatch reporting and review.
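To make the mismatch problem concrete, here is a minimal sketch of the kind of check an ingest pipeline could run before accepting a delivery. The data shapes and the idea of matching on a canonical artist identifier are illustrative assumptions, not Spotify's or any distributor's actual process.

```python
# Hypothetical pre-delivery mismatch check.
# Field names and matching logic are illustrative assumptions,
# not Spotify's or any distributor's real implementation.

from dataclasses import dataclass

@dataclass
class Delivery:
    track_title: str
    claimed_artist_name: str
    claimed_artist_id: str | None  # a canonical artist identifier, if supplied

@dataclass
class ArtistProfile:
    artist_id: str
    name: str

def flag_possible_mismatch(delivery: Delivery, profile: ArtistProfile) -> bool:
    """Flag deliveries that name a known artist without a matching identifier.

    A name-only match (same spelling, missing or wrong ID) is a common
    impersonation vector, so it is routed to human review rather than
    auto-accepted onto the profile.
    """
    same_name = delivery.claimed_artist_name.strip().lower() == profile.name.strip().lower()
    same_id = delivery.claimed_artist_id == profile.artist_id
    return same_name and not same_id
```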
Alongside impersonation controls, the company is preparing a new music spam filter. This system will identify accounts engaging in bulk uploads, duplicate tracks, or other forms of low-value content generation. Rather than removing flagged tracks outright, the filter will reduce their visibility and exclude them from recommendations, making it harder for spammers to earn royalties through artificial engagement. Spotify emphasizes that the rollout will be gradual to avoid penalizing legitimate creators.
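Spotify has not disclosed how the filter works internally, but a toy heuristic illustrates the general idea: score accounts on signals such as duplicate titles and upload velocity, then downrank rather than delete. The signals, weights, and threshold below are assumptions for illustration only.

```python
# Toy spam-scoring heuristic for bulk/duplicate uploads.
# All signals and thresholds are illustrative assumptions;
# Spotify has not published the actual filter's design.

from collections import Counter

def spam_score(upload_titles: list[str], uploads_per_day: float) -> float:
    """Return a score in [0, 1]; higher means more spam-like behavior."""
    if not upload_titles:
        return 0.0

    # Signal 1: duplicate or near-duplicate titles within the catalog.
    normalized = [t.strip().lower() for t in upload_titles]
    unique_titles = Counter(normalized)
    duplicate_ratio = 1 - len(unique_titles) / len(normalized)

    # Signal 2: upload velocity far beyond what a working artist produces.
    velocity = min(uploads_per_day / 50.0, 1.0)  # 50/day saturates the signal

    # Weighted blend; a production system would use many more signals
    # (audio fingerprints, engagement patterns, account history, ...).
    return 0.6 * duplicate_ratio + 0.4 * velocity

def should_limit_visibility(score: float, threshold: float = 0.7) -> bool:
    """Downranking policy, not a takedown: tracks above the threshold
    are excluded from recommendations rather than removed."""
    return score >= threshold
```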
Transparency is another focus. In partnership with the industry standards body DDEX and a wide range of music distributors, Spotify will begin supporting a new metadata standard for AI usage disclosures in track credits. The standard will let artists and rights holders indicate how AI contributed to a song's creation, whether through vocals, instrumentation, or production tools. The goal is industry-wide consistency, so listeners receive the same information regardless of which streaming service they use.
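As a rough illustration of the kind of structured credit such a disclosure could carry, here is a hypothetical data shape. The field names are invented for this sketch and do not reproduce the actual DDEX schema.

```python
# Hypothetical shape of an AI-usage disclosure attached to track credits.
# Field names are invented for illustration; the real DDEX standard
# defines its own schema, which is not reproduced here.

from dataclasses import dataclass, field

@dataclass
class AIDisclosure:
    ai_vocals: bool = False            # synthetic or cloned vocals used
    ai_instrumentation: bool = False   # AI-generated instrumental parts
    ai_production_tools: bool = False  # AI-assisted mixing, mastering, etc.
    notes: str = ""

@dataclass
class TrackCredits:
    title: str
    artist: str
    ai_disclosure: AIDisclosure = field(default_factory=AIDisclosure)

# Example: a track where AI served only as a production aid.
credits = TrackCredits(
    title="Example Track",
    artist="Example Artist",
    ai_disclosure=AIDisclosure(ai_production_tools=True,
                               notes="AI-assisted mastering"),
)
```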
These moves build on Spotify's ongoing fight against platform abuse. The company reports removing more than 75 million spam tracks in the past year alone, underscoring the scale of the problem. With annual royalty payouts growing from $1 billion in 2014 to $10 billion in 2024, the incentive for bad actors to exploit the system has only increased.
Spotify’s approach highlights a balancing act: enabling artists to explore AI creatively while ensuring they remain in control of their work and identity. By tightening rules around impersonation, filtering out spam, and backing transparency standards, the company is trying to maintain trust in the streaming ecosystem as AI becomes a more common presence in music.