Spotify is once again facing scrutiny over the rise of AI-generated content after a song falsely attributed to late country artist Blaze Foley appeared on the platform. The track, titled “Together,” was uploaded as if it were an official posthumous release, complete with cover art, metadata, and copyright credits. Foley, however, died in 1989—36 years before this song appeared.
The song was swiftly removed after fans and Foley’s label, Lost Art Records, flagged the release. The track was reportedly uploaded via SoundOn, a music distribution platform owned by TikTok. Another AI-generated song, misattributed to country songwriter Guy Clark, who died in 2016, was removed around the same time.
What sets these cases apart from the growing flood of AI-generated lo-fi or ambient music is the attribution. Unlike AI music published under fictitious artist names or clearly labeled experimental projects, these songs appeared in the verified discographies of real, deceased musicians. There was no indication that AI tools were involved, nor was there any mention of the uploads being tributes or simulations. This created the false impression that these were authentic, authorized releases.
Spotify confirmed the removal and emphasized that such content violates its deceptive content policies, which prohibit impersonation and misleading representations. The company added that it takes enforcement action against distributors that repeatedly fail to screen for such misuse—including permanent removal.
But the appearance of the track on a platform as large as Spotify raises concerns about the adequacy of existing verification and moderation systems. With tens of thousands of songs uploaded daily, automated checks are a necessity. However, that same automation can become a loophole for content that meets technical requirements while crossing ethical or legal boundaries.
The issue also raises broader questions about authorship, digital identity, and consent in the era of generative AI. AI tools can now mimic voices, lyrics, and artistic styles with increasing realism. Without effective safeguards, it’s entirely possible to fabricate entire discographies for artists who are no longer alive—and whose estates may lack the means to police every new upload.
This isn’t a problem unique to Spotify. Apple Music and YouTube have also faced criticism for hosting deepfake content, while AI music generators like Suno and Udio continue to lower the barrier for creating passable musical imitations in seconds. The underlying technology isn’t the problem—it’s the misuse of that technology combined with platforms’ current lack of oversight.
Possible solutions include better identity verification for artist accounts, embedding metadata or watermarking in AI-generated audio, and more transparent labeling of synthetic content. However, these methods introduce friction to a process many platforms want to keep seamless for creators.
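To make the metadata idea concrete: a distributor could stamp machine-readable provenance into the audio files it delivers, and a platform could check for that declaration at ingest. The sketch below, written against the real Python library mutagen, shows one way this could look for MP3 files. The tag names (AI_GENERATED, PROVENANCE_TOOL) are illustrative conventions invented for this example, not an industry standard.

```python
# Minimal sketch of distributor-side disclosure tagging and a
# platform-side ingest check, using custom ID3 TXXX frames.
# The tag names below are hypothetical conventions, not a standard.
from mutagen.id3 import ID3, ID3NoHeaderError, TXXX


def tag_synthetic_audio(path: str, tool_name: str) -> None:
    """Distributor side: declare a track as AI-generated in its ID3 tags."""
    try:
        tags = ID3(path)
    except ID3NoHeaderError:
        tags = ID3()  # file has no ID3 header yet; start a fresh tag

    # TXXX frames carry user-defined key/value text; encoding=3 is UTF-8.
    tags.add(TXXX(encoding=3, desc="AI_GENERATED", text=["true"]))
    tags.add(TXXX(encoding=3, desc="PROVENANCE_TOOL", text=[tool_name]))
    tags.save(path)


def is_declared_synthetic(path: str) -> bool:
    """Platform side: a check a moderation pipeline could run at upload."""
    try:
        tags = ID3(path)
    except ID3NoHeaderError:
        return False
    frames = tags.getall("TXXX:AI_GENERATED")
    return bool(frames) and str(frames[0].text[0]).lower() == "true"
```

The obvious weakness, and the reason this can only be one layer of a solution, is that plain ID3 metadata is trivial to strip or forge. More robust schemes pair this kind of disclosure with audio watermarking that survives re-encoding, or with cryptographically signed provenance manifests such as those defined by the C2PA standard.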
The Blaze Foley incident underscores the ethical tightrope streaming platforms must now walk. When AI is used responsibly, it can serve as a powerful creative tool. But when it’s used to impersonate artists—especially those no longer able to object—it becomes a matter of digital fraud, with consequences for credibility, compensation, and cultural integrity.