Spotify says its best developers haven’t written traditional code in months, a claim that signals how deeply AI coding tools are now embedded in its product workflow. During its fourth-quarter earnings call, co-CEO Gustav Söderström told analysts that some of the company’s top engineers “have not written a single line of code since December,” relying instead on generative AI systems to handle much of the implementation work.
The remarks add to a growing conversation around AI coding adoption in large technology companies. While executives across the industry have spoken about productivity gains from AI-assisted development, Spotify’s framing suggests a more pronounced shift: engineers acting less as manual coders and more as supervisors of AI systems that generate, test, and deploy code.
In 2025, Spotify rolled out more than 50 updates and new features to its streaming app. Recent additions include AI-powered Prompted Playlists, Page Match for audiobooks, and About This Song. The company attributes part of that development velocity to an internal platform called “Honk,” which integrates generative AI tools into its engineering pipeline.
According to Söderström, Honk enables remote, real-time code deployment using generative AI, specifically Anthropic’s Claude Code. In one example shared on the call, an engineer commuting to work could request a bug fix or feature addition to the iOS app from a phone via Slack. Claude Code would generate the required changes, and the resulting build could be reviewed and merged into production before the engineer even reached the office.
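Spotify has not published how Honk is built, but the general shape of such a pipeline can be sketched. The Python snippet below is a hypothetical illustration only: it assumes a slack_bolt listener and the Anthropic SDK, the fetch_relevant_source() helper and the model id are placeholders, and the build, test, and merge stages that would follow in a real pipeline are omitted.

```python
# Hypothetical sketch of a Slack-triggered, Claude-assisted change request.
# Spotify's internal "Honk" platform is not public; the bot wiring and the
# fetch_relevant_source() helper below are illustrative, not its actual API.
import os

import anthropic
from slack_bolt import App
from slack_bolt.adapter.socket_mode import SocketModeHandler

claude = anthropic.Anthropic()                  # reads ANTHROPIC_API_KEY from the environment
app = App(token=os.environ["SLACK_BOT_TOKEN"])  # Slack bot token


def fetch_relevant_source(bug_report: str) -> str:
    """Placeholder: gather the source files related to the reported issue."""
    return "// relevant source would be collected here"


def propose_fix(bug_report: str) -> str:
    """Ask Claude for a proposed patch; a human still reviews it before merge."""
    response = claude.messages.create(
        model="claude-sonnet-4-5",              # model id is illustrative
        max_tokens=2048,
        messages=[{
            "role": "user",
            "content": (
                "Bug report from Slack:\n" + bug_report + "\n\n"
                "Relevant source:\n" + fetch_relevant_source(bug_report) + "\n\n"
                "Return a unified diff with the proposed fix."
            ),
        }],
    )
    return response.content[0].text


@app.message("fix:")                            # e.g. "fix: crash when opening a playlist"
def handle_fix_request(message, say):
    diff = propose_fix(message["text"])
    # In a full pipeline this diff would open a pull request and trigger CI;
    # here it is simply posted back to the channel for review.
    say("Proposed patch for review:\n" + diff)


if __name__ == "__main__":
    SocketModeHandler(app, os.environ["SLACK_APP_TOKEN"]).start()
```

Even in this simplified form, the key point of the workflow is preserved: the engineer supplies intent in natural language, the model drafts the change, and a human reviews the output before anything ships.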
The workflow illustrates how AI coding tools are shifting developer responsibilities toward oversight, validation, and architectural decision-making rather than line-by-line programming. It also raises practical questions about quality control, long-term maintainability, and the evolving skill set required of software engineers in an AI-assisted environment.
Beyond internal productivity, Spotify is also positioning AI as a strategic differentiator. Söderström argued that the company is building a dataset around music preferences and listening behavior that is difficult for general-purpose large language models to replicate. Unlike factual knowledge, which such models can draw from open sources like Wikipedia, music taste is subjective and shaped by geography, culture, and individual context. Workout music in the U.S., for example, may skew toward hip-hop, while parts of Europe favor electronic dance music, and Scandinavian listeners often gravitate toward heavier genres.
Spotify maintains that this behavioral data, refined through repeated model training, gives it an edge in personalization and recommendation. However, the broader AI ecosystem is rapidly evolving, and similar datasets could emerge through partnerships, licensing, or alternative distribution platforms.
Analysts also pressed the company on AI-generated music. Spotify said it allows artists and labels to disclose how a track was created through metadata while continuing to monitor for spam and low-quality uploads. The issue remains sensitive as generative music tools proliferate and streaming platforms balance openness with catalog integrity.
Taken together, Spotify’s comments reflect a larger inflection point in AI coding and AI-driven product development. If experienced engineers can operate primarily as orchestrators of generative systems, software creation could become faster and more distributed. At the same time, the shift introduces new dependencies on AI vendors and internal tooling. Whether this model proves sustainable at scale will likely depend on how well companies manage oversight, accountability, and technical debt in an increasingly automated development cycle.
