In March 2026 a U.S. court found Michael Smith, a resident of North Carolina, guilty of creating thousands of fake accounts on Spotify, Apple Music, Amazon Music and YouTube Music. Using bots, he uploaded hundreds of thousands of AI-generated songs, "listened" to them billions of times and extracted more than $8 million in royalties. Prosecutors noted that the scheme was deliberately inconspicuous: by spreading the fake streams across so many tracks, Smith kept each song's play count low enough that automated play-count monitoring never raised an alarm.

The mechanism is simple. Most services use a pro-rata model: the total royalty pool is split among rights holders in proportion to their share of all plays. Every artificial stream enlarges the denominator, so money that would otherwise flow to genuine artists is redirected to the fraudster.
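To make the dilution concrete, here is a minimal sketch of a pro-rata split. The `pro_rata_payouts` function, the artist names and all numbers are hypothetical, chosen only to show how a fixed pool shifts toward bot traffic:

```python
# Illustrative sketch of pro-rata royalty dilution.
# All figures and names are hypothetical, not any platform's actual data.

def pro_rata_payouts(revenue_pool: float, streams: dict[str, int]) -> dict[str, float]:
    """Split a fixed revenue pool among artists proportionally to stream counts."""
    total = sum(streams.values())
    return {artist: revenue_pool * count / total for artist, count in streams.items()}

pool = 1_000_000.0  # hypothetical monthly royalty pool

honest = {"artist_a": 600_000, "artist_b": 400_000}
print(pro_rata_payouts(pool, honest))
# {'artist_a': 600000.0, 'artist_b': 400000.0}

# Add 250,000 bot streams: the pool does not grow, so real shares shrink.
with_bots = honest | {"bot_catalog": 250_000}
print(pro_rata_payouts(pool, with_bots))
# {'artist_a': 480000.0, 'artist_b': 320000.0, 'bot_catalog': 200000.0}
```

Because the pool is fixed, every dollar the bot catalog earns comes directly out of legitimate artists' payouts.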

The scheme exposed two fundamental gaps in the industry. First, account creation is cheap and effectively unverified: hundreds of thousands of accounts can be opened without confirming a real identity. Second, platforms do not distinguish original material from synthetic content, so their counting systems treat any playback as a valid stream.

For finance teams and streaming-service owners this is a call to action. Deploy bot-detection systems that flag anomalies in listening patterns, such as abnormally high play volumes per account, suspiciously uniform playback durations, or the complete absence of human interaction (searches, skips, playlist edits); a simple heuristic along these lines is sketched below. Conduct regular audits of streams, with particular focus on new artists and AI-generated tracks. Legally, strengthen intellectual-property clauses in contracts to include liability for artificially created content.
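As an illustration, a rule-based screen might look like the following. The thresholds, field names and the use of a roughly 30-second cutoff (the point at which many services count a play as royalty-bearing) are assumptions for this sketch, not any platform's actual detection logic:

```python
# Hypothetical anomaly heuristic for flagging bot-like listener accounts.
# Thresholds and field names are illustrative assumptions, not production rules.

from dataclasses import dataclass

@dataclass
class AccountStats:
    account_id: str
    plays_per_day: float        # average streams per day
    unique_tracks: int          # distinct tracks played
    interaction_events: int     # searches, skips, playlist edits, likes
    mean_play_seconds: float    # average playback duration

def looks_like_bot(s: AccountStats) -> bool:
    """Flag accounts whose behavior deviates sharply from human listening."""
    no_interaction = s.interaction_events == 0 and s.plays_per_day > 50
    inhuman_volume = s.plays_per_day > 500           # far beyond continuous human listening
    uniform_plays = 29 <= s.mean_play_seconds <= 32  # hovering at the ~30s royalty threshold
    return no_interaction or inhuman_volume or uniform_plays

suspect = AccountStats("acct_001", plays_per_day=900,
                       unique_tracks=40_000, interaction_events=0,
                       mean_play_seconds=31.0)
print(looks_like_bot(suspect))  # True
```

In practice such rules would feed a human review queue and sit alongside statistical or ML-based detectors rather than triggering automatic bans, since legitimate heavy listeners can trip simple thresholds.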

Why this matters: billions of fake plays and $8 million in stolen royalties reveal how vulnerable current royalty models are to automated attacks. For CEOs of music platforms the threat is direct: it erodes profitability and damages credibility with legitimate artists. For investors the signal is clear: security and compliance budgets must rise, or competitors with stronger safeguards will capture market share.
