On March 24, 2026, the United States Attorney's Office for the Southern District of New York announced that Michael Smith, a North Carolina man, had pleaded guilty to conspiracy to commit wire fraud after using artificial intelligence to fraudulently obtain more than $8 million in streaming royalties.
Smith used AI to generate hundreds of thousands of songs, then deployed thousands of fake accounts and automated bots to rack up billions of fraudulent streams on major platforms including Spotify, Apple Music, Amazon Music, and YouTube Music. To evade the platforms' anti-fraud systems, he spread the streams thinly across this vast catalog, so that no single track showed the anomalous growth that would trigger an alert.

This case illustrates how generative AI combined with automated scripts can inflict serious damage on the creator economy. Because mainstream streaming platforms largely pay out of a fixed pro-rata royalty pool, fraudulent streams dilute the value of every legitimate stream, in effect diverting income from real musicians and songwriters.
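The dilution mechanics can be made concrete with a small sketch. All numbers below are hypothetical, chosen only to show the arithmetic of a pro-rata pool; they are not figures from the case or from any platform.

```python
# Illustrative sketch (hypothetical numbers): how fraudulent streams dilute
# a pro-rata royalty pool. Under this model, each rights holder receives
# pool_size * (their_streams / total_streams), so every fake stream
# shrinks the per-stream value for everyone else.

def pro_rata_payout(pool_size: float, artist_streams: int, total_streams: int) -> float:
    """An artist's share of a fixed royalty pool, proportional to stream count."""
    return pool_size * artist_streams / total_streams

POOL = 1_000_000.0          # monthly royalty pool in dollars (hypothetical)
REAL_STREAMS = 100_000_000  # legitimate streams platform-wide (hypothetical)
ARTIST = 500_000            # one real artist's streams (hypothetical)

honest = pro_rata_payout(POOL, ARTIST, REAL_STREAMS)

# Inject 5 million bot streams credited to fraudulent tracks:
FAKE = 5_000_000
diluted = pro_rata_payout(POOL, ARTIST, REAL_STREAMS + FAKE)
fraudster = pro_rata_payout(POOL, FAKE, REAL_STREAMS + FAKE)

print(f"honest payout:   ${honest:,.2f}")    # $5,000.00
print(f"diluted payout:  ${diluted:,.2f}")   # $4,761.90
print(f"fraudster's cut: ${fraudster:,.2f}")
```

Because the pool is fixed, the fraudster's cut is not new money: it is exactly the revenue removed from legitimate rights holders in aggregate.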
U.S. Attorney Jay Clayton said the case shows that large-scale commercial fraud carried out with AI can and will be prosecuted. As the cost of producing AI content keeps falling, detecting "non-human creation" and "fake consumption" has become a governance problem the digital content industry must confront. Platforms will need to upgrade their monitoring technology, and the streaming industry may ultimately be forced to restructure its royalty settlement mechanisms to keep the content ecosystem fair and authentic.
