Sony Music's Stance: Why Sony Has Removed 135,000 'Deepfakes' of Its Artists' Music

Sony Music has removed 135,000 AI-generated deepfake songs impersonating its artists, highlighting the growing threat to the music industry from generative AI and streaming fraud.

Mar 19, 2026

The Digital Battlefield: Sony Music Confronts AI Deepfakes

The digital landscape of music is rapidly evolving, bringing both unprecedented opportunities and new, insidious challenges. At the forefront of this battle is music giant Sony Music, which has recently taken decisive action against a tidal wave of fraudulent content. The company announced it has removed 135,000 'deepfakes' of its artists' music from streaming platforms, shining a spotlight on the alarming rise of AI-generated impersonations threatening the integrity of the industry.

The Alarming Rise of AI Deepfakes in Music

These sophisticated 'deepfakes' are not mere remixes; they are meticulously crafted audio tracks created using advanced generative artificial intelligence, designed to mimic the voices and styles of legitimate artists. Powerhouses like Beyoncé, Queen, and Harry Styles have been among those targeted, their artistic identities exploited for illicit gains. This proliferation of counterfeit music isn't just an annoyance; it inflicts direct commercial harm on genuine recording artists and can severely compromise new album releases or even tarnish an artist's carefully built reputation, as highlighted by Dennis Kooker, President of Sony's global digital business. The problem is escalating rapidly, driven by the increasing accessibility and affordability of AI technology. Sony believes the 135,000 tracks identified thus far represent only a fraction of the total fraudulent content circulating across streaming services, with some 60,000 songs falsely attributed to artists like Bad Bunny, Miley Cyrus, and Mark Ronson detected since last March alone.

Exploiting Artist Momentum and Damaging Reputation

The timing of these deepfakes is particularly damaging. Kooker explains that fraudsters strategically release these tracks when an artist is actively promoting new music, capitalizing on existing demand and generating confusion. This parasitic practice undermines the artist's hard work, diverting attention and potential revenue away from their authentic creations.

A Broader Industry Challenge: Insights from the Global Music Report

This crucial revelation emerged during the launch of the music industry's Global Music Report in London, an event that also celebrated significant growth. The International Federation of the Phonographic Industry (IFPI) reported a robust 6.4% increase in recorded music revenues last year, reaching an impressive $31.7 billion. This marks the eleventh consecutive year of growth, a testament to streaming subscriptions rejuvenating an industry once plagued by rampant piracy. The UK proudly maintained its position as the world's third-largest music market, while China surpassed Germany to become the fourth largest, a remarkable ascent in less than a decade.

Amidst these positive figures, the UK government's approach to AI regulation was also a key discussion point. Industry stakeholders expressed relief that plans to allow AI firms to train software on copyrighted works without permission were paused, a decision lauded by IFPI CEO Victoria Oakley as a vital step towards protecting creativity while fostering innovation.

The Supercharged Threat of Streaming Fraud

While AI deepfakes present a distinct challenge, they exist within a larger ecosystem of digital manipulation: streaming fraud. This practice, often termed 'streaming manipulation,' involves fake artists uploading tracks to major platforms like Spotify, YouTube, Instagram, and Apple Music, then artificially inflating play counts to unfairly claim royalty payments. The IFPI warns that the advent of AI has 'supercharged' this fraudulent activity, diverting legitimate payments away from the artists who rightfully earned them. Unofficial estimates suggest that up to 10% of all content across streaming platforms could be fraudulent, painting a stark picture of the scale of the problem.

The Imperative for Transparency and Technological Solutions

This pervasive issue is 'very simple to fix,' according to Oakley, who advocates for streaming services to implement robust tools capable of identifying and flagging fake or AI-generated music upon upload. The critical challenge ahead, she emphasized, is accurately identifying and labeling AI material. French streaming company Deezer has already stepped up, deploying software that identifies AI-generated submissions and classifying a staggering 34% of submitted songs as AI-created. While acknowledging that no system is perfect, Dennis Kooker champions Deezer's approach as transparent, allowing users to understand the origin of the content. 'Without proper identification, fans can't distinguish between genuine human creativity versus unauthorised, AI-generated content, which risks creating confusion, undermining trust, and impacting user experiences,' Kooker stated. He concluded by asserting that 'Transparency shouldn't be optional, it's the foundation of a fair and sustainable music ecosystem.'

Protecting the Future of Music: A Call to Action

Sony Music's proactive removal of 135,000 deepfake tracks underscores the urgent need for a unified front against digital fraud and AI misuse in the music industry. As technology advances, the battle to protect artistic integrity, ensure fair compensation, and maintain consumer trust will only intensify. The future of music depends on the collective commitment of artists, labels, streaming platforms, and governments to uphold transparency and champion human creativity in the digital age.
