The global music industry is on the offensive against the proliferation of songs generated by artificial intelligence (AI) without the consent of artists or record labels. So far, however, the fight looks uneven, tilted in favor of AI developers and streaming platforms. According to AFP, Sony Music has requested the removal of more than 75,000 pieces of deepfake content from the internet - clear evidence of the scale of the phenomenon. Even though detection technology is progressing, many fakes remain online and attract millions of views. Platforms such as YouTube and Spotify have become unwitting hosts for fake songs, despite the monitoring measures announced by the digital giants, according to British media.
• Detection is possible, but not enough
Companies like Pindrop claim that AI-generated songs have subtle irregularities - in frequency, rhythm and digital signature - that differentiate them from human voices. But identifying them in real time remains difficult. Spotify's Sam Duboff said the company is actively working on "new tools" to detect fakes. YouTube, for its part, confirmed that it is "improving its technology" to limit this phenomenon.
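To make the idea concrete, here is a minimal sketch of the kind of signal-level check such claims gesture at: human performances fluctuate from moment to moment, while naively synthesized audio can be unnaturally uniform. Everything in it - the feature (frame-to-frame loudness variation), the threshold, and the function names - is a hypothetical illustration, not the actual method used by Pindrop, Spotify or YouTube.

```python
# Illustrative sketch only: flag audio whose frame-to-frame energy is
# implausibly uniform. The feature and threshold are hypothetical, NOT
# the real detection methods of Pindrop, Spotify, or YouTube.

import numpy as np

FRAME_LEN = 2048  # samples per analysis frame (~128 ms at 16 kHz)
HOP = 512         # step between overlapping frames

def frame_rms(samples: np.ndarray) -> np.ndarray:
    """Root-mean-square energy of each overlapping analysis frame."""
    frames = [samples[i:i + FRAME_LEN]
              for i in range(0, len(samples) - FRAME_LEN, HOP)]
    return np.array([np.sqrt(np.mean(f ** 2)) for f in frames])

def uniformity_score(samples: np.ndarray) -> float:
    """Coefficient of variation of frame energy: near zero for an
    unnaturally steady signal, larger for naturally fluctuating audio."""
    rms = frame_rms(samples)
    return float(np.std(rms) / (np.mean(rms) + 1e-12))

def looks_suspicious(samples: np.ndarray, threshold: float = 0.01) -> bool:
    """Hypothetical heuristic: flag audio whose loudness barely varies.
    A production detector would combine many learned features instead."""
    return uniformity_score(samples) < threshold

if __name__ == "__main__":
    sr = 16_000
    t = np.linspace(0.0, 2.0, 2 * sr, endpoint=False)
    # Perfectly steady synthetic tone: no natural performance variation.
    steady = np.sin(2 * np.pi * 220.0 * t)
    # Same tone with slow amplitude modulation, mimicking a human tremolo.
    humanish = steady * (1.0 + 0.05 * np.sin(2 * np.pi * 3.0 * t))
    print("steady tone flagged:    ", looks_suspicious(steady))    # True
    print("human-like tone flagged:", looks_suspicious(humanish))  # False
```

Real systems reportedly combine many such cues - spectral, rhythmic and watermark-like "digital signatures" - with trained models, which is precisely why catching fakes at streaming scale and in real time remains so hard.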
• Legal battle: uncertainties and delays
More seriously, the music industry denounces the unauthorized use of original songs to train specialized AI music generators such as Suno, Udio and Mubert. Lawsuits are ongoing, but with no clear timeline. Complaints filed in New York and Massachusetts allege copyright infringement, but the outcome hinges on how courts interpret the notion of "fair use". Law professor Joseph Fishman warns that the legal gray area could lead to conflicting court decisions - and possibly Supreme Court intervention.
• Regulation lags behind
In parallel, attempts at federal legislation in the US have so far failed, although several states, such as Tennessee, have passed local laws against deepfakes, according to local media. President Donald Trump's stance in favor of deregulating AI further complicates the industry's efforts. Meta has already urged the government to state clearly that training AI on public data is legal under "fair use" - a position that could seriously tip the balance in favor of tech companies.

The situation in the UK is no clearer. The Labour government has launched a public consultation that could lead to changes in intellectual property law to make it easier for AI developers to access protected content. As AI models continue to evolve, trained on data that is often protected by copyright, the central question remains: can this battle still be won, or is it already too late? The answer has yet to emerge.