Music and AI: Grimes, Ice Cube and Ed Sheeran are in a boat…


Lately, I feel like I need to put a disclaimer at the top of my articles so that no one thinks the content was generated by artificial intelligence (AI). Rest assured: only human brains were involved in the making of this article – those of one writer and two editors, to be precise. And none of them were equipped with a Neuralink chip.

Unfortunately, we have reached a stage where it is no longer so easy to distinguish between humans and robots. Admittedly, we are not yet living under the full control of Skynet. But AI has reached a turning point: it is finally more accessible to, and perhaps better understood by, the general public.

This shift is largely due to the generative AI platform ChatGPT, which fascinates many of us with its ability to imitate humans and help with a variety of tasks, from coding software and planning travel itineraries to writing e-mails and essays. Beyond ChatGPT, other AI-powered apps can produce images, as well as songs “inspired” by popular artists and writers.

Inaccuracy of facts and plagiarism are red flags

This is the heart of the debate over the limits that should be placed on the use of AI in certain industries.

For me, in my work as a journalist, the limits are crystal clear. Factual inaccuracy and plagiarism are red flags. It is for these reasons that tools such as ChatGPT have absolutely no role to play in my profession.

I suspect lawyers share my concerns, especially after one of their peers in New York was sanctioned for citing case law that never existed. Yes, he let ChatGPT do the research, and it generated content based on fabricated sources. However, the lines are not always so clear.

Produce songs “sung” by a voice that sounds a lot like a pop star

AI is increasingly being used to create music in the style of popular artists, but also to produce songs “sung” by a voice that sounds a lot like that of a pop star. Singapore-based singer Stefanie Sun, for example, reportedly recorded a cover of Avril Lavigne’s “Complicated” – except she didn’t.

To an untrained ear, the AI-generated voice sounds like Sun, who has sold more than 30 million records since her debut in 2000. Her fans, however, say her AI counterpart is easy to spot because it lacks the singer’s emotional undertones.

This perception could change, however, as Sun herself has acknowledged. In a blog post last week, she joked that her AI persona now enjoys more fame than she does, and that it is impossible to compete with someone who can put out new albums in a matter of minutes.

No prosecution

The singer adds that it may only be a matter of time before AI makes further progress and is able to mimic human emotions. “You are not special. You are already predictable and also, unfortunately, malleable,” writes Sun.

And the singer’s label is reportedly not considering legal action due to the lack of generative AI regulations.

While Sun sees her AI counterpart as a potential rival, Canadian singer Grimes is more open to the idea of music created with an AI version of her voice – provided that those who make it share the copyright equally. Grimes has invited her imitators to register their music on her website, where she plans to make samples of her voice available to aid the AI process. “I think it’s cool to be fused with a machine and I like the idea of open-sourcing all art and killing copyright,” tweeted Grimes.

AI leaves Ice Cube cold

Other artists are less enthusiastic about this new revenue model. American rapper Ice Cube said in an interview that he would sue anyone who creates a song with an AI-generated version of his voice, as well as any platform that distributes it.

His comments follow the release of a song called “Heart On My Sleeve”, reportedly created with AI, featuring voices resembling those of rapper-songwriter Drake and singer-songwriter The Weeknd. “Heart On My Sleeve” went viral on several platforms, including TikTok and Spotify, before being taken down at the request of the singers’ record label. Copies are still available on YouTube.

The source behind the song is said to have created it using artificial intelligence models trained on the artists’ works, styles and voices.

Lawyers everywhere have already debated the potential legal issues raised by AI-generated songs like “Heart On My Sleeve”, so I won’t do that here. Suffice it to say that the song raises a number of questions about fair use and impersonation, along with parallels to counterfeiting.

What really matters to humans in the age of AI

However, I want to draw a parallel with the way artists and musicians find their inspiration. We often hear that great songwriters are influenced by those who came before them. Bruno Mars cites Elvis Presley and the Beach Boys among his musical influences, while Billie Eilish cites the Beatles and Green Day.

These artists grew up listening to other musicians, absorbing what resonated most with their own style, and creating their own art.

In a way, that’s exactly what large language models and generative AI tools like ChatGPT do: they produce new works based on what they have learned from previous ones. The main difference is that human minds are shaped and influenced by the works we admire as we grow, whereas AI models have no such preferences and have the computational capacity to learn from everything without discriminating.

Why should Ed Sheeran be allowed, but not AI?

So, assuming that no copyright has been infringed, why should AI-generated content that is inspired by famous works be any different from human-generated content that is also inspired by famous works?

That’s pretty much the argument British singer-songwriter Ed Sheeran used in the lawsuit he won against the heirs of Marvin Gaye’s co-writer, and he was found not liable for copyright infringement at the end of the trial. Sheeran’s attorney, Ilene Farkas, told jurors that the chord progressions and rhythms the Gaye and Sheeran songs have in common are “the letters of the musical alphabet”. “These are basic elements of music that songwriters must be free to use, or anyone who loves music will be impoverished,” Farkas said.

Musician and YouTuber Rick Beato makes it clear: “You can’t copyright a chord progression.”

Why me, and not an AI, to moderate a roundtable?

So where does that leave humans, as the use of AI becomes ubiquitous? How can we differentiate ourselves when competing against an entity with far greater processing and learning capacity? I think we have to keep innovating and being creative – adding our own sensibility and incorporating elements that others do not commonly use.

Recently, I moderated a roundtable and had the audacity to mention that the questions I was asking participants were generated by my human brain, without the help of AI. “But why not?” asked a few attendees.

A generative AI tool like ChatGPT could very well have come up with a list of brilliant questions based on the roundtable topic, which, ironically, was AI. However, it probably could not have adapted and modified those questions in real time as the conversation progressed.

I always have a list of questions ready at the start of every discussion I host. But I constantly add new ones based on the ideas participants share as the roundtable progresses, and I tweak my questions along the way to follow the flow of the conversation. All of this – my sense of humor included – cannot easily be replicated by an AI model, at least for now. That is how I hope my knowledge and skills will retain some relevance in the age of AI.

After all, the potential of AI is enormous, which makes it all the more urgent to address questions of AI ethics and data security before it is too late.


Source: “ZDNet.com”