AI and the Arts


Image created with FLUX.1 Kontext by Black Forest Labs

Introduction

As AI models grow larger and larger, companies have to find new sources of training data. The more diverse, rich, human-made content these companies can find, the better their models get at imitating us.

What do companies do with these datasets? They feed them to huge multimodal models (models that accept several different types of input) to generate art, music, text, and much more.

But what does this mean for creators? Sometimes companies will issue copyright strikes against an artist's performance of a piece that is in the public domain, simply because the company owns a recording of it.

In that case, can artists and musicians copyright strike AI for stealing their art?

How does AI art work, and how do you detect it?

In essence, a diffusion model starts from pure noise and, guided by a text prompt, repeatedly removes a little of that noise until a coherent image emerges. This is the idea behind Stable Diffusion.
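A heavily simplified toy loop can illustrate that step-by-step denoising idea. This is not real Stable Diffusion code: a real model is a neural network that predicts the noise, whereas here the "prediction" cheats by using a known target so the sketch stays self-contained.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "clean image" standing in for what the model has learned: a gradient.
clean = np.linspace(0.0, 1.0, 16)

# Sampling starts from pure noise, as in Stable Diffusion.
x = rng.normal(size=clean.shape)
err_start = np.abs(x - clean).mean()

# Each step, a real model would predict the noise to subtract;
# here we cheat and derive it from the known clean image.
steps = 50
for _ in range(steps):
    predicted_noise = x - clean        # stand-in for the model's prediction
    x = x - predicted_noise / steps    # strip away a fraction of the noise

err_end = np.abs(x - clean).mean()
print(err_start, err_end)  # the estimate moves closer to the clean image
```

After 50 small denoising steps, the error to the "clean image" has shrunk substantially, which is the same mechanic a real diffusion sampler uses at vastly larger scale.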

Take FLUX.1 Kontext, for example (above). These images are getting more and more realistic, but you can still tell this one is AI. For starters, the signs aren't quite right and read like gibberish. The image also feels too perfect: a street-level photo in a big city with nobody else around? Snow that barely hides any facial features? A clear view of a huge tower?

What’s the point of this image? Why even create it? 

Why do people make art?

Art is creative expression: a way to show emotion, create joy, and exercise freedom. Under this framework, what makes AI images "art"? They try to express some idea of a human prompter, but they can't truly capture it: all they produce is a weighted average of what's plausible. AI images might be useful for prototyping and exploring possibilities, but they can't capture a single, precise idea. Again, the model is computing an estimate of what you might want, not your exact idea.
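The "weighted average" point can be made concrete with a toy numpy sketch. This is not how a real model computes outputs; it just shows why averaging over plausible interpretations of a prompt lands on something no one asked for.

```python
import numpy as np

# Two equally plausible "ideas" a vague prompt could mean
# (toy 4-pixel images: one dark, one bright).
idea_a = np.array([0.0, 0.0, 0.0, 0.0])
idea_b = np.array([1.0, 1.0, 1.0, 1.0])

# A weighted average of the possibilities matches neither idea:
# it lands on an in-between gray that nobody intended.
weights = np.array([0.5, 0.5])
average = weights[0] * idea_a + weights[1] * idea_b
print(average)  # [0.5 0.5 0.5 0.5]
```

The average is maximally plausible yet identical to neither intended image, which is the sense in which an estimate of "what you might want" differs from your exact idea.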

As one Harvard lecturer, Yosvany Terry, put it, "the ability to react in the moment is something that artificial intelligence can't reproduce."

So why do people defend AI "art"? The argument runs like photography versus painting: just because you can photograph something doesn't make painting it unnecessary. In the same way, AI "art" doesn't have to replace real artists. Instead, it could serve as a supplement, producing quick prototypes that help an artist and a buyer reach a consensus sooner, saving both time and money.

So, can musicians and artists copyright claim AI “art”?

Unfortunately, copyright claims against AI art are probably not viable. It's extremely difficult to prove that AI has stolen your style. Although there are some near-exceptions (like the "Studio Ghibli" style), the legal lines are blurry and unclear.

In fact, Suno, a recent music AI startup, called such claims forced and fake.

But do musicians really need to be able to copyright strike AI? Yes, because these models still feed on musicians' work: the more diverse, rich, human-made music these companies can collect, the better their models get at imitating us.

In the case of music, startups have been training their models on unlicensed works. Recently, though, record companies have started to bargain with these startups, offering to license the works instead of suing them. It still isn't the musicians' choice, but it's a step toward a better future.
