There’s been a lot of buzz lately about the use of AI to generate art—not just pictures, but videos, music, and even scripts for plays and movies.
Some people think AI-generated content is a wonderful thing because it’s so easy and convenient, but a lot of people hate it, for a whole bunch of different reasons. Here we’re going to take a look at AI-generated art–why people use it, why some want to outlaw it, and how to spot it so you don’t get duped.
How AI develops its “intelligence”
The concept behind AI-generated content is pretty genius. A program runs through thousands of examples of something, whether it’s paintings and drawings or music, videos, or written material, and gathers information about them.
For example, the program might go through all the paintings of Vincent van Gogh and gather information about the subject matter, the style, and the colors used. Then the program, or algorithm, goes through what’s called training. The algorithm produces art based on the information it has, and those results are rated by humans as good or bad. The algorithm takes these ratings and makes more art, which is rated again.
After many iterations, the algorithm starts to figure out the patterns that get a “good” rating, and after that, it can make pretty good art all on its own. When it’s in good shape like that, it becomes what they call an AI model. If you’ve used an AI program online, it’s already a model, already trained.
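The rate-and-retrain loop described above can be sketched in a few lines of code. This is a toy illustration, not how any real model works: the “artwork” here is just a list of numbers, and the “human rating” is a made-up scoring function standing in for real feedback.

```python
import random

# Stand-in for "what human raters like": a hidden target style.
TARGET_STYLE = [0.2, 0.7, 0.5]

def rate(artwork):
    """Pretend human rating: higher when closer to the target style."""
    error = sum((a - t) ** 2 for a, t in zip(artwork, TARGET_STYLE))
    return -error

def mutate(artwork):
    """Produce a new candidate by nudging the current one a little."""
    return [a + random.uniform(-0.1, 0.1) for a in artwork]

def train(iterations=2000):
    """Keep whichever candidate the 'raters' score higher."""
    best = [random.random() for _ in TARGET_STYLE]
    for _ in range(iterations):
        candidate = mutate(best)
        if rate(candidate) > rate(best):  # keep what raters prefer
            best = candidate
    return best

random.seed(0)
model = train()
print(rate(model))  # rating climbs toward 0 (a perfect score) as training proceeds
```

After enough iterations the candidate ends up matching what the raters reward, which is the sense in which a trained model has “figured out the patterns.”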
How to use AI
AI services work off of text prompts. You type in “image of a garden by a lake, in the style of Monet,” hit the GO or START button, and a few minutes later you get a lovely impressionist picture generated on the fly. Hit the button again, and you get a completely new picture in the same style. Each one is an original, and none is a copy of any existing painting.
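That hit-the-button-again, get-a-new-picture behavior comes from randomness in the generation process: the model samples from a distribution rather than looking anything up. Here’s a toy sketch of the idea; the palette-picking is entirely made up, since real services use diffusion models, but the principle (sampling, not copying) is the same.

```python
import random

# Toy stand-in for a generator: same prompt, different random seed,
# different output every time.
PALETTE = ["lake", "garden", "willow tree", "footbridge", "water lilies"]

def generate(prompt, seed):
    rng = random.Random(seed)
    # Each call assembles a fresh combination of elements.
    return prompt + " -> " + ", ".join(rng.sample(PALETTE, 3))

prompt = "garden by a lake, in the style of Monet"
outputs = {generate(prompt, seed) for seed in range(5)}
print(len(outputs))  # more than one distinct "picture" from a single prompt
```

Because each run draws fresh samples, no two outputs are copies of each other, and none is a copy of any existing work in the training data.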
You can do the same to generate a piece of music, or a screenplay for a film, or an essay for school, or an email to send your ex about child visitation. Whatever you want.
And most of these AI services state that whatever they generate from your prompt belongs to you. It’s yours, and no one else’s. This is a bit of a sticky point, which I’ll get back to later.
I’ve tried out AI art generation with two of the more popular models, Midjourney and DALL-E, and it can become addictive. It all started when I needed a tapestry of a medieval battle for one of my short films, and I couldn’t find anything online. AI to the rescue! We got our tapestry pictures. I’ve also used it to make a colorful cover for a report for work, and a bunch of other small projects.
Why AI now?
The surge of AI in the past few years is largely due to leaps in technology. Ten years ago, we didn’t have the computing power that AI requires, and now we do. And OpenAI, one of the leading organizations in the field, kicked things into high gear in late 2022 with the release of ChatGPT, where you could type in a text prompt like “Who was Abraham Lincoln?” and get a response worthy of a term paper. Barely two years later, AI is everywhere, and everyone can get it.
Of all the arts, images are in the lead for quality. AI images are cropping up everywhere: book covers, logos, posters, look books for film pitches, and of course, all over the internet.
But is this really a good thing? It depends who you ask. Now that anyone can create art with a few words and a click of a button, a big concern amongst artists is that they’re going to lose paying work. Many are annoyed that their art is being used to train AI models without their permission.
Origins of computer-generated art
The concept of computer-generated art is not entirely new. As early as the 1960s, artists like Georg Nees, Frieder Nake, and Manfred Mohr were writing algorithms to generate art, just to experiment with technology and push the boundaries.
But there’s a big difference between what those artistic pioneers were doing, and what AI is doing now. The early programs relied on what the individual artists programmed them to do, without relying on any other input like pieces of art themselves. Today’s AI is trained on thousands of pieces of art, stuff produced by other people, real live artists.
Some of today’s AI programs even scrape the internet for input, which means going around to publicly visible sources and grabbing up whatever it can find: pictures, videos, stories, scripts, you name it.
Is AI sourcing ethical?
This brings us to the first troubling fact about AI art–that it’s often built on the backs of other people’s art, without their permission. And this is where it really annoys artists.
In January 2023, a group of artists filed a lawsuit against Stability AI, a company that provides the AI technology that many AI services use, saying their artwork was used to train the AI models without their permission. The case is ongoing, with a judge ruling in August 2024 that the case can go forward.
Writers have similar concerns, for their own reasons. Like, suppose you write a screenplay and post it on your website. It’s copyrighted, so you don’t worry about someone stealing it, but then AI comes along and scrapes it, because it’s publicly available. A few days later, some guy (let’s call him Fred) fires up his favorite AI app and types in a prompt to write a similar screenplay. And the AI uses what it learned from your screenplay to write an entirely new work, one that now belongs entirely to Fred.
Some would argue that it’s no different from Fred reading your screenplay and using it for inspiration, and they have a point. AI isn’t any more capable than the people who wrote it, but it does work a lot faster and can recognize patterns in a bunch of artwork way faster than you or I ever will.
But AI is never original. It can’t think the way an artist does. Even so, in the case of drawings or paintings, that hardly limits it: the artwork AI generates is often as good as anything I’ve seen from some real live artists, which is a huge problem for those artists. With writing, though, the lack of originality shows, because frankly, AI’s scriptwriting is pretty awful.
As an example, my film team and I did an experiment a few months ago where we tried to generate a script for a short film with AI. We figured AI would come up with something strange and outrageous, and we could have fun making the film and showing it to people as an example of AI’s kookiness. But what it came up with, over and over, was straight-up boring and idiotic. The story was dull, and the dialogue was what you’d expect from an eight-year-old. We scrapped that idea pretty quick. So much for AI taking over the filmmaking business!
At the same time, I hear that screenwriters are concerned that studios will use AI to generate scripts rather than asking for original work, or worse, that they’ll ask real writers to clean up terrible AI-generated scripts. Maybe this will become a problem, I don’t know. If it costs the same to get it cleaned up as to buy something that actually came out of a writer’s creative brain, maybe this won’t ever be a thing.
Another concern in the film business is that studios will just go ahead and produce these terrible scripts as-is, which means they won’t be buying original work as much. I think this will be a self-correcting problem. After a studio spends millions to produce one of these things and no one wants to see it, they’ll see the light. At least we can hope so.
Is AI-generated art as good as human-generated art?
Is AI-generated art really as good as original art? How do you recognize that something was generated with AI?
AI images have a certain look to them. First of all, AI likes to use a lot of blue and orange for some reason, so that’s one clue. And things look too clean, like a photo that’s been retouched too much.
Another clue is what I call the six-finger problem. AI doesn’t always do a good job with protrusions like fingers and arms and legs. If there’s something strange going on with the hands and limbs in the picture, there’s a good chance it’s AI-generated.
AI-generated video is coming
AI-generated video is still in its infancy. There are a few platforms around that will animate something a little bit this way or that, almost like panning a picture. Or it can put together a slideshow type of thing, pretty dull stuff. But there’s nothing yet that can produce a convincing full-blown video from a text prompt.
I’m not talking about a video generated directly from an existing video, like when someone swaps out a face or takes a video of a speech and makes someone appear to say words they never said. Those types of videos, called deepfakes, have been around for a while. But decent videos from scratch, from a text prompt, are still not possible.
In early 2024 we saw demonstrations of Sora, an amazing tool from OpenAI for creating videos out of thin air, but it’s not ready yet. You know all those problems with arms and legs in images? Multiply those by a thousand and you’ll understand the challenge that realistic videos face. The Sora website shows some pretty interesting demonstrations of this issue, like a woman walking through Tokyo, where the scenery is perfect but her legs, well, they swap places every once in a while in a way that could make squeamish people nauseous. And there’s a dalmatian puppy that defies the laws of physics, a birthday party where everyone has big smiles while awful things happen to their hands, and prancing wolf cubs that mysteriously divide and multiply like big cute fluffy amoebas. I find these videos pretty awesome to look at from a technical perspective, but I advise you to stay away if you’re the kind to get nightmares from them.
My point is that video generated from scratch, using just text prompts alone, is still a ways off. However, there are some companies making good use of existing video clips to make new videos. A company called Invideo.ai uses stock video and photography to generate short videos for creators. The company pays for the rights to use the stock images, so it’s on the up-and-up. You type in a prompt, and it uses paid-for images and video to generate a video for you. This is, in my opinion, an ethical use of AI, where the creators of the original imagery are at least getting something out of it.
Will AI replace real artists?
This type of usage kind of addresses another problem that artists have with AI–the fact that they’re losing business. It’s just plain cheaper for a company to generate a bunch of AI images than to hire a live artist to draw them.
I’m not sure where I stand on this question. On one hand, in every case where I’ve used an AI-generated image, it wasn’t like, “Oh, ordinarily I would pay an artist a thousand dollars to make this image, and now I’m going to save that money by using AI! Mwahahaha!” It was more like, “Hmm, it would be nice to have an image here, so instead of grabbing something off a royalty-free website, I’m going to have a little fun and create something with AI.” No artists have lost work because of my choices. And most of the people I know who use AI images are in the same boat. It’s not like we were going to pay for it to begin with. The use of AI just saved us hours of scouring the web for something we could legally use.
Another perspective is that visual artists went through a similar crisis when stock photos and video started to become widely available. Instead of hiring a photographer and models for a day, a place like an advertising agency could save big bucks by using stock photographs or videos instead.
And well before that, new technology was constantly making artists upset. TV was supposed to kill movies, and photography was supposed to kill painting. None of these things happened. There will always be a need for real live photographers, cinematographers, painters, and other visual artists, and even home streaming services haven’t replaced the experience of watching a movie on the big screen in a giant theater with dozens of other people, all cheering or gasping or laughing at the same scene while you balance a big bucket of overpriced buttery popcorn on your lap. There’s room for all these artistic experiences in our lives.
My point is that technology shifts and changes every few years, with art forms experiencing more or less popularity for one reason or another, and I don’t think we can really blame AI for that.
On the other hand, I do agree that using copyrighted works to train an AI model without the artist’s permission, and without compensating them, is sketchy. Hopefully we’ll see more AI services turning to ethical models, where artists are compensated if their work is used to train an AI model.
