Mike Winkelmann is used to being stolen from. Before he became Beeple, the world’s third most-expensive living artist with the $69.3 million sale of Everydays: The First 5000 Days in 2021, he was a run-of-the-mill digital artist, picking up freelance gigs from musicians and video game studios while building a social media following by posting his artwork incessantly.
Whereas fame and fortune in the art world come from restricting access to an elite few, making it as a digital creator is about giving away as much of yourself as possible. For free, all the time.
“My attitude’s always been, as soon as I post something on the internet, that’s out there,” Winkelmann said. “The internet is an organism. It just eats things and poops them out in new ways, and trying to police that is futile. People take my stuff and upload it and profit from it. They get all the engagements and clicks and whatnot. But whatever.”
Winkelmann leveraged his two million followers and became the face of NFTs. In the process, he became a blue-chip art star, with an eponymous art museum in South Carolina and pieces reportedly selling for close to $10 million to major museums elsewhere. That’s without an MFA, a gallery, or prior exhibitions.
“You can have [a contemporary] artist who is extremely well-selling and making a shitload of money, and the vast majority of people have never heard of this person,” he said. “Their artwork has no effect on the broader visual language of the time. And yet, because they’ve convinced the right few people, they can be successful. I think in the future, more people will come up like I did—by convincing a million normal people.”
In 2021 he might have been right, but more recently that path to art world fame is being threatened by a potent force: artificial intelligence. Last year, Midjourney and Stability AI turned the world of digital creators on its head when they released AI image generators to the public. Both now boast more than 10 million users. For digital artists, the technology represents lost jobs and stolen labor. The major image generators were trained by scraping billions of images from the internet, including countless works by digital artists who never gave their consent.
In the eyes of those artists, tech companies have unleashed a machine that scrambles human—and legal—definitions of forgery to such an extent that copyright may never be the same. And that has big implications for artists of all kinds.
In December, Canadian illustrator and content creator Sam Yang received a snide email from a stranger asking him to judge a sort of AI battle royale in which he could decide which custom artificial intelligence image generator best mimicked his own style. In the months since Stability AI released the Stable Diffusion generator, AI enthusiasts had rejiggered the tool to produce images in the style of specific artists; all they needed was a sample of a hundred or so images. Yang, who has more than three million followers across YouTube, Instagram, and Twitter, was an obvious target.
Netizens took hundreds of his drawings posted online to train the AI to pump out images in his style: girls with Disney-wide eyes, strawberry mouths, and sharp anime-esque chins. “I couldn’t believe it,” Yang said. “I kept thinking, This is really happening … and it’s happening to me.”
Yang trawled Reddit forums in an effort to understand how anyone could think it was OK to do this, and kept finding the same assertion: there was no need to contact artists for permission. AI companies had already scraped the digital archives of thousands of artists to train the image generators, the Redditors reasoned. Why couldn’t they?
Like many digital artists, Yang has been wrestling with this question for months. He doesn’t earn a living selling works in rarefied galleries, auction houses, and fairs, but instead by attracting followers and subscribers to his drawing tutorials. He doesn’t sell to collectors, unless you count the netizens who buy his T-shirts, posters, and other merchandise. It’s a precarious environment that has gotten increasingly treacherous.
“AI art seemed like something far down the line,” he said, “and then it wasn’t.”
Yang never went to a lawyer, as the prospect of fighting an anonymous band of Redditors in court was overwhelming. But other digital artists aren’t standing down so easily. In January, several filed a class action lawsuit against Stability AI, Midjourney, and the image-sharing platform DeviantArt.
Brooklyn-based illustrator Deb JJ Lee is one of those artists. By January, Lee was sick and tired of being overworked and undervalued. A month earlier, Lee had gone viral after posting a lowball offer from Epic Games to do illustration work for the company’s smash hit Fortnite, arguably the most popular video game in the world. Epic, which generated over $6 billion in revenue last year, offered $3,000 for an illustration and ownership of the copyright. For Lee, it was an all-too-familiar example of the indignities of working as a digital artist. Insult was added to injury when an AI enthusiast—who likely found out about Lee from the viral post—released a custom model based on Lee’s work.
“I’ve worked on developing my skills my whole life and they just took it and made it to zeros and ones,” Lee said. “Illustration rates haven’t kept up with inflation since the literal 1930s.”
Illustration rates have stagnated and, in some cases, shrunk since the ’80s, according to Tim O’Brien, a former president of the Society of Illustrators. The real money comes from selling usage rights, he said, especially to big clients in advertising. Lee continued, “I know freelancers who are at the top of their game that are broke, I’m talking [illustrators who do] New Yorker covers. And now this?”
Lee reached out to their community of artists and, together, they learned that the image generators, custom or not, were trained on the LAION dataset, a collection of 5.85 billion captioned images scraped, without permission, from the internet. Almost every digital artist has images in LAION, given that DeviantArt and ArtStation were lifted wholesale, along with Getty Images and Pinterest.
The artists who filed suit claim that the use of these images is a brazen violation of intellectual property rights; Matthew Butterick, who specializes in AI and copyright, leads their legal team. (Getty Images is pursuing a similar lawsuit, having found 12 million of their images in LAION.) The outcome of the case could answer a legal question at the center of the internet: in a digital world built on sharing, are tech companies entitled to everything we post online?
The class action lawsuit is tricky. While it might seem obvious to claim copyright infringement, given that billions of copyrighted images were used to create the technology underlying image generators, the artists’ lawyers are attempting to apply existing legal standards made to protect and restrict human creators, not a borderline-science-fiction computing tool. To that end, the complaint describes a number of abuses: First, the training process behind diffusion models is suspect because it requires images to be copied and re-created as the model is tested. This alone, the lawyers argue, constitutes an unlicensed use of protected works.
From this understanding, the lawyers argue that image generators essentially call back to the dataset and mash together millions of bits of millions of images to create whatever image is requested, sometimes with the explicit instruction to recall the style of a particular artist. Butterick and his colleagues argue that the resulting product then is a derivative work, that is, a work not “significantly transformed” from its source material, a key standard in “fair use,” the legal doctrine underpinning much copyright law.
As of mid-April, when Art in America went to press, the courts had made no judgment in the case. But Butterick’s argument irks technologists who take issue with the suit’s description of image generators as complicated copy-paste tools.
“There seems to be this fundamental misunderstanding of what machine learning is,” said Ryan Murdock, a developer who has worked on the technology since 2017, including for Adobe. “It’s true that you want to be able to recover information from the images and the dataset, but the whole point of machine learning is not to memorize or compress images but to learn higher-level general information about what an image is.”
Diffusion, the technology undergirding image generators, works by adding random noise, or static, to an image in the dataset, Murdock explained. The model then attempts to fill in the missing parts of the image using hints from a text caption that describes the work, and those captions sometimes refer to an artist’s name. The model’s efforts are then scored based on how accurately the model was able to fill in the blanks, leading it to contain some information associating style and artist. AI enthusiasts working under the name Parrot Zone have completed more than 4,000 studies testing how many artist names the model recognizes. The count is close to 3,000, from art historical figures like Wassily Kandinsky to popular digital artists like Greg Rutkowski.
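For readers who want the mechanics concrete, the sketch below is a toy, hypothetical version of that training step, written in PyTorch. The ToyDenoiser, the random “caption embeddings,” and the simplified noise schedule are illustrative stand-ins, not Stable Diffusion’s actual architecture (a far larger U-Net conditioned on a CLIP text encoder, operating on compressed latents). But the objective is the same idea Murdock describes: blend static into a training image, ask the model to predict that static given the caption, and score it on how close it gets.

```python
# Toy illustration of a diffusion training step, under the assumptions above.
# Nothing here is Stable Diffusion's real code; the names are hypothetical.
import torch
import torch.nn as nn

class ToyDenoiser(nn.Module):
    """Stand-in for a U-Net: predicts the noise that was added to an image."""
    def __init__(self, caption_dim: int = 16):
        super().__init__()
        self.caption_proj = nn.Linear(caption_dim, 3)  # caption conditioning
        self.net = nn.Conv2d(3, 3, kernel_size=3, padding=1)

    def forward(self, noisy_image, caption_embedding):
        # Inject the caption (e.g. "a portrait in the style of <artist>")
        # as a per-channel bias, then predict the added noise.
        bias = self.caption_proj(caption_embedding)[:, :, None, None]
        return self.net(noisy_image + bias)

model = ToyDenoiser()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# One training step on a fake batch: 4 tiny RGB "images" and 4 fake caption
# embeddings (a real system would embed captions with a text encoder).
images = torch.rand(4, 3, 32, 32)     # stand-ins for scraped dataset images
captions = torch.rand(4, 16)          # stand-ins for caption embeddings
noise = torch.randn_like(images)      # the random "static"
t = torch.rand(4, 1, 1, 1)            # per-image noise level
noisy = (1 - t) * images + t * noise  # blend image and static (simplified schedule)

optimizer.zero_grad()
predicted_noise = model(noisy, captions)
loss = nn.functional.mse_loss(predicted_noise, noise)  # how well it filled in the blanks
loss.backward()
optimizer.step()
print(f"denoising loss: {loss.item():.4f}")
```

Because captions sometimes include an artist’s name, training on millions of such pairs leaves the model with statistical associations between names and visual styles, which is what the Parrot Zone studies probe.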
The class action suit aims to protect human artists by asserting that, because an artist’s name is invoked in the text prompt, an AI work can be considered “derivative” even if the work produced is the result of pulling content from billions of images. In effect, the artists and their lawyers are trying to establish copyright over style, something that has never before been legally protected.
The most analogous recent copyright case involves fine artists debating just that question. Last fall, well-known collage artist Deborah Roberts sued artist Lynthia Edwards and her gallerist, Richard Beavers, accusing Edwards of imitating her work and thus confusing potential collectors and harming her market. Attorney Luke Nikas, who represents Edwards, recently filed a motion to dismiss the case, arguing that Roberts’s claim veered into style as opposed to the forgery of specific elements of her work.
“You have to give the court a metric to judge against,” Nikas said. “That means identifying specific creative choices, which are protected, and measuring that against the supposedly derivative work.”
Ironically, Nikas’s argument is likely to be the one used by Stability AI and Midjourney against the digital artists. Additionally, the very nature of the artists’ work as content creators makes assessing damages a tough job. As Nikas described, a big part of arguing copyright cases entails convincing a judge that the derivative artwork has meaningfully impacted the plaintiff’s market, such as the targeting of a specific collecting class.
In the end, it could be the history of human-made art that empowers an advanced computing tool: copyright does not protect artistic style, precisely so that new generations of artists can learn from those who came before, or remix works to make something new. In 2013 a federal appeals court famously ruled that Richard Prince did not violate copyright in incorporating a French photographer’s images into his “Canal Zone” paintings, to say nothing of the long history of appropriation art practiced by Andy Warhol, Barbara Kruger, and others. If humans can’t get in trouble for that, why should AI?
In mid-March, the United States Copyright Office released a statement of policy on AI-generated works, ruling that the AI-generated components of a work are not eligible for copyright protection. This came as a relief to artists who feared that their most valuable asset—their usage rights—might be undermined by AI. But the decision also hinders the court’s ability to determine how artists are being hurt financially by AI image generators. Quantifying damages online is tricky.
Late last year, illustrator and graphic novelist Tomer Hanuka discovered that someone had created a custom model based on his work, and was selling an NFT collection titled “Punks by Hanuka” on the NFT marketplace OpenSea. But Hanuka had no idea whom to contact; such scenarios usually involve anonymous users who disappear as soon as trouble strikes.
“I can’t speak to what they did exactly because I don’t know how to reach them and I don’t know who they are,” Hanuka said. “They don’t have any contact or any leads on their page.” The hurt, he said, goes deeper than run-of-the-mill online theft. “You develop this language that can work with many different projects because you bring something from yourself into the equation, a piece of your soul that somehow finds an angle, an atmosphere. And then this [AI-generated art] comes along. It’s passable, it sells. It doesn’t just replace you but it also muddies what you’re trying to do, which is to make art, find beauty. It’s really the opposite of that.”
For those who benefited from that brief magical window when a creator could move more easily from internet to art world fame, new tools offer a certain convenience. With his new jet-setting life, visiting art fairs and museums around the world, Winkelmann has found a way to continue posting an online illustration a day, keeping his early fans happy by letting AI make the menial, time-consuming imagery in the background.
This is exactly what big tech promised AI would do: ease the creative burden that, relatively speaking, a creator might see as not all that creative. Besides, he pointed out, thieving companies are nothing new. “The idea of, like, Oh my god, a tech company has found a way to scrape data from us and profit from it––what are we talking about? That’s literally been the last 20 years,” he said. His advice to up-and-coming digital artists is to do what he did: use the system as much as possible, and lean in.
That’s all well and good for Winkelmann: He no longer lives in the precarious world of working digital artists. Beeple belongs to the art market now.