San Francisco–based concept artist RJ Palmer got his big break on DeviantArt. After joining the image-sharing platform in 2005, the 33-year-old began posting realistic drawings of Pokémon there in the style of the Japanese video game series Monster Hunter. His work soon made the rounds online and, in 2016, the production designer for the movie Detective Pikachu reached out. Palmer has freelanced ever since for the entertainment industry, primarily for video games.
“DeviantArt for me was a pretty big deal,” Palmer told ARTnews. “I became one of their success stories.” But these days, Palmer describes the site as an “unusable mess” because of “AI crap”—work produced by AI-powered text-to-image generators like Midjourney, Stable Diffusion, and OpenAI’s DALL-E.
With more than 75 million users, DeviantArt is one of the latest—and largest—online spaces to grapple with AI-generated images. Last month, the site announced that it would require users to disclose whether works they submitted were created using AI tools; the announcement followed one by Google in May of a similar plan to label “AI-generated images,” just weeks before the European Union urged other Big Tech platforms to follow suit.
Both the EU’s and Google’s arguments for labeling AI-generated images have centered on misinformation. When an image of Pope Francis in a white puffer jacket went viral earlier this year, for example, many people didn’t immediately know it was faked. The threat of misinformation around news events or elections appears obvious. But another debate around AI labeling touches on the core of how we define art, who gets to make it, and who can profit from it. That conversation has enraged creators on both sides.
“[DeviantArt] can be like, ‘Oh, there’s suddenly all these people using our service, they’re uploading tons of images.’ It’s good for—at least they think it’s good for—the site’s health, even though it’s driving … actual longtime users and … regular artists away from the service,” Palmer said.
As a successful digital artist, Palmer has become a spokesman of sorts for artists on DeviantArt who object to AI-generated images on the platform.
The first issue, for Palmer and other digital artists, is how such generators were developed—by “stealing” other artists’ work, as he put it. Most programs were trained on the LAION dataset, a collection of more than 5 billion images scraped primarily from public websites. A class action lawsuit filed by artists in January against DeviantArt, Midjourney, and Stability AI—the company behind Stable Diffusion—estimated that 3.3 million images in LAION were ripped from DeviantArt. (DeviantArt has said in public statements that it was never asked for, nor did it give, permission for this.)
Artists like Palmer were already upset when those text-to-image AI generators launched early last year, but the conflict escalated in November when DeviantArt released its own version, DreamUp, that automatically included users’ creations in its dataset. Opting out required users to delete each individual image, a prohibitive burden considering that many, like Palmer, have thousands of works on the platform.
Less than 12 hours after DreamUp’s launch, DeviantArt announced that it was reversing the policy and would no longer keep users’ artworks in the dataset by default. But the reversal was mostly moot: DreamUp was built on Stable Diffusion, and therefore on the LAION dataset, which already includes countless images by DeviantArt users.
Palmer’s criticism of DeviantArt is as much about the platform’s tone-deaf rollout of AI as about AI itself. The day DreamUp launched, Palmer hosted a Twitter Spaces conversation with several DeviantArt executives. One question on which Palmer pressed the company: If DeviantArt was intent on creating an AI image generator, why not use an “ethically sourced” dataset?
CMO Liat Karpel Gurwicz told Palmer that users would upload AI images even if the platform banned them. By introducing its own generator, DeviantArt retained some control. “We cannot go and undo what these datasets and models have already done … ” Gurwicz said. “We could build our own model, that’s true … But doing that would take us probably a couple of years in reality.”
Despite DeviantArt’s insistence that it was protecting artists, DreamUp fueled a massive user backlash. Users spoke out to the media and launched online protests; message boards were rife with complaints, and some users said they would leave the site entirely.
Beyond the ethics of AI training datasets, Palmer’s issue with AI-generated images—and why he supports labeling—comes down to time and creativity. Users say DeviantArt’s homepage and search are now flooded with low-quality, AI-generated images that likely took seconds or minutes to create, many of which aren’t labeled, despite the site’s new requirement. By Palmer’s measure, AI has turned a vibrant artistic community into an image dump.
Palmer has also noticed other users imitating his work using AI (and not well, he said). If the training improves, he’s worried AI could replace him or other artists entirely. Unfortunately, artists can’t copyright a style, only specific artworks. And according to the US Copyright Office, AI creators can’t even do that.
This past March, the office released an official position that only “human-authored” works are eligible for copyright. Many artists applauded the decision, as it seemingly eliminated corporations’ ability to profit from AI-generated images and, therefore, offered some hope for the protection of artists’ livelihoods. AI labeling, then, would help establish what images can and cannot be legally protected.
But for Jason M. Allen, the 40-year-old founder of tabletop games studio Incarnate Games, arguments over copyright miss the point. In his view, artists and AI alike create work that is influenced by, and derived from, an amalgamation of images, experiences, and art.
“So really, every experience that you have, every book that you read, every piece of art that you look at, is going through your neural network. And then you’re using that experience and your recollection of these ideas and combination of concepts to then express yourself using your choice of medium and technique,” Allen said of the artistic process. “And I can’t? Because it’s artificial intelligence?”
This past September, Allen won first place at the Colorado State Fair annual art competition with his AI-generated image Théâtre d’Opéra Spatial. By Allen’s estimation, he spent more than 80 hours experimenting with different prompts on Midjourney to generate the image. He also founded Art Incarnate, where he sells prints and other upcoming AI creations.
The US Copyright Office’s decision, Allen argues, ignores the creativity involved in using AI tools, and he has since appealed in an effort to copyright his award-winning piece. For Allen, mandatory AI labeling reflects a similar bias against AI creators and creates multiple “tiers” of artists.
“I feel like it’s impossible to remove the human element from the work,” said Allen, who doesn’t consider himself an artist. “There’s always a user, there’s always a person, there’s always a creative force.”
The idea that AI generators are just another tool for artists has parallels to 19th-century debates about photography, which was seen, at the time, as a mechanistic reproducer of fact rather than a conduit to creativity. In an 1884 US Supreme Court case, a lithograph company that reproduced a photograph of Oscar Wilde argued the original could not be copyrighted because photographs lacked originality, being the result of a simple button push. In the decision, Justice Samuel Miller deemed the photograph an “original work of art,” noting the creative decisions that went into the portrait’s production. Similar debates and court battles played out in France, the United Kingdom, and elsewhere at the time.
Ahmed Elgammal, a professor of computer science at Rutgers University and the director of the Rutgers Art and Artificial Intelligence Lab, sees photography and AI similarly, as tools.
“I think it might be fair to think of labeling [images] as AI the same as labeling an image a photograph or labeling an image as digitally created,” Elgammal told ARTnews, adding that fake images circulating on social media are “really problematic.”
Even if platforms agree that all AI-generated works should be labeled, the challenge remains how to do so. User self-reporting has obvious problems. Google’s AI labeling tool, rolled out in May, asks text-to-image generators to label works at the point of production; the company said Midjourney and others would join in the coming months. Meanwhile, using an algorithm or automated detection system to determine whether something was created with AI could introduce more problems than it solves.
“A technological solution to a technological problem, that’s gonna lead to more technological problems,” Jennifer Gradecki, assistant professor of art and design at Northeastern University, told ARTnews.
Derek Curry, also an art and design professor at Northeastern, told ARTnews that algorithmic detection would likely end up with false positives and false negatives. That could have a major impact on artists, depending on how platforms and governments choose to approach AI copyright in the future.
The real problem with labeling AI, Gradecki and Curry believe, is that the lines are blurry. Almost all smartphone cameras and many digital cameras already use AI to enhance images with image stabilization or color optimization. Image-editing software also offers AI enhancement. How much AI processing is acceptable before an image is deemed AI-generated?
“Even if you require large companies that are under some sort of regulation to label AI-generated content, within that even there’s a question of what constitutes AI-generated content,” Curry said.
While it’s clear that AI image generators are not going anywhere, Elgammal, the computer science professor, thinks the threat to artists will blow over.
“Soon people will realize that they are losing a lot by using these tools, their identity is lost, control is lost,” Elgammal said. “And at the end, art created by these kinds of tools will look the same. For me, anything produced by Midjourney looks the same.”