
A breakout year for AI art brings as many questions as answers

In 2022, generative AI exploded into the mainstream. What comes next?

Illustration: Francis Scialabba


Luke Miller was receiving a lot of robot kittens.

It was October, a week or two after OpenAI released DALL-E 2 to the public, and as a product manager who had worked on the AI image-generation model, Miller recalled fielding messages from family members and friends “from all walks of life”—and, most frequently, from his 14-year-old niece, who has a penchant for cats.

Miller's personal influx of kitten pictures was part of a much larger trend: DALL-E 2 and other image-generation models like it had been going viral for months. Five weeks after OpenAI’s public release of the model, more than 3 million people were using DALL-E 2 to generate over 4 million images per day. Other popular image-generation models, like Stable Diffusion, were also gaining steam. Since Stable Diffusion’s public debut in late August, it’s surpassed 1 million downloads, Tom Mason, CTO of Stability AI, told us in early December.

In 2022, companies working on generative AI have taken over headlines, social media, and even VC dollars, raising $1.3 billion through late November—a 15% spike year over year, even amid a broader market contraction, according to Pitchbook data. Users have already brought AI-generated art to a wide range of industries, from marketing and graphic design to film and retail. One creation was featured on the cover of Cosmopolitan. Another (controversially) won an art prize at a state fair.

But generative AI has also introduced a slew of ethical and legal questions. Leading image platforms like Getty, Adobe, and Shutterstock have all taken different approaches to AI-generated content, for example, and a programmer recently sued Microsoft, GitHub, and OpenAI over GitHub Copilot, a generative AI tool that used vast amounts of publicly available computer code to learn to write its own.

“It’s hard for me to think of any new technology that has seen so much development in so short a time,” Andy Baio, a technologist, writer, and former CTO of Kickstarter, told us. “I’ve been in technology for over 20 years…and just to see in a matter of months, with new developments really every single week, just how rapidly this has gotten better…It’s an area that is overflowing with potential—and practical and creative applications—but it is also a morass of legal, ethical, and moral dilemmas.”

Generating returns

OpenAI introduced DALL-E 2 to the world in April, though the model wouldn’t become publicly available for months, and it quickly inspired similar models like Craiyon (formerly DALL-E mini).

In November, OpenAI released the public beta of DALL-E 2’s API—enabling developers and businesses to integrate the model into their own applications and products.
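For a sense of what that integration looks like in practice, here is a minimal sketch of calling the image-generation endpoint over HTTP. The endpoint path and request fields follow OpenAI's public API documentation from that period; the prompt, sizes, and function names are illustrative, not drawn from any particular product.

```python
# Minimal sketch of a DALL-E 2 image-generation request (illustrative).
import json
import os
import urllib.request

API_URL = "https://api.openai.com/v1/images/generations"


def build_request(prompt: str, n: int = 1, size: str = "1024x1024") -> dict:
    """Assemble the JSON body for an image-generation request."""
    if size not in {"256x256", "512x512", "1024x1024"}:
        raise ValueError(f"unsupported size: {size}")
    return {"prompt": prompt, "n": n, "size": size}


def generate(prompt: str, api_key: str) -> dict:
    """POST the request and return the parsed JSON response,
    which contains URLs for the generated images."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(build_request(prompt)).encode(),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


if __name__ == "__main__":
    key = os.environ.get("OPENAI_API_KEY")
    if key:  # only hit the network when a key is configured
        print(generate("a robot kitten, digital art", key))
```

A product like Cala's would wrap a call like this behind its own design interface, passing user-written descriptions through as prompts.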

One business using the API is Cala, an online clothing design and production platform. The first week Cala began enabling its users to generate and personalize designs with DALL-E 2, the company saw a 7x increase in signups from partners like fashion brands, designers, and creators, from about 50 to about 370, Andrew Wyatt, Cala’s CEO and co-founder, told us.

“The future of fashion is going to be AI-powered design and on-demand production,” Wyatt said, later adding, “I thought it was about five to 10 years out, but this year, as we started seeing a lot of the generative AI stuff come together, we just got really excited.”


For Stability AI, the release of Stable Diffusion was two or three years in the making, CTO Tom Mason told us. The company debuted the tech as an open-source model to its Discord community, which had 10,000–15,000 members at the time, partly because it’s “not a technology that should be owned by one company or controlled by one company,” according to Mason. Stable Diffusion was trained on a corpus of more than 5 billion images in total, and user applications include animation, VR, and marketing.

“The reaction has been incredible,” Mason said, adding, “[I] just wake up every morning, and it takes me probably half an hour, just to see on Twitter, just to catch up with all the projects that are being launched or announced just on Stable Diffusion, let alone the other models that are being developed.”

Generating concerns

But the number of ethical questions surrounding AI-generated art threatens to overshadow the tech’s wide range of use cases.

Since machine learning tools are trained on real-world data, they virtually always exhibit some degree of bias, and the AI art space is no different: models have at times sexualized images of women and generated wildly different results depending on what the AI perceives to be the race of the subject. There’s also the persistent risk of misuse, like generating deepfakes.

OpenAI implemented guardrails for DALL-E 2 in an attempt to deter potential misuse, such as misinformation and harassment, including prohibiting the model from generating human faces. For its part, Stable Diffusion’s next iteration will allow artists to opt out of having their work included in training data.

Ethical and legal questions over generative AI’s training process—in which models learn from a large corpus of content that is often human-created—also remain largely unanswered.

In addition to the GitHub Copilot lawsuit, there are art-specific debates over artists recognizing their own work in AI model outputs. In one example, a user combined DreamBooth and Stable Diffusion to create a model designed to mimic the style of illustrator Hollie Mengert after training it on 32 of her creations.

Whether AI-generated content qualifies as fair use is an open question. The legal doctrine holds that “transformative” uses, meaning those that “add something new, with a further purpose or different character, and do not substitute for the original use of the work,” are “more likely to be considered fair.” To that end, some experts believe it may be difficult to fight a broad-scale generative AI model’s output in court.

“Artists sort of found themselves in a situation where there were large commercial companies that were developing commercial services to generate art, often in the artist’s style, by name, using their work…without their knowledge, permission, or consent, and with no opt-out,” Baio said. “So this has opened a Pandora’s box of questions about artists’ rights, and copyright, and fair use…Each new development raises new questions.”
