OpenAI’s DALL-E AI image generator can now edit pictures, too
DALL-E 2 results for “Teddy bears mixing sparkling chemicals as mad scientists, steampunk.” | OpenAI
Artificial intelligence research group OpenAI has created a new version of DALL-E, its text-to-image generation program. DALL-E 2 features a higher-resolution and lower-latency version of the original system, which produces pictures depicting descriptions written by users. It also includes new capabilities, like editing an existing image. As with previous OpenAI work, the tool isn’t being directly released to the public. But researchers can sign up online to preview the system, and OpenAI hopes to later make it available for use in third-party apps.
The original DALL-E, a portmanteau of the artist “Salvador Dalí” and the robot “WALL-E,” debuted in January of 2021. It was a limited but fascinating test of AI’s ability to visually represent concepts, from mundane depictions of a mannequin in a flannel shirt to “a giraffe made of turtle” or an illustration of a radish walking a dog. At the time, OpenAI said it would continue to build on the system while examining potential dangers like bias in image generation or the production of misinformation. It’s attempting to address those issues using technical safeguards and a new content policy while also reducing its computing load and pushing forward the basic capabilities of the model.
A DALL-E 2 result for “Shiba Inu dog wearing a beret and black turtleneck.”
One of the new DALL-E 2 features, inpainting, applies DALL-E’s text-to-image capabilities on a more granular level. Users can start with an existing picture, select an area, and tell the model to edit it. You can block out a painting on a living room wall and replace it with a different picture, for instance, or add a vase of flowers on a coffee table. The model can fill (or remove) objects while accounting for details like the directions of shadows in a room. Another feature, variations, is sort of like an image search tool for pictures that don’t exist. Users can upload a starting image and then create a range of variations similar to it. They can also blend two images, generating pictures that have elements of both. The generated images are 1,024 x 1,024 pixels, a leap over the 256 x 256 pixels the original model delivered.
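The mask mechanics behind inpainting can be sketched in a few lines. This is a toy illustration of the concept, not OpenAI's code: the user "blocks out" a region of the picture, and only the pixels under that mask are replaced by model-generated content, while the rest of the image is preserved. The function name and array shapes here are hypothetical.

```python
import numpy as np

def apply_inpainting_mask(image, mask, generated):
    """Replace only the masked region of `image` with `generated` pixels.

    image, generated: (H, W, 3) float arrays; mask: (H, W) boolean array
    where True marks the area the model should fill in.
    """
    result = image.copy()
    result[mask] = generated[mask]   # edit lands only where the user selected
    return result

# A 4x4 "room" image, with the top-left 2x2 patch selected for editing.
image = np.zeros((4, 4, 3))                # existing picture (all black)
generated = np.ones((4, 4, 3))             # stand-in for the model's output
mask = np.zeros((4, 4), dtype=bool)
mask[:2, :2] = True                        # the blocked-out region

edited = apply_inpainting_mask(image, mask, generated)
```

The real system conditions its generation on the surrounding pixels (which is how it matches shadows and lighting), but the selection-and-replace contract is the same.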
DALL-E 2 builds on CLIP, a computer vision system that OpenAI also announced last year. “DALL-E 1 just took our GPT-3 approach from language and applied it to produce an image: we compressed images into a series of words and we just learned to predict what comes next,” says OpenAI research scientist Prafulla Dhariwal, referring to the GPT model used by many text AI apps. But the word-matching didn’t necessarily capture the qualities humans found most important, and the predictive process limited the realism of the images. CLIP was designed to look at images and summarize their contents the way a human would, and OpenAI iterated on this process to create “unCLIP” — an inverted version that starts with the description and works its way toward an image. DALL-E 2 generates the image using a process called diffusion, which Dhariwal describes as starting with a “bag of dots” and then filling in a pattern with greater and greater detail.
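Dhariwal's "bag of dots" description can be made concrete with a toy loop. Real diffusion models learn a neural network that predicts and removes noise at each step; in this sketch a hypothetical oracle that already knows the target pattern stands in for that network, so only the start-from-noise-and-refine shape of the process is illustrated.

```python
import numpy as np

rng = np.random.default_rng(0)

# The "image" we want the process to converge toward.
target = np.linspace(0.0, 1.0, 16).reshape(4, 4)

# Step 0: pure random noise -- the "bag of dots."
x = rng.normal(size=(4, 4))

for step in range(50):
    # Each step strips away a little noise, moving x toward the pattern.
    # A trained diffusion model would predict this correction itself.
    x = x + 0.1 * (target - x)

error = np.abs(x - target).max()   # remaining noise shrinks toward zero
```

After enough steps the residual noise is negligible, which mirrors how a diffusion sampler fills in a pattern "with greater and greater detail."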
An existing image of a room with a flamingo added in one corner.
Interestingly, a draft paper on unCLIP says it’s partly resistant to a very funny weakness of CLIP: the fact that people can fool the model’s identification capabilities by labeling one object (like a Granny Smith apple) with a word indicating something else (like an iPod). The variations tool, the authors say, “still generates pictures of apples with high probability” even when using a mislabeled picture that CLIP can’t identify as a Granny Smith. Conversely, “the model never produces pictures of iPods, despite the very high relative predicted probability of this caption.”
DALL-E’s full model was never released publicly, but other developers have honed their own tools that imitate some of its functions over the past year. One of the most popular mainstream applications is Wombo’s Dream mobile app, which generates pictures of whatever users describe in a variety of art styles. OpenAI isn’t releasing any new models today, but developers could use its technical findings to update their own work.
A DALL-E 2 result for “a bowl of soup that looks like a monster, knitted out of wool.”
OpenAI has implemented some built-in safeguards. The model was trained on data with some objectionable material weeded out, ideally limiting its ability to produce similar content. There’s a watermark indicating the AI-generated nature of the work, although it could theoretically be cropped out. As a preemptive anti-abuse feature, the model also can’t generate any recognizable faces based on a name — even asking for something like the Mona Lisa would apparently return a variant on the actual face from the painting.
DALL-E 2 will be testable by vetted partners with some caveats. Users are banned from uploading or generating images that are “not G-rated” and “could cause harm,” including anything involving hate symbols, nudity, obscene gestures, or “major conspiracies or events related to major ongoing geopolitical events.” They must also disclose the role of AI in generating the images, and they can’t serve generated images to other people through an app or website — so you won’t initially see a DALL-E-powered version of something like Dream. But OpenAI hopes to add DALL-E 2 to its API toolset later, allowing it to power third-party apps. “Our hope is to keep doing a staged process here, so we can keep evaluating from the feedback we get how to release this technology safely,” says Dhariwal.
Additional reporting from James Vincent.