Been playing with Stable Diffusion.
The output varies between WTF, nightmare fuel, and a nice waterfall.
OpenAI on Wednesday made DALL-E, its cloud service for generating images from text prompts, available to the public without any waitlist. But the crowd that had gathered outside its gate may have moved on. The original DALL-E debuted in January 2021 and was superseded by DALL-E 2 this April. The latest release, which offers …
Been playing with DALL-E (my waitlist was four days, a bit over a month ago).
It can create some interesting images depending on how it interprets the prompt, but they really need to fix the faces. It clearly understands how a human body is constructed, and generally does quite good work, except for the eyes, which are a complete horror show. It's as if the AI has a routine to pick an eye to go with the face, but runs it twice, completely independently, for each eye. The results are generally rather awful. Sometimes it works for the character, but more often a person with two noticeably different eyes just looks bizarre.
I have no training in art. After an hour I was producing quite nice stuff, easily better than 50% of the Tate Modern.
My first thought is that this tool is wonderful for band album covers. The actual piece that won the award is beautiful.
There's a weird argument that humans can take existing art as a reference but apparently AI cannot? I find that argument silly. There will always be a market for old-school art, but now my 13-year-old cousin can create this stuff with two hours of training.