I started exploring these new tools in the early autumn. Of course, I then took a month off while we were travelling on our cruise around South America, ending in Buenos Aires, so I'm in a good position to see the strides made during my month of abstinence. The field seems to be improving at warp speed. Yet it still takes a lot of fiddling to end up with a plausible image.
Take, for example, the two images below, which I created recently.
Creating elephants had, until this attempt, always proved quite a struggle. Perhaps the AI models hadn't been trained on enough suitable examples. My earlier results would often include several trunks protruding from the head, and frequently an incorrect number of legs. While working on the image above, I also learned that AI still doesn't know the difference between African and Asian elephants. Although this is a decent rendering, the lighting on the elephant's skin is way out of whack for a jungle background. If you understand how Stable Diffusion works, you'll know that there is no such thing as a 'cut and paste job'; yet that is exactly what this image looks like.
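For the curious, here is roughly what generating such an image looks like in code. This is a minimal sketch using Hugging Face's diffusers library; the model ID, prompt, and filename are illustrative stand-ins rather than my actual setup. The point is that the pipeline starts from a single tensor of random noise and denoises the whole canvas at once, so the elephant and the jungle emerge together rather than being composited.

```python
# A minimal sketch of text-to-image generation with Stable Diffusion
# via Hugging Face's diffusers library. Model ID and prompt are
# illustrative assumptions, not the exact setup used for the image above.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
)
pipe = pipe.to("cuda")  # assumes a CUDA GPU is available

# The pipeline samples one latent noise tensor and denoises the entire
# canvas jointly -- subject and background are generated together, so
# there is no separate compositing or 'cut and paste' step anywhere.
image = pipe("an African elephant standing in a jungle clearing").images[0]
image.save("elephant.png")
```

So when the lighting looks pasted-on, that's the model failing to learn a consistent relationship between subject and scene, not an actual paste job.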
For the second image, I had to run the prompt a fair number of times to get the face right. You might notice that most AI images of people are straight-on, as if the subject were looking at an imaginary camera. That's because AI-generated faces are still notoriously poor when the head is tilted or turned away from the viewer. I had asked for a 'slender' girl, but the neck here hardly looks strong enough to support the weight of her head.
In conclusion, it still takes a fair bit of work to coax a suitable image out of even the most powerful AI generators such as Midjourney. But given the pace of change, I'm sure the improvements will come startlingly quickly.

