They were created via a prompt, and that prompt probably included some tags to make them more attractive. It's standard practice to put tags like "ugly" and "deformed" into the negative prompt just to keep the hands and facial features from going wonky.
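For anyone who hasn't played with this, here's a minimal sketch of what that looks like with Hugging Face's diffusers library. The model ID and prompt text are just illustrative assumptions, not whatever was actually used for these images:

```python
# Minimal sketch, assuming a Stable Diffusion 1.5 checkpoint via diffusers.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # example model ID, an assumption
    torch_dtype=torch.float16,
).to("cuda")

image = pipe(
    prompt="portrait photo of a woman, detailed face, studio lighting",
    # The negative prompt steers generation *away* from these concepts;
    # "ugly" / "deformed" are the usual anti-wonky-anatomy tags.
    negative_prompt="ugly, deformed, disfigured, bad anatomy, extra fingers",
    guidance_scale=7.5,
    num_inference_steps=30,
).images[0]

image.save("portrait.png")
```

Under the hood the negative prompt swaps in for the unconditional embedding in classifier-free guidance, so the sampler is actively pushed away from those concepts rather than just not asked for them.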
There are no elderly women, no female toddlers, and so forth either. Presumably just not what whoever generated this was going for. You can get those from many AI models if you want them.
Battleship coordinates: (B10). Also, (I4) looks a lot like my niece. I really think it depends on your definition of "average", though. But as @fubo indicated, there are zero black people in this photo. There's some vaguely Asian, roughly Middle Eastern-looking, sort of South American, and whatever that is in (M8), but there are distinctly zero black people pictured.
Using Stable Diffusion for a DnD-related project, I've found that it's actually weirdly hard to get it to generate people (of either sex) who aren't attractive. I wonder if it's a bias in the training material, or a deliberate bias introduced into the model because most people want attractive people in their AI pics.
That's true, but it's not like ugly people don't get photographed; ultimately a professional photographer will take photos of whoever pays them. That explanation accounts for part of the bias, I think, but not all of it.
If I got pictures taken by a photographer, I would not allow them to be used as training data. I don't even like looking in a mirror. Maybe that's part of why there are fewer pictures of ugly people to train on.
But then, the other day I was messing around with an image generation model and it took me way too long to realize that it was only generating East Asian-looking faces unless explicitly instructed not to.
There's a ton of fucked-up hands in there. It also seems to struggle with handbags. It's kinda fun to try to find one that isn't flawed in some way; so far I haven't found one.
Not really. OP doesn't mention how they were generated or how the pictures were selected.
It could just reflect the kinds of photos that went into the image generator and the biases in its training data, unless the photos were chosen based on an attractiveness survey or some such.