Fifteen years ago people made fun of AI models because they could mistake some detail in a bush for a dog. Over time the models became more resistant to those kinds of errors. What changed was more data and better models.
Hallucination is the same type of error: the model is overly confident about something it’s wrong about. I don’t see why these errors would be any different.
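As a toy sketch of what "confidently wrong" looks like in a classifier's output (the labels and logits below are made up purely for illustration): the softmax puts almost all of the probability mass on the wrong class, which is the same failure mode whether the output is a class label or a generated sentence.

```python
import math

def softmax(logits):
    """Convert raw scores into probabilities that sum to 1."""
    exps = [math.exp(x) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical scores from a classifier looking at a photo of a bush.
# The true content is "bush", but the model's scores peak on "dog".
labels = ["dog", "bush", "cat"]
logits = [6.0, 2.5, 1.0]  # made-up numbers for illustration

probs = softmax(logits)
best = max(range(len(labels)), key=lambda i: probs[i])
print(f"predicted: {labels[best]} with confidence {probs[best]:.2f}")
# -> predicted: dog with confidence 0.96 (confidently wrong)
```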