Human experts often say things like "when customers say X, they probably mean they want Y and Z," purely from long experience dealing with people in their field.
That is something a model can learn. Follow-up questions can be asked to clarify intent, or even to voice doubts ("are you sure you don't mean Y instead?"). Not that complicated.
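That clarifying-question behavior can be encouraged with nothing more than a system prompt. A minimal sketch follows; the model name, request shape, and the idea that you'd send this to some chat API are illustrative assumptions, not any specific vendor's interface:

```python
# Sketch: building a chat request whose system prompt tells the model
# to ask a clarifying question instead of guessing at ambiguous intent.
# The model name is a placeholder, and the actual API call is omitted.

def build_request(user_message: str) -> dict:
    """Return a chat-style request payload that nudges the model
    to clarify ambiguous requests before answering."""
    return {
        "model": "some-chat-model",  # placeholder, not a real model name
        "messages": [
            {
                "role": "system",
                "content": (
                    "If the user's request is ambiguous, ask a short "
                    "clarifying question (e.g. 'are you sure you don't "
                    "mean Y instead?') before answering."
                ),
            },
            {"role": "user", "content": user_message},
        ],
    }

request = build_request("Make the report better.")
```

The point is only that the "expert intuition" part lives in a few lines of instructions plus whatever the model picked up in training, not in any exotic machinery.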
(Could be why OpenAI chooses to degrade the experience so much when you disable chat history and training in ChatGPT 😀)
Today's LLMs have other quirks, like how adding certain words to a prompt can help even when they barely change the meaning, but that's not magic either.