No, I’m saying that LLMs are good for one thing: predicting the next most likely word. They can’t generalise, they can’t pick up special cases, and they really, really struggle with logical corollaries. There’s no brain in there.
Teaching a model that x = y won’t teach it that y = x (the so-called “reversal curse”). Even for discovery work, that’s going to miss a lot.
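To make the directionality point concrete, here’s a toy sketch. This is obviously not a real transformer, just a bigram counter, and the example sentence and function names are mine, but it shows why a pure next-word predictor trained on “A was B” has statistics for the A→B direction and literally nothing for B→A:

```python
# Minimal sketch: a bigram next-word model, trained on one "A was B" fact.
# Real LLMs are vastly more sophisticated, but the asymmetry is the same idea.
from collections import Counter, defaultdict

def train(sentences):
    """Count next-word frequencies for each word (a bigram model)."""
    counts = defaultdict(Counter)
    for s in sentences:
        words = s.split()
        for a, b in zip(words, words[1:]):
            counts[a][b] += 1
    return counts

def next_word(counts, word):
    """Return the most likely next word, or None if the word was never
    seen with a successor in training."""
    if counts[word]:
        return counts[word].most_common(1)[0][0]
    return None

corpus = ["Valentina Tereshkova was the first woman in space"]
model = train(corpus)

# Forward direction works: the training data contains it verbatim.
print(next_word(model, "Tereshkova"))   # -> "was"

# Reverse direction fails: nothing ever follows "space" in the corpus,
# so the model has no path from "first woman in space" back to the name.
print(next_word(model, "space"))        # -> None
```

The reverse question isn’t hard because the fact is missing; it’s hard because the fact was only ever stored in one direction.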