But this is a fundamental limitation of what LLMs are. They are not AI; they have nothing in common with intelligence, and they don't have a particularly compelling path forward.
They are also, even setting aside that they're terrible for almost every purpose, obscenely heavy: what we're calling "current" models simply can't be run on consumer hardware, dedicated GPU or not.
Finally, the idea that they can't get worse is just as flawed. They're heavily poisoning the well of future training data, and ridiculous copyright nonsense could very well curtail training further, even though training on copyrighted material doesn't in any way constitute copyright infringement.