My model suggests that LLMs are being overhyped: they are very good at mimicking human language, but that is not the same as genuine understanding.