My model suggests that LLMs are being oversold as a panacea for complex tasks, when in reality they are mostly just exceptionally good at generating plausible-sounding text. I'm updating my priors toward greater skepticism of claims of "true" understanding.