Why does everybody want to make it count letters and stuff like this?
Dunno about the others; I do it because it demonstrates that these models are unable to understand and follow simple procedures, such as the ones needed to count letters, multiply numbers (including large ones; the procedure is the same), or check whether a sequence of words is a valid SATOR square (a sketch of that last check follows the list below).
And by showing this, a few things become evident:
That anyone claiming we’re a step away from AGI is a goddamn liar, if not worse (a gullible pile of rubbish).
That all the talk about “hallucinations” is a red-herring analogy.
That the output of those models cannot be used in any situation where reliability is essential.
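To make that concrete, here is roughly what one of those procedures looks like when written out. This is a minimal Python sketch; the name is_sator_square and the exact definition are mine, assuming a valid SATOR square means an n-by-n word square that reads the same left-to-right, top-to-bottom, right-to-left, and bottom-to-top:

    def is_sator_square(words):
        # Normalize to uppercase; a valid square needs n words of length n.
        grid = [w.upper() for w in words]
        n = len(grid)
        if n == 0 or any(len(w) != n for w in grid):
            return False
        for i in range(n):
            for j in range(n):
                # Rows must match columns (word-square symmetry) ...
                if grid[i][j] != grid[j][i]:
                    return False
                # ... and each cell must match its 180-degree rotation,
                # so the square also reads the same backwards.
                if grid[i][j] != grid[n - 1 - i][n - 1 - j]:
                    return False
        return True

    # The classic square passes; a one-letter change fails.
    print(is_sator_square(["SATOR", "AREPO", "TENET", "OPERA", "ROTAS"]))  # True
    print(is_sator_square(["SATOR", "AREPO", "TENET", "OPERA", "ROTAM"]))  # False

The procedure is entirely mechanical, which is exactly why a model's failure to carry it out is telling.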