For a snappy reply all I can say is that I did qualify that a “conventional” LLM likely cannot become intelligent. I’d like to see examples of LLMs paired with sensorimotor systems, if you know of any. Although I have often been inclined to describe human intelligence as merely a bag of tricks that, taken together, give the impression of a coherent whole, we have a rather well developed bag of tricks that can’t easily be teased apart. Merely interfacing a Boston Dynamics robo-dog with the OpenAI API may have some amusing applications, but nothing could compel me to admit it as an AGI.
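Just to make concrete how thin that sort of coupling would be, here is a toy sketch of what I mean - only the OpenAI chat call reflects a real API; the robot side is a made-up stub and the model name is merely a placeholder:

```python
# Toy sketch of "robo-dog glued to the OpenAI API" - the robot side is
# a hypothetical stub, not any real Boston Dynamics SDK call.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def send_command(action: str) -> None:
    """Hypothetical stand-in for whatever the robot SDK would expose."""
    print(f"robot would execute: {action}")

# The whole "integration" is a prompt asking the model to pick a word.
response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {"role": "system", "content": "Reply with exactly one of: forward, back, left, right, sit."},
        {"role": "user", "content": "There is an obstacle ahead and open space to the left."},
    ],
)
send_command(response.choices[0].message.content.strip())
```

The model never perceives or acts; it just emits a word that some other code translates into motion, which is why I can’t see that kind of arrangement as general intelligence.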
I think current LLMs are already intelligent. I’d also say cats, mice, fish, birds are intelligent - to varying degrees of course.
> I’d like to see examples of LLMs paired with sensorimotor systems, if you know of any
If you’re referring to my comment about hobbyist projects, I was just thinking of the sorts of things you’ll find by searching sites like YouTube - perhaps this one is a good example (but I haven’t watched it, as I’m avoiding YouTube). I don’t know if anyone has tried to incorporate a “learning to walk” type of stage into LLM training, but my point is that it would be perfectly possible, if there were reason to think it would give the LLM an edge.
The matter of how intelligent humans are is another question, and a relevant one, because AFAIK when people talk about AGI now, they’re talking about an AI that can do better on average than a typical human at any arbitrary task. It’s not a particularly high bar; we’re not talking about super-intelligence, I don’t think.
I’ve watched a couple of these. You might find FreeTube useful for getting YT content without the ugly ads and algo stuff.
There are shortcomings that keep an LLM from approaching AGI in that way. They aren’t interacting with (experiencing) the world in a multisensory or real-time way; they are still responding to textual prompts within their frame of reference in a discrete, turn-taking manner. They still require domain-specific instructions, too.
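To sketch what I mean by discrete and turn-taking (the `llm_reply` function here is just a hypothetical stand-in for whatever chat endpoint happens to be in use):

```python
# Minimal sketch of the turn-taking loop: the model only ever sees text,
# only when prompted, one discrete exchange at a time.
history = [{"role": "system", "content": "You are a helpful assistant."}]

def llm_reply(messages: list[dict]) -> str:
    """Hypothetical stand-in for a call to some chat completion endpoint."""
    return "..."  # one block of text per turn, nothing in between

while True:
    user_text = input("> ")  # nothing happens until a prompt arrives
    history.append({"role": "user", "content": user_text})
    answer = llm_reply(history)  # one discrete response per turn
    history.append({"role": "assistant", "content": answer})
    print(answer)
```

Between turns, nothing happens for the model at all; there is no ongoing stream of sensation for it to learn from.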
An AGI that is directly integrated with its sensorimotor apparatus in the same way we are would, for all intents and purposes, have a subjective sense of self that stems from the fact that it can move, learn, predict, and update in real time from its own fixed perspective.
Jeff Hawkins’ work still has me convinced that the fixed perspective to which we are all bound is the wellspring of subjectivity, and that any intermediary apparatus (such as an AI subsystem for recognizing pictures that feeds words about those pictures to an LLM that talks to another LLM, etc., in order to generate a semblance of complex behaviour) renders the whole a sort of Chinese Room thought experiment, and the LLM remains a p-zombie. It may be outwardly facile at times, even enough to pass Turing tests and many other such standards for judging AI, but it would never be a true AGI because it would never have a general faculty of intelligence.
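For what it’s worth, the sort of chain I have in mind looks roughly like this - every function is a hypothetical stand-in, and the point is only that each hand-off is a string of words with no shared perspective behind it:

```python
# Rough sketch of the intermediary apparatus: pixels become words, words
# become more words, and "behaviour" is whatever falls out the far end.
def vision_model(image_bytes: bytes) -> str:
    """Hypothetical image recogniser: pixels in, a sentence out."""
    return "a red mug on a wooden table"

def llm_a(caption: str) -> str:
    """Hypothetical first LLM: turns the caption into an instruction."""
    return f"Describe what you would do with: {caption}"

def llm_b(instruction: str) -> str:
    """Hypothetical second LLM: produces the outward 'behaviour' as text."""
    return "I would pick up the mug and check whether it is full."

# The whole chain is just words passed from one box to the next.
print(llm_b(llm_a(vision_model(b"..."))))
```

Each box could be swapped out without the others noticing, which is roughly why I keep reaching for the Chinese Room.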
I do hope you don’t find me churlish; I hasten to admit that these chimerae are interesting and likely to have important consequences as the technology ramifies throughout society and the economy, but I don’t find them to be AGI. It is a fundamental limitation of LLM technology.
I’m going to repeat myself, as your last paragraph seems to indicate you missed it: I’m *not* of the view that LLMs are capable of AGI, and I think it’s clear to every objective observer with an interest that no LLM has yet reached AGI. All I said is that, like cats and rabbits and lizards and birds, LLMs do exhibit some degree of intelligence.
I have been enjoying talking with you; it’s actually quite refreshing to discuss this with someone who doesn’t confuse consciousness and intelligence, as the two are clearly distinct. One of the things that LLMs give us, for the first time, is a system that has intelligence - it has some kind of model of the universe, however primitive, to which it can apply logical rules - yet it clearly has zero consciousness.
You are making some big assumptions though - in particular, when you said an AGI would “have a subjective sense of self” as soon as it can “move, learn, predict, and update”. That’s a huge leap, and it feels a bit to me like you are close to making that schoolboy error of mixing up intelligence and consciousness.
I’m less mentally organised than I was yesterday, so for that I apologise. I suspect the problem is that we’re both working from different ideas of the word intelligence. It’s not a word that has a single definition based on solid scientific grounds. The biggest problem in neuroscience might be that we don’t have a grand unified theory of what makes the mind do “intelligence”, whatever that is. I did mistake your position somewhat, but I think it comes down to the fact that neither of us has a fully viable theory of intelligence and there is too much we cannot be certain of.
I admit that I overreached when I conflated intelligence and consciousness. We are not at that point of theoretical surety, but it is a strong hunch that I will admit to having. I do feel I ought to point out that LLMs do not create a model; they merely work from one - and not a model of anything but word associations, at that. But I do not want to make this a confrontation; I am only explaining a book or two I have read as best I can, in light of the observations I’ve made about LLMs.
From your earlier comments about different degrees of intelligence (animals and such), I have tried to factor that into how I describe what intelligence is, and how degrees of intelligence differ. Rats also have a neocortex, and therefore likely use the self-same pattern of repeating units that we do (cortical columns). They have a smaller neocortex, and fewer columns. The complexity of behaviour does seem to vary in direct proportion to the number of cortical columns in a neocortex, from what I recall reading. Importantly, I think it is worth pointing out that complexity of behaviour is only an outward symptom of intelligence, and not likely the source. I put forward the “number of cortical columns” hypothesis because it is the best one I know, but I also have to allow that other types of brains, ones without a neocortex, can display complex behaviours too, and we would need to make sense of that once we have a workable theory of how intelligence works in ourselves. It is too much to hope for all at once, I think.
So complex behaviour can be expressed by systems that do not closely mimic the mammalian neocortical pattern, but I can’t imagine anyone would dispute that ours is the dominant paradigm (whether in terms of evolution or technology, for now). So, in the interest of keeping a theoretically firm footing until we are more sure, I will confine my remarks about theories of intelligence to the mammalian neocortex, until someone is able to provide a compelling theory that explains at least that type of intelligence for us. I have not devoted my career to understanding these things, so all I can do is await the final verdict and speculate idly with people inclined to do so. I hope only that the conversation can continue to be enjoyable, because I know better than anyone that I am not the final word on much of anything!