Brave Little Hitachi Wand

I’m a human being, god damn it. My life has value.

  • 55 Posts
  • 5.61K Comments
Joined 2 years ago
Cake day: June 14, 2023


  • I’m less mentally organised than I was yesterday, so for that I apologise. I suspect the problem is that we’re both working from different ideas of the word intelligence. It’s not a word that has a single definition based on solid scientific grounds. The biggest problem in neuroscience might be that we don’t have a grand unified theory of what makes the mind do “intelligence”, whatever that is. I did mistake your position somewhat, but I think it comes down to the fact that neither of us has a fully viable theory of intelligence and there is too much we cannot be certain of.

    I admit that I overreached when I conflated intelligence and consciousness. We are not at that point of theoretical surety, but it is a strong hunch that I will admit to having. I do feel I ought to point out that LLMs do not create a model; they merely work from one - and a model of nothing but word associations, at that. But I do not want to make this a confrontation; I am only explaining a book or two I have read as best I can, in light of the observations I’ve made about LLMs.

    From your earlier comments about different degrees of intelligence (animals and such), I have tried to factor that into how I describe what intelligence is and how degrees of intelligence differ. Rats also have a neocortex, and therefore likely use the self-same pattern of repeating units that we do (cortical columns); they simply have a smaller neocortex and fewer columns. From what I recall reading, the complexity of behaviour does seem to vary in direct proportion to the number of cortical columns in a neocortex. Importantly, I think it is worth pointing out that complexity of behaviour is only an outward symptom of intelligence, not likely its source. I put forward the “number of cortical columns” hypothesis because it is the best one I know, but I also have to allow that other types of brains, ones without a neocortex, can also display complex behaviours, and we would need to make sense of that once we have a workable theory of how intelligence works in ourselves. It is too much to hope for all at once, I think.

    So complex behaviour can be expressed by systems that do not closely mimic the mammalian neocortical pattern, though I can’t imagine anyone would dispute that ours is the dominant paradigm (whether in terms of evolution or technology, for now). In the interest of keeping a theoretically firm footing until we are more sure, I will confine my remarks about theories of intelligence to the mammalian neocortex, at least until someone is able to provide a compelling theory that explains that type of intelligence for us. I have not devoted my career to understanding these things, so all I can do is await the final verdict and speculate idly with people inclined to do so. I hope only that the conversation can continue to be enjoyable, because I know better than anyone that I am not the final word on much of anything!





  • I’ve watched a couple of these. You might find FreeTube useful for getting YT content without the ugly ads and algo stuff.

    There are shortcomings that keep an LLM from approaching AGI in that way. They aren’t interacting with (experiencing) the world in a multisensory or realtime way; they are still responding to textual prompts within their frame of reference in a more discrete, turn-taking manner. They still require domain-specific instructions, too.

    An AGI that is directly integrated with its sensorimotor apparatus in the same way we are would, for all intents and purposes, have a subjective sense of self that stems from the fact that it can move, learn, predict, and update in real time from its own fixed perspective.

    Jeff Hawkins’ work still has me convinced that the fixed perspective to which we are all bound is the wellspring of subjectivity, and that any intermediary apparatus (such as an AI subsystem for recognizing pictures that feeds words about those pictures to an LLM that talks to another LLM etc, in order to generate a semblance of complex behaviour) renders the whole as a sort of Chinese room experiment, and the LLM remains a p-zombie. It may be outwardly facile at times, even enough to pass Turing tests and many other such standards of judging AI, but it would never be a true AGI because it would never have a general facility of intelligence.

    I do hope you don’t find me churlish; I hasten to admit that these chimerae are interesting and likely to have important implications as the technology ramifies throughout society and the economy, but I don’t find them to be AGI. It is a fundamental limitation of the LLM technology.








  • For a snappy reply, all I can say is that I did qualify that a “conventional” LLM likely cannot become intelligent. I’d like to see examples of LLMs paired with sensorimotor systems, if you know of any. Although I have often been inclined to describe human intelligence as merely a bag of tricks that, taken together, give the impression of a coherent whole, we have a rather well developed bag of tricks that can’t easily be teased apart. Merely interfacing a Boston Dynamics robo-dog with the OpenAI API may have some amusing applications, but nothing could compel me to admit it as an AGI.


  • The argument is best made by Jeff Hawkins in his Thousand Brains book. I’ll try to be convincing and brief at the same time, but you will have to be satisfied with shooting the messenger if I fail in either respect. The basic thrust of Hawkins’ argument is that you can only build a true AGI once you have a theoretical framework that explains the activity of the brain with reference to its higher cognitive functions, and that such a framework necessarily must stem from doing the hard work of sorting out how the neocortex actually goes about its business.

    We know that the neocortex is the source of our higher cognitive functions, and that it is the main area of interest to the development of AGI. A major part of Hawkins’ theory is that the neocortex is built from many small, essentially identical repeating units called cortical columns, each of which models and makes predictions about the world based on sensory data, and that it is chiefly the number of these columns that differs between creatures of different intelligence levels. He holds that these columns vote amongst each other in realtime about what is being perceived, constantly piping up and shushing each other and changing their models based on updated data, almost like a rowdy room full of parliamentarians trying to come to a consensus view, and that it is this ongoing internal hierarchy of models and perceptions that makes up our intelligence, as it were.
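
    To make that voting picture a bit more concrete, here is a toy sketch of my own devising (nothing from the book itself; the objects, numbers, and the simple majority vote are all invented for illustration). Each “column” keeps a belief over a few candidate objects, nudges that belief from its own noisy observation, and a plain majority vote stands in for the lateral consensus-building described above.

    ```python
    # Toy illustration (my own, not Hawkins' model): many "columns" each hold a
    # belief over candidate objects, update it from their own noisy evidence,
    # and the population "votes" until a consensus emerges.
    import random
    from collections import Counter

    OBJECTS = ["coffee cup", "stapler", "apple"]

    class Column:
        def __init__(self):
            # Start undecided: equal belief in every candidate object.
            self.belief = {obj: 1.0 / len(OBJECTS) for obj in OBJECTS}

        def sense(self, true_object, noise=0.3):
            # A noisy observation: usually the true object, sometimes a distractor.
            observed = true_object if random.random() > noise else random.choice(OBJECTS)
            # Nudge this column's belief toward whatever it thinks it sensed.
            for obj in self.belief:
                self.belief[obj] *= 2.0 if obj == observed else 0.8
            total = sum(self.belief.values())
            self.belief = {obj: p / total for obj, p in self.belief.items()}

        def vote(self):
            return max(self.belief, key=self.belief.get)

    def consensus(columns):
        votes = Counter(col.vote() for col in columns)
        return votes.most_common(1)[0]  # (winning object, number of votes)

    columns = [Column() for _ in range(100)]
    for step in range(5):                 # a few rounds of sensing and voting
        for col in columns:
            col.sense("coffee cup")
        winner, count = consensus(columns)
        print(f"step {step}: {count}/100 columns vote for '{winner}'")
    ```

    Even with noisy individual observers, the population settles on the right answer within a few rounds - which is the flavour of the “parliamentary” consensus, if not the mechanism.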

    The reason I ventured to argue that sensorimotor integration is necessary for an AI to be an AGI is that I got that idea from him as well: in order to gather meaningful sensory data, you have to be able to move about your environment to make sense of your inputs. Merely receiving one piece of sensory data fails to make any particular impression, and you can test this for yourself by having a friend place an unknown object against your skin without moving it, and trying to guess what it is from that one data point. Then have them move the object and see how quickly you gather enough information to make a solid prediction - and if you were wrong, your brain will hastily rewire its models to update based on that finding. An AGI would similarly fail to make any useful contributions unless it has the ability to move about its environment (and that includes a virtual environment) in order to continually learn and make predictions. That is the sort of thing we cannot possibly expect from any conventional LLM, at least as far as I’ve heard so far.
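
    If it helps, here is a second toy sketch of my own (again invented purely for illustration, with made-up objects and features): one static “touch” is usually ambiguous, while moving a finger across the object adds evidence at each position and the guess firms up.

    ```python
    # Toy illustration (my own, not Hawkins' model): an agent that can move a
    # "finger" over an unknown object gathers far more evidence than one static
    # touch, and it updates its guess after every movement.
    import random

    # Made-up objects, described as the local feature felt at successive positions.
    OBJECTS = {
        "pen":    ["smooth", "smooth", "clip", "smooth", "point"],
        "fork":   ["smooth", "smooth", "smooth", "tine", "tine"],
        "sponge": ["soft", "soft", "soft", "soft", "soft"],
    }

    def touch(obj_name, position):
        features = OBJECTS[obj_name]
        return features[position % len(features)]

    def guess(evidence):
        # Score each candidate by how many observed (position, feature) pairs it explains.
        scores = {}
        for name, features in OBJECTS.items():
            scores[name] = sum(
                1 for pos, feat in evidence if features[pos % len(features)] == feat
            )
        return max(scores, key=scores.get), scores

    secret = random.choice(list(OBJECTS))
    evidence = [(0, touch(secret, 0))]          # one static touch: usually ambiguous
    print("after one touch:", guess(evidence))

    for pos in range(1, 5):                     # moving adds evidence at each step
        evidence.append((pos, touch(secret, pos)))
        print(f"after moving to position {pos}:", guess(evidence))

    print("the object was:", secret)
    ```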

    I’d better stop there and see if you care to tolerate more of this sort of blather. I hope I’ve given you something to sink your teeth into, at any rate.




  • Doesn’t fit, does it? His work only really fits the profile of a passion or maybe a vanity project. His “label” selling verses in his tracks makes me think he’s really all about clout and fakey sort of posturing bullshit, so that does fit the MO of a faked C&D. Would a spoiled manchild risk a defamation* suit by falsifying legal notices from one of the largest, most oligarchic, most childishly run corporations in the world? Maybe. But that doesn’t explain why the news sites are falsely characterizing his track as a critique of the truck.

    • Libel maybe? IANAL but it seems risky


  • Things that are surprising me right now (not in order):

    • Big Huey is such a non-entity. There are 3 tracks of his (& one collab) on the whole damn internet that I can see. Why did Tesla even bother?
    • The video is flattering to the deactivated item. They can afford to lose admirers right now? Why do they feel so safe?
    • There are no written records of the lyrics on the internet I could find.
    • Why does his label’s website have an offer to “buy a verse”? Dude so unfamous and lacking success he’s trying to sell clout to strangers.
    • Yet he owns a cybertruck and has only needed to make a handful of videos over the last 4 years.
    • The news wants us to think Tesla is retaliating over a dissenting view, erroneously (?!)

    The whole thing reads like a maze of mirrors. At no point does the story make actual sense. Did we really see a jumped-up vanity project get stomped by a mega corp for no actual reason?

    What explanation could make this series of events make sense? Is Huey a Tesla plant? Is this a (botched? Who knows) trial balloon by the company trying to create a legal precedent for corporate powers to smash all dissent? Then why would they pick a target that was favourable to them? They must not want to prejudice the jury…

    I can’t make the pieces all fit. But I feel that it must make complete sense to one of the players.