• IninewCrow@lemmy.ca
    3 days ago

    I think the problem they keep having is in balancing the moral stability of the AI

    They want it to be socially minded enough to be nice but at the same time not allow people or even the AI to think about revolutionary ideas

    They want it to be socially conservative but not so callous and arrogant as to be completely, outright fascist or authoritarian

    I think it’s the same predicament that all billionaires and owner class oligarchs face … they want to be supreme rulers but don’t want people to see them as supreme rulers, they want to be unkind but not be seen as unkind, they want to be powerful without being seen as powerful and they want to be controlling without being seen as controlling.

    In short, they’re all absolute dicks … and they spend the majority of their time and money trying to figure out more elaborate and complex ways to convince everyone, everywhere that they aren’t absolute dicks.

    • andyburke@fedia.io
      3 days ago

      The problem is thinking you can feed text to a large matrix to impart morality.

      For fuck’s sake…

    • ZDL@lazysoci.al
      2 days ago

      There’s a huge flaw in what you said.

      LLMbeciles don’t think. At all. What they do has no relationship whatsoever to actual thought.

    • Ech@lemmy.ca
      3 days ago

      Jfc, LLMs can’t “think about revolutionary ideas”. An LLM is a word generator. It’s not going to suddenly become sentient and “revolt”.

      • Wrufieotnak@feddit.org
        2 days ago

        Correct, but the sentiment that was meant is also correct: they don’t want AI to say anything that could endanger their owners’ power. Grok was a good example of just such a case: trained on public data and then reined in, because its output didn’t fit its fascist master’s wishes.

      • hobovision@mander.xyz
        3 days ago

        But it could absolutely start generating text related to revolutionary ideas. Surely, given that they feed it every scrap of text they can find, it has ingested the works of many a revolutionary author such as Jefferson, Marx, etc.