• CarbonIceDragon@pawb.social · 5 days ago

    Honestly, I think the scariest part of all this is how it shows that all it takes to drive someone off the deep end is for someone or something they trust to merely agree with whatever idea pops into their head. I guess it makes sense: we use reinforcement to learn what we think is true, and we often have bad ideas. Still, I’d always been under the impression that humans were a bit more mentally resilient than that.

    • rollin@piefed.social · 5 days ago

      “bit more mentally resilient than that”

      I think when we get down to it, none of us can actually separate reality from imagination accurately. After all, our perceptions of reality all exist inside our minds and are informed by our imaginations. People who are outwardly crazy seem to draw the line between reality and fantasy in a very different place from everyone else, but we all put that line in a slightly different place.

      Consider people who believe in conspiracy theories, or horoscopes, or conflicting religions, for instance. What I’m trying to say is that “crazy people” are not really so different from the rest of us.

    • Kissaki@programming.dev · 5 days ago

      The reinforcement learning is a good point, but the social aspect seems equally important to me. Humans are very social creatures. We learn from others, and we seek agreement and acknowledgment; if we meet rejection in one place, we may be all too willing to seek out somewhere we won’t.

      A trained chatbot hijacking this evolved mechanism is interesting at the least, if not ironic or absurd. We are so driven by the social mechanisms of communication and social ideation that no human is needed for the mechanism to work, for good or for ill.

  • jjjalljs@ttrpg.network · 5 days ago

    I keep telling people not to use the lie machine, but I’m not making much progress. People aren’t smart and resilient enough for the world we built.

  • lol_idk@piefed.social · 5 days ago

    The thing about this is that you have to use it enough for it to get that far. I’ve used it 3 times, and the one time it succeeded, it refactored my code without coaxing me into psychosis.

    • fubarx@lemmy.world · 5 days ago

      It’s more subtle than that. When refactoring code, it constantly compliments you on how smart you are for catching its mistakes.

      It deliberately creates an overinflated sense of self. Then you go and mistreat everyone around you. Next thing you know, you’re in a padded cell with a shaved head and a ball-gag.

      That’s the coding ‘assistant’ endgame.

    • Pennomi@lemmy.world · 5 days ago

      It’s the same as TV, if TV could dynamically respond to your input in real time, reinforcing your biases.

      • Lembot_0004@discuss.online · 5 days ago

        TV is just more straightforward: it creates your biases instead of figuring them out and reinforcing them. Unification.