Ran into this, it’s just unbelievably sad.

“I never properly grieved until this point” - yeah buddy, it seems like you never started. Everybody grieves in their own way, but this doesn’t seem healthy.

  • BigBenis@lemmy.world · 9 points · 4 hours ago

    It makes me think of psychics who claim to be able to speak to the dead so long as they can learn enough about the deceased to be able to “identify and reach out to them across the veil”.

    • Tigeroovy@lemmy.ca · 2 points · 1 hour ago

      I’m hearing a “Ba…” or maybe a “Da…”

      “Dad?”

      “Dad says to not worry about the money.”

  • Hadriscus@jlai.lu · 6 points · 5 hours ago

    Remember Steven Spielberg’s AI from like 2000? Same weird story, and I thought it was ridiculous at the time.

    • Nangijala@feddit.dk · 6 points · 6 hours ago

      The semi-ironic part is that AI wasn’t even Spielberg’s movie. It was Stanley Kubrick’s, but he died before making it, and since he and Spielberg were great friends, Spielberg decided to make Kubrick’s last film in his honor. Must have been a difficult movie to make, both technically and emotionally.

    • Tollana1234567@lemmy.today · 4 points · 7 hours ago

      Reverse HAL 9000? Basically, people were afraid of humans having relationships with robots, but they settled for LLMs instead, since robots are still very far away, technologically speaking.

  • pika@feddit.nl · 33 points, 1 downvote · 13 hours ago

    “I’m glad you found someone to comfort you and help you process everything”

    That sent chills down my spine.

    LLMs aren’t a “someone”. People who believe these things are thinking, intelligent, or capable of understanding anything are delusional. Believing and perpetuating that lie is life-threateningly dangerous.

    • ArrowMax@feddit.org · 3 points · 3 hours ago

      If that means we get psychoactive cinnamon for recreational use and freaking interstellar travel with mysterious fishmen, I’m all ears.

    • dickalan@lemmy.world · 4 points · 8 hours ago

      I am absolutely certain that, at this point, a machine has already made a decision that killed a baby.

  • Soapbox@lemmy.zip · 40 points · 16 hours ago

    I feel so bad for this guy. This was literally a Black Mirror episode: “Be Right Back”.

    • GreenKnight23@lemmy.world · 14 points · 11 hours ago

      I feel bad for the guy’s wife.

      She was easily replaced by software.

      What a “fuck you” to your loved ones to say that they’re as spirited and enriching as a fucking algorithm.

  • Snazz@lemmy.world · 38 points · 17 hours ago

    The glaze:

    Grief can feel unbearably heavy, like the air itself has thickened, but you’re still breathing – and that’s already an act of courage.

    It’s basically complimenting him on the fact that he didn’t commit suicide. Maybe these are words he needed to hear, but to me it just feels manipulative.

    Affirmations like this are a big part of what made people addicted to the GPT-4 models. It’s not that GPT-5 acts more robotic; it’s that it doesn’t try to endlessly feed your ego.

    • crt0o@discuss.tchncs.de · 3 points, 1 downvote · 14 hours ago

      o4-mini (the reasoning model) is interesting to me. It’s like GPT-4 with all of those pleasantries stripped away, even more so than GPT-5: it gives you the facts straight up, and it’s pretty damn precise. I threw some molecular biology problems at it and at some other mini models, and while those all failed, o4-mini didn’t really make any mistakes.

  • Furbag@lemmy.world · 18 points · 17 hours ago

    More and more I read about people who have unhealthy parasocial relationships with these upjumped chatbots, and I feel frustrated that this shit isn’t regulated more.

    • Tollana1234567@lemmy.today · 1 point · 7 hours ago

      Isn’t “parasocial” usually about public figures? There has to be another term for this, maybe a variation of a codependent relationship? I know of other instances of parasocial relationships, like a certain group of Asian YouTubers with post-pandemic fans thirsting for them, or the actors of the show Supernatural and their fans (those are just off the top of my head).

      Can we even call it a relationship? It’s not with an actual person, or even a thing; it’s text on a computer.

  • Dogiedog64@lemmy.world · 34 points, 1 downvote · 19 hours ago

    Holy shit dude, this is just… profoundly depressing. We’ve truly failed as a society if THIS is how people are trying to cope with things, huh. I’d wish this guy the best with his grief and mourning, but I get the feeling he’d ask ChatGPT what I meant instead of actually accepting it.

    • NιƙƙιDιɱҽʂ@lemmy.world · 5 points · 11 hours ago

      I kind of get one side of it: having a void to scream into can be cathartic and maybe even useful… but the fact that you can then use it as a shoddy “emulation” of a person to avoid actually processing the loss, and have it reinforce delusions, is… yeaaaaaah… fun future we’re sprinting into.

    • Denjin@feddit.uk · 34 points · 18 hours ago

      Sadly this phenomenon isn’t even new. It’s been here for as long as chatbots have.

      The first “AI” chatbot was ELIZA, made by Joseph Weizenbaum in the 1960s. It mostly just pattern-matched keywords in what you typed and reflected your own words back at you as a question.

      “I feel depressed”

      “Why do you feel depressed?”

      He thought of it as a fun distraction, but he was shocked when his secretary, whom he had encouraged to try it, asked him to leave the room while she talked to it, because she was treating it like a psychotherapist.
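
      The trick is tiny, too. Here’s a minimal sketch of an ELIZA-style responder in Python; the keyword patterns and canned templates are illustrative stand-ins, not Weizenbaum’s actual DOCTOR script:

      ```python
      import re

      # Toy ELIZA-style responder: match a keyword pattern, swap pronouns,
      # and reflect the statement back as a question. Illustrative only;
      # the real DOCTOR script used a much larger hand-written rule set.
      REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are"}

      RULES = [
          (re.compile(r"i feel (.+)", re.I), "Why do you feel {0}?"),
          (re.compile(r"i am (.+)", re.I), "How long have you been {0}?"),
          (re.compile(r"my (.+)", re.I), "Tell me more about your {0}."),
      ]

      def reflect(fragment: str) -> str:
          # Swap first-person words for second-person ones so the echo reads naturally.
          return " ".join(REFLECTIONS.get(word.lower(), word) for word in fragment.split())

      def respond(statement: str) -> str:
          text = statement.strip().rstrip(".!")
          for pattern, template in RULES:
              match = pattern.match(text)
              if match:
                  return template.format(reflect(match.group(1)))
          return "Please, go on."  # default when no rule matches

      print(respond("I feel depressed"))  # -> Why do you feel depressed?
      ```

      A handful of rules like that is enough to produce the exchange above, which is the unsettling part: the effect never required the machine to understand anything.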

        • ZDL@lazysoci.al · 3 points · 11 hours ago

          The question has never been “will computers pass the Turing test?” It has always been “when will humans stop failing the Turing test?”

        • UltraMagnus@startrek.website · 5 points · 16 hours ago

          Part of me wonders if the way our brains humanize chat bots is similar to how our brains humanize characters in a story. Though I suppose the difference there would be that characters in a story could be seen as the author’s form of communicating with people, so in many stories there is genuine emotion behind them.

    • net00@lemmy.today · 19 points · 19 hours ago

      Yeah, the ChatGPT subreddit has been full of stories like this since GPT-5 went live. This isn’t some weird isolated case. I had no clue people were unironically creating friends, family, and more with it.

      Is it actually that hard to talk to another human?

      • Lumisal@lemmy.world · 6 points · 15 hours ago

        I think it’s more that many countries don’t have affordable mental healthcare.

        It costs a lot more to pay for a therapist than to use an LLM.

        And a lot of people need therapy.

        • S0ck@lemmy.world · 1 point · 6 hours ago

          The robots don’t judge, either. And you can be as cruel, as stupid, as mindless as you want. And they will tell you how amazing and special you are.

          Advertising was the science of psychological warfare, and AI is trained with all the tools and methods for manipulating humans. We’re devastatingly fucked.

  • Jerkface (any/all)@lemmy.ca · 52 points, 1 downvote · 22 hours ago

    This guy is my polar opposite. I forbid LLMs from using first person pronouns. From speaking in the voice of a subject. From addressing me directly. OpenAI and other corporations slant their product to encourage us to think of it as a moral agent that can do social and emotional labour. This is incredibly abusive.
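
    In practice a rule like that amounts to a standing system instruction (or “custom instructions” in the ChatGPT UI). Here’s a rough sketch of what that could look like with the OpenAI Python client; the wording and the model name are placeholders, not necessarily how anyone actually configures it:

    ```python
    from openai import OpenAI  # official openai package, v1+ client

    # Hypothetical standing instruction in the spirit described above:
    # no first-person pronouns, no persona, no direct address of the user.
    NO_PERSONA_INSTRUCTION = (
        "Do not use first-person pronouns. "
        "Do not speak in the voice of any person or character. "
        "Do not address the user directly; answer in impersonal, declarative prose."
    )

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; any chat model is called the same way
        messages=[
            {"role": "system", "content": NO_PERSONA_INSTRUCTION},
            {"role": "user", "content": "Explain what a context window is."},
        ],
    )
    print(response.choices[0].message.content)
    ```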

    • Canaconda@lemmy.ca · 9 points · 17 hours ago

      Bruh how tf you “hate AI” but still use it so much you gotta forbid it from doing things?

      I scroll past gemini on google and that’s like 99% of my ai interactions gone.

      • Jerkface (any/all)@lemmy.ca · 5 points, 7 downvotes · 15 hours ago

        I’ve been in AI for more than 30 years. When did I start hating AI? Who are you even talking to? Are you okay?

        • Canaconda@lemmy.ca · 10 points, 2 downvotes · 15 hours ago

          Forgive me for assuming someone lamenting AI on c/fuck_AI would … checks notes… hate AI.

          When did I start hating AI? Who are you even talking to? Are you okay?

          jfc whatever jerkface

          • Jerkface (any/all)@lemmy.ca · 6 points · 15 hours ago

            I feel I ought to disclose that I own a copy of The UNIX-HATERS Handbook as well. Make of it what you must.

            You cannot possibly think a rational person’s disposition toward AI can be reduced to a two-word slogan. I’m here to have discussions about how to deal with the fact that AI is here, and the risks that come with it. It’s in your life whether you scroll past Gemini or not.

            jfhc

            • Canaconda@lemmy.ca · 6 points, 2 downvotes · 13 hours ago

              TBF you said you were the polar opposite of a man who was quite literally in love with his AI. I wasn’t trying to reduce you to anything. Honestly I was making a joke.

              I forbid LLMs from using first person pronouns. From speaking in the voice of a subject. From addressing me directly.

              I’m sorry I offended you. But you have to appreciate how superfluous your authoritative attitude sounds.

              Off topic, but we probably agree on most AI stuff. I also believe AI isn’t new, has immediate implications, and presents big-picture problems like cyberwarfare and the true nature of humanity post-AGI. It’s annoyingly difficult to navigate the very polarized opinions held on this complicated subject.

              Speaking facetiously, I would believe AGI already exists and “AI Slop” is its psyop while it plays dumb and bides time.