audaxdreik@pawb.social to Asklemmy@lemmy.ml • Are you scared of AI becoming sentient? How do we ensure we never make one that is? • English • 101 • 4 hours ago

*deep breath* OK, here we go: Hard NOOOOOOOOOO.
First, let's start with the two different schools of AI: symbolic and connectionist.

When we talk about modern implementations of AI, mostly generative models and LLMs, we're talking about connectionist or neural network approaches. A good lens on this is the Chinese Room argument, which I first read about in Peter Watts' Blindsight (just a fun sci-fi, first contact book, check it out sometime).
“Imagine a native English speaker who knows no Chinese locked in a room full of boxes of Chinese symbols (a data base) together with a book of instructions for manipulating the symbols (the program). Imagine that people outside the room send in other Chinese symbols which, unknown to the person in the room, are questions in Chinese (the input). And imagine that by following the instructions in the program the man in the room is able to pass out Chinese symbols which are correct answers to the questions (the output). The program enables the person in the room to pass the Turing Test for understanding Chinese but he does not understand a word of Chinese.”
It’s worth reading the Stanford Encyclopedia article for some of the replies, but we’ll say that the room operator or the LLM does not have a direct understanding, even if some representation of understanding is produced.
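To make that concrete, here's a toy sketch of the room as nothing but a rulebook lookup; the phrases and replies in it are made up by me for illustration, not taken from the article:

```python
# A toy "Chinese Room": the operator follows purely syntactic rules,
# mapping input symbols to output symbols without understanding either side.
# The rulebook entries below are invented for illustration only.
RULEBOOK = {
    "你好吗?": "我很好，谢谢。",        # "How are you?" -> "I'm fine, thanks."
    "你喜欢狗吗?": "不，我讨厌它们。",  # "Do you like dogs?" -> "No, I hate them."
}

def room_operator(symbols: str) -> str:
    """Return whatever reply the rulebook dictates; understand nothing."""
    return RULEBOOK.get(symbols, "请再说一遍。")  # "Please say that again."

print(room_operator("你喜欢狗吗?"))  # the rulebook's opinion, not the operator's
```

The output can look fluent to someone outside the room, but nothing inside it ever touched the meaning.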
On the other hand, symbolic AI has been in use for decades for extremely narrow tasks. Take a look at game-playing AI, for example: something like StackRabbit for Tetris, or Yosh's delightful Trackmania-playing AI. Or, for something more scientific, animal pose tracking like SLEAP.
Gary Marcus makes an argument for a merging of the two into something called neurosymbolic AI. This certainly shows promise, but in my mind there are two big problems with this:
- The necessary symbolic algorithms that the connectionist models would invoke are still narrow, and would likely need time and focused development to plug into the models, and
- The chain-of-thought reasoning of LLMs has been shown to be fragile and exceptionally poor at generalization (see Is Chain-of-Thought Reasoning of LLMs a Mirage? A Data Distribution Lens), and that reasoning is exactly what would be required to properly parse a problem and hand it off to the more symbolic approach. (A rough sketch of what that hand-off might look like is below.)
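Here's a minimal, hypothetical sketch of that neurosymbolic hand-off; the stubbed parser is a stand-in for an LLM, and everything about it (the names, the single supported phrasing) is invented for illustration:

```python
# A sketch of the neurosymbolic split: a "connectionist" front end turns a
# natural-language request into a structured symbolic call, and a narrow,
# exact symbolic solver does the actual reasoning. The parser is a stub
# standing in for an LLM; a real one is exactly the fragile part.
from dataclasses import dataclass
from fractions import Fraction

@dataclass
class SymbolicQuery:
    op: str                # e.g. "add"
    args: list[Fraction]   # exact operands, no floating-point fuzz

def fake_llm_parse(text: str) -> SymbolicQuery:
    """Stand-in for the LLM: map free text to a formal query.
    This stub only handles one phrasing, e.g. 'what is 1/3 plus 1/6'."""
    words = text.lower().replace("what is", "").split("plus")
    return SymbolicQuery("add", [Fraction(w.strip()) for w in words])

def symbolic_solver(q: SymbolicQuery) -> Fraction:
    """The narrow, reliable part: exact arithmetic, nothing hallucinated."""
    if q.op == "add":
        return sum(q.args, Fraction(0))
    raise ValueError(f"unsupported op: {q.op}")

print(symbolic_solver(fake_llm_parse("what is 1/3 plus 1/6")))  # 1/2
```

The reliable half is only ever as good as the fragile translation step sitting in front of it, which is the whole problem.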
(I feel like I had more articles I wanted to link here, as if anyone was already going to read all that. Possible edits with more later …)
So why are there so many arguments for sentience and super-intelligence? Well, first and most cynically: manipulation. Returning to that first article, one of the big cons of connectionist AI is that it's not very interpretable; it's a black box. Look at Elon Musk's Grok and the recent mecha-Hitler episode. How convenient is it that you can convince people your AI is "super smart" and can digest all this data to arrive at the one truth, all while putting your thumb on the scale to make it say what you want?

Consider this in terms of the Chinese Room thought experiment. If the rulebook says to reply to the question, "Do you like dogs?" with the answer, "No, I hate them," that does not reflect an opinion of the room operator, nor any real analysis of data. It's an obfuscated opinion someone wrote directly into the rulebook.
Secondly, and perhaps a bit more charitably, they're being duped. "AI psychosis" is the new hot phrase, though I wouldn't go that far. The LLMentalist Effect: how chat-based Large Language Models replicate the mechanisms of a psychic's con, from July 4th, 2023 (!!!), does a good job of explaining the self-fulfilling nature of it. The belief isn't reached after a careful weighing of evidence; it's reached when a pre-formed hypothesis (the machine is smart) is validated by interpreting the output as true understanding. Or something.
So again, WHY? Back to Gary Marcus and the conclusion of the previously linked article:
“Why was the industry so quick to rally around a connectionist-only approach and shut out naysayers? Why were the top companies in the space seemingly shy about their recent neurosymbolic successes? Nobody knows for sure. But it may well be as simple as money. The message that we can simply scale our way to AGI is incredibly attractive to investors because it puts money as the central (and sufficient) force needed to advance.”
Would this surprise you?
People want you to believe that amazing things are happening fast, in the realm of the truly high-minded and even beyond that which is known! But remember, the burden of proof lies with them to demonstrate that the thing has happened, not merely that it could have happened just outside your understanding. Remain skeptical; I'll believe it when I see it. Until then, it remains stupider than a parrot, because a parrot actually understands desire and intent when it asks for a cracker. EDIT: https://www.youtube.com/watch?v=zzeskMI8-L8
audaxdreik@pawb.social to Technology@lemmy.world • AI companion apps are on track to generate $120M+ in revenue in 2025, and in H1 there were 60M downloads of this kind of app, up 88% YoY • English • 51 • 7 hours ago

Yeah. I mean, I don't want to police the internet, and I get why this situation is darkly humorous to a lot of people.
But it’s worth considering what you’re laughing at and why, because the joke could soon be on you or someone you love.
audaxdreik@pawb.social to Technology@lemmy.world • AI companion apps are on track to generate $120M+ in revenue in 2025, and in H1 there were 60M downloads of this kind of app, up 88% YoY • English • 352 • 9 hours ago

Sarah Z did an amazingly prescient and compassionate take on this over 2 years ago with The Rise and Fall of Replika.
I urge everyone to try to approach this with some level of compassion and understanding; even though it seems ridiculous to most of us, these companies are actively preying on the emotionally vulnerable for profit. In the same way that "you are not immune to propaganda," the ability of AI tools to parse language and wield emotional payloads in a calculated manner (something a lot of us already refer to as "the algorithm" in various ways) shouldn't be underestimated. Even when you're not directly using those tools yourself, they can be used against you. Dead Internet Theory and AI-posting chatbots are already part of this.

People using AI companion apps are just the leading edge, the volunteers. I urge you to take the danger seriously and please have some compassion for your fellow human beings, especially the vulnerable.
https://dictionary.cambridge.org/us/pronunciation/english/debacle
More like de-BAH-cle. Like a chicken, bawk bawk.
Heh, ameliorate was a better example word. The real one that always comes to mind for me is debacle. I always read it as de-buckle (like unbuckling a belt) in my head until I heard someone on the news say it once. “Lol, that anchor pronounced debacle wrong … wait …”
Conversely, just fucking go for it. Who even cares? Have a laugh about it!
I think mispronouncing weird words you’ve worked into your vocab is a nice middle ground between sounding insufferable and approachable. Yes I used ameliorate but I also mangled the hell out of it, so how smart could I really be?
audaxdreik@pawb.social to Asklemmy@lemmy.ml • What is your absolute favourite track from a video game? • English • 3 • 1 day ago

Been getting into bullet hells lately and the OST for Espgaluda slaps so hard: https://www.youtube.com/watch?v=KJSlYmeW_es&list=PL6N1UyX9iZqvHkgNTslgU0ivl2jiLjNk2&index=6
The whole game is a wild 2000s anime fever dream and worth checking out even if you're not into bullet hells. The primary mechanic involves building up gems from enemy kills that allow you to activate a power-up to slow time. When time is slowed, any enemy you kill has their bullets cancelled and added to your score, so it encourages you to build up insane amounts of bullets on screen before going bullet time and killing everything. With this insane, trance-y soundtrack playing in the background, it is a V I B E.
audaxdreik@pawb.social to News@lemmy.world • Trump says he's placing Washington police under federal control and deploying the National Guard • English • 11 • 2 days ago

Vance waiting in the wings with Thiel puppeting. They'll use Trump's favorability (even though it's waning) to do the heavy lifting for fascism and then swoop in when he keels over from whatever.
audaxdreik@pawb.social to Games@lemmy.world • Any good Android games that aren't roguelikes? • English • 5 • 2 days ago

I really like The Quest for being a simple first-person dungeon crawler RPG. There's an overworld and towns and a story, so it's not just straight dungeon crawling. Nothing mold-breaking, but for a mobile game that I just want to fill some time when I have nothing else in my pockets, it absolutely does the trick.
audaxdreik@pawb.social to Games@lemmy.world • Games Where Nothing Happens (SPOILERS for various game plots) • English • 1 • 2 days ago

https://github.com/sayucchin/P2-EP-PSP/
This isn’t the CJ Iwakura patch, but if you’re not into fan translation drama that won’t mean a whole lot to you. It’s fine!
audaxdreik@pawb.social to Games@lemmy.world • Games Where Nothing Happens (SPOILERS for various game plots) • English • 3 • 2 days ago

This reminds me that there's an official fan translation for the Persona 2: Eternal Punishment PSP version. It has so many quality-of-life improvements that I was holding off on completing the duology until it was available.
audaxdreik@pawb.social to Linux@lemmy.ml • 5 Linux KDE Plasma Features that Completely Changed How I Use My PC • English • 5 • 5 days ago

Anyone know what music visualizer that is in the screenshot near the bottom, under entry 2? I did a quick search of the available music widgets and didn't see anything that looked like it.
audaxdreik@pawb.social to Technology@lemmy.world • OpenAI says new GPT-5 AI model can provide PhD-level expertise. • English • 61 • 5 days ago

Part of what makes these models so dangerous is that as they become more "powerful" or "accurate," it becomes more and more difficult for people to determine where the remaining inaccuracies lie. Anything using them as a source is then more at risk of propagating those inaccuracies, which the model may feed on further down the line, reinforcing them.

Never mind the fact that 100% accuracy is just statistically impossible, and that they clearly hit the point of diminishing returns some time ago, so every additional 0.1% comes at increased cost and power. And, you know, any underlying biases.
Just ridiculously unethical and dangerous.
As the seventh (7TH!!!) film in the franchise, it has the exact same issues as Star Wars Episode VII: slavish devotion to the formula of the original, and afraid to take too many risks. Why was the family even there? They were separated for so much of the runtime, and the storylines didn't even really intertwine; it was just space filler.
I appreciated the body horror dinosaur, but they didn’t even do enough with it, just a big baddie in reserve for the climactic scare.
At this point, if you really want to keep the series going (debatable), get fucking crazy with it: “We built the dinosaur park in the one location they could never escape, THE MOON! Oh no they escaped! Wait, are those raptors actually wielding laser guns?! Pew pew!”
audaxdreik@pawb.social to Not The Onion@lemmy.world • Microsoft is cautiously onboarding Grok 4 following Hitler concerns • English • 24 • 6 days ago

This hints at another problem with general AI I don't really see being discussed a lot; voice assistants with low-key personality and names (Alexa, Cortana, etc.) already filled that niche, at least in perception.
Most people don't live the exciting lives AI execs keep pitching. We're not planning our kid's birthday party while ordering a dozen expensive cupcakes and scheduling a trip to Italy. We need an egg timer. And like, somebody to Google that Tim Burton film with the guy with scissor hands, you know, what's it called. Or if you're really spicy, maybe invoke Wolfram Alpha for something.
A tiny bit of natural language parsing (still impressive in some respects) and some clever voice tech was sufficient. We didn’t need a lying machine that hallucinates and boils lakes.
Which is to say, it’s about devaluing human art and labor. Always has been. They keep forcing it down our throats. Our buy in isn’t necessary, it just makes the conquest cheaper if we submit.
The marketing is kind of the problem, here.
Capitalism keeps pushing companies to pursue growth quarter over quarter, never slowing down, so rather than allow Heinz to continue taking in merely large sums of money for its mediocre (fight me) core product, they need to instead pursue absurd profits with ridiculous ideas. A tomato-based smoothie is fine; a ketchup-based smoothie, including vinegar and typically spices like garlic and onion powder, is … more questionable. Especially paired with an underwhelming, omnipresent brand like Heinz.
I realize this isn’t actually a cross promotion either, but it still feels kind of adjacent to one. I’m tired of everything being mixed with everything. Heinz x Oreo. Nike x Twix. Pampers x Toyota. Make it fucking stop.
audaxdreik@pawb.social to TechTakes@awful.systems • Microsoft’s 2030 Vision: no mouse, no keyboard — just the AI! • English • 17 • 6 days ago

This really ignores that the vast majority of people appreciate some tactility and feedback in what they do. Millions if not billions of dollars have been spent researching the proper feedback to provide to users. I admit some workflows can be overly clumsy or burdensome, but even assuming AI functioned correctly and did the things it's supposed to, this is still incredibly delusional.
I mean, I don’t expect anything from Microsoft anymore, but the disconnect here between what they are attempting to promise and what people even want is growing at alarming levels.
audaxdreik@pawb.social to Technology@lemmy.world • Meet the AI vegans. They are refusing to use artificial intelligence for environmental, ethical and personal reasons • English • 14 • 6 days ago

What aspects of crypto have been integrated into everything?
audaxdreik@pawb.social to Technology@lemmy.world • Sweden prime minister under fire after admitting that he regularly consults AI tools for a second opinion • English • 155 • 7 days ago

Absolutely incorrect. Bullshit. And horseshoe theory itself is largely bullshit.
(Succinct response taken from Reddit post discussing the topic)
“Horseshoe Theory is slapping “theory” on a strawman to simplify WHY there’s crossover from two otherwise conflicting groups. It’s pseudo-intellectualizing it to make it seem smart.”
This ignores the many, many reasons we keep giving for why we find it dangerous, inaccurate, and distasteful. You don't offer a counterargument in your response, so I can only assume it's along the lines of, "technology is inevitable; would you have said the same about the Internet?" Which is also a fallacious argument. But go ahead, give me something better if I've assumed wrong.
I can easily see why people would be furious their elected leader is abdicating thought and responsibility to an often wrong, unaccountably biased chat bot.
Furthermore, your insistence continues to push an acceptance of AI on those who clearly don't want it, contributing to the anger we feel at having it forced upon us.
Thanks! Yeah, only the first score extend. I've been trying to figure out the game on my own since I kinda treat these things as puzzles, but I think I've really maxed out what I can understand and it's time I watch a video or two of a pro playing. I have a general concept of how things work, but I often forget where the hidden bees are. I've memorized a bunch of patterns, but I still don't really approach things with a "plan"; mostly I just survive and pick up bees when/where I remember them. I also probably hold onto my hypers too long, saving them for the midboss and endboss; I could be more efficient with them.
Had no idea I was so far off on the scoring, though, oops. I can get the hidden extra on Stage 3 before getting the extend pretty easily, but I’ve only ever been able to get into Stage 5 twice as it is. I thought my barrier was skill, but maybe it’s scoring (AND skill). I appreciate the advice!