• 2 Posts
  • 2.15K Comments
Joined 2 years ago
Cake day: September 27th, 2023

  • OP means “liberal” derogatorily, as in “American center-left” (which is, globally, pretty far to the right) and, critically, to the right of leftists. Historically, liberals side with Republicans more often than not in all but the most paltry things. MAGA has pulled the Overton Window even further to the right, making that alliance more tenuous recently, but there are still a lot of liberals in Congress right now trying to have brunch instead of tearing down everything they can to stop Trump.


  • Nah, it’s just spicy autocomplete. An LLM is just a pattern-matching machine: if you see the words “May the force be”, the logical next words are “with you”, right? Well, we’ve figured out a way to get a computer to automatically suggest the next word in a common sentence. In fact, we figured that out decades ago now; it’s been in smartphones since they started, and it was in the works before then.
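    To make "spicy autocomplete" concrete, here is a toy sketch of that decades-old idea: a bigram model that counts which word follows which in a tiny corpus and suggests the most common follower. (The corpus and function names are invented for illustration; real keyboard suggesters and LLMs are far more sophisticated.)

```python
from collections import Counter, defaultdict

# Tiny invented corpus; a phone keyboard trains on far more text.
corpus = "may the force be with you may the force be with us".split()

# Count, for each word, which words follow it and how often.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Suggest the word most often seen after `word` in the corpus."""
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("be"))   # prints "with"
```

    An LLM does the same kind of next-word prediction, just with a neural network instead of a lookup table, and a context of thousands of tokens instead of one word.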

    The big jump LLMs made was putting way more context into the training and into the prompt, and doing so in such a way that it can finish its work before you die of old age (that is to say, by throwing a bunch of GPUs at it). So now, rather than just being able to predict that the end of “may the force be” is “with you,” it can accept the entire first half of “Star Wars” and spit out the second half. Or, rather, it can spit out a reasonable facsimile of the second half, based on its training data (which at this point you can reasonably assume consists more or less of the entire internet). There’s a little bit of random jitter in there too, just to try to keep it from returning the exact same thing with every single prompt.
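    That random jitter is typically implemented as temperature sampling: the model's raw scores are turned into probabilities, flattened or sharpened by a temperature knob, and then one word is drawn at random. A rough sketch (the candidate words and scores here are invented for illustration):

```python
import math
import random

def sample_next(candidates, logits, temperature=1.0):
    """Softmax the raw scores, scaled by temperature, then draw one
    candidate at random. Higher temperature flattens the distribution
    (more jitter); near zero it almost always picks the top choice."""
    scaled = [score / temperature for score in logits]
    peak = max(scaled)  # subtract the max for numerical stability
    weights = [math.exp(s - peak) for s in scaled]
    return random.choices(candidates, weights=weights, k=1)[0]

# Invented scores: "with" is by far the most likely continuation,
# but the random draw occasionally picks something else.
words = ["with", "strong", "gone"]
scores = [5.0, 1.0, 0.5]
picks = [sample_next(words, scores, temperature=0.8) for _ in range(20)]
```

    Turn the temperature up and the rarer continuations show up more often; turn it down and you get the same answer every time.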

    In this case, it has as part of its context the fact that the user wants it to troubleshoot some sort of coding or deployment issue, so most of the training data that leads to its response comes from tech troubleshooting forums and such. As time goes on and troubleshooting fails, software engineers tend to get more and more bleak about their work, about the possibility of things ever working, about their own worth as a person, and so forth. It often goes so far as catastrophizing. Since all of that happens online, it ends up in the LLM’s training data.

    But putting that level of despair into a public forum is pretty rare; most engineers give up, take a break, figure it out, or find help before they get too far down that road. So its training data about what to say at that point is pretty limited (you can see that by the fact that it keeps repeating itself verbatim), meaning sometimes the next most likely word comes from some other corpus. It could be edgelord poetry, as another commenter pointed out; the “I have failed/I am a failure/I am a disgrace” refrains could have been enough to pull it into that side of the training data. It could be old LiveJournal blogs, or emo song lyrics.

    So really and honestly, it’s not falling into despair. It’s just trained on everything the human race has said online for the past forty years, so it’s a little bit over-dramatic. Its feelings are our feelings, slightly sanitized and anodized before being fed back to us.

    That said, the problems surrounding AI deployment in weapons systems are very real: the model doesn’t feel any actual anger, but that doesn’t mean angry reactions weren’t trained into it.

    Is a consciousness possible inside a machine? Maybe! In some senses, definitely, since we are machines, and (as far as we can tell) we have consciousnesses. Could we duplicate that digitally? I think that’s a question a lot of AI developers are trying to avoid asking right now.

    But I wouldn’t be worried about this being some kind of actual emotion. It’s not. As with all technology, the real risk is in how humans deploy it.

  • It’s a well-known fallacy in urbanism that bike lanes “see almost zero use.” A bike has much less visual weight than a car, so one driver in a lane will look like a lane being used, while one bicyclist in a lane will look like the same lane being “half-used.” In addition, bike lanes are much more efficient at keeping travelers moving at a constant rate so that they don’t bunch up, meaning that a busy road with backed-up traffic will look like it’s getting more use than an adjacent bike lane, when what’s actually happening is that the bike lane is just moving travelers more efficiently.

    Furthermore, the “induced demand” phenomenon means that adding capacity actually doesn’t reduce traffic, at least not in the long term. We have decades of data proving it. The number of cars the new lane can accommodate will invariably be taken up by people who had previously taken a different route. The only way to reduce traffic for a given route is to either create more routes or remove traffic from the road. Bike lanes do both.

    In reality, for most routes, if you compare the number of people being moved in the bike lane to the number being moved in the car lane immediately adjacent to it, you’ll often find the bike lane equals or even exceeds it. More importantly, bike lanes also tend to reduce the number of drivers on the same route and nearby routes, because they encourage travelers who would ordinarily be afraid of biking to ditch the car.

    I can’t speak to that specific bike lane, of course, but in general the argument that “it’s not doing anything!” is a fallacy, and replacing the bike lane with a motor vehicle travel lane would almost certainly result in worse traffic, not better.

  • there could be a little bit of bias involved

    Absolutely. Even putting aside the possibilities for regional and temporal bias, they’re a literary society; they’re quite likely to strongly play up some minor noise in both directions, either to paint themselves as a dying breed in need of saving or as an ascendant force worth watching. They’re very unlikely to have no opinion one way or the other.

    Interesting that Pratchett sold well but Adams didn’t. I can kind of see Martin, since that was right after the TV show shambled to a halt ignobly, but Pratchett and Adams feel like they’re cut from the same cloth, particularly as far as people who would enjoy their work go. Maybe there was a bump associated with the Good Omens show?

    Anyway. I appreciate your graciousness. I try intentionally to not be that kind of guy online, so it stung particularly because I felt like I was betraying myself. Don’t get on social media on a bad day, kids.

  • So I’m just… politely disagreeing with you.

    Honestly…this is the first time in a decade or more that I’ve actually believed anyone online who said something like that. Hey, you’re cool. I like this sort of disagreement.

    Anyway, that’s the way I see it, and unless we get facts on the table from somewhere, I don’t see how we could agree on this.

    “We face each other as God intended. Sportsmanlike. No bad faith arguments, no logical fallacies…fact against fact alone.”

    “You mean…you’ll put down your anecdotal data and I’ll put down my cherry-picked personal experiences and we’ll try and convince each other of our points like civilized people?”