• 2 Posts
  • 30 Comments
Joined 26 days ago
Cake day: July 18th, 2025

  • Historically, Firefox has had fewer security measures than Chrome. For example, full tab isolation was only implemented in Firefox recently, many years after Chrome. Chrome’s MV3-only extensions also reduce the attack surface from that perspective.

    The counterpoint to this is that Firefox has far fewer users, so it is less attractive to exploit. Finding a vulnerability in Chrome is much more lucrative, since it has the potential to reach many more targets.


  • a CoT means externally iterating an LLM

    Not necessarily. Yes, a chain of thought can be provided externally, for example through user prompting or another source, which can even be another LLM. One of the key observations behind the models commonly referred to as reasoning models is that, since an external LLM can be used to provide “thoughts”, an LLM could provide those steps itself, without depending on external sources.

    To do this, it generates “thoughts” around the user’s prompt, essentially exploring the space around it and trying different options. These generated steps are added to the context window and are usually much larger than the prompt itself, which is why these models are sometimes referred to as long chain-of-thought models. Some frontends will show a summary of the long CoT, although this is normally not the raw context itself, but rather a summarised and re-formatted version.
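
    As a rough illustration of the difference, here is a minimal sketch in Python of the external iteration mentioned above. `call_llm` is a hypothetical stand-in for any LLM completion API; a reasoning model effectively folds this loop into a single generation:

    ```python
    def call_llm(prompt: str) -> str:
        """Hypothetical stand-in for any LLM completion API."""
        raise NotImplementedError("plug in a real completion call here")

    def external_cot(question: str, steps: int = 3) -> str:
        # Externally iterate the model: ask for one intermediate "thought"
        # at a time and append it to the growing context.
        context = question
        for _ in range(steps):
            thought = call_llm(context + "\n\nThink through the next step:")
            context += "\n" + thought  # the context grows well beyond the prompt
        # Final pass: answer the original question using the accumulated chain.
        return call_llm(context + "\n\nNow give the final answer:")
    ```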

  • RoadTrain@lemdro.id to Technology@lemmy.world · No bias, no bull AI · edited 3 days ago

    What if AI didn’t just provide sources as afterthoughts, but made them central to every response, both what they say and how they differ: “A 2024 MIT study funded by the National Science Foundation…” or “How a Wall Street economist, a labor union researcher, and a Fed official each interpret the numbers…”. Even this basic sourcing adds essential context.

    Yes, this would be an improvement. Gemini Pro does this in Deep Research reports, and I appreciate it. But since you can’t be certain that what follows reflects the actual findings of the study or source referenced, the value of the citation is still relatively low. You would still have to look up the sources manually to confirm the information. And this paragraph a bit further up shows why that is a problem:

    But for me, the real concern isn’t whether AI skews left or right, it’s seeing my teenagers use AI for everything from homework to news without ever questioning where the information comes from.

    This is also the biggest concern for me, though not limited to teenagers. Yes, showing sources is good. But if people rarely check them, this alone isn’t enough to improve the quality of the information people obtain and retain from LLMs.

  • I use GroundNews. Their biggest value to me is that I can see the headlines for the same coverage from different sources before I read the text. A lot of times this alone is enough to tell me if there is actual content there or just speculation/alarmism. If I do decide to read the content, it’s a very easy way to get a few different perspectives on the same matter, and over time I start to recognise patterns in the reporting styles even when I’m not reading through GroundNews.

    Another useful feature is that you can paste an article link or headline and it will show you alternative sources for the same coverage. This doesn’t always find useful alternatives, but it’s a quick and easy way to do basic fact-checking.

    And while most people here might not appreciate it, when GroundNews aggregates multiple sources it also provides an LLM-written summary of the articles’ content. The (somewhat ironic) thing about these summaries is that they’re often the least biased, most factual interpretation of the news of all the sources covering it. This is because the summaries are generated from all the content, so when the LLM finds weak or contradictory information it won’t report it as fact; when most of the sources agree, it will summarise the conclusion. This is an excellent use for LLMs in my opinion, but you can use GroundNews perfectly fine without it.
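
    To make that last point concrete, here is a minimal sketch of how a cross-source summary along those lines could be produced. This is my guess at the general approach, not GroundNews’s actual pipeline, and `call_llm` is again a hypothetical stand-in for any LLM completion API:

    ```python
    def call_llm(prompt: str) -> str:
        """Hypothetical stand-in for any LLM completion API."""
        raise NotImplementedError("plug in a real completion call here")

    def summarise_coverage(articles: list[str]) -> str:
        # Show the model every article covering the same story side by side,
        # and ask it to separate consensus from single-source claims.
        joined = "\n\n---\n\n".join(articles)
        prompt = (
            "Summarise the following articles, which all cover the same event. "
            "State as fact only what most of them agree on, and flag any claim "
            "that appears in a single source or that the sources contradict "
            "each other on.\n\n" + joined
        )
        return call_llm(prompt)
    ```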


  • I would say it’s slightly more than this: the vast majority of Lemmy consists of only a few topics (politics, tech, memes), and it’s hard to find discussions or opinions about almost anything else. The main value of Reddit to me is (was?) that you could find a lot of input from people involved in a wide variety of fields, from niche hobbies to more general areas of interest like history, philosophy, or medicine.

    I’ve actually found that there are people on Lemmy with similar levels of expertise, and they’re just as willing to share it, but they have fewer opportunities to do so, because very few threads get posted outside the three main topics. Several times I’ve come across useful and interesting insights, but they were in the comments of only vaguely related posts, where they would have been difficult to find intentionally if I hadn’t run into them.

    So perhaps this is what could improve Lemmy: starting more discussions about different topics. That might attract more people to read them, which in turn might attract more people to post.

  • RoadTrain@lemdro.id to Linux@lemmy.ml · Anyone use powershell on linux? · edited 9 days ago

    Hi! I’m interested in trying Nushell at some point, although I keep putting it off…

    Would you share your experience on a couple of items?

    1. How easy was it to get started?
    2. Do you find (or did you at least find in the beginning) that it’s better suited to some particular tasks than to use as your day-to-day shell? If so, what were those tasks?
    3. Can you integrate it with existing tools that you know how to use from other shells, like grep or awk?