• 0 Posts
  • 1.09K Comments
Joined 6 months ago
Cake day: February 10th, 2025


  • FauxLiving@lemmy.world to Lemmy Shitpost@lemmy.world · Lemmy be like
    12 hours ago

    AlphaFold is made by DeepMind, an Alphabet (Google) subsidiary.

    Google and OpenAI are also both developing world models.

    These are a way to generate realistic environments that behave like the real world, and they are core to generating the volume of synthetic training data that would make training robotics models massively more efficient.

    Instead of building an actual physical robot and having it slowly interact with the world while learning from its one physical body, the robot’s builder could create a world-model representation of the robot’s physical characteristics and attach their control software to the simulation. Now the robot can train in a simulated environment, and you can create multiple parallel copies of that setup to generate training data rapidly.

    It would be economically unfeasible to build 10,000 prototype robots in order to generate training data, but it is easy to see how running 10,000 different models in parallel is possible.
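    The parallel-rollout idea can be sketched in a few lines. This is a toy, hand-written simulator standing in for a learned world model; every name and number in it is illustrative rather than any real robotics API:

    ```python
    import random

    # Toy "world model": a 1-D robot that must learn which push force
    # lands closest to a target. Hand-written physics stands in for a
    # learned neural simulator; everything here is illustrative.
    def simulate(force: float) -> float:
        """Return distance from the target after applying `force` (noisy)."""
        position = force * 0.5 + random.gauss(0, 0.05)
        return abs(position - 1.0)

    def rollout(n_copies: int, candidate_forces: list) -> list:
        """Run n_copies simulated robots (a plain loop here; in practice
        these would run in parallel) and collect (force, error) pairs."""
        return [(f, simulate(f))
                for f in (random.choice(candidate_forces) for _ in range(n_copies))]

    random.seed(0)
    experience = rollout(10_000, [0.5, 1.0, 1.5, 2.0, 2.5])

    # "Training": pick the force with the lowest mean error over all copies.
    errors = {}
    for force, err in experience:
        errors.setdefault(force, []).append(err)
    best = min(errors, key=lambda f: sum(errors[f]) / len(errors[f]))
    ```

    Swapping the plain loop for vectorized or multi-process rollouts is what makes 10,000 simulated copies cheap, where 10,000 physical prototypes would not be.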

    I think that continuing to throw billions at marginally better LLMs and generative models at this point is hurting the real innovators.

    On the other hand, the billions of dollars being thrown at these companies is being used to hire machine learning specialists. The real innovators who have the knowledge and talent to work on these projects almost certainly work for one of these companies or the DoD. This demand for machine learning specialists (and their high salaries) drives students to change their major to this field and creates more innovators over time.



  • FauxLiving@lemmy.world to Lemmy Shitpost@lemmy.world · Lemmy be like
    12 hours ago

    > Do you really need to have a list of why people are sick of LLM and Ai slop?

    We don’t need a collection of random ‘AI bad’ articles because your entire premise is flawed.

    In general, people are not ‘sick of LLM and Ai slop’. Real people, who are not chronically online, have fairly positive views of AI and public sentiment about AI is actually becoming more positive over time.

    Here is Stanford’s report on the public opinion regarding AI (https://hai.stanford.edu/ai-index/2024-ai-index-report/public-opinion).

    > Stop being a corporate apologist and stop wrecking the environment with this shit technology.

    My dude, it sounds like you need to go out into the environment a bit more.


  • FauxLiving@lemmy.world to Lemmy Shitpost@lemmy.world · Lemmy be like
    13 hours ago

    > I firmly believe we won’t get most of the interesting, “good” AI until after this current AI bubble bursts and goes down in flames.

    I can’t imagine that you read much about AI outside of web sources or news media, then. The exciting uses of AI are not LLMs and diffusion models, though those are all the public talks about when it talks about ‘AI’.

    For example, we have been trying to find a way to predict protein folding for decades. Using machine learning, a team was able to train a model (https://en.wikipedia.org/wiki/AlphaFold) to predict the structure of proteins with high accuracy. Other scientists have used similar techniques to train a diffusion model that will generate a string of amino acids which will fold into a structure with the specified properties (like how image description prompts are used in an image generator).

    This is particularly important because, thanks to mRNA technology, we can write arbitrary sequences of mRNA which will co-opt our cells to produce said protein.


    Robotics is undergoing similar revolutionary changes. Here is a state of the art robot made by Boston Dynamics using a human programmed feedback control loop: https://www.youtube.com/watch?v=cNZPRsrwumQ

    Here is a Boston Dynamics robot “using reinforcement learning with references from human motion capture and animation.”: https://www.youtube.com/watch?v=I44_zbEwz_w


    Object detection, image processing, logistics, speech recognition, etc. These are all things that required tens of thousands of hours of science and engineering time to develop software for, and the software wasn’t great. Now, a college freshman with free tools and a consumer graphics card can train a computer vision network that outperforms that human-engineered software.

    AI isn’t LLMs and image generators; those may as well be toys. I’m sure LLMs and image generation will eventually be good, but the only reason they seem amazing now is that they are a novel capability that computers have not had before. Their actual impact on the real world will be minimal outside of specific fields.

  • I’m not saying use technology to extend a person’s biological memory. I’m saying use technology to keep a record of a person’s life (obviously I know the privacy implications of doing this in actual practice in the year 2025, which is why I prefaced my comment with “In a techno utopia”).

    You, personally, will still forget things and be capable of nostalgia.

    I think it’s pretty uncontroversial to say that people like to have pictures. They collect pictures of vacations that they enjoyed, pictures of their children when they were X age, pictures of dead relatives and pictures of themselves with friends. Because people enjoy revisiting memories. When video cameras became more ubiquitous, people took videos of vacations they enjoyed, videos of their children’s first steps, videos of themselves. There are entire markets for services which let you store and retrieve every picture that you’ve ever taken.

    At the same time everyone has a story where they wish they had recorded some event. For example, a baby’s first steps that a spouse missed because they were at work or some unexpected spectacular event. Or even mundane things like ‘Where did I leave my phone?’. Having the ability to keep a record of memories, in video or in some hypothetical full-sensory recording, of every moment is something that people would be interested in.

    > Compare this to prompting your local AI with “give me a perfect list of songs from my childhood”.

    Perhaps this is just a matter of taste, because I would absolutely do this.



  • The problem of social manipulation via bots isn’t limited to intelligence operations, though I would argue that this is the most immediate danger.

    We’re also seeing a huge spike in advertising bots pretending to be normal users just to push goods and services.

    Because of these motives social media has become less about bringing people together and more about extracting information from people in order to more efficiently manipulate them.

    It’s causing social media to become actively dangerous to society in general. Ensuring that everyone is a human is an essential first step for having ethical online social interactions.

    Just look at the difference in conversations on Lemmy vs Reddit. Sure, there are some assholes here and there but it’s largely a calm place where you can have an actual conversation.

    This is how online discourse used to be from the early BBS days right up until Facebook and algorithmically curated feeds discovered that fear, outrage and anger are the best drivers of engagement.

    Now, in addition to the platform’s manipulation (which is largely commercially motivated) we have LLMs which let anybody with funding create massive armies of fake people who can dynamically insert themselves into conversations in order to push any messaging you can imagine.

    It’s a bad situation that needs an immediate solution.

    I just don’t like that the solution has been decided on, in secret, by western democracies and is being forcefully implemented in a manner that also gives intelligence and law enforcement agencies a backdoor into everything. (A digital ID also makes it very easy to view every user’s complete Internet history, because that data is tagged with the user’s actual identity.)


  • It’s really simple.

    The western democracies want to create a universal digital ID wallet and have that be required to access any site.

    There are a lot of reasons they could want this. For example, there are probably tens of millions of fake accounts controlled by adversarial nations which are used to sow extremism and disinformation online. It is impossible for counterintelligence to detect these at scale. We can see the corrosive effects that social media is having on society, there are countries actively working to make the problem worse but we have no tools to stop them.

    This is also why there is a big push to limit children from accessing social media. They’re often the targets for these campaigns because they’re easily manipulated and have a lot of free time to spread the misinformation once they’re indoctrinated.

    I don’t think a digital ID is the way to solve this problem. But we’re not being asked or informed about why it is happening. Instead, they’re trying to ram these measures through using a moral panic about children, so anybody opposing them is easily dismissed as “not caring about The Children” or “supporting sex trafficking/pedophiles/predators”.

    I understand the situation, but they’re trying to go around the democratic process by not talking about the problems.