• 4 Posts
  • 10 Comments
Joined 10 months ago
Cake day: October 14th, 2024

  • DegenerateSupreme@lemmy.zip to Fuck AI@lemmy.world · On Exceptions
    ↑ 5 ↓ 2 · 9 days ago

    I’d say the main ethical concern at this time, regardless of harmless use cases, is the abysmal environmental impact of powering centralized, commercial AI models. Refer to situations like the one in Texas. A person’s use of models like ChatGPT, however small, contributes to demand for infrastructure that consumes incomprehensible amounts of water while much of the world does not have enough. In classic fashion, the U.S. government is years behind on accepting what’s wrong, allowing these companies to ruin communities behind a veil of hyped-up marketing about “innovation” and beating China at another dick-measuring contest.

    The other concern is that ChatGPT’s ability to write your Python code for data modeling is built on the hard work of programmers who will not see a cent for their contribution to the model’s training. As the adage goes, “AI allows wealth to access talent, while preventing talent from accessing wealth.” But since a ridiculous amount of data goes into these models, it’s an amorphous ethical issue that’s understandably difficult for us to contend with, because our brains struggle to comprehend so many levels of abstraction. How harmed is each individual programmer or artist? That approach ends up being meaningless, so you have to regard it more as a class-action lawsuit, where tens of thousands have been deprived as a whole.

    By my measure, this AI bubble will collapse like a dying star within the next year, because the companies have no path to profitability. I hope that shifts AI development away from these environmentally destructive practices, and that we’ll eventually see legislation requiring model training to be ethically sourced (Adobe is already getting ahead of the curve on this).

    As for what you can do instead, people have been running local Deepseek R1 models since earlier this year, so you could follow a guide to set one up.
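    If you want a concrete starting point, one common route is Ollama, which distributes distilled DeepSeek-R1 variants. The model tag below is one of the sizes listed in Ollama's public model library; pick whichever size your hardware can actually hold.

    ```shell
    # Assumes Ollama is already installed (see https://ollama.com)
    ollama pull deepseek-r1:8b   # download a distilled 8B-parameter R1 variant
    ollama run deepseek-r1:8b    # interactive chat, running entirely on your own machine
    ```

    Smaller tags (e.g. 1.5b) run on modest laptops; larger ones need a serious GPU, so check the library page before pulling.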



  • I’m in complete agreement with this perspective, but rarely do I see discussions like this address the sticking point centrists and conservatives get hung up on: they don’t believe this is “theft.”

    When I told my coworker about the historic productivity-to-wages gap, she argued (paraphrasing), “Could it not be that gap is reflective of the CEOs innovating ways to make their workers increasingly productive, while the value of those workers’ labor hasn’t actually increased, therefore explaining why the minds behind those innovations deserve the wealth?”

    This conversation will go nowhere if we keep throwing around terms like “wage theft” while skipping step one: making the moral argument for why it is theft in the first place.


    To say “that feeling” of indignation (at the letter’s inclusion in a gallery) is the same as other things that make him roll his eyes is reductionist. We regard things as stupid for different reasons; they’re not all the “same feeling.” As others have said, the artist’s intentionality in presenting something is part of its message. So the indignation he felt about a piece being put in a gallery is part of that piece’s effect on him, born from the artist’s choices. That feeling is different from hearing a moron say something dumb and thinking it’s stupid.

    Intentionality is the key. Case in point, “language evolves” is a silly thing to say after a mistake, but many subcultures start misspelling things on purpose, and that intentionality is how language evolves.




  • I feel conflicted. On one hand, people can regulate themselves, and Facebook becoming a bigoted cesspit may bring more people to a moderated Fediverse.

    On the other hand, these major platforms hold such a monopoly on users and influence that unfettered hate speech can breed violence.

    I’m conflicted about the idea that an insidious for-profit megacorporation should be expected to uphold a moral responsibility to prevent violence; their failure to do so might be a necessary wake-up call that ultimately strips them of that problematic influence. Thoughts?




  • DegenerateSupreme@lemmy.zip to politics@lemmy.world · Trump wins.
    ↑ 7 ↓ 1 · 9 months ago

    Watching the count last night, I remarked to my friend that Democrats lost people in the middle (the undecided voters whose existence I struggled to understand a few months ago) because the party doesn’t actually have any principles or convictions; if it did, it would eventually have to address issues that neoliberal capitalists want ignored, so it remains ineffectual and uninspiring.

    Thus, white middle-class centrists who don’t actually comprehend the threat to minority groups drifted back into their nostalgic dreams of ‘smaller government’ and ‘lower taxes,’ despite being presented no evidence that those things will be delivered. In their minds, those theoretical ideals are more exciting than another establishment Democrat with no values who does nothing to speak to their woes.




    Agreed. The problem is that so many (including in this thread) argue that training AI models is no different from training humans; that a human brain inspired by what it sees is functionally the same thing.

    My response to why there is still an ethical difference revolves around two arguments: scale, and profession.

    Scale: AI models’ sheer image output makes them a threat to artists where other human artists are not. One artist clearly profiting off another’s style can still be inspiration, and even part of the former’s path toward their own style; however, the functional equivalent of ten thousand artists doing the same is something else entirely. The art is produced at a scale that could drown out the original artist’s work, without which such image generation wouldn’t be possible in the first place.

    Profession: Those profiting from AI art, which relies on unpaid scraping of artists’ work for data sets, are not themselves artists. They are programmers, engineers, and the CEOs and stakeholders who can even afford the ridiculous capital necessary to utilize this technology at scale in the first place. The idea that this is just a “continuation of the chain of inspiration from which all artists benefit” is nonsense.

    As the popular adage goes nowadays, “AI models allow wealth to access skill while forbidding skill to access wealth.”


  • DegenerateSupreme@lemmy.zip to Political Memes@lemmy.world · Far left intellectualism
    ↑ 34 ↓ 11 · 10 months ago

    I was banned from r/LateStageCapitalism for politely supporting a post with this reasoning. I pointed out that Trump would make the conflict even worse for innocents, and voting third-party to make a statement against neoliberal Democrat rule (which is bad) is a position that, in this moment, only the least-vulnerable in America can take when there is a risk of outright christo-fascism threatening the least-enfranchised.

    Banned. “This is a socialist sub.” I then saw a post from a mod openly mocking anyone who entertained lesser-of-two-evils arguments; they sounded like a sneering teenager. Over there, it’s all theory and no parsing of theory against reality.