- cross-posted to:
- fediverse@lemmy.world
cross-posted from: https://infosec.exchange/users/thenexusofprivacy/statuses/115012347040350824
As you’ve probably seen or heard, Dropsitenews has published a list (from a Meta whistleblower) of “the roughly 100,000 top websites and content delivery network addresses scraped to train Meta’s proprietary AI models” – including quite a few fedi sites. Meta denies everything, of course, but they routinely lie through their teeth, so who knows. In any case, whether or not the specific details in the report are accurate, it’s certainly a threat worth thinking about.
So I’m wondering what defenses fedi admins are using today to try to defeat scrapers: robots.txt, user-agent blocking, firewall-level blocking of IP ranges, Cloudflare or Fastly AI scraper blocking, Anubis, stuff you don’t want to disclose … @deadsuperhero@social.wedistribute.org has some good discussion on We Distribute. It would be very interesting to hear what various instances are doing.
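For admins who haven’t set any of these up, here are rough sketches of the first two. A minimal robots.txt along these lines (the user-agent strings below match what the big AI crawlers publish, as far as I know, and robots.txt is purely advisory, so a scraper can just ignore it):

```
# Advisory only: honest crawlers honor this, scrapers may not.
User-agent: GPTBot
Disallow: /

User-agent: meta-externalagent
Disallow: /

User-agent: CCBot
Disallow: /
```

And for actually refusing those user agents at the web server, something like this nginx sketch (assuming nginx is in front of your instance; treat the patterns as examples and adjust them to whatever shows up in your own logs):

```
# In the http context: classify requests by user agent.
map $http_user_agent $ai_scraper {
    default                 0;
    ~*GPTBot                1;
    ~*meta-externalagent    1;
    ~*FacebookBot           1;
    ~*CCBot                 1;
}

server {
    # ... existing listen / server_name / location config ...

    if ($ai_scraper) {
        return 403;   # or 444 to drop the connection silently
    }
}
```

Of course, anything keyed on user agents only catches scrapers that identify themselves honestly.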
And a couple more open-ended questions:
Do you feel like your defenses against scraping are generally holding up pretty well?
Are there other approaches that you think might be promising that you just haven’t had the time or resources to try?
Do you have any language in your terms of service that attempts to prohibit use of your content for AI training?
Here’s @FediPact’s post with a link to the Dropsitenews report and (in the replies) a list of fedi instances and CDNs that show up on the list.
Just to clarify your question, are you concerned about Meta’s scrapers causing additional server load, or about them stealing the content?
Not OP, but I’d be concerned about both.
The nature of federation makes the latter basically impossible to prevent. All data is federated freely, so all Meta has to do is spin up an instance and the data is handed directly to them.
Yeah. It’s really about making them do that kind of work. We can block those instances; ofc it won’t truly stop them, but it changes the cost-benefit analysis.
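If you’re on Mastodon, you can even script those blocks against the admin API instead of clicking through the UI. A rough sketch, with a hypothetical domain, assuming a recent Mastodon and a token with the admin domain-blocks scope:

```
# Hypothetical domain; assumes Mastodon's admin API and a token
# with the admin:write:domain_blocks scope.
curl -X POST https://your.instance/api/v1/admin/domain_blocks \
  -H "Authorization: Bearer $ADMIN_TOKEN" \
  -d "domain=scraper.example" \
  -d "severity=suspend"
```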
That plus Anubis or something, and whatever future tech arises.
Agreed, it’s all about changing the cost-benefit analysis, great framing. And also agreed, blocking – and/or shifting to allow-list federation or something more nuanced (to deal with the point @CameronDev@programming.dev makes about Meta just being able to spin up a new instance) – is a really important complement to preventing scraping.
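On Mastodon, the closest built-in thing to allow-list federation is limited federation mode. A sketch, assuming you’re on Mastodon and can live with how drastic it is:

```
# .env.production: federate only with explicitly allowed domains.
# Drastic: anything not on the allow list is cut off entirely.
LIMITED_FEDERATION_MODE=true
```

Allowed domains are then managed from the admin interface, if I remember right. It obviously changes the character of an instance, so it’s more of a last resort than a default.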
Only from the moment they start the instance. That doesn’t give them historical data.
Yeah I think most admins are concerned about both. And whether or not it’s “stealing” (in the legal sense), a lot of people want to keep their content and personal information out of these AI systems.