Stanford researchers find Mastodon has a massive child abuse material problem

cross-posted from: beehaw.org/post/6795142

Mastodon, an alternative social network to Twitter, has a serious problem with child sexual abuse material, according to researchers from Stanford University. In just two days, researchers found over 100 instances of known CSAM across over 325,000 posts on Mastodon. The researchers found hundreds of posts containing CSAM-related hashtags and links pointing to CSAM trading and grooming of minors. One Mastodon server was even taken down for a period of time due to CSAM being posted. The researchers suggest that decentralized networks like Mastodon need to implement more robust moderation tools and reporting mechanisms to address the prevalence of CSAM.

candle_lighter ,
@candle_lighter@lemmy.ml avatar

Fortunately it’s all on Japanese instances, which many instances like Mastodon.social defederate from

0x1C3B00DA ,
@0x1C3B00DA@kbin.social avatar

I have argued for a while that the Fediverse is way behind in this area; part of this is a lack of tooling and reliance on user reports, but part is architectural. CSAM-scanning systems work in one of two ways: hosted, like PhotoDNA, or via privately distributed hash databases. The former is a problem because all servers hitting PhotoDNA at once for the same images doesn't scale. The latter is a problem because widely distributed hash databases allow for crafting evasions or collisions.

-- https://hachyderm.io/@det/110769474386499134

This is from the study's author (here's the full thread). It shows how pernicious centralization in technology is. The author claims the fediverse is "behind" rather than the tools being behind in supporting decentralized services. They were developed with only centralized Silicon Valley silos in mind, and now they can't keep up with decentralized infrastructure, and the author's solution is for decentralized services to centralize around these tools.
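To make the second approach in the quote concrete, here's a minimal sketch of a server checking uploads against a locally held hash database. This is not the study's code or PhotoDNA's API (PhotoDNA's perceptual hashing is proprietary and hosted); `KNOWN_BAD_HASHES`, `media_hash`, and `should_block` are hypothetical names, and plain SHA-256 stands in for a perceptual hash only to keep the example self-contained.

```python
# A minimal sketch of the "privately distributed hash database" approach,
# assuming a Python-based server. Real systems use perceptual hashes
# (PhotoDNA, PDQ) so that re-encoded or lightly edited copies still
# match; the SHA-256 stand-in here only catches byte-identical files.
import hashlib

# Hypothetical local copy of the hash database. Distributing this set
# widely is exactly what the quote warns about: anyone holding it can
# try to craft evasions or collisions against it.
KNOWN_BAD_HASHES: set[str] = set()  # populated from a distributed hash list

def media_hash(data: bytes) -> str:
    """Hash the raw bytes of an uploaded media file."""
    return hashlib.sha256(data).hexdigest()

def should_block(data: bytes) -> bool:
    """Check an upload against the local database; no remote call needed."""
    return media_hash(data) in KNOWN_BAD_HASHES
```

The tradeoff the author describes falls out of where that hash set lives: query a central service and every federated server hammers it for the same viral image, but ship the set to every server and its contents become available for crafting evasions or collisions.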

DmMacniel ,

Going by the blurb posted, not the link: how are they demanding more robust moderation and reporting tools when reporting something obviously even took down the instance in question?

Who was the sponsor of this research, Zuck and Musk?

density ,
@density@kbin.social avatar

In just two days, researchers found 112 instances of known CSAM across 325,000 posts

“We got more photoDNA hits in a two-day period than we’ve probably had in the entire history of our organization of doing any kind of social media analysis, and it’s not even close,”

In the whole history of this group they have found fewer than 112 pieces of CSAM? It’s Stanford University. Why not drop in on a few of Jeffrey Epstein’s friends and fans? They can tell you where to look.

emeralddawn45 ,

Yeah literally. What a propaganda piece. Now do twitter, or Facebook, or Instagram. Except due to the walled garden effect of those platforms, the dangerous material probably isn’t viewable by just anyone. That doesn’t mean it’s not there though.

inexplicablehaddock ,

Or Reddit. You know, the website where a community dedicated to sharing CSAM was one of the biggest on the site and its lead moderator was a sitewide celebrity (oh, and Reddit’s current top admin was also a moderator on that community).

Quik ,

I don’t think it’s a propaganda piece, as it even brings up ideas on how to do moderation better in the Fediverse; it seems to me a bit too constructive to just call it propaganda and move on.

poVoq ,
@poVoq@slrpnk.net avatar

Very sensationalist headline.

If you read the paper, it’s mostly that one well-known Japanese instance, whose content is mostly legal under Japanese law.

ragica ,
@ragica@lemmy.ml avatar

Where did you find the actual study? The link in the above article leads to purl.stanford.edu/vb515nd6874 which has an abstract, but I can’t see the study.

poVoq ,
@poVoq@slrpnk.net avatar

It links to a PDF with the full study.

drdiddlybadger ,
@drdiddlybadger@pawb.social avatar

Isn’t this bound to happen without built-in automated tools for flagging and moderation? Not quite sure how the federation handles this sort of thing besides community modding and saying something if you see something.

sciawp ,

This is something I have worried about for a while. The core concept of the fediverse makes stuff like this really easy to do and there’s not really a solution. I guess government agencies just need to be on the lookout for it?
