UK watchdog accuses Apple of failing to report sexual images of children

Apple is failing to effectively monitor its platforms or scan for images and videos of the sexual abuse of children, child safety experts allege, which is raising concerns about how the company can handle growth in the volume of such material associated with artificial intelligence.

The UK’s National Society for the Prevention of Cruelty to Children (NSPCC) accuses Apple of vastly undercounting how often child sexual abuse material (CSAM) appears in its products. In a year, child predators used Apple’s iCloud, iMessage and FaceTime to store and exchange CSAM in a higher number of cases in England and Wales alone than the company reported across all other countries combined, according to police data obtained by the NSPCC.

Through data gathered via freedom of information requests and shared exclusively with the Guardian, the children’s charity found Apple was implicated in 337 recorded offenses of child abuse images between April 2022 and March 2023 in England and Wales. In 2023, Apple made just 267 reports of suspected CSAM on its platforms worldwide to the National Center for Missing & Exploited Children (NCMEC), which is in stark contrast to its big tech peers, with Google reporting more than 1.47m and Meta reporting more than 30.6m, per NCMEC’s annual report.

FlyingSquid ,

I honestly do not know a good way to monitor these platforms for CSAM. Obviously, they need to be monitored, but the choices, as far as I can tell, are either to pay people to look at traumatic things all the time or to use AI to do it. The former sounds kind of like either torture or titillation depending on who you are, neither of which is a good thing, and I’m not convinced the latter will work.

SlopppyEngineer ,

Well, to get the AI enough training data, you have to pay people to look at traumatic things all the time first. Once that is done, you still need some people checking the AI’s output to catch false positives, because being falsely charged can be devastating. There are already stories of people being arrested for CSAM because they’ve sent a pic of their child in a swimming pool to the grandparents.
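
In rough terms, the two-stage workflow being described could look like the sketch below. This is purely illustrative: the classifier, the score values, and the threshold are all assumptions, not any vendor’s actual pipeline.

```python
# Rough sketch of the two-stage idea above: an automated classifier scores each
# upload, and only items above a threshold ever reach a human reviewer, so
# nobody is reported on the model's say-so alone.
# The `score` field is a hypothetical stand-in for whatever model produces it.
from dataclasses import dataclass

REVIEW_THRESHOLD = 0.90  # assumed cut-off; lowering it catches more material
                         # but sends more false positives to human reviewers

@dataclass
class Upload:
    path: str
    score: float  # model confidence that the item is abusive, 0.0 to 1.0

def triage(uploads: list[Upload]) -> list[Upload]:
    """Return only the uploads a trained human reviewer should look at."""
    return [u for u in uploads if u.score >= REVIEW_THRESHOLD]

if __name__ == "__main__":
    batch = [
        Upload("family_pool_photo.jpg", 0.35),   # the false-positive case above
        Upload("unknown_upload_0042.jpg", 0.97),
    ]
    for item in triage(batch):
        print(f"{item.path}: score {item.score:.2f} -> human review queue")
```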

FlyingSquid ,

Good point. So no matter what, you end up either essentially torturing people (sure, they’re being paid, but having to look at CSAM images all the time is still a form of psychological torture) or hiring people who want to look at CSAM. Both are terrible options, but I guess the former is really the only viable one.

And, of course, the humans hired to do this are always from developing nations, so the companies can get away with paying them a few dollars a day.

Docus ,

“Obviously, they need to be monitored” is a flawed view. Is it equally obvious that you want all your mail to be checked and all your phone calls to be recorded?

Obviously, most of us would like the producers of CSAM, and the people spreading it, to be held to account. But arguing that if we could just monitor these platforms we would reduce the problem is not credible.

Docus ,

There is no ‘good’ way of monitoring these platforms without a massive intrusion of privacy. Just like there is no good way of monitoring what people store on their hard disks or memory sticks, or burn onto DVD/CD and send through the mail.

FlyingSquid ,

You think you have privacy on these platforms? They’ve been using everything in their cloud storage to train their AIs.

Docus ,

Any evidence that Apple use everything I store in iCloud for training their AI? Please share your evidence. There is a long list of things I don’t like about Apple, but their views on privacy are not on that list.

wewbull ,

The trouble is that Apple (after a lot of protest against their proposed “solution”) didn’t implement measures that other platforms did. For once I think they did the right thing, but it’s a difficult position to defend against the NSPCC.

We need to value the privacy of people more as a society IMHO.

xmunk ,

If Apple is unable to properly monitor their service then they should stop offering it.

I am a developer who works with file classification and I’m quite aware of how difficult this monitoring is… and that’s not a fucking excuse. Apple chose to become a monopoly company with immense power, and if they can’t properly run it then maybe we shouldn’t have such gigantic platforms.

redditReallySucks ,

How do you scan images for content without infringing on the privacy of users?

xmunk ,

I don’t offer such a service, so I don’t host users’ images.

This is a hard problem to solve.

Docus ,

You are missing the point: from a privacy point of view, Apple should not have the ability to see what I store on my phone, or by extension on iCloud. Just like the company that made my TV has zero business knowing what I watch on that TV.

xmunk ,

Oh, I’m perfectly happy to exclude local phone contents from searching. People have all sorts of private images on their phones and it’s creepy as fuck to try and dig into that.

Apple is trying to offer this service and it’s unreasonable and extremely fallible - they should stop that.

autotldr Bot ,

This is the best summary I could come up with:


Apple is failing to effectively monitor its platforms or scan for images and videos of the sexual abuse of children, child safety experts allege, which is raising concerns about how the company can handle growth in the volume of such material associated with artificial intelligence.

In a year, child predators used Apple’s iCloud, iMessage and FaceTime to store and exchange CSAM in a higher number of cases in England and Wales alone than the company reported across all other countries combined, according to police data obtained by the NSPCC.

Through data gathered via freedom of information requests and shared exclusively with the Guardian, the children’s charity found Apple was implicated in 337 recorded offenses of child abuse images between April 2022 and March 2023 in England and Wales.

The company instead directed the Guardian to statements it made last August, in which it said it had decided not to proceed with a program scanning iCloud photos for CSAM because it instead chose a path that “prioritizes the security and privacy of [its] users”.

Apple’s tool, called neuralMatch, would have scanned images before they were uploaded to iCloud’s online photo storage, comparing them against a database of known child abuse imagery via mathematical fingerprints known as hash values.
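
(For illustration only, the hash-matching idea described above boils down to something like the sketch below. This is not Apple’s actual implementation: real systems such as PhotoDNA or the proposed NeuralHash use perceptual hashes that survive resizing and re-encoding, whereas this stand-in uses exact SHA-256 digests, and the fingerprint value shown is made up.)

```python
# Illustrative only: checking files against a database of fingerprints of known
# abuse imagery before upload. The database contents here are hypothetical
# placeholders, not real clearinghouse data.
import hashlib
import sys
from pathlib import Path

# Hypothetical database of hex digests, of the kind a clearinghouse such as
# NCMEC would supply to platforms.
KNOWN_FINGERPRINTS = {
    "3f79bb7b435b05321651daefd374cdc681dc06faa65e374e38337b88ca046dea",
}

def fingerprint(path: Path) -> str:
    """Fingerprint of the file's raw bytes, computed before upload."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def matches_known_material(path: Path) -> bool:
    """True only if the file's fingerprint is already in the database;
    non-matching files are never inspected further."""
    return fingerprint(path) in KNOWN_FINGERPRINTS

if __name__ == "__main__":
    for candidate in map(Path, sys.argv[1:]):
        if matches_known_material(candidate):
            print(f"{candidate}: matches known material, queue for human review")
```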

“Apple does not detect CSAM in the majority of its environments at scale, at all,” said Sarah Gardner, chief executive officer of Heat Initiative, a Los Angeles non-profit focused on child protection.


The original article contains 764 words, the summary contains 240 words. Saved 69%. I’m a bot and I’m open source!
