Understanding Apple's Child Abuse Detection Scan Plans

I told you Apple would have some explaining to do

Lance Ulanoff
Debugger

Apple’s plans for scanning your iCloud photos to see whether any of them match a known database of child abuse imagery are coming into focus.

Ever since Apple confirmed plans to use artificial intelligence (AI) trained on a database of 200,000 Child Sexual Abuse Material (CSAM) images from the National Center for Missing & Exploited Children (NCMEC) to scan iCloud images, consumers, privacy advocates, and I have had a raft of questions. Now we have some answers.
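
To make the idea concrete, here is a deliberately simplified sketch of what "matching photos against a database of known images" can look like in code. Everything in it is hypothetical: the type names are mine, the fingerprint is a plain SHA-256 digest rather than the perceptual hash a real system would need, and none of it reflects Apple's actual implementation, which has not been published as source code.

```swift
import Foundation
import CryptoKit

// Purely illustrative sketch: check whether a photo's fingerprint appears in a
// database of known-image fingerprints. This is NOT Apple's system; a real
// deployment would use a perceptual hash that survives resizing, cropping,
// and re-encoding, not an exact digest of the bytes.
struct KnownImageDatabase {
    // Fingerprints of known images, supplied by a child-safety organization.
    let knownFingerprints: Set<String>

    // Naive fingerprint: SHA-256 of the raw image bytes, hex-encoded.
    static func fingerprint(of imageData: Data) -> String {
        SHA256.hash(data: imageData)
            .map { String(format: "%02x", $0) }
            .joined()
    }

    // True if this photo's fingerprint matches a known image.
    func matches(_ imageData: Data) -> Bool {
        knownFingerprints.contains(Self.fingerprint(of: imageData))
    }
}
```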

First, it’s important to note that Apple’s plans include two separate systems. One is an update to Messages that will flag potentially explicit material in the message accounts of Family Sharing members. Those under 12 will get prompts if they send or receive explicit imagery and, after a certain threshold, parents (the Family account owners) will be notified. Family members between 13 and 17 will get the prompt, but parents won’t be notified. Why those age distinctions? It more or less aligns…
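
For readers who think in code, the age rules described above boil down to a small decision function. The sketch below is hypothetical: the names, the exact age cutoffs as I have encoded them, and the threshold parameter are my own illustration of the behavior described here, not Apple's API.

```swift
// Hypothetical sketch of the Messages rules described above. Only the behavior
// (children always get a prompt; parents are notified only for the youngest
// group, and only after a threshold) comes from the article; everything else
// is illustrative.
enum MessagesAction {
    case none                         // feature doesn't apply
    case promptChild                  // warn the child before sending/viewing
    case promptChildAndNotifyParents  // warn the child and alert the Family owner
}

func messagesAction(childAge: Int,
                    flaggedImageCount: Int,
                    parentNotificationThreshold: Int) -> MessagesAction {
    switch childAge {
    case ..<13 where flaggedImageCount >= parentNotificationThreshold:
        return .promptChildAndNotifyParents   // youngest group, past the threshold
    case ..<13:
        return .promptChild                   // youngest group, below the threshold
    case 13...17:
        return .promptChild                   // teens: prompt only, no parental alert
    default:
        return .none                          // adults are out of scope
    }
}
```

Under this toy model, an 11-year-old who has crossed the threshold would trigger both the prompt and the parental notification, while a 15-year-old with the same count would only see the prompt.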
