Photos on iPhones in the United States will be checked before they are uploaded to Apple’s cloud storage services, the company announced on Thursday. The technology is designed to detect whether uploaded photos match known images of child sexual abuse.
Apple said that when it detects child abuse images, with safeguards against false positives, it will file a report with law enforcement. The method is designed to limit false positives to one in a trillion, according to the company.
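Apple’s one-in-a-trillion figure describes the chance of wrongly flagging an account, which is kept low by requiring several matches before anything is reported. A rough sketch of that idea, using entirely illustrative numbers (not Apple’s actual parameters):

```python
from math import comb

def account_false_positive_rate(p: float, n: int, t: int) -> float:
    """Probability that at least t of n innocent photos each falsely
    match (independently, with per-photo false-match rate p)."""
    # P(at least t matches) = 1 - P(fewer than t matches)
    p_fewer = sum(comb(n, k) * p**k * (1 - p) ** (n - k) for k in range(t))
    return 1.0 - p_fewer

# Illustrative only: a per-photo false-match rate of one in a million,
# a library of 10,000 photos, and a threshold of 5 matches give an
# account-level false-positive rate that is astronomically small.
rate = account_false_positive_rate(p=1e-6, n=10_000, t=5)
print(f"{rate:.3e}")
```

The point of the threshold is visible in the arithmetic: even a modest per-photo error rate collapses to near zero once several independent matches are required.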
According to Apple, the new approach is designed to help police combat child sexual exploitation while also preserving the privacy and security principles that are important to Apple’s brand.
Some privacy activists warned that the technique could eventually be used to monitor political speech or other content on iPhones. Facebook Inc, Alphabet Inc’s Google, and Microsoft Corp already screen photographs shared on their services against a database of known child sexual abuse material.
“With so many people using Apple products, these new safety measures have lifesaving potential for children who are being enticed online and whose horrific images are being circulated in child sexual abuse material.”
Here’s a brief look at how Apple’s system works.
The system converts images into “hashes” – numerical codes that uniquely identify an image but cannot be used to reconstruct it.
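To illustrate the general idea of a hash (this is a plain cryptographic hash, not Apple’s NeuralHash): the fingerprint identifies the input exactly, but the input cannot be recovered from it.

```python
import hashlib

# A hash is a fixed-size fingerprint: identical inputs always produce
# the same hash, but the original bytes cannot be recovered from it.
photo_bytes = b"\x89PNG...example image data..."
fingerprint = hashlib.sha256(photo_bytes).hexdigest()
print(fingerprint)  # 64 hex characters, regardless of input size
```

A cryptographic hash like this changes completely if even one pixel is edited, which is why matching abuse imagery calls for the perceptual scheme described next.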
The database, built with a technology called “NeuralHash”, is designed to also catch edited photographs that are similar to the originals.
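NeuralHash itself is proprietary, but the general idea of a perceptual hash – one that stays stable under small edits – can be sketched with a toy “average hash”, shown here purely as an illustration:

```python
def average_hash(pixels: list[list[int]]) -> int:
    """Toy perceptual hash: one bit per pixel, set when the pixel
    is brighter than the image's mean brightness."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    bits = 0
    for p in flat:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

original = [[10, 200], [220, 30]]
brightened = [[p + 20 for p in row] for row in original]

# Uniform brightening shifts every pixel and the mean together,
# so the bit pattern -- and therefore the hash -- is unchanged.
print(average_hash(original) == average_hash(brightened))  # True
```

A real perceptual hash is far more sophisticated (NeuralHash uses a neural network), but the design goal is the same: modified copies of an image should land on the same hash as the original.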
That database will be stored on iPhones. When a photo is uploaded to Apple’s iCloud storage service, the iPhone creates a hash of the image and compares it against the database.
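The on-device check described above amounts to a membership test against the stored database before upload. A minimal sketch, with hypothetical names and illustrative hash values (and a plain SHA-256 standing in for NeuralHash):

```python
import hashlib

# Database shipped to the device (illustrative value, not a real entry).
known_abuse_hashes = {
    "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b",
}

def check_before_upload(photo_bytes: bytes) -> bool:
    """Hash the photo locally and report whether it matches the
    on-device database of known hashes."""
    digest = hashlib.sha256(photo_bytes).hexdigest()
    return digest in known_abuse_hashes

# No match: the upload proceeds normally.
print(check_before_upload(b"holiday photo"))  # False
```

In Apple’s described system a hit does not immediately trigger a report; matches accumulate until a threshold is crossed, and a human review follows.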
Apple said the checks happen only on the phone itself, and that a human review is performed before any report is filed.
Users who believe their account was suspended in error can appeal the decision, Apple added. Checking photos on the phone before they are sent to Apple’s servers is what sets the system apart.
Others on Twitter raised concerns that the system could eventually be used more broadly to scan phones for forbidden content or political speech.
India McKinney and Erica Portnoy of the Electronic Frontier Foundation pointed out that it may not be practical for outside researchers to verify whether Apple keeps its promise to check only a restricted set of on-device material.
“At the end of the day, even a thoroughly documented, carefully thought-out, and narrowly-scoped backdoor is still a backdoor,” McKinney and Portnoy wrote.