Across Amazon’s services, we devote significant resources to combating child sexual abuse material (CSAM), and we continue to invest in new technologies and tools that improve our ability to prevent, detect, respond to, and remove it.

Our approach

As we strive to be Earth’s Most Customer-Centric Company, Amazon and its subsidiaries provide services directly to customers as well as enable businesses to use our technology and services to sell and provide their own products and services. In all cases, our policies, teams, and tools work together to prevent and mitigate CSAM.
Our consumer services use a variety of tools and technologies, such as machine learning, keyword filters, automated detection, and human moderators, to screen images, videos, and text in public-facing content for policy compliance before it is allowed online. These measures enforce multiple policies, including prohibitions on CSAM, and they appear to be an effective deterrent, as reflected in the low number of CSAM reports we receive. As one example, Amazon Photos uses Thorn’s Safer technology to check images uploaded to the service for hash matches against known CSAM, and human reviewers verify positive matches.
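To make the mechanics concrete, the following is a minimal, illustrative sketch of hash-based matching against a list of known hashes, with matches routed to human review. It is not Thorn’s Safer implementation (which also relies on perceptual hashing, not shown here), and the hash list, file paths, and function names are placeholders for illustration only.

```python
import hashlib
from pathlib import Path

# Placeholder hash list. In practice these values would come from a vetted
# hash-sharing source (for example, a hotline or Thorn's Safer service),
# never from a literal in code.
KNOWN_HASHES: set[str] = {
    "0f3d1c9e...",  # truncated example entries, not real hashes
}


def sha256_of(path: Path) -> str:
    """Compute the SHA-256 digest of a file, reading it in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()


def flag_for_human_review(uploads: list[Path]) -> list[Path]:
    """Return uploads whose exact hash matches the known list.

    Matches are queued for human review rather than treated as automatic
    confirmations, mirroring the verification step described above.
    """
    return [p for p in uploads if sha256_of(p) in KNOWN_HASHES]
```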
We also work directly with businesses that use Amazon technologies and services to mitigate abuse in their products and services. For example, we make Thorn’s Safer technology available to businesses via the Amazon Web Services (AWS) Marketplace so they can proactively identify and address CSAM.
Our services enable anyone to report inappropriate, harmful, or illegal content to us. When we receive reports of prohibited content, we act quickly to investigate and take appropriate action. We also maintain relationships with U.S. and international hotlines, such as the National Center for Missing & Exploited Children (NCMEC) and the Internet Watch Foundation (IWF), that allow us to receive and quickly act on reports of CSAM.

2024 CSAM mitigation

Depending on the specifics of the service, we remove or disable, as applicable: URLs, images, chat interactions, resources, services, or accounts.
Amazon and its subsidiaries collectively submitted 64,195 reports of CSAM to NCMEC in 2024.* These reports related to content found in 3,959 accounts.
Detection
Amazon Photos reported 30,778 images (affecting 3,758 accounts) using Safer, a 24.8% increase year over year.** As part of our commitment to safeguarding our LLM datasets from CSAM and hosting models responsibly, we automatically detected and reported 30,744 pieces of content.
Reports
Amazon received 337 reports from third parties of potential CSAM content, including chat interactions and URLs. Hotlines, such as those administered by NCMEC, IWF, the Canadian CyberTipline, and INHOPE, submitted a total of 752 reports (involving 273 accounts) for content that we promptly reviewed and actioned as appropriate; the average time to action a hotline report was 2.6 days. Of the content reported by hotlines, we found that 391 items were CSAM (relating to 200 accounts), actioned them, and reported them to NCMEC. For reports involving AWS customers, customers resolved the issue without additional intervention 87% of the time.
*Get more information about Twitch’s efforts.
**Photos re-submitted 11,307 images originally reported in 2023. These re-submissions are not included in the 2024 total.

2024 highlights

Youth Safety Campaign
As part of the launch of the NUDES series about cyberbullying, Amazon and the producer of the series organized more than 20 screenings and awareness-raising roundtables, bringing together the series’ talent, experts, and policymakers at high schools and universities throughout France. The project was recognized by the European Commission’s “Safer Internet” program and was praised in the media for its awareness-raising message on online child safety. The series is available on www.amazon.fr.
European Union Internet Forum (EUIF)
Amazon joined the EUIF in 2024. One of our key aims in joining was to continue collaborating with industry, law enforcement, and civil society on effective solutions to mitigate the use of online technology to disseminate CSAM.
Mitigating CSAM in AI
Amazon joined the Generative AI Principles to Prevent Child Abuse in April 2024. Below, we highlight how we are meeting each of the Principles.
AWS's Responsible AI policy explicitly prohibits the use of our AI/ML Services to harm or abuse a minor, which includes grooming and child sexual exploitation.
For our own models, we scan the datasets used to build our generative AI models for known CSAM, and we remove that content and report it to NCMEC. We design and test our models and generative AI applications to reduce the risk that they will produce exploitative content. Our first-party image models embed an invisible watermark in every image they generate, and we offer a detection solution that allows individuals to check for the presence of the watermark. Our first-party image models also include, by default, content credentials based on the technical specification developed by the Coalition for Content Provenance and Authenticity (C2PA).
Amazon Bedrock makes models available for customers to use and train. Amazon Bedrock includes automated detection for known CSAM and rejects and reports positive matches. Customers can also configure Amazon Bedrock Guardrails to add further protections (including for sexual content) and help enforce their own acceptable use policies.
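As an illustration of how a customer might configure such protections, here is a minimal sketch using the AWS SDK for Python (boto3). The guardrail name, filter choices, blocked-content messages, and model selection are illustrative assumptions, not recommendations; a real deployment would tune the policy configuration to its own acceptable use policy.

```python
import boto3

# Create a guardrail that filters sexual and other harmful content in both
# prompts (inputs) and model responses (outputs).
bedrock = boto3.client("bedrock")

guardrail = bedrock.create_guardrail(
    name="example-acceptable-use-guardrail",  # illustrative name
    description="Blocks sexual and other harmful content.",
    contentPolicyConfig={
        "filtersConfig": [
            {"type": "SEXUAL", "inputStrength": "HIGH", "outputStrength": "HIGH"},
            {"type": "VIOLENCE", "inputStrength": "HIGH", "outputStrength": "HIGH"},
        ]
    },
    blockedInputMessaging="This request was blocked by our acceptable use policy.",
    blockedOutputsMessaging="This response was blocked by our acceptable use policy.",
)

# Apply the guardrail when invoking a model through the Converse API.
runtime = boto3.client("bedrock-runtime")
reply = runtime.converse(
    modelId="amazon.titan-text-express-v1",  # illustrative model choice
    messages=[{"role": "user", "content": [{"text": "Hello"}]}],
    guardrailConfig={
        "guardrailIdentifier": guardrail["guardrailId"],
        "guardrailVersion": "DRAFT",
    },
)
```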
We design our consumer-facing generative AI products, such as PartyRock and Rufus, with safeguards. We block exploitative prompts and responses and test our systems to prevent inappropriate use and model outputs. We allow users to report content that may escape our controls; our trust and safety teams prioritize review of those reports, and we make adjustments to prevent recurrence.
Finally, Amazon has continued to make financial and in-kind contributions to key organizations working on research and technologies that address safety risks associated with the advancement of generative AI, including Thorn, NCMEC, and the Coalition for Content Provenance and Authenticity.

Commitments and partnerships

As part of our work to fight CSAM, we engage with a variety of organizations and support their work to protect children.
Amazon has endorsed the Voluntary Principles to Counter Child Sexual Exploitation and Abuse and is part of the WePROTECT Global Alliance. We sit on the boards of NCMEC and the Tech Coalition. Together with Thorn and All Tech is Human, we have committed to the Generative AI Principles to Prevent Child Abuse to reduce the risk that our generative AI services will be misused for child exploitation.
Amazon provides NCMEC with millions of dollars in AWS technology and services to reliably operate the mission-critical infrastructure and applications that help missing and exploited children. In 2024, Amazon continued to provide financial support to NCMEC’s Exploited Child Division to advance its hash-sharing and Notice and Tracking initiatives, which help remove CSAM from the internet. Amazon also continues to partner closely with Thorn, including by providing millions of dollars in free advanced cloud services and technical support from AWS so that Thorn can develop and operate its services. In 2024, with financial support from AWS, Thorn enhanced Safer Predict and Safer Portal to help customers more easily detect and manage known and unknown CSAM.

More information on our partnerships

FAQs

Previous reports