Amazon Web Services (AWS) has announced the availability of DeepSeek-R1 as a fully managed, serverless large language model (LLM) in Amazon Bedrock, making AWS the first cloud service provider to offer DeepSeek-R1 as a fully managed, generally available model. DeepSeek-R1 is part of a family of models launched by artificial intelligence startup DeepSeek. It is a publicly available model that offers sophisticated reasoning with high precision and deep contextual understanding on complex tasks.

Why you should care

DeepSeek has been a major topic of conversation over the last few months, with global news outlets covering its rapid rise and training techniques that have yielded models reportedly 90-95% more cost-effective than comparable models.
Today’s news expands the ways customers can get started with DeepSeek-R1 and its distilled variants (smaller models trained to mimic the behavior of DeepSeek-R1) in Amazon Bedrock. DeepSeek-R1 is now available as a fully managed, serverless model in Amazon Bedrock, making it readily accessible to all AWS customers for enterprise-scale deployment.
Any customer can tap into its capabilities for solving complex problems, writing code, crunching numbers, analyzing data, and much more. And because it’s fully managed, they don’t have to worry about any technical setup or maintenance behind the scenes. When using DeepSeek-R1 in Amazon Bedrock, customers also benefit from enterprise-grade security, including data encryption and strict access controls that help maintain data privacy and regulatory compliance. Customers retain full control over their data and can set up safeguards such as Amazon Bedrock Guardrails, which AWS recommends for detecting and preventing hallucinations.
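As a rough illustration of what "fully managed and serverless" means in practice, the sketch below calls DeepSeek-R1 through the Amazon Bedrock Runtime Converse API using boto3. The Region and model identifier are assumptions for illustration; confirm the exact DeepSeek-R1 model ID in your Amazon Bedrock console before running it.

```python
# Minimal sketch: calling DeepSeek-R1 through the Amazon Bedrock Runtime Converse API.
# Assumes boto3 is configured with credentials and a Region where the model is available.
import boto3

bedrock_runtime = boto3.client("bedrock-runtime", region_name="us-west-2")

MODEL_ID = "us.deepseek.r1-v1:0"  # assumed/placeholder ID; confirm in the Bedrock model catalog

response = bedrock_runtime.converse(
    modelId=MODEL_ID,
    messages=[
        {"role": "user", "content": [{"text": "Explain the difference between a stack and a queue."}]}
    ],
    inferenceConfig={"maxTokens": 512, "temperature": 0.6},
)

print(response["output"]["message"]["content"][0]["text"])
```

Because the model is serverless, there is no endpoint to provision or scale; the same call pattern works for other Amazon Bedrock models by swapping the model ID.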
DeepSeek-R1 is already available in Amazon Bedrock Marketplace, which gives customers the option to run models on self-managed infrastructure. Additionally, customers can already upload their own fine-tuned versions of DeepSeek-R1 Distill Llama variants and run them as fully managed models via Amazon Bedrock Custom Model Import, a capability that enables customers to import and use customized models alongside existing models through a single API.
Since the models became available in late January, thousands of customers have already deployed DeepSeek-R1 models using Amazon Bedrock via Custom Model Import.
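For teams bringing their own fine-tuned DeepSeek-R1 Distill Llama weights, a minimal sketch of starting a Custom Model Import job is shown below. All names, the IAM role ARN, and the S3 location are hypothetical placeholders, and the checkpoint is assumed to already be staged in Amazon S3.

```python
# Sketch: importing a fine-tuned DeepSeek-R1 Distill Llama checkpoint with
# Amazon Bedrock Custom Model Import. Names, ARN, and S3 path are placeholders.
import boto3

bedrock = boto3.client("bedrock", region_name="us-west-2")

job = bedrock.create_model_import_job(
    jobName="deepseek-r1-distill-import",                        # hypothetical job name
    importedModelName="my-deepseek-r1-distill-llama-8b",         # hypothetical model name
    roleArn="arn:aws:iam::123456789012:role/BedrockImportRole",  # placeholder IAM role
    modelDataSource={"s3DataSource": {"s3Uri": "s3://my-bucket/deepseek-r1-distill/"}},
)

print("Import job ARN:", job["jobArn"])
```

Once the import job completes, the imported model can be invoked on demand like other models in the account, with no infrastructure to manage.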

Meet the AI: What is DeepSeek-R1?

[Image: DeepSeek-R1 logo with an "access granted" message for text and code generation]
If DeepSeek-R1 were a person, they would be an expert software engineer who can effortlessly switch from coding in Python, to explaining complex algorithms in plain English, to writing a thesis on classical philosophers, to translating your project requirements into Mandarin for your international team. The best part? They’re available 24/7 and don’t mind if you ask them to clarify something for the hundredth time.

Straight from the source: an AWS leader on using DeepSeek in Amazon Bedrock

"We are excited to bring DeepSeek-R1, a cutting-edge model with frontier reasoning performance at significantly lower inference costs, to Amazon Bedrock. When paired with features like Amazon Bedrock Guardrails, customers can implement AI safety guardrails while benefiting from the built-in security and privacy that Amazon Bedrock provides. With this fully managed implementation, customers can utilize the model in a serverless pay-per-token fashion that helps them scale from experiment to production without managing any infrastructure," said Vasi Philomin, VP of generative AI, AWS.

The bigger story

By making DeepSeek-R1 available as a fully managed, serverless model in Amazon Bedrock, AWS continues to bring the latest innovations in industry-leading generative AI models to a broad range of customers, from small startups to large enterprises, regardless of their technical expertise.
By offering the broadest selection of fully managed models from leading AI companies, AWS enables businesses to choose the right tools for their specific needs, making Amazon Bedrock the easiest way to build and scale generative AI applications.

What else do I need to know?

AWS strongly recommends that customers integrate Amazon Bedrock Guardrails and Amazon Bedrock model evaluation features with their DeepSeek-R1 model to protect generative AI applications. Just as guardrails on a highway prevent cars from veering off the road, Amazon Bedrock Guardrails help prevent an application from producing harmful or inappropriate content. This includes helping to block offensive language, explicit content, or other material deemed unsuitable for end users, as well as helping to identify and remove personal data to protect user privacy.
Customers can also set specific rules based on their company's policies or industry regulations. Evaluation tools help customers assess how well the AI model is performing for their specific needs. Learn more about Amazon Bedrock Guardrails and Amazon Bedrock evaluation tools.
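To make the recommendation concrete, here is a minimal sketch (not AWS's published sample) of attaching an existing guardrail to a DeepSeek-R1 request via the Converse API. The guardrail identifier, version, and model ID are placeholders you would replace with your own.

```python
# Sketch: attaching an Amazon Bedrock guardrail to a DeepSeek-R1 Converse request.
# The guardrail ID/version and model ID are hypothetical placeholders.
import boto3

bedrock_runtime = boto3.client("bedrock-runtime", region_name="us-west-2")

response = bedrock_runtime.converse(
    modelId="us.deepseek.r1-v1:0",  # assumed DeepSeek-R1 model ID; confirm in your console
    messages=[{"role": "user", "content": [{"text": "Summarize this customer support ticket."}]}],
    guardrailConfig={
        "guardrailIdentifier": "your-guardrail-id",  # hypothetical guardrail ID
        "guardrailVersion": "1",                     # hypothetical guardrail version
        "trace": "enabled",                          # optional: returns details on blocked/masked content
    },
)

print(response["output"]["message"]["content"][0]["text"])
```

The guardrail policies themselves (blocked topics, word filters, sensitive-information handling) are defined once in Amazon Bedrock and then reused across models and applications by referencing the identifier shown above.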

Dive deeper

Visit the AWS News Blog for more information about DeepSeek-R1's capabilities, how to deploy the model, and how to integrate DeepSeek-R1 with Amazon Bedrock features like Guardrails. You can also check out the DeepSeek-R1 in Amazon Bedrock product page.

How to use DeepSeek-R1

Get started with DeepSeek-R1 in the Amazon Bedrock Console.
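If you prefer starting from code rather than the console, one way to confirm that DeepSeek-R1 is available to your account in a given Region is to list the foundation models Amazon Bedrock exposes there. The filter on "deepseek" in the model ID below is an assumption about the naming convention.

```python
# Sketch: checking which DeepSeek models are available to your account in a Region.
# Uses the Amazon Bedrock control-plane API (list_foundation_models).
import boto3

bedrock = boto3.client("bedrock", region_name="us-west-2")

models = bedrock.list_foundation_models()["modelSummaries"]
for model in models:
    # Matching on "deepseek" in the model ID is an assumption about how the model is named.
    if "deepseek" in model["modelId"].lower():
        print(model["modelId"], "-", model.get("modelName", ""))
```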