Amazon has announced a $4 billion investment in the artificial intelligence startup Anthropic (the creator of the breakthrough AI model ‘Claude’), as part of a broader collaboration to develop top-performing AI systems and make them available to Amazon Web Services (AWS) customers.

Under the agreement, Anthropic has chosen AWS as its primary cloud provider and will use AWS Trainium and Inferentia chips to train and deploy its future models. Most of Anthropic’s workloads will run on AWS, drawing on the cloud provider’s infrastructure for critical tasks such as safety research and foundation model development.

In return, Anthropic has committed to providing AWS customers worldwide with access to its foundation models, including Claude and the recently announced Claude 2, through Amazon Bedrock. This fully managed AWS service enables companies to build generative AI applications using top foundation models.
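As a rough illustration of what that access looks like in practice, the sketch below builds a request for Claude through Bedrock’s runtime API using the boto3 SDK. The model ID (`anthropic.claude-v2`), region, and prompt are assumptions for illustration; the live call needs AWS credentials and Bedrock model access, so it is shown commented out.

```python
import json

def build_claude_request(user_prompt: str, max_tokens: int = 300) -> dict:
    """Assemble the JSON body Bedrock expects for Anthropic's Claude v2,
    which uses the Human/Assistant prompt convention."""
    return {
        "prompt": f"\n\nHuman: {user_prompt}\n\nAssistant:",
        "max_tokens_to_sample": max_tokens,
    }

# The actual invocation requires AWS credentials and model access,
# so it is shown commented out (region is a placeholder):
# import boto3
# client = boto3.client("bedrock-runtime", region_name="us-east-1")
# response = client.invoke_model(
#     modelId="anthropic.claude-v2",
#     body=json.dumps(build_claude_request("Summarize this market report.")),
# )
# print(json.loads(response["body"].read())["completion"])
```

Because Bedrock is fully managed, the application only ever constructs a request like this and reads back a completion; provisioning and scaling the model itself is handled by AWS.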

Anthropic will also give AWS customers early access to unique model customization and fine-tuning capabilities, allowing them to optimize performance by using their own data.

In April this year, we shared news that Anthropic was looking to raise further capital from VC firms and key partners to build its next generation of AI models.

The $4 billion investment will give Amazon a minority stake in Anthropic. While the company’s governance structure remains unchanged, Amazon developers will be able to incorporate Anthropic’s AI capabilities into their own applications via Amazon Bedrock.

“We have tremendous respect for Anthropic’s team and foundation models, and believe we can help improve many customer experiences through our collaboration,” said Amazon CEO Andy Jassy in a press release published hours ago.

Anthropic co-founder and CEO Dario Amodei said the company was “excited to use AWS’s Trainium chips to develop future foundation models.”

“By significantly expanding our partnership, we can unlock new possibilities for organizations of all sizes, as they deploy Anthropic’s safe, state-of-the-art AI systems together with AWS’s leading cloud technology,” he said.

Anthropic and Claude Are the Most Serious Competitors to OpenAI and ChatGPT

Established in 2021, Anthropic has rapidly emerged as a top provider of safe and dependable generative artificial intelligence systems. Customers report that its Claude model performs well in complex language tasks while ensuring predictability and reducing harmful results. Importantly, Claude can process far more text in a single prompt (a much larger context window) than rival models such as ChatGPT.

The expanded AWS collaboration builds on the strong early adoption of Claude through Amazon Bedrock since its launch in April. Customers are using the technology for automated market forecasts, research reports, drug discovery, and personalized education.

AWS has rapidly expanded its generative AI stack in recent months, providing access to specialized hardware, foundation models, and development tools. The Anthropic deal delivers advanced customization for its Bedrock service.

Both companies are committed to responsibly advancing AI, engaging with regulators, and joining industry initiatives like the recent Voluntary Commitments on Responsible AI led by the White House.

“Training state-of-the-art models requires extensive resources including compute power and research programs,” Anthropic commented in a separate press release. “Amazon’s investment and supply of AWS Trainium and Inferentia technology will ensure we’re equipped to continue advancing the frontier of AI safety and research.”

Amazon’s Investments in AI Date Back to 2017

The e-commerce giant has steadily built up its AI capabilities over the past several years, laying the groundwork for major advances in generative AI systems.

AWS launched Amazon SageMaker, its service for building, training, and deploying machine learning models, back in 2017. This allowed Amazon engineers and external developers to rapidly iterate on AI systems by using the company’s flexible cloud infrastructure.

The AWS Marketplace now lists over 10,000 pre-trained AI models and algorithms that can be purchased and implemented with minimal effort. Amazon’s own retail business applies various ML techniques across areas like search, recommendations, and supply chain optimization.

In recent years, AWS has focused on purpose-built chips to accelerate AI workloads in a cost-effective manner. The Trainium processor, announced in late 2021, is tailored to efficiently train complex models like large language systems. AWS also unveiled its Inferentia chips in 2018 to drive down the cost of inference, the stage at which trained models generate predictions in real time. The fast pace of innovation continued this year with the launch of 2nd generation Trainium and Inferentia silicon.

This specialized hardware has enabled Amazon AI researchers to experiment with ever-larger foundation models. In 2021, the company discussed training models with over 100 billion parameters, approaching the scale of models like GPT-3 at the time.

Back in 2017, AWS launched Amazon Comprehend, its natural language processing service that can extract syntax, key phrases, entities, and sentiment from text. Comprehend powers various applications where analyzing unstructured text is critical.
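A minimal sketch of how an application might call Comprehend’s sentiment analysis through boto3 follows; the region and sample text are assumptions, and the live call needs AWS credentials, so it is commented out.

```python
def sentiment_request(text: str, language: str = "en") -> dict:
    """Build the keyword arguments for Comprehend's DetectSentiment API."""
    if not text:
        raise ValueError("Comprehend requires non-empty input text")
    return {"Text": text, "LanguageCode": language}

# The live call requires AWS credentials (region is a placeholder):
# import boto3
# comprehend = boto3.client("comprehend", region_name="us-east-1")
# result = comprehend.detect_sentiment(
#     **sentiment_request("The expanded partnership was well received.")
# )
# print(result["Sentiment"])  # one of POSITIVE / NEGATIVE / NEUTRAL / MIXED
```

The same client exposes sibling calls for the other capabilities mentioned above, such as `detect_entities` and `detect_key_phrases`, each taking the same `Text`/`LanguageCode` pair.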

More recently, Amazon (AMZN) introduced CodeWhisperer to generate code suggestions for developers based on natural language prompts. After extensive preview testing, CodeWhisperer was released for general availability supporting over 15 programming languages.

All of these developments and more provided the base for AWS’s unveiling of Amazon Bedrock in April. Bedrock opens up a wide range of generative models to AWS clients via a single managed service.

It represents a key step in democratizing access to powerful systems like Anthropic’s Claude and Stability AI’s image generator Stable Diffusion. AWS is primed to rapidly deliver more advanced capabilities as generative AI continues advancing at a breakneck pace.