Google and Anthropic have announced an expanded partnership aimed at achieving the highest standards of AI safety. Anthropic will leverage Google's technology to develop AI responsibly and deploy it in ways that benefit society. The partnership spans joint work on AI safety standards, a commitment to strong AI security practices, and the use of Google's TPU chips for AI inference.
Anthropic will use Google's AlloyDB, a fully managed PostgreSQL-compatible database, to handle transactional data with high performance and reliability, and Google's BigQuery data warehouse to analyse vast datasets and extract valuable insights.
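For readers unfamiliar with the two services, the sketch below shows how they typically sit side by side in an application: because AlloyDB is PostgreSQL-compatible, it can be queried through any standard PostgreSQL driver, while BigQuery has its own client library. The connection details, project, dataset, and table names here are hypothetical placeholders, not anything Anthropic has disclosed.

```python
# Minimal sketch: transactional reads from AlloyDB via a standard PostgreSQL
# driver, plus an analytical query against BigQuery. All names are placeholders.
import psycopg2
from google.cloud import bigquery

# AlloyDB speaks the PostgreSQL wire protocol, so psycopg2 connects directly.
conn = psycopg2.connect(
    host="10.0.0.5",          # hypothetical private IP of the AlloyDB instance
    dbname="transactions",
    user="app_user",
    password="app_password",
)
with conn.cursor() as cur:
    cur.execute("SELECT count(*) FROM orders;")
    print(cur.fetchone())

# BigQuery handles the analytical side through its own client library.
client = bigquery.Client(project="example-project")
sql = """
    SELECT status, COUNT(*) AS n
    FROM `example-project.sales.orders`
    GROUP BY status
"""
for row in client.query(sql):
    print(row.status, row.n)
```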
As part of the expanded partnership, Anthropic will use Google's latest-generation Cloud TPU v5e chips for AI inference. These chips will allow Anthropic to scale its Claude large language model, which ranks second only to GPT-4 on many benchmarks.
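Claude's serving stack is not public, so the following is only a minimal sketch of the kind of JAX code that runs inference on TPU hardware: a jit-compiled function that XLA compiles for whatever accelerator is attached. The tiny stand-in "model" and its shapes are invented purely for illustration.

```python
# Minimal sketch of TPU-targeted inference in JAX. The toy "model" below is a
# placeholder; it is not Anthropic's actual serving code.
import jax
import jax.numpy as jnp

print(jax.devices())  # on a TPU VM this lists TpuDevice entries

@jax.jit  # compiled once by XLA for the available accelerator (TPU if present)
def forward(weights, embeddings):
    # Stand-in for a transformer forward pass: one projection plus softmax.
    logits = embeddings @ weights
    return jax.nn.softmax(logits, axis=-1)

key = jax.random.PRNGKey(0)
weights = jax.random.normal(key, (512, 32000))    # hypothetical vocab projection
embeddings = jax.random.normal(key, (8, 512))     # a batch of 8 "token embeddings"
probs = forward(weights, embeddings)
print(probs.shape)  # (8, 32000)
```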
The announcement follows both companies' participation in the inaugural AI Safety Summit at Bletchley Park, hosted by the UK government. The summit brought together government officials, technology leaders, and experts to address concerns about frontier AI. Google and Anthropic are also actively engaged with the Frontier Model Forum and MLCommons, contributing to the development of robust measures for AI safety.
To enhance security for organisations deploying Anthropic's models on Google Cloud, Anthropic is utilising Google Cloud's security services, including Chronicle Security Operations, Secure Enterprise Browsing, and Security Command Center, which together provide visibility, threat detection, and access control.
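As an illustration of the last of those services, the sketch below lists active findings from Security Command Center using Google's published client library. The organisation ID is a placeholder, and this is a generic usage pattern rather than anything specific to the Anthropic deployment.

```python
# Minimal sketch: listing active findings across all sources in an organisation
# with the Security Command Center client library. The org ID is a placeholder.
from google.cloud import securitycenter

client = securitycenter.SecurityCenterClient()

# A source ID of "-" means findings from every source in the organisation.
all_sources = "organizations/123456789/sources/-"

findings = client.list_findings(
    request={"parent": all_sources, "filter": 'state="ACTIVE"'}
)
for result in findings:
    print(result.finding.category, result.finding.resource_name)
```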
Thomas Kurian, the CEO of Google Cloud, emphasised the shared values of developing AI boldly and responsibly. He noted that the expanded partnership with Anthropic, built on years of collaboration, will bring AI to more people safely and securely.