OpenAI and Anthropic have agreed to share their AI models, both before and after public release, with the US AI Safety Institute. The institute, housed within the National Institute of Standards and Technology (NIST) and established through President Biden's 2023 executive order on AI, will offer the companies safety feedback to help improve their models. OpenAI CEO Sam Altman hinted at the agreement earlier this month.
“Safety is essential to fueling breakthrough technological innovation. With these agreements in place, we look forward to beginning our technical collaborations with Anthropic and OpenAI to advance the science of AI safety,” said Elizabeth Kelly, director of the US AI Safety Institute, in a statement.
→ Continue reading at Engadget