OpenAI and Anthropic have agreed to share AI models with the US AI Safety Institute before and after public release. The institute was established through a 2023 executive order from President Biden, and OpenAI CEO Sam Altman hinted at the agreement earlier this month. The institute will offer safety feedback to both companies to help improve their models, and Google is in discussions with the agency as well. Meanwhile, Google has rolled out updated chatbot and image generator models for Gemini.

Elizabeth Kelly, director of the US AI Safety Institute, emphasized that safety is essential to fueling technological innovation. The agency, part of the National Institute of Standards and Technology, aims to create guidelines and best practices for testing potentially dangerous AI systems, addressing risks that Vice President Kamala Harris highlighted in late 2023. The agreement, a Memorandum of Understanding, grants the agency access to each company's major new AI models for evaluation.

State regulators are also pursuing AI safety measures. A California bill, for example, would require safety testing for AI models that cost more than $100 million to develop, and would give the state attorney general enforcement power if AI developers do not comply.