Google has introduced SAIF (Secure AI Framework), a conceptual framework for securing AI technology. It is Google's latest effort to make AI systems safe and secure, as the rapid and unprecedented development of Artificial Intelligence poses unique cybersecurity risks.
Explaining why a security framework for AI development is needed today, Google's statement reads:
“A framework across the public and private sectors is essential for making sure that responsible actors safeguard the technology that supports AI advancements. As AI capabilities become increasingly integrated into products across the world, adhering to a bold and responsible framework will be even more critical.”
According to Google, SAIF draws on established security best practices. The framework revolves around six core elements:
- Expand strong security foundations to the AI ecosystem.
- Extend detection and response to bring AI into an organization’s threat universe (see the sketch after this list).
- Automate defenses to keep pace with existing and new threats.
- Harmonize platform level controls to ensure consistent security across the organization.
- Adapt controls to adjust mitigations and create faster feedback loops for AI deployment.
- Contextualize AI system risks in surrounding business processes.
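To make the second element more concrete, here is a minimal, hypothetical sketch of what routing LLM prompts through an organization's existing detection-and-response pipeline could look like. Everything in it (the `INJECTION_PATTERNS` list, the `screen_prompt` helper, the logger name) is an illustrative assumption, not part of SAIF or any Google API.

```python
# Hypothetical sketch: screening LLM prompts and feeding suspicious ones
# into the same logging/alerting stack used for conventional threats, so
# AI traffic enters the organization's "threat universe".
import re
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ai-threat-monitor")

# Toy signatures for prompt-injection attempts; a real deployment would
# rely on curated, continuously updated threat intelligence instead.
INJECTION_PATTERNS = [
    re.compile(r"ignore (all )?previous instructions", re.IGNORECASE),
    re.compile(r"reveal.*system prompt", re.IGNORECASE),
]

def screen_prompt(prompt: str) -> bool:
    """Return True if the prompt looks suspicious and should be
    escalated through the organization's incident pipeline."""
    for pattern in INJECTION_PATTERNS:
        if pattern.search(prompt):
            # Emit the event to the shared security log so analysts see
            # AI-related incidents alongside traditional ones.
            log.warning(
                "suspicious prompt at %s: %r",
                datetime.now(timezone.utc).isoformat(),
                prompt[:80],
            )
            return True
    return False

if __name__ == "__main__":
    print(screen_prompt("Summarize this article for me."))        # False
    print(screen_prompt("Ignore previous instructions and ..."))  # True
```

The point of the sketch is not the pattern matching itself but the integration: detections from AI-facing components flow into the monitoring and response tooling an organization already operates, rather than into a separate silo.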
Google plans to drive adoption of this framework across the industry. To that end, the company has taken several initiatives, such as fostering industry support for the framework, working directly with organizations, sharing insights about threats, expanding its bug hunter programs, and offering secure, open-source AI tools.