President Biden recently issued a sweeping executive order intended to shape the development and use of artificial intelligence in the United States. This historic order lays out principles and actions aimed at ensuring AI is safe, ethical, and responsibly deployed across sectors. What are the facts around the order and how might it impact various stakeholders?
Outlining the Order
The overarching goal of the executive order is to ensure AI is trustworthy by mitigating risks around safety and security. Key elements include:
- Requiring sharing of safety testing data on powerful AI systems with the government
- Setting standards for rigorous testing before public release of AI models
- Directing agencies to develop tools to detect fake content and authenticate official communications
- Calling for accelerated research on privacy-enhancing AI techniques
- Strengthening enforcement around algorithmic discrimination and civil rights
- Establishing consumer safeguards on AI use in areas like healthcare
- Supporting workers through guidance on AI's impacts on jobs
- Promoting innovation by expanding access to AI research resources
- Asserting American leadership on setting global AI technical standards
Who's Impacted and How
The order aims to balance risks and opportunities for a diverse set of stakeholders, from companies developing foundational AI models to individual consumers of AI technology. Across the board, the focus is on transparency from leading companies, privacy safeguards, equity, and broader access and education. Let's break it down further.
Impact on AI Companies
- Startups and small and medium-sized businesses (SMBs) may struggle with compliance costs associated with new testing and reporting mandates for powerful AI systems. However, expanded access to federal AI research resources and data under the order could help offset those costs and aid innovation.
- Larger technology companies and enterprises developing major foundational AI models will face direct requirements to share safety test results and other critical data with the government before releasing their systems publicly. This could slow product release timelines.
Impact on Individual Users
- The executive order focuses heavily on improving privacy protections and preventing harms from AI-enabled fraud and deception. This should benefit both individual and organizational users.
- The order also includes mandates related to AI use in high-impact sectors like healthcare and education, with the goal of increasing privacy safeguards for individuals.
Impact on Associations
- Associations stand to benefit from provisions aimed at promoting equity, non-discrimination, and privacy.
- Measures to prevent algorithmic bias and discrimination will help associations ensure AI is used fairly and ethically when interacting with members or making internal decisions.
- Directives to support adoption of AI in areas like healthcare research and education could open up new federal grants and support for associations to deploy AI in their domains.
- Access to new training programs and resources will help association staff build valuable skills. For mission-driven organizations, preparing workers for the AI era is critically important.
- For associations involved in standards development or credentialing, the order's support for developing technical standards for trustworthy AI could present partnership opportunities with government and industry.
A Step in the Right Direction?
While light on implementation details, the executive order sets an ambitious vision for responsible AI development. It establishes high-level principles and priorities rather than specific regulations, which appears to be a wise approach in a fast-moving domain like AI. The focus on safety, transparency, and technical standards is encouraging for building public trust. Work remains, but the order lays a constructive foundation for AI oversight that protects American interests while leaving room for continued innovation.
November 16, 2023