SCAI Question 3
Governance Structure & Regulatory Measures
What are optimal governance structures and regulatory measures for AI?
Context & Assumptions
Governance and regulation have a key role to play in shaping the direction of development and deployment across the spectrum of AI technologies and applications. Governments should provide the conditions both for confident AI innovation and adoption and for preventing AI-related harms. The purpose of public policy is to protect the public interest, and we assume that effective, efficient, and legitimate governance and regulation is a precondition for the trusted and sustainable advancement of AI, not an obstacle to innovation. At the same time, the multi-faceted, complex, and border-crossing nature of AI poses challenges for achieving effective, efficient, and legitimate governance structures and regulatory measures.
Success requires coherence and integration of governance structures and regulatory measures across multiple dimensions. These should ideally encompass different sectors and regulatory remits, various points of intervention across the AI lifecycle and AI value chains, different types of regulatory tools, and several levels of governance (municipal, national, regional, and international/global).
In order to achieve this integration, and to do so with legitimacy, it is important that processes of designing governance structures and regulatory measures are inclusive of perspectives from the diverse groups in society whose interests are at stake and whose expertise can help identify solutions, including users, businesses, policymakers, civil society, and academia.
Legitimacy also depends on ensuring checks and balances, enforcement and accountability mechanisms, as well as the prevention of abuses of power by those who deploy and use AI.
Question
What are optimal governance structures and regulatory measures for AI?
Answering this question involves determining the appropriate combination of structures and measures along several dimensions:
- Legitimacy: Which parties have legitimacy to establish governance mechanisms in a given jurisdictional context (government vs. non-government actors; collective or representative organisations)?
- Sectors and regulatory remits: What is the optimal combination of governance and regulation designed to apply to AI horizontally vs. in specific sectors vs. in specific use cases/applications?
- Types of governance tools: What is the optimal combination of different regulatory levers, such as legislation, mandatory codes of practice, voluntary frameworks, standards and principles, public procurement rules, and other tools which can be used to achieve desired practices and outcomes?
- Points of intervention: What is the optimal combination of structures and measures targeting different stages of the AI lifecycle and different segments of the AI value chain? Should these be implemented as ex ante requirements (e.g. preconditions for licensing or regulatory approval) or as ex post actions (e.g. requirements to take remedial action after the fact)?
- Levels of governance: What is the optimal combination of municipal, national, regional, and international/global measures? To what extent is international/global alignment or harmonisation desirable and achievable (as opposed to instances of divergence that are unavoidable due to fundamental differences in political ideologies and values, and respect for state sovereignty)?
- Accountability: What do enforcement and accountability look like? Who should be responsible for enforcement, and what mechanisms are appropriate (e.g. criminal sanctions, regulatory penalties, access blocks and bans)?
Indicators of Progress
There are some significant challenges in determining the optimal governance or regulatory approach for AI development and use. These include:
- A lack of transparency and access to information on how AI models are developed, governed, and deployed, especially in the private sector.
- Failures of coordination between the various agencies or departments in governments or organisations that exercise governance or regulatory functions or processes.
- A lack of stakeholder inclusion in developing governance and regulatory frameworks.
- A lack of clear and effective reporting, enforcement, and accountability mechanisms.
- Abuse of, or misalignment with, incentives, whether at the political or commercial level.
- Insufficient resourcing, capabilities, and expertise within governments or organisations for designing and implementing governance structures or regulatory measures, and for ensuring compliance.
- Regulatory lag, given the ever-changing and fast-moving nature of AI.
While these challenges are not insurmountable, careful attention must be paid to overcoming them so that they do not become key stumbling blocks to progress in answering the question.
Possible strategies to make progress on this question include:
- Systematic mapping and gap analysis of relevant existing laws and legal principles, governance structures, and regulatory measures in individual and regional jurisdictions, together with guidance on their application.
- Adopting new laws and regulations to fill the gaps that existing law does not cover.
- Establishing new forms of collaboration and coordination between regulatory bodies across different sectors, remits and jurisdictions.
- Systematic approaches to developing scorecards and evaluation frameworks for governments and companies.
- Designing reporting, liability and accountability schemes.
- Advancing development of international standards that enable interoperability between regulatory requirements and governance frameworks established in different jurisdictions.
Measures of progress in answering the question include:
- Emergence of clear, effective, efficient, and legitimate governance frameworks for the use of AI, with appropriate scrutiny (including by the media), accountability, and checks and balances on the use of AI.
- Implementation of measures specifically aimed at protecting the public interest in the context of AI development and deployment, including product safety, citizen rights, and rules relating to the use of AI in providing public services.
- Whole-of-government and cohesive approaches in policymaking and regulation, involving all relevant government departments and regulatory agencies.
- Increased transparency of and access to information on how private sector models are developed and their governance and implementation frameworks, as well as on governmental development and use of AI.
- More countries adopting explicit and clear positions on AI governance and regulation, whether by explaining how existing laws and regulations apply to AI or by introducing new laws and regulations for AI.
- Increased, and new forms of, international collaboration, cooperation, and capacity-building on AI regulation, standards, and other governance frameworks, mechanisms and tools.
- Development and promulgation of international standards that can enable interoperability between frameworks and approaches in different jurisdictions.