SCAI Question 7
Equitable Access, Control & Fair Competition
Context & Assumptions
Recent developments in AI have demonstrated tremendous potential for both positive and negative impacts on society, organisations and individuals. AI systems currently rely on large-scale compute, advanced models, and extensive training data. These resources are inequitably distributed, resulting in a concentration of power that confers agency on specific actors whose goals may be misaligned with broader societal objectives. Areas of misalignment include adequate recognition of risks, the creation of safe systems, and the use of AI for social benefit or public good rather than profit maximisation. On the last point, we note that the boundary between commercialisation and basic research is not distinct. One assumption driving research work, which may not be well founded, is that research can feed into a product that will in turn generate revenue and other resources to feed back into research. However, the majority of this research takes place in proprietary labs within companies that ship products and have profit as their central motivation.
Question
Where in the AI ecosystem should we ensure equitable access, control and fair competition? How should we address these concerns?
Concerns about concentration of power apply to a range of issues, including but not limited to price, quality, volatility, restrictions on access (with particular attention to which communities might be marginalised), restrictions on developing capability (e.g., training or hardware limitations), and the ability to shape outcomes and outputs. There are also emerging externalities (e.g., situations of great individual benefit that result in collective harm) which may be exacerbated by such concentrations. We also note often-unrecognised benefits of more open access both to the resources required to build AI models and to the models themselves, including broader economic growth across sectors and broader perspectives in building and deployment. At the same time, some things should remain closed or not be broadly accessible, e.g., personally identifiable information (PII), other sensitive information, and healthcare data.
Indicators of Progress
Given these considerations, we recommend two pathways to mitigate the effects of such concentration of power. First, develop a more democratic system that enables broader access to the key resources necessary to develop these technologies (i.e., lower barriers to entry for new entrants). This admits a wider range of actors of varying size and capability into the field, preventing or slowing the concentration of power. Second, develop regulatory and non-regulatory strategies to reduce harm in situations where power is concentrated. Non-regulatory strategies could include encouraging norms that support desired behaviours, such as transparency in model-building to manage risk. These strategies are not unfamiliar: they have been used in the past to regulate other industries with similar potential impact, such as public utilities and telecommunications. For both pathways, potential areas for intervention, or chokepoints, include compute asymmetries (e.g., compute fabs, chip architecture, and optimisation for specific labs/models by chipmakers) and datasets (e.g., the lack of representative datasets in underrepresented languages).
Challenges to operationalising these strategies include deeply held ideological beliefs about how the market should be structured, as well as tensions and tradeoffs among risks (e.g., limited vs broader perspectives, more vs less control). Evolving uses and emerging threats also require nimbleness in adapting regulation.
Indicators of success would include:
- Emergence of a mix of independent AI providers at different scales throughout the market, i.e., both small and large firms.
- Large firms are well-regulated to minimise negative impact.
- Regulation differentiates between applications based on the implications of their use (e.g., nuclear power vs nuclear weapons).
- Policy makers apply a clear framework when deciding how to regulate different sectors/areas of the AI stack.