SCAI Question 10
Combating Mis/disinformation Campaigns
What are the appropriate speed bumps and incentives for content channels to reduce the negative impact of mis/disinformation campaigns?
Context & Assumptions
Mis/disinformation is an existing problem, but with the development of AI its volume and sophistication will increase. This is socially corrosive and degrades shared trust between citizens and institutions. The pervasiveness and velocity of social media content distributed through content channels have created conditions in which a generation relies on these channels to shape its understanding of the world.
This problem is hard to address because the techniques used in mis/disinformation campaigns (especially multi-step emotional manipulation) are hard to detect. While AI tools lower the cost of, and widen access to, generating and proliferating disinformation, we are nearing a point where we cannot discern whether the source of a piece of information is human or bot, nor distinguish true content from fake.
Question
What are the appropriate speed bumps and incentives for content channels to reduce the negative impact of mis/disinformation campaigns?
- What are the technical solutions to support these speed bumps/incentives?
- What are the trade-offs between public safety and freedom of content generation (including the freedom to misrepresent the authority of fact)?
- How do trusted institutions maintain trust with citizens in a world of increased mis/disinformation?
Indicators of Progress
While there is no known method to fully solve this problem, mitigation measures are possible. We can consider:
- Establish a digital identity system to allow for tracking sources of information.
- Promote third-party services that monitor content channels to flag disinformation, and correct it through decentralised systems that can tackle this problem at scale.
- Establish legal requirements for content channels to label AI-generated video as a temporary measure, while enabling widespread adoption of digital signatures embedded during hardware manufacturing (e.g., in-camera signatures that authenticate images at the point of capture).
- Increase public education and awareness of how people unknowingly spread mis/disinformation.
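The hardware-signature measure above can be sketched in outline. The snippet below is a minimal illustration, not a real device implementation: it uses a symmetric HMAC as a stand-in for the asymmetric signature a real camera would compute inside secure hardware, and the key and function names are hypothetical.

```python
import hashlib
import hmac

# Hypothetical device key. A real camera would hold a private signing key
# in secure hardware and the manufacturer would publish the public key.
DEVICE_KEY = b"example-key-provisioned-at-manufacture"

def sign_image(image_bytes: bytes) -> str:
    """Produce a signature over the raw image bytes at capture time."""
    return hmac.new(DEVICE_KEY, image_bytes, hashlib.sha256).hexdigest()

def verify_image(image_bytes: bytes, signature: str) -> bool:
    """Check that the image has not been altered since capture."""
    expected = hmac.new(DEVICE_KEY, image_bytes, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)

original = b"...raw image bytes..."
sig = sign_image(original)
assert verify_image(original, sig)             # untouched image verifies
assert not verify_image(original + b"x", sig)  # any edit breaks the signature
```

The property this illustrates is tamper evidence: any modification of the image invalidates the signature. A deployed scheme (e.g., along the lines of content-provenance standards) would use public-key signatures so that anyone, not just the key holder, can verify authenticity.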
An additional challenge faced by non-English-speaking countries is the technical difficulty of detecting mis/disinformation in non-English languages, since many existing models are trained on and optimised for English datasets. Detection algorithms therefore need to be trained on non-English datasets in order to detect mis/disinformation accurately on global platforms. Another challenge is the cost of fact-checking posts at scale, which tends to be far higher than the cost of generating fake posts with AI; this asymmetry worsens as AI-generated content becomes ever cheaper to disseminate.
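One architectural response to the language gap is to identify a post's language and route it to a detector trained on data in that language, falling back to a default model otherwise. The sketch below is purely illustrative: the marker-word heuristic and the placeholder detectors are hypothetical stand-ins for trained language-identification and mis/disinformation models.

```python
def detect_language(text: str) -> str:
    """Stand-in heuristic; a real system would use a trained
    language-identification model."""
    spanish_markers = {"el", "la", "que", "de", "una"}
    words = set(text.lower().split())
    return "es" if words & spanish_markers else "en"

def score_en(text: str) -> float:
    return 0.1  # placeholder score from an English-trained detector

def score_es(text: str) -> float:
    return 0.1  # placeholder score from a Spanish-trained detector

DETECTORS = {"en": score_en, "es": score_es}

def misinformation_score(text: str) -> float:
    """Route the post to a language-matched detector."""
    lang = detect_language(text)
    detector = DETECTORS.get(lang, DETECTORS["en"])  # default fallback
    return detector(text)
```

The design point is the routing layer itself: adding coverage for a new language means training and registering one more detector, rather than retraining a single English-centric model.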
There are few known systems for monitoring the flow of misinformation. We anticipate that both the public and private sectors will need to invest in R&D to fill the void, and law enforcement will need to expand its investigative toolkit. Given the nascence of the problem, global sharing of experiences would shorten the learning curve. Potentially, AI models will be trained specifically to address mis/disinformation.