The online domain is increasingly exploited by terrorists for propaganda and recruitment. One way to counteract this phenomenon is to use AI-based tools, but from a human rights standpoint there must be a framework in place to guide their use while respecting fundamental rights. In the wake of the newly adopted AI Act, this blog post explores the balance between using AI for public safety and protecting fundamental rights, with a focus on the EU's new legal framework. It emphasizes the need for a human-centric approach in the development and use of AI systems, especially those classified as high-risk.
Firstly, let's clarify which systems are classified as high-risk: the classification depends on the potential impact of the respective AI system on safety and fundamental rights. Examples include AI systems that provide remote biometric identification, AI systems used to filter job applicants or decide on employees' promotion and demotion, and AI systems that determine citizens' access to public services. With respect to our topic – AI systems in the law enforcement field – systems tasked with monitoring online content for terrorist behaviour would almost certainly fall into this 'high-risk' category under Annex III, point 6(e) of the AI Act.
Such high-risk AI systems are subject to stringent requirements to ensure their safety, transparency, and accountability, which is all the more important in the context of the fight against terrorism. Some of these requirements include:
- Initial Risk Assessment: Providers should conduct an initial risk assessment to determine if the AI system qualifies as high-risk. This involves evaluating the system’s intended purpose, potential misuse, and impact on safety and fundamental rights.
- Third-Party Conformity Assessment: High-risk AI systems should undergo a third-party conformity assessment before they can be put into service. This assessment verifies that the AI system complies with the EU AI Act’s requirements.
- Ongoing Monitoring and Incident Reporting: Providers should implement systems for ongoing monitoring and incident reporting. This ensures that any issues or risks that arise during the AI system’s operation are promptly addressed.
- Quality Management System: Providers should establish a quality management system that includes procedures for design, development, testing, and maintenance of the AI system. This system helps ensure the AI system’s reliability and safety.
- Documentation and Record-Keeping: Comprehensive documentation and record-keeping are required. This includes maintaining records of the AI system's design, development, risk management processes, and operational data; a minimal logging sketch follows this list.
- Transparency and User Information: Providers must ensure transparency by providing clear information to users about the AI system’s capabilities, limitations, and the data it processes. This helps users understand how to use the AI system safely and effectively.
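To make the record-keeping and monitoring obligations more tangible, here is a minimal sketch of how a provider might log each automated decision of a content-flagging system. It is an illustration only: the AI Act does not prescribe any particular format, and all names here (`DecisionRecord`, `log_decision`, the model version) are hypothetical.

```python
import hashlib
import json
import logging
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

# Hypothetical append-only audit log; the AI Act requires that design,
# risk-management and operational data be recorded, not this exact format.
logging.basicConfig(filename="ai_audit.log", level=logging.INFO)

@dataclass
class DecisionRecord:
    timestamp: str        # when the decision was made (UTC)
    model_version: str    # which model produced the output
    input_hash: str       # hash of the input, so raw content need not be stored
    score: float          # the model's confidence that the content is terrorist material
    flagged: bool         # whether the content was actually flagged
    human_reviewed: bool  # whether a human operator confirmed the decision

def log_decision(model_version: str, content: str, score: float,
                 threshold: float = 0.9, human_reviewed: bool = False) -> DecisionRecord:
    """Record one automated decision for later audit and incident analysis."""
    record = DecisionRecord(
        timestamp=datetime.now(timezone.utc).isoformat(),
        model_version=model_version,
        input_hash=hashlib.sha256(content.encode()).hexdigest(),
        score=score,
        flagged=score >= threshold,
        human_reviewed=human_reviewed,
    )
    logging.info(json.dumps(asdict(record)))
    return record

# Example: log one classification outcome.
log_decision("terror-content-clf-1.3", "example post text", score=0.97)
```

A log of this kind serves several of the requirements at once: it documents operational data, supports ongoing monitoring and incident reporting, and gives regulators and affected individuals something concrete to audit.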
In addition to these requirements stemming directly from the law, a possible way to enhance the alignment between fundamental rights and the AI Act is to draw on ethical principles. The Ethics Guidelines for Trustworthy AI, developed by the European Commission's High-Level Expert Group on Artificial Intelligence, offer such a framework for AI-based tools that aim to combat terrorism. The guidelines identify three essential components of trustworthy AI, which should be:
- Lawful, meaning it complies with all applicable laws and regulations,
- Ethical, meaning it respects ethical values and principles, and
- Robust, meaning it is designed and deployed in a way that minimises potential harm from both a technical and a social perspective.
In the fight against terrorism it is vital to recognise that technology brings societal benefits, yet we should also acknowledge the substantial negative consequences that inconsistent or incorrect outcomes can have for both individuals and communities. Such undesired impacts could include:
- Bias and Discrimination: AI systems, such as terrorist content identification algorithms, can perpetuate existing biases if they are trained on biased data.
- Privacy Violations: AI systems used for content and traffic monitoring, like behavioral analysis, can infringe on individuals’ privacy. Continuous monitoring and data collection can lead to a loss of anonymity and increased surveillance of everyday activities.
- False Positives and Negatives: AI systems can make errors, leading to false positives (incorrectly identifying someone as a suspect) or false negatives (failing to identify an actual suspect). These errors can result in wrongful arrests or missed opportunities to prevent crimes; the sketch after this list shows how such error rates can be measured.
- Lack of Accountability: Decisions made by AI systems can be opaque, making it difficult to understand how a particular decision was reached. This lack of transparency can hinder accountability and make it challenging to contest or appeal decisions.
- Infringement on Human Rights: The use of AI in law enforcement can lead to infringements on human rights, such as freedom of expression when lawful speech is erroneously flagged or removed as terrorist content.
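To make the third risk above concrete, the following sketch computes false positive and false negative rates for a hypothetical terrorist-content classifier. The labels and predictions are invented purely for illustration.

```python
def error_rates(y_true: list[int], y_pred: list[int]) -> dict[str, float]:
    """Compare ground-truth labels with predictions (1 = terrorist content, 0 = benign)."""
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)  # benign, but flagged
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)  # harmful, but missed
    negatives, positives = y_true.count(0), y_true.count(1)
    return {
        "false_positive_rate": fp / negatives if negatives else 0.0,
        "false_negative_rate": fn / positives if positives else 0.0,
    }

# Ten hand-labelled test items: one benign post is wrongly flagged
# and one genuinely harmful post slips through.
y_true = [0, 0, 0, 0, 0, 0, 0, 1, 1, 1]
y_pred = [0, 0, 0, 0, 0, 0, 1, 1, 1, 0]
print(error_rates(y_true, y_pred))
# {'false_positive_rate': 0.142..., 'false_negative_rate': 0.333...}
```

Even at this small scale the asymmetry between the two errors is visible: a false positive exposes an innocent person to suspicion, while a false negative means a threat goes undetected, which is why both rates should be tracked and reported rather than overall accuracy alone.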
To minimize these risks, developers of AI tools could adopt suitable and balanced actions, including:
- The use of robust datasets during the design, testing, and evaluation stages to prevent replication of biases or discriminatory patterns (a minimal subgroup audit sketch follows this list),
- Recruiting a diverse team for designing and building the respective AI system,
- Ensuring human participation not just in a supervisory role but as active decision-makers,
- Rigorously recording all operations by humans and AI for enhanced transparency,
- Providing both initial and ongoing training on interpreting AI-generated results and recognizing the system's limitations,
- Cultivating critical awareness about the potential harmful effects of flawed decisions, and so forth.
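As a companion to the first bullet above, the sketch below shows one simple way a development team might audit a classifier for disparate error rates across groups (for example, language communities). The groups and figures are hypothetical.

```python
from collections import defaultdict

def per_group_fpr(records: list[tuple[str, int, int]]) -> dict[str, float]:
    """records holds (group, true_label, predicted_label); 1 = flagged as terrorist content."""
    fp = defaultdict(int)         # benign items wrongly flagged, per group
    negatives = defaultdict(int)  # benign items seen, per group
    for group, y_true, y_pred in records:
        if y_true == 0:
            negatives[group] += 1
            if y_pred == 1:
                fp[group] += 1
    return {g: fp[g] / n for g, n in negatives.items() if n}

# Invented evaluation data: group B's benign posts are flagged twice as often
# as group A's, a signal that the training data should be re-examined.
records = [
    ("A", 0, 0), ("A", 0, 0), ("A", 0, 1), ("A", 1, 1),
    ("B", 0, 1), ("B", 0, 1), ("B", 0, 0), ("B", 1, 1),
]
print(per_group_fpr(records))  # {'A': 0.33..., 'B': 0.66...}
```

A marked gap between groups does not by itself prove discrimination, but it is exactly the kind of measurable signal that the dataset, team-diversity, and human-oversight measures above are meant to surface and correct.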
Being able to demonstrate at any time that an AI system incorporates ethical principles is crucial to affirming that it is being used responsibly. What is more, this post underscores that employing AI tools for counterterrorism demands conscientious creation and ongoing attention beyond their market release. Additionally, as legislation and ethics evolve in this domain, a proactive approach is essential. Both developers and users should foster and uphold a discerning awareness among management and staff to strike an appropriate balance between public safety and human rights.
The drafting of this blog post is funded by the European Union. Views and opinions expressed are however those of the author(s) only and do not necessarily reflect those of the European Union or the European Commission. Neither the European Union nor the granting authority can be held responsible for them.