The online domain is increasingly exploited by terrorists for propaganda and recruitment. One way to counteract this phenomenon is to use AI-based tools, but from a human rights standpoint there must be a framework set in place to guide their use while respecting fundamental rights. In the wake of the newly adopted AI Act, this blog post explores the balance between using AI for public safety and protecting fundamental rights, with a focus on the EU's new legal framework. It emphasizes the need for a human-centric approach in the development and use of AI systems, especially those classified as high-risk.

Firstly, let’s clarify which systems are classified as high-risk – this classification depends on the potential impact of the respective AI system on safety and fundamental rights. Examples include AI systems that provide remote biometric identification, AI systems used to filter job applicants or to decide on employees’ promotion and demotion, and AI systems that determine citizens’ access to public services. With respect to our topic – AI systems in the law enforcement field – AI systems tasked with monitoring online activity for terrorist behaviour would almost certainly fall into this ‘high-risk’ category, as per Annex III, point 6(e) of the AI Act.

Such high-risk AI systems are subject to stringent requirements to ensure their safety, transparency, and accountability, which is all the more important in the context of fighting terrorism. Some of these requirements include:

  • Establishing a risk management system that runs throughout the system’s lifecycle (Article 9 of the AI Act),
  • Data governance practices ensuring that training, validation, and testing datasets are relevant, sufficiently representative, and, to the best extent possible, free of errors (Article 10),
  • Technical documentation and automatic record-keeping (logging) of events (Articles 11 and 12),
  • Transparency and the provision of clear instructions for use to deployers (Article 13),
  • Effective human oversight (Article 14), and
  • An appropriate level of accuracy, robustness, and cybersecurity (Article 15).

In addition to these requirements stemming directly from the law, a possible way to enhance the alignment between fundamental rights and the AI Act is to draw on ethical principles. The Ethics Guidelines for Trustworthy AI, developed by the European Commission’s High-Level Expert Group on Artificial Intelligence, offer such a framework for AI-based tools that aim to combat terrorism. The guidelines identify three essential components of trustworthy AI:

  • Lawful, meaning it should comply with the relevant legislation,
  • Ethical, meaning it should respect ethical values and principles, and
  • Robust, meaning it should be designed and deployed in a way that minimises potential harm from both technical and social perspectives.

In the fight against terrorism, understanding that technology has societal benefits is vital, yet we should acknowledge the substantial negative consequences that can result from inconsistent or incorrect outcomes impacting both individuals and communities. Such undesired impacts could include:

  • Wrongful flagging of innocent individuals as potential terrorists (false positives), with serious repercussions for their private and professional lives,
  • Discriminatory outcomes, where particular communities are disproportionately monitored or targeted,
  • A chilling effect on freedom of expression, as lawful content may be removed or users may self-censor, and
  • Erosion of privacy through large-scale monitoring of online activity.

To minimize these risks, developers of AI tools could adopt suitable and balanced measures, including:

  • The use of robust datasets during the design, testing, and evaluation stages to prevent replication of biases or discriminatory patterns,
  • Recruiting a diverse team for designing and building the respective AI system,
  • Ensuring human participation not just in a supervisory role but as active decision-makers,
  • Rigorously recording all operations by humans and AI for enhanced transparency (a minimal sketch of such an audit trail follows this list),
  • Providing both initial and ongoing training on interpreting AI-generated results and recognizing the system's limitations,
  • Cultivating critical awareness about the potential harmful effects of flawed decisions, and so forth.
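To make the points about record-keeping and human decision-making more concrete, below is a minimal Python sketch of an append-only audit trail in which the AI system only proposes a flag and a human analyst records the final decision. Everything here is a hypothetical illustration: the names (AuditTrail, record_event, review_flag), the JSON-line log format, and the actor labels are assumptions made for the example, not something prescribed by the AI Act or drawn from any real product.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
import json


@dataclass
class AuditTrail:
    """Append-only log of every action taken by the AI system or a human."""
    path: str  # file to which JSON-line records are appended

    def record_event(self, actor: str, action: str, details: dict) -> None:
        # One record per event: who acted (human or AI), what they did, and when.
        record = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "actor": actor,    # e.g. "model:v2.1" or "analyst:jdoe"
            "action": action,  # e.g. "flagged", "confirmed", "dismissed"
            "details": details,
        }
        with open(self.path, "a", encoding="utf-8") as f:
            f.write(json.dumps(record) + "\n")


def review_flag(trail: AuditTrail, content_id: str, model_score: float,
                analyst_id: str, analyst_decision: str) -> str:
    """The AI system proposes; the human analyst decides. Both steps are logged."""
    trail.record_event("model:v2.1", "flagged",
                       {"content_id": content_id, "score": model_score})
    trail.record_event(f"analyst:{analyst_id}", analyst_decision,
                       {"content_id": content_id})
    return analyst_decision


# Example: the model flags a post; the analyst reviews it and dismisses the flag.
trail = AuditTrail(path="audit.log")
review_flag(trail, content_id="post-4821", model_score=0.91,
            analyst_id="jdoe", analyst_decision="dismissed")
```

In a real deployment such records would of course need tamper-resistant storage and retention periods consistent with the record-keeping obligations of Article 12 of the AI Act; the sketch only illustrates the principle that the AI proposes, a human decides, and both steps leave a reviewable trace.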

Being able to demonstrate at any time that an AI system incorporates ethical principles is crucial to affirming that it is being used responsibly. What is more, this blog post underscores that employing AI tools for counterterrorism demands conscientious creation and ongoing attention beyond their market release. Additionally, as legislation and ethics evolve in this domain, a proactive approach is essential. Both developers and users should foster and uphold a discerning awareness among management and staff to strike an appropriate balance between public safety and human rights.

The drafting of this blog post is funded by the European Union. Views and opinions expressed are however those of the author(s) only and do not necessarily reflect those of the European Union or the European Commission. Neither the European Union nor the granting authority can be held responsible for them.