Law and Internet Foundation recently participated in the scientific conference held by the Command and Staff Faculty at the National Defence College of Bulgaria, where our expert Rada Stoilova presented a paper she co-authored with Polina Petrova, titled “Ethical & Legal Responsibility for Artificial Intelligence: The Regulatory Framework for High-Risk AI Systems.” The paper delves into the complex regulatory landscape of the AI Act, with a particular focus on high-risk AI systems.
The paper explores the stringent legal and ethical requirements imposed on both providers and deployers of high-risk AI systems. It emphasises the necessity of robust risk management, adherence to ethical principles, and transparency and accountability in the use of AI technologies. By examining both ethical guidelines and legal frameworks, the paper aims to provide a comprehensive understanding of how AI systems can be responsibly developed and regulated to minimise potential adverse effects while safeguarding human rights and public safety. The TESTUDO project, on which Law and Internet Foundation has been working since last year, is used as an example of good practice in the field of security.
The presentation highlighted the paper's interdisciplinary approach, discussing the ethical and legal aspects that arise in the context of AI. This approach allows AI systems to be aligned with human well-being, respect for human autonomy, privacy, social responsibility, transparency, and security. The paper underscores the importance of minimising potential adverse effects when developing and deploying technological solutions, and explains how these objectives can be achieved.
You can read the full article in English (pp. 390–399) at the following link: "Modern Aspects of Security – Challenges, Approaches, Solutions" (2024).
