The adoption of artificial intelligence (AI) in sensitive decision-making contexts, such as migration and refugee rights, raises critical legal, ethical, and social questions. When AI systems cause harm or violate human rights, it becomes essential to clarify who should be accountable and what redress mechanisms exist for victims. Globally, legal systems are facing unprecedented challenges in adapting to rapid technological advancements while seeking to protect fundamental human rights.

Existing Legal Framework
Current regulations are struggling to keep pace with the use of AI in decision-making processes concerning migration and refugee management. The European Union’s General Data Protection Regulation (GDPR) establishes transparency requirements and guarantees for data subjects but offers no specific guidance for cases where AI directly shapes complex legal and administrative decisions. While the GDPR grants data subjects the right to obtain human intervention in solely automated decision-making (Article 22), it does not clarify how this right applies to asylum or detention decisions.
From the perspective of international human rights law, using AI for border control raises issues concerning the right to privacy, data protection, and non-discrimination. The European Convention on Human Rights (ECHR) provides general protection against discrimination but sets no AI-specific standards, creating gaps in its application. This regulatory void becomes apparent when considering the risks of algorithmic biases, which are often unintentional yet carry severe consequences for vulnerable minorities. While these laws may appear incomplete, lawmakers face the difficult task of balancing technological advancement against the wide range of risks such systems pose to individuals.
Assigning Liability
Assigning liability in AI-related damage cases is complex. Options include holding the company that developed the AI liable, the public entities that deploy it, or both. Current approaches often rely on “concurrent liability” between system developers and users, yet this introduces significant challenges in determining causation and intent. Individual liability is difficult to establish, especially when decisions stem from complex, opaque systems. Tracing a causal link between an AI action and the harm suffered by a victim often requires sophisticated technical expertise, making it difficult to identify the party directly responsible for algorithmic errors. Some experts have proposed creating new forms of liability, such as “AI-induced harm liability,” as suggested by researchers at the Berkman Klein Center.
Compensation for Damages
The expansion of AI raises legal challenges, particularly in civil liability, owing to AI’s autonomy and unpredictability. While some national regulations provide compensation for AI-related damages, international frameworks are still developing, with ongoing UN and EU efforts hindered by implementation barriers. The European Parliament has proposed a civil liability regime for AI, though existing remedies often lack accessibility, especially for vulnerable groups such as refugees. The EU AI Act and the accompanying updates to liability directives aim to improve safety, transparency, and compensation, including “strict liability” provisions for high-risk applications that would allow victims to claim damages without proving direct fault.
Case Studies
AI use in migrant control has raised substantial concerns about human rights and procedural fairness, especially regarding potential discrimination and transparency deficits. The EU-funded iBorderCtrl pilot project, which included an “emotion detection” system analysing facial expressions and behaviour as potential deceit indicators, is a case in point. Algorithms used for profiling migrants or asylum seekers may rely on culturally biased and scientifically questionable assumptions, as highlighted by European Digital Rights (EDRi) and other digital rights groups. Such technology is built on behavioural interpretations rooted in Western cultural models, which, according to recent studies, may not be applicable to individuals from diverse backgrounds.
In addition to cultural bias risks, AI systems used in Europe and the United States to calculate migrant risk scores present accuracy and privacy concerns. For instance, the U.S. risk scoring system for asylum seekers relies on large amounts of personal data that may be collected and analysed without the individual's full understanding of its purpose and use. This invasive surveillance approach raises data protection issues and GDPR compliance questions in Europe, as these assessments are often unverifiable and insufficiently justified, making it hard to contest errors or biases.
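To make the contestability problem concrete, consider a minimal sketch of how an opaque weighted risk score can drive a decision that the affected person has no practical way to verify or rebut. Everything here is hypothetical: the feature names, weights, and threshold are invented for illustration and do not describe any real system.

```python
# Hypothetical illustration only: feature names, weights, and the threshold
# are invented and do not reflect any real risk-assessment system.

RISK_WEIGHTS = {
    "country_of_origin_flag": 0.35,  # proxy features like this can encode bias
    "prior_border_crossings": 0.25,
    "social_media_signal": 0.20,     # often gathered without informed consent
    "age_bracket_flag": 0.20,
}
DETENTION_THRESHOLD = 0.5  # an arbitrary cut-off the applicant never sees

def risk_score(applicant: dict) -> float:
    """Weighted sum over opaque features; no rationale is recorded."""
    return sum(w * applicant.get(f, 0.0) for f, w in RISK_WEIGHTS.items())

def decide(applicant: dict) -> str:
    # The output is a bare label: nothing records which feature drove the
    # decision, so the person concerned has no concrete finding to contest.
    return "detain" if risk_score(applicant) >= DETENTION_THRESHOLD else "release"

print(decide({"country_of_origin_flag": 1.0, "social_media_signal": 1.0}))
# -> "detain", with no stated reason the affected person can examine
```

The point of the sketch is structural: because only the final label is exposed, neither the applicant nor a reviewing authority can reconstruct which inputs were decisive, which is precisely why such assessments are described as unverifiable and insufficiently justified.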
Human rights organisations and associations such as Access Now and Statewatch emphasise that these technologies compromise migrant privacy and risk violating fundamental principles of asylum and immigration, which require assessments based on objective evidence and an understanding of individual needs. The possibility that acceptance or rejection decisions could be influenced by algorithmic errors poses a real risk of rights violations, as illustrated by cases of classification errors based on skin colour or gender, as noted by EDRi.
Ethical Considerations
The ethical dimensions of AI in immigration decisions, including transparency, fairness, and accountability, must be addressed in any legislative effort to regulate AI responsibility. Algorithms may embed implicit biases that lead to discrimination against specific groups, and victims of such biases may face challenges in seeking compensation. Therefore, establishing a regulatory framework that integrates ethical principles into AI system design and management, as proposed by the AI Now Institute, is essential.
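One concrete safeguard such a framework could mandate is a periodic disparate-impact audit of decision outcomes. The sketch below is illustrative only: the decision log is fabricated, and the conventional “four-fifths” threshold is used simply as an example of an audit trigger, not as a legal standard for asylum decisions.

```python
from collections import defaultdict

# Fabricated decision log of (group label, decision) pairs. A real audit
# would draw on actual case outcomes under strict privacy safeguards.
decisions = [
    ("group_a", "approve"), ("group_a", "approve"), ("group_a", "reject"),
    ("group_a", "approve"), ("group_b", "reject"), ("group_b", "reject"),
    ("group_b", "approve"), ("group_b", "reject"),
]

def approval_rates(log):
    counts = defaultdict(lambda: [0, 0])  # group -> [approvals, total]
    for group, outcome in log:
        counts[group][1] += 1
        if outcome == "approve":
            counts[group][0] += 1
    return {g: approved / total for g, (approved, total) in counts.items()}

rates = approval_rates(decisions)  # {'group_a': 0.75, 'group_b': 0.25}
ratio = min(rates.values()) / max(rates.values())  # disparate-impact ratio

# The "four-fifths rule" (ratio < 0.8) is a rough but widely used audit flag.
if ratio < 0.8:
    print(f"Potential disparate impact (ratio={ratio:.2f}); escalate for human review")
```

An audit of this kind does not prove discrimination; it flags statistical disparities that warrant human review, which is exactly the sort of routine check regulators could require of high-risk systems.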
International cooperation is crucial to addressing the challenges posed by AI. The UN and the EU are promoting a global dialogue, with initiatives aimed at creating common standards to regulate AI use in the public sector and migration processes. A promising development is the “Rome Call for AI Ethics,” which encourages governments to work together to ensure that AI respects human rights. Maintaining public trust is equally vital: transparency in decision-making processes, adequate human oversight, and clear disclosure of AI functionalities can help reassure the public. Reporting in MIT Technology Review, for example, highlights how transparency can restore trust in AI systems that affect vulnerable individuals’ lives.
Conclusion and Future Developments
The evolution of AI offers opportunities but also poses complex challenges for legal liability and respect for human rights. The future of AI regulation, especially in sensitive areas like migration, will require constant regulatory updates and a balance between innovation and the protection of fundamental rights.
As AI systems continue to improve in predictive and autonomous capabilities, it will be essential to develop robust accountability mechanisms to ensure vulnerable individuals’ rights are upheld and to provide justice for those harmed by algorithmic biases or errors.
Clear principles must also be established at the international level to ensure the protection of all individuals, regardless of citizenship. Global collaboration and the adoption of common guidelines are key steps toward creating a legal system that not only fosters innovation but also effectively safeguards the rights of individuals harmed by new technologies.
To protect the rights of vulnerable groups, including refugees and migrants, it is critical to prioritise their rights in both AI design and regulation. AI systems have the potential to perpetuate and even increase discrimination if not carefully managed, making it essential to establish strong, enforceable standards for civil liability and accountability.
A comprehensive framework prioritising refugee rights is needed to guard against AI misuse and provide accessible legal recourse for harms such as wrongful asylum denials. Ethical considerations are vital to ensuring dignity and fairness, with regular audits to detect biases. Protecting refugee rights within AI systems is both a legal and a moral duty, requiring regulations that evolve to secure justice and uphold refugees’ fundamental rights.
As AI continues to transform our world, legal frameworks must evolve to keep pace, providing refugees and other vulnerable groups with protections that ensure they are not only safeguarded from harm but also empowered to seek justice and exercise their fundamental rights.