Luca Adib Tucci

Artificial Intelligence: Helpful, But at What Cost?


Photo by Freepik.



Recent strides in the evolution of Artificial Intelligence (AI) have brought considerable advancements across many fields. However, not all fields have had the same opportunity to fully harness the benefits the technology offers. A clear example of the “controlled” and “measured” use of AI is its potential application in humanitarian contexts, particularly in healthcare assistance.


1) The need for AI-assisted mental health support: where we stand


Is it necessary to substitute a machine for a human? This question has accompanied development since the Industrial Revolution. In some fields it no longer even arises, given how thoroughly certain operations have been automated.


In recent years there has been a growing effort to remove humans from functions deemed dangerous to them, especially in crisis situations. Hence the idea of integrating AI into mental health support in humanitarian contexts. This kind of support offers several advantages, starting with accessibility: in many crisis-stricken areas, human resources for psychological support are scarce, and AI can help fill this gap by providing mental health services to a wide range of individuals, even in remote or hard-to-reach areas. From an economic standpoint, AI is also scalable, able to handle a high number of support requests at the same time. In theory, this allows for continuous monitoring: assessing the mental well-being of individuals, identifying early signs of disorders, and providing preventive interventions that would otherwise be challenging to implement. Lastly, AI could be an excellent tool for overcoming the 'taboo' of psychological support. While considerable strides have been made in recent years against the stigmatisation of psychological support, the stigma has not been entirely eradicated, and some people might feel more comfortable interacting with an AI system than with a human operator, reducing the fear of being judged.


Moreover, we cannot ignore the harm to psychological well-being emerging from crisis situations, which we now hear about daily in the news.


Millions of people in conflict zones are suffering due to the psychological effects of violence, with the majority unable to receive adequate support. It's precisely in such scenarios that organisations are attempting, step by step, to introduce AI-supported aid.


Crises can also represent opportunities to develop long-term mental health assistance systems. It is not unreasonable to think they could be used to kickstart AI-assisted psychological support and introduce it to the population. For instance, the WHO is globally committed to ensuring a coordinated and efficient response to mental health emergencies. In collaboration with various partner organisations, it provides guidelines, intervention manuals, and policy directions to support mental health responses during crises, which in the near future could include the introduction of technological support.


2) Empathy in human-machine interaction


“How can you not experience emotions?” “Are you so apathetic?”

How many times in our lives have we met people we would describe as "insensitive machines"? The prejudice we encounter when discussing AI is that a machine cannot experience emotions. Consider a well-known monologue from the film "The Imitation Game", which indirectly addresses AI: "Of course machines can't think as people do. A machine is different from a person. Hence, they think differently. The interesting question is, just because something, uh... thinks differently from you, does that mean it's not thinking? Well, we allow for humans to have such divergences from one another. You like strawberries, I hate ice-skating, you cry at sad films, I am allergic to pollen. What is the point of... different tastes, different... preferences, if not, to say that our brains work differently, that we think differently? And if we can say that about one another, then why can't we say the same thing for brains... built of copper and wire, steel?"

The lack of empathy is often considered a limitation in human-AI interaction. However, we often forget that we program the machines and provide their inputs. So although a machine thinks differently from us, can we be certain that, in its own way, a machine does not experience emotions?

While machines can be programmed to recognise and respond to certain emotional signals, their ability to understand and respond empathetically to the complexity of human emotions is limited. Nevertheless, AI can be designed to simulate empathy convincingly, providing support that, while not identical to human support, can still be meaningful to many people.
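
To make the idea of simulated empathy concrete, here is a minimal, purely illustrative sketch in Python: the system feels nothing, it simply matches emotional keywords in a message and selects a supportive response template. Every keyword list, template, and function name below is hypothetical, not drawn from any real support system.

```python
# Minimal illustration of "simulated empathy": keyword-based emotion
# detection paired with supportive response templates. The system feels
# nothing; it pattern-matches and replies. All keyword lists and
# templates are hypothetical examples, not a real product.

EMOTION_KEYWORDS = {
    "fear": ["afraid", "scared", "terrified", "worried"],
    "sadness": ["sad", "hopeless", "crying", "alone"],
    "anger": ["angry", "furious", "unfair"],
}

RESPONSES = {
    "fear": "It sounds like you are feeling afraid. That is understandable. What worries you most right now?",
    "sadness": "I hear how heavy this feels, and you are not alone in it. What has been hardest for you lately?",
    "anger": "It makes sense to feel angry about this. Can you tell me what happened?",
    "neutral": "Thank you for sharing that. How are you feeling right now?",
}


def detect_emotion(message: str) -> str:
    """Return the first emotion whose keywords appear in the message."""
    text = message.lower()
    for emotion, keywords in EMOTION_KEYWORDS.items():
        if any(word in text for word in keywords):
            return emotion
    return "neutral"


def empathetic_reply(message: str) -> str:
    """Pick a supportive template based on the detected emotion."""
    return RESPONSES[detect_emotion(message)]


print(empathetic_reply("I feel so alone since we had to leave home."))
# -> prints the "sadness" template
```

Real systems rely on far more sophisticated language models, but the principle is the same: recognition and response, not feeling.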

This perspective opens new horizons for the use of AI, but it also raises several questions. Will we be able to create something that genuinely resembles empathy? Does it already exist? Some argue that we have reached a point of no return; others, that this 'point' is merely the limit mankind imposes on any novelty in its perpetual search for stability.


3) Privacy Rights and Limitations


Organisations assisting vulnerable individuals during humanitarian emergencies face many challenges in managing personal data. This information is essential to ensure targeted and coordinated assistance, yet collecting and processing it raises delicate issues of privacy and security.


Investigations dating back to 2016 have underscored the importance of striking a balance between the right to privacy and the need to provide effective assistance during humanitarian emergencies. While it is crucial to protect the privacy of those involved, it is equally important to ensure that data can circulate among the countries concerned to facilitate a prompt and efficient response. This balance can be particularly complex in contexts where privacy regulations vary from country to country.


One possible approach is to adopt strong protocols to protect personal data throughout all stages of the process, from collection to sharing. At the same time, it is essential to engage the affected communities in the decision-making process and ensure transparency regarding the use of their personal data.


Imagine a world in which AI could maintain a sort of “blockchain” where personal data is not directly visible, but where it remains possible to verify who accessed it and for what reasons. This could represent a step forward in the era of digital law, based on consent, traceability, and verifiability of data.
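
As a purely illustrative sketch of what such a ledger could look like (assuming a simple hash chain, not describing any existing humanitarian system), the Python below stores only salted hashes of personal data and appends each access as a tamper-evident record of who looked and why. Every class, parameter, and value here is hypothetical.

```python
import hashlib
import json
import time

# Illustrative hash-chain "consent ledger": raw personal data never
# enters the chain, only a salted hash (a pseudonym); every access is
# appended as a record of who looked and why, chained to the previous
# record so that any tampering breaks verification.


def sha256(data: str) -> str:
    return hashlib.sha256(data.encode()).hexdigest()


class ConsentLedger:
    def __init__(self):
        self.records = []  # chained records, oldest first

    def _append(self, payload: dict) -> None:
        prev_hash = self.records[-1]["hash"] if self.records else "0" * 64
        body = {"payload": payload, "prev": prev_hash, "ts": time.time()}
        body["hash"] = sha256(json.dumps(body, sort_keys=True))
        self.records.append(body)

    def register_subject(self, personal_data: str, salt: str) -> str:
        """Store only a salted hash of the data; return it as a pseudonym."""
        pseudonym = sha256(salt + personal_data)
        self._append({"event": "register", "subject": pseudonym})
        return pseudonym

    def log_access(self, pseudonym: str, accessor: str, purpose: str) -> None:
        """Record who accessed which subject's data, and why."""
        self._append({"event": "access", "subject": pseudonym,
                      "by": accessor, "purpose": purpose})

    def verify(self) -> bool:
        """Re-walk the chain: any edited record changes its hash."""
        prev = "0" * 64
        for rec in self.records:
            body = {k: rec[k] for k in ("payload", "prev", "ts")}
            if rec["prev"] != prev or rec["hash"] != sha256(
                    json.dumps(body, sort_keys=True)):
                return False
            prev = rec["hash"]
        return True


ledger = ConsentLedger()
pid = ledger.register_subject("Jane Doe, camp 4", salt="field-salt")
ledger.log_access(pid, accessor="field-clinic-12", purpose="triage")
print(ledger.verify())  # True while the chain is untampered
```

In a real deployment, salt and key custody, the legal basis for each access, and governance of the chain itself would be the hard part; the sketch only shows the consent, traceability, and verifiability triad in miniature.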


Currently, limitations on individual rights may be justified to protect the security and well-being of those involved, but they must be proportionate and grounded in international human rights laws and regulations, which sometimes fail to account for exceptional cases or for the rights of people in vulnerable conditions.


Striking a balance between respecting privacy and providing effective humanitarian assistance is a fundamental challenge for organisations engaged in these operations. The protection of individual rights should remain the “North Star” of these considerations while striving to ensure a timely and effective humanitarian response in emergency situations.


4) Replacement of Human Psychological Support with AI in War Theatres


Based on the discussion so far, the use of AI to support mental health holds potential for significant aid even in the context of war. In a role complementary to human psychological support, AI could supplement available resources, providing continuous and scalable support that would otherwise be difficult to achieve.


This leads to two interesting reflections. The first: psychological support could not only benefit humanitarian efforts but also aid individuals involved in war situations, helping to prevent the mental health issues commonly associated with returning from conflict zones.


Indeed, considering the complexity of the situations soldiers experience in war, AI could be employed to constantly monitor their mental state, identifying early signs of trauma and enabling timely interventions. Furthermore, by providing training and psychological counselling, AI could help improve soldiers' resilience and their ability to handle stressful situations. However, it is crucial to emphasise that any such use of AI must follow ethical and legal principles, including human rights and privacy protection.
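
To give a deliberately simplified picture of what "identifying early signs" could mean in practice, the sketch below tracks periodic self-reported distress check-ins and flags a sustained elevation for referral to a human professional. The 0-10 scale, threshold, and window size are invented for illustration; a real clinical tool would use validated instruments under professional oversight.

```python
from collections import deque

# Simplified "early warning" monitor: watches periodic self-reported
# distress scores (hypothetical 0-10 scale) and flags a sustained
# elevation for referral to a human professional. The threshold and
# window are illustrative, not clinically validated.


class DistressMonitor:
    def __init__(self, window: int = 5, threshold: float = 7.0):
        self.scores = deque(maxlen=window)  # most recent check-ins
        self.threshold = threshold

    def check_in(self, score: int) -> bool:
        """Record a check-in; return True if a human should be alerted."""
        self.scores.append(score)
        window_full = len(self.scores) == self.scores.maxlen
        average = sum(self.scores) / len(self.scores)
        return window_full and average >= self.threshold


monitor = DistressMonitor()
for day, score in enumerate([4, 6, 7, 8, 9, 9], start=1):
    if monitor.check_in(score):
        print(f"Day {day}: sustained distress - refer to a professional")
```

The point of such a design is that the AI never makes the clinical decision; it only narrows the stream of check-ins down to the cases a scarce human specialist should see first.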

The second reflection is purely legal: the laws regulating privacy rights vary at the national, European, and international levels.


The European Convention on Human Rights (ECHR), the General Data Protection Regulation (GDPR), and the Geneva Conventions of 1949 protect privacy rights and establish limitations on them, in some cases covering data processed during emergencies.


Moreover, individual countries have specific regulations regarding privacy in emergency situations, such as the Health Insurance Portability and Accountability Act (HIPAA) in the U.S. In addition, organisations like the UN and the WHO can issue resolutions and guidelines on the topic.


Applying these regulations in humanitarian emergencies can be complex and open to varying interpretations by those involved. In conclusion, the use and integration of AI present significant opportunities but require careful consideration of their ethical, legal, and social implications. AI can provide valuable support, but it should not completely replace human psychological assistance, especially in contexts where human empathy and understanding are essential. For this reason, it will be important to develop, as soon as possible, codes of conduct and ethical standards for the use of AI and for data management during emergencies, ensuring respect for individual rights.

