Published at ISS Bliss: https://issblog.nl/2024/11/07/ai-and-emerging-tech-for-humanitarian-action-opportunities-and-challenges/
In this blog, members of the Humanitarian Observatory Initiative South Asia (HOISA) including Anisa Abeytia, Shanyal Uqaili, Mihir Bhatt and Khayal Trivedi consider the applications of AI and other emerging technologies for humanitarian action. With UNHCR and other organisations adopting AI-enhanced planning, mapping, and prediction tools, what are some of the ethical dilemmas and challenges posed for tech-enabled humanitarian action? How can we make sure that humanitarian principles are upheld by non-human actors? And what is ‘responsible AI’?
The use of digital and emerging technologies such as artificial intelligence in the humanitarian sector is not new. Over the last two decades in particular, the sector has gone through several transitions as data collection, storage, and processing have become increasingly available and sophisticated. However, recent advances in computational power, along with the ‘big data’ now at the disposal of the public and private sectors, have allowed for a widespread and pervasive use of these digital technologies in every sphere of human life – notably also in humanitarian contexts. AI is rapidly reshaping the humanitarian sector with projects such as Project Jetson by UNHCR, AI-supported mapping for an emergency response in Mozambique, AI chatbots for displaced populations, and more besides.
Humanitarian workers must therefore pose the following questions. How can responsible AI, along with emerging technology, be used for humanitarian action? What priority areas and conditions should the humanitarian sector set when employing these technologies? And what ethical challenges does emerging technology present for the sector?
AI technology holds enormous potential: its ability to predict events and outcomes can support international humanitarian action. With disasters and conflicts increasing over the past few years, the humanitarian sector, particularly in terms of funding, is simply unable to provide relief and responses to the degree the world requires [1]. In this light, strengthening disaster resilience and risk reduction by building community resilience, through initiatives such as better early warning systems, becomes crucial.
Case Study: Using AI to Forecast Seismic Activity
A study using hybrid methodologies was conducted to develop a model that could forecast seismic activity in the region of Gaziantep, Türkiye (bordering Syria). The system was trained on data gathered after the massive 7.8-magnitude earthquake in early 2023, which was followed by more than 4,300 minor tremors. To create the algorithm, key dimensions and indicators, such as social, economic, institutional, and infrastructural capacity, were identified from open-source websites. During the research, two regional states were found to have extremely low resilience to earthquakes; incidentally, this area is also home to a large number of Syrian refugees. After gathering two years of seismic data from more than 250 geographers on the ground and other open sources, two Convolutional Neural Network (CNN) models were applied that could predict 100 data points into the future (with 93% accuracy), which amounts to about 10 seconds ahead.
The study underlines the regional challenges in data collection. Several indicators were omitted due to the absence of openly available data. This highlights the influence of power asymmetry, which allows for biased results and conclusions, thereby pushing researchers away from new understandings. A case in point: data pertaining to areas and neighborhoods where Syrian refugees reside was not gathered and was thus excluded by default from the research findings. Despite these political challenges, there is great potential in this technology when it is provided with relevant data sets. AI becomes the model it is trained to be, and it is therefore important to have as complete a data set as possible to prevent reproducing real-world human biases.
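As a rough illustration of the forecasting setup the case study describes, the sketch below trains a small one-dimensional CNN to map a window of past seismic samples to the next 100 samples. This is a minimal sketch in PyTorch, not the study’s actual code: the window size, network architecture, and synthetic training data are all illustrative assumptions.

```python
# Minimal sketch of CNN-based seismic forecasting (illustrative, not the
# study's code): predict the next 100 samples from a window of past samples.
import torch
import torch.nn as nn

INPUT_LEN = 1000   # past samples fed to the model (assumed window size)
HORIZON = 100      # the "100 data points in the future" from the case study

class SeismicForecaster(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(2),
        )
        # After two poolings the sequence length is INPUT_LEN // 4.
        self.head = nn.Linear(32 * (INPUT_LEN // 4), HORIZON)

    def forward(self, x):
        # x: (batch, 1, INPUT_LEN) waveform window
        h = self.features(x)
        return self.head(h.flatten(1))   # (batch, HORIZON)

# Train on synthetic stand-in data; real inputs would be the two years of
# ground-sensor recordings described above.
model = SeismicForecaster()
optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for step in range(100):
    x = torch.randn(8, 1, INPUT_LEN)    # fake past-waveform windows
    y = torch.randn(8, HORIZON)         # fake "next 100 samples" targets
    loss = loss_fn(model(x), y)
    optimiser.zero_grad()
    loss.backward()
    optimiser.step()
```

Note how short the usable warning is: at typical sensor sampling rates, 100 predicted points correspond to roughly the 10 seconds of lead time the study reports, which is why such models are framed as early warning rather than long-range prediction.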
Fears of Techno-Colonialism and Asymmetric Power Structures
This case highlights the need for transparent, complete, and bias-free data sets, which remain a challenge in most parts of the world. Further, who owns these data sets? Who oversees data collection and training, and what is omitted? As AI and various deep learning methodologies transform our world, fears of techno-colonialism, techno-solutionism and surveillance are omnipresent.
Today’s post-colonial world, which in fact continues to carry forward colonial power hierarchies, albeit in a new setting with changed roles, is ridden with inequalities. These inequalities and pre-existing biases, both in data and in people, are then transferred to AI through the way it is (or is not) trained. Even ‘creative’ AI tools are still a conglomeration of the data they are trained on.
AI and deep learning methodologies are tools that can be targeted to provide a solution. They require data as input, and if that data carries bias or racism to some degree, then the output will also reflect it [2]. Questions such as who is training the AI, what funds are being used, and who is the recipient of the effort become critical to answer. Unfortunately, very few companies and countries in the world have the capacity to create the data sets that train AI. These are often large conglomerates working for profit within a capitalist ideology, where a human-centered approach is at best secondary. Decision-making power therefore lies in the hands of a few, giving rise to a new form of colonialism.
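To make the ‘bias in, bias out’ point concrete, here is a toy sketch. A classifier is trained on entirely synthetic historical aid-approval records (invented for illustration, not drawn from any source cited here) in which one group was systematically approved less often at the same level of need; the model learns to reproduce exactly that disadvantage.

```python
# Toy demonstration that a model trained on biased decisions reproduces them.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
need = rng.normal(size=n)              # genuine need, which should drive aid
group = rng.integers(0, 2, size=n)     # protected attribute (0 or 1)

# Synthetic history: approvals tracked need, but group 1 was approved less.
approved = (need + rng.normal(scale=0.5, size=n) - 0.8 * group) > 0

model = LogisticRegression().fit(np.column_stack([need, group]), approved)

# Query the trained model with identical need but different group membership.
for g in (0, 1):
    x = np.array([[0.0, g]])           # average need, group g
    print(f"group {g}: P(approved) = {model.predict_proba(x)[0, 1]:.2f}")
```

At equal need, the model assigns group 1 a markedly lower approval probability, purely because the training data encoded that disadvantage; nothing in the algorithm itself corrects for it.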
Is AI then a tool or a medium for keeping the status quo (of power structures)? If the few people in power, driven by capitalism, are invested in maintaining these power structures, then how will AI help in decisions about resource allocation? This points to the much-needed democratization of AI and related tools. Human-centric AI will otherwise remain a paradox.
Looking at Responsible AI and humanitarian principles
Can we employ AI that does no harm? For AI and similar tools to be viable and inclusive, one must ensure transparency and inclusion in the data gathering that forms the data sets. This requires a conscious effort that is policy driven rather than technology driven, one that invites people with diverse thought processes from diverse communities, especially minorities and vulnerable populations, to be in a position of action and not merely participation. One way is to rethink the humanitarian sector and its functioning. The other is to take a more community-centered approach when thinking of AI applications, as James Landay puts it. In a community-centered approach, he describes, the members of the community discuss and decide how and which resources should be allocated to what, according to their own priorities and needs. This method stands in contrast to top-down politics, where communities are merely seen as consumers or beneficiaries.
Drawing on Edward Soja’s theory, Anisa Abeytia (2023) adds a fourth sphere, or space, to Soja’s three-layer model, which she argues is relevant to the use of AI. According to the model, “Firstspace” is the geographic location that includes human and non-human (living and nonliving) entities and environments. “Secondspace” is our communal areas (libraries, schools, etc.). “Thirdspace” is the liminal landscape: the way people accept or reject ideas and technologies, such as their apprehensions and fears about new transitions and change. Lastly, Abeytia adds a “Fourthspace” to represent the digital world, which today is as real as physical geographies. An important rubric for measuring the viability of an AI application is how it will affect each of these spaces: the personal, the communal, the transitional, and the digital. For example, we can see AI affecting all four spaces in a project run by the University of Utah and a refugee resettlement agency, which used Virtual Reality (VR) headsets as a reception and resettlement tool to help refugees integrate into American society.
Survey: What are the needs of the sector?
As members of the humanitarian sector, we must strive to develop our own solutions to the challenges we face, ensuring inclusivity for all. The identification of these challenges should also come from within the sector itself. Recently, a survey was conducted among key stakeholders to identify areas where AI could make a significant contribution. The most commonly highlighted areas of interest were as follows:
● Can AI assist in creating bias-free intelligence that improves victim–state relationships with others?
● Can AI be utilized to measure the intolerance and widening hatred between communities that cause riots, such as those in the UK and South Asia?
● Can AI provide guidance in identifying uncertainties around risks and resilience, along with humanitarian action insights that we have not yet spotted?
● Can AI conduct contribution analysis for impact evaluation?
● How can AI be employed to identify methods of empowerment in decision-making and to develop strategies for offering universal humanitarian assistance?
● How can we harness the power of AI in analyzing epidemic preparedness and improving responses in health crises like monkeypox or COVID-19?
It is essential to actively investigate the use of AI and emerging technologies across the identified spheres. Efforts to make AI more equitable should include advocating for inclusive methodologies, creating transparent and diverse data sets, and amplifying the voices of Indigenous, marginalized and vulnerable populations.
While working towards more equitable systems, several critical questions arise: How can these projects be funded? Are they viable in a landscape where only a fraction of resources reaches those in need? What is the carbon footprint of developing AI and deep learning tools? How can Indigenous knowledge from resilient communities be integrated into AI systems? Each of these issues warrants thorough discussion, and every major humanitarian organization should address them.
Further reading:
● Tom L. Beauchamp and James F. Childress, Principles of Biomedical Ethics, 8th ed., Oxford University Press, Oxford, 2019.
● Luciano Floridi and Josh Cowls, “A Unified Framework of Five Principles for AI in Society”, Harvard Data Science Review, Vol. 1, No. 1, 2019.
Authors: Anisa Abeytia, Shanyal Uqaili, Mihir Bhatt and Khayal Trivedi are members of the Humanitarian Observatory Initiative South Asia (HOISA).