
Model Drift and AI in Humanitarian Contexts: Strengthening Humanitarian Justice in the Age of AI


Authors: Anisa Abeytia, Asma Derja, Alphoncina Lyamuya, Heriberto Tapia, John Jaeger, Khayal Trivedi, Mihir R. Bhatt



The rise of AI holds the potential to accelerate humanitarian justice across global systems, but failing to account for model drift may derail that promise. Anisa Abeytia, the author of Model Drift: How Subtle Shifts in AI Responses Could Undermine Crisis Response, borrows the term “model drift” from machine learning, where it describes a loss of predictive accuracy as real-world conditions change. In fragile states, however, the risk is not only technical degradation but narrative volatility: subtle shifts in AI responses that can quietly undermine crisis response itself.


Artificial intelligence is rapidly reshaping every sector, and humanitarian action is no exception. As conflicts intensify, climate shocks worsen, and displacement reaches historic levels, humanitarian actors are increasingly turning to AI-driven tools to inform decision-making, manage data, and improve operational delivery. With such tools influencing the lives of people in urgent need, Humanitarian Observatories and organizations face a critical responsibility: to understand, monitor, and guide AI’s evolving role in upholding the “do no harm” principle.


Building on conversations initiated in 2024 under the title “AI & Emerging Technologies for Humanitarian Action: Opportunities and Beyond”, HOISA convened a panel of experts to examine how AI is being deployed across humanitarian settings, the opportunities emerging, and the risks AI systems tend to carry, such as model drift, as elaborated by AI governance expert Anisa Abeytia in her recently published article. These observations point to the broader ethical and governance challenges that accompany technological expansion at an accelerated scale.


How Is AI Being Used in the Humanitarian Sector?

In her reading of AI’s integration into the humanitarian sector, researcher and PhD candidate Alphoncina Lyamuya outlined four predominant approaches humanitarian organizations take when adopting AI: off-the-shelf solutions, bespoke partnerships with tech providers, in-house AI development, and locally driven initiatives. The choice of approach depends largely on organizational capacity, available resources, and, often, the operational context. Across the sector, AI is already supporting needs assessments through satellite imagery analysis, automating translation in multilingual crises, mapping population displacement, and forecasting extreme weather or disease outbreaks.


From a macro perspective, Heriberto Tapia presented how AI can support human development, drawing on the recently published report People and Possibility in the Age of AI. AI tools can improve data collection for indices such as the Human Development Index, strengthen risk assessment, and help governments understand disparities more systematically.


These applications demonstrate how AI can augment limited human resources and accelerate lifesaving decisions. Yet they also highlight a dependency on data quality, access, and context sensitivity, factors that often remain uneven across humanitarian environments.


Risks, Challenges, and the Search for Better AI Governance

Abeytia (2025) elaborated on the limitations of relying on AI in humanitarian settings, introducing model drift as a serious AI system risk. AI systems tend to behave unpredictably when deployed in environments different from those in which they were trained, because the framing of questions and conditions on the ground change rapidly during crises. As a result, AI systems can deliver inconsistent or misleading outputs. This instability limits trust and reliability, particularly where consequences directly affect safety or resource allocation.
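In machine-learning practice, this kind of drift is often caught by monitoring whether live inputs still resemble the data a model was trained on. The sketch below is illustrative only (it is not from Abeytia’s article, and the feature values are hypothetical); it uses the Population Stability Index, a common drift statistic, to compare a baseline distribution against incoming data:

```python
import math

def psi(expected, actual, bins=10):
    """Population Stability Index between a baseline sample and live data.

    Values near 0 mean the distributions match; values above ~0.2 are
    commonly read as significant drift.
    """
    lo, hi = min(expected), max(expected)

    def bin_fractions(sample):
        counts = [0] * bins
        for x in sample:
            i = int((x - lo) / (hi - lo) * bins) if hi > lo else 0
            counts[max(0, min(i, bins - 1))] += 1
        # Floor each fraction so the log below never sees zero.
        return [max(c / len(sample), 1e-6) for c in counts]

    e, a = bin_fractions(expected), bin_fractions(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

# A stable model: live inputs look like the training baseline.
baseline = [i / 100 for i in range(100)]
print(psi(baseline, baseline))   # ~0.0: no drift detected
# A crisis shifts conditions: live inputs move away from the baseline.
shifted = [x + 0.5 for x in baseline]
print(psi(baseline, shifted))    # well above 0.2: drift flagged
```

In a humanitarian pipeline, a statistic like this would run continuously on incoming data and trigger human review before a model’s outputs are trusted for safety-critical decisions.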


Further, in the age of rapid digitalization, it must not be ignored that the digital divide affecting disaster- or conflict-affected populations is significant. The most vulnerable are often absent from the data used to train AI models, rendering those models unreliable and, in some situations, dangerous. Asma Derja, founder of the Ethical AI Alliance, and her team have been maintaining an AI Harm Map to record verified cases of AI-related harm worldwide, using transparent methodologies to document impact across conflict, surveillance, rights, and the environment. This map provides well-documented evidence and a platform for moving rapidly toward better AI governance models and course correction.

Today, there is a growing disconnect between the proliferation of AI tools and the degree to which humanitarian workers perceive them as improving decision-making. While AI promises efficiency, it cannot replace the value of human rights–centered governance or the lived experience of crisis-affected communities. Without ethical safeguards, AI risks reinforcing existing inequalities or producing outcomes misaligned with community needs, leaving humanitarian justice sidelined.


Case Studies: AI Deployment with the White Helmets in Syria and AI-Assisted Mapping of Unequal Cooling Access During Extreme Heat in India

To better understand how AI can be deployed in sensitive regions, HOISA invited John Jaeger, CEO of Hala Systems, to share practical insights from building an AI system for the White Helmets in Syria. His experience highlighted several lessons essential for responsible AI deployment in conflict-affected environments:

  • Contextualization is non-negotiable: AI models must be adapted to the local environment, culture, and information ecosystem.

  • Trust-building matters as much as technology: Communities must understand—and trust—what AI does and does not do.

  • Testing before, validating after: Both pre-deployment hypothesis testing and post-deployment verification are critical to ensure AI tools do not introduce new risks.

  • Meet people where they are: Delivering outputs through familiar communication channels reduces confusion and increases adoption.


For the White Helmets, integrating AI was not simply a technical exercise but a collaborative process that required sensitivity to the realities of frontline responders. Hala’s detection and warning system, known as Sentry, uses mobile applications, natural language processing, sensors, and remotely controlled warning devices to alert civilians in war zones to impending threats. The company reports that after the technology was first deployed in Syria in 2016, it gave civilians an average of seven to ten minutes of warning, a short but crucial window that allowed people to seek safety.


This case study illustrates how effectively AI systems can be integrated with local action to produce results that genuinely benefit frontline responders. Mihir Bhatt of AIDMI described AIDMI’s work in extreme heat-affected communities and shared an example from urban India, where AI was tested to address unequal access to cooling among thousands of small businesses suffering from extreme heat in over ten cities. Such pilots illustrate how AI can identify gaps that traditional assessments might overlook.


Toward AI Equity for Marginalized Communities

While increasingly decentralized, AI systems remain a product of technological colonialism: systems built far from crisis contexts impose external worldviews and biases onto vulnerable populations. For humanitarians, addressing these inequities demands:

  • Inclusive data and participatory design,

  • Regulations that protect communities,

  • Market incentives favoring ethical innovation, and

  • Education for both the public and decision-makers.


Ultimately, AI’s future in humanitarian action must be grounded in community empowerment and should shift away from extractive development models. By centering dignity, agency, and justice, humanitarian AI can move from simply optimizing systems to genuinely improving lives.


We must always remind ourselves that the promise of AI must not overshadow its political and social implications. AI learns from our society and can therefore reinforce the same power imbalances, especially in contexts where communities have limited agency to influence how they are represented in data or how technologies are deployed. Preserving identity, consent, and self-determination must remain central to any AI-driven development initiative. In the end, the success of AI tools in the humanitarian system is measured not in water or food delivered but in lives saved through the delivery of humanitarian justice.


 
 
 
