Israel’s AI Use in Gaza Normalises Civilian Killings with Big Tech Complicity

Sat Apr 05 2025

Key points

  • Amazon, Google, and Microsoft explicitly working with IDF: Heidy Khlaaf
  • AI models used for targeting have accuracy rates as low as 25pc
  • Oversight of Israel’s AI-generated targets operates under very loose guidance: Khlaaf

ISLAMABAD: Israel’s use of artificial intelligence (AI) in its ongoing assault on the Gaza Strip – supported by tech giants like Google, Microsoft, and Amazon – is raising concerns about the normalisation of mass civilian casualties and prompting serious questions about the complicity of these companies in potential war crimes.

Several reports have confirmed that Israel has deployed AI models such as Lavender, Gospel, and Where’s Daddy? to conduct mass surveillance, identify targets, and direct strikes on tens of thousands of individuals in Gaza – often in their own homes – with minimal human oversight, according to Anadolu Ajansi.

Rights groups and experts argue that these systems have played a pivotal role in Israel’s relentless and seemingly indiscriminate attacks, which have devastated large parts of the besieged enclave and resulted in the deaths of over 50,000 Palestinians, the majority of whom are women and children.

Normalising civilian casualties

“With the explicit use of AI models known for lacking precision, we are witnessing the normalisation of mass civilian casualties, as we’ve observed in Gaza,” Heidy Khlaaf, a former systems safety engineer at OpenAI, told media.


Khlaaf, now the chief AI scientist at the AI Now Institute, warned that this trend could set a dangerous precedent in warfare, where military forces shift responsibility for potential war crimes onto AI systems, benefiting from the absence of a robust international mechanism to intervene or hold them accountable.

“This creates a perilous combination that could result in military entities avoiding responsibility for potential war crimes by simply pointing to an AI system and claiming, ‘It was the algorithm that made the decision, not me,’” she explained.

She stressed that Israel is using AI systems at “nearly every stage” of its military operations, from intelligence gathering and planning to final target selection.

Predicting future targets

The AI models, she explained, are trained using various data sources, such as satellite imagery, intercepted communications, drone surveillance, and the tracking of individuals or groups.

“They develop multiple AI algorithms that use statistical or probabilistic calculations based on historical data to predict where future targets might be,” she added.

However, she pointed out that these predictions “do not necessarily reflect reality.”

Khlaaf highlighted recent reports revealing that commercial large language models (LLMs) like Google’s Gemini and OpenAI’s GPT-4 were used by the Israeli military to translate and transcribe intercepted Palestinian communications, automatically adding individuals to target lists “based purely on keywords.”

Unverified AI targets

She noted that investigations have confirmed one of the Israeli military’s operational strategies involves generating large numbers of targets via AI, without verifying their accuracy.


The expert underlined that AI models are fundamentally unreliable for tasks that require high precision, such as military targeting, because they rely on statistical probabilities rather than verified intelligence, according to Anadolu Ajansi.

“Unfortunately, assessments have shown that AI models used for targeting can have an accuracy rate as low as 25 per cent,” Khlaaf said.

“So, given AI’s high error rates, and a force like the IDF (Israel Defense Forces) willing to accept significant civilian casualties to eliminate a single target… this inaccurate automation of target selection is dangerously close to indiscriminate bombing on a large scale.”

Automation without accountability

Khlaaf further emphasised that the growing use of AI in warfare is setting a dangerous precedent, where accountability is obscured.

“AI is normalising inaccurate targeting practices, and due to the scale and complexity of these models, it becomes impossible to trace their decisions or hold any individual or military accountable,” she asserted.

Even the so-called “human in the loop” safeguard, often presented as a fail-safe against AI errors, appears insufficient in the case of the IDF, she added.

Investigations revealed that those overseeing Israel’s AI-generated targets operated under “very loose guidance,” raising doubts about whether any efforts were made to minimise civilian casualties, according to Khlaaf.

In the age of the Fourth Industrial Revolution (4IR) and artificial intelligence (AI), victims who might once have been blips on a screen are further dehumanised as mere numbers in a spreadsheet: the output of a machine learning model that has decided, on the basis of probabilities drawn from past data, that a person deserves to die, according to the New Arab.

Avoiding war crimes

She warned that the current trajectory could allow militaries to evade responsibility for war crimes by blaming AI for erroneous targeting.

“If it’s hard to trace why an AI may have contributed to civilian casualties, you can easily imagine a situation where AI is used specifically to avoid accountability for killing large numbers of civilians,” she said.

Khlaaf confirmed that major US tech firms are directly involved in providing AI and cloud computing capabilities to the Israeli military.

“This is not a new trend,” she noted, recalling that Google has been supplying AI and cloud services to the Israeli military since 2021 through its $1.2 billion Project Nimbus, alongside Amazon.

Microsoft’s involvement deepened after October 2023, as Israel relied more on its cloud computing services, AI models, and technical support, she added.

Other companies, including Palantir, have also been linked to Israeli military operations, though details of their roles remain unclear.

Khlaaf stressed that these partnerships go beyond merely selling general-purpose AI tools.

Amazon, Google, and Microsoft backing IDF

“It’s important to note that the IDF isn’t just using off-the-shelf cloud or AI services. Amazon, Google, and Microsoft are explicitly working with the IDF to develop or allow them to use their technologies for intelligence and targeting, despite being aware of AI’s low accuracy rates, its failure modes, and how the IDF intends to use these systems for targeting,” she explained.

The implications are that these tech companies are “complicit and directly enabling” Israeli actions, including those that could be deemed unlawful or considered war crimes, Khlaaf said.

“If it’s determined that the IDF is committing specific war crimes, and the tech companies have guided them in committing those crimes, then yes, that makes them very much complicit,” she added.

AI-driven warfare

Khlaaf warned that the world is witnessing “the full embrace of automated targeting without due process or accountability,” a trend supported by increasing investments from Israel, the US Department of Defense, and the EU.

“Our legal and technical frameworks are not prepared for this type of AI-driven warfare,” she said.

While existing international humanitarian law, such as Article 36 of the 1977 Additional Protocol I to the Geneva Conventions, requires legal reviews of new weapons, there are currently no binding international regulations specific to AI military technologies, according to Anadolu Ajansi.

Furthermore, although the US maintains export controls on certain AI-enabling technologies, such as GPUs and specific datasets, there is no “wholesale ban on AI military technology specifically,” she noted.

“There’s an enormous gap that hasn’t been addressed,” Khlaaf concluded.
