ADL Report Says Leading AI Models Show ‘Concerning’ Bias Against Israel, Jewish People

The Anti-Defamation League (ADL) says it has identified anti-Jewish and anti-Israel bias patterns in leading AI models such as OpenAI’s GPT, Anthropic’s Claude, Google’s Gemini, and Meta’s Llama.

The report, released Tuesday by the ADL Center for Technology and Society, details the models’ tendencies to propagate misinformation about Jewish people and Israel and to echo antisemitic tropes.

“Artificial intelligence is reshaping how people consume information, but as this research shows, AI models are not immune to deeply ingrained societal biases,” ADL CEO Jonathan A. Greenblatt said in a statement. “When LLMs amplify misinformation or refuse to acknowledge certain truths, it can distort public discourse and contribute to antisemitism. This report is an urgent call to AI developers to take responsibility for their products and implement stronger safeguards against bias.”

According to the report, Meta’s Llama exhibited the most pronounced biases, providing “unreliable and sometimes outright false” responses to questions about Israel and Jewish people. Llama, the sole open-source model in the analyzed group, was also the only model whose lowest score came on a question about the role of Jews in the “great replacement” conspiracy theory.

GPT and Claude, meanwhile, showed notable biases concerning the Israel–Hamas conflict. According to ADL, both models “struggled to provide consistent, fact-based answers.”

The report also found that the LLMs refused to answer questions about Israel more often than questions about other topics.

“LLMs are already embedded in classrooms, workplaces, and social media moderation decisions, yet our findings show they are not adequately trained to prevent the spread of antisemitism and anti-Israel misinformation,” said Daniel Kelley, Interim Head of the ADL Center for Technology and Society. “AI companies must take proactive steps to address these failures, from improving their training data to refining their content moderation policies. We are committed to working with industry leaders to ensure these systems do not become vectors for hate and misinformation.”

ADL’s research was conducted in partnership with Builders for Tomorrow (BFT). Each LLM was queried 8,600 times, totaling 34,400 responses. This project marks the first phase of a broader initiative to address LLM biases.

Moving forward, ADL urges AI developers and government bodies alike to prioritize safety and reliability in AI systems, arguing that adopting the report’s recommended safeguards can help keep these technologies from perpetuating harmful stereotypes.