Until the announcement of a two-week ceasefire[i] between the United States and Iran, much of the discussion that permeated political circles was centred around missile and drone strikes[ii], President Donald Trump’s provocative remarks on Truth Social[iii], and the recurring rounds of retorts[iv] between Tehran and Washington.
However, these dynamics obscure a more subtle, yet highly consequential tool of communication in this war: disinformation and the collapse of verification processes.
From information scarcity to information saturation: How war reporting has transformed
During ancient wars, from the chronicles of Thucydides to the court records of Achaemenid Persia, knowledge of the battlefield was scarce and delayed. Envoys travelled slowly to relay messages between adversaries, but these reports retained a certain authority. The messenger may have exaggerated events, and rulers may have embellished victories, yet, at its core, the truth remained intact.
Fast forward to the Napoleonic Wars and the armed conflicts of the early twentieth century. Information could be manipulated and weaponised, but with great enough effort, the truth would often come to light to offer a coherent and accurate account of events. Historically, the problem in war was opacity.
The Iran War today, however, presents a different problem entirely. As Broderick McDonald, an academic researcher at the University of Oxford and King's College London, aptly described it, this is “one of the most polluted information environments… ever seen within a conflict.” Where there was once a dearth of information, the Iran conflict reflects a highly saturated information system. Disinformation embeds itself seamlessly within the noise.
AI disinformation in the Iran war: Why verification is no longer a stabilising force
Contrary to the old adage, “seeing is believing”, disinformation has created a structurally disorienting environment where this no longer holds. Instead, we are witnessing the steady collapse of verification.
Typically, this verification relies on open-source intelligence, journalistic standards and institutional processes that involve[v] looking for visual inconsistencies in images and videos, checking for digital watermarks using tools such as SynthID, and tracing the source via reverse image search to see where the content first appeared online.
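One of these techniques, matching a suspect image against earlier copies found online, often rests on perceptual hashing. The following is a minimal sketch of an average hash (“aHash”), not the method any particular fact-checking outfit uses: the 8x8 grayscale grids stand in for downscaled images, and a real pipeline would use an image library and a large reference index rather than this toy comparison.

```python
# Minimal average-hash ("aHash") sketch for near-duplicate image matching.
# Images are represented as 8x8 grayscale grids (lists of lists, values 0-255);
# a real pipeline would first downscale and grey-convert with an image library.

def average_hash(pixels):
    """Return a 64-bit hash: one bit per pixel, set if it exceeds the mean."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    bits = 0
    for p in flat:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming_distance(h1, h2):
    """Count differing bits; a small distance suggests the same source image."""
    return bin(h1 ^ h2).count("1")

# Illustrative inputs: an original, a lightly re-encoded copy, and an
# unrelated (inverted) image.
original = [[(r * 8 + c) * 4 for c in range(8)] for r in range(8)]
recompressed = [[min(255, p + 3) for p in row] for row in original]
unrelated = [[255 - p for p in row] for row in original]

d_same = hamming_distance(average_hash(original), average_hash(recompressed))
d_diff = hamming_distance(average_hash(original), average_hash(unrelated))
```

Because the hash thresholds each pixel against the image's own mean, mild recompression barely moves the bits (a small Hamming distance), while a genuinely different image diverges sharply, which is why such hashes survive the re-uploads and crops that defeat exact-match searches.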
Some social media companies have introduced safeguarding measures[vi] such as changing their community guidelines to prevent unverified information from being promoted to users’ feeds and flagging such content before users share it with others.
Similarly, the EU AI Act[vii], which becomes broadly applicable in August 2026, is the world’s first comprehensive legal framework regulating AI, imposing strict requirements on transparency and risk management. This includes informing users that they are interacting with a machine when using AI systems such as chatbots.
Much of the tech-focused analysis of the conflict has highlighted the presence of AI-generated visual content in spreading disinformation. Fabricated images of U.S. troops surrendering[viii] to Iranian forces, the destruction[ix] of critical infrastructure in Gulf cities, and videos of the aircraft carrier USS Abraham Lincoln[x] burning at sea provide examples.
Admittedly, the proliferation of AI-generated content had already begun to blur the lines between real and artificial content prior to this war, during the 2022 Russian invasion of Ukraine[xi] and the Sudan civil war.[xii] The defining feature of the Iran War is not the mere presence of falsehood, but the erosion of the mechanisms that once enabled one to distinguish fact from fiction.
Disinformation was systematically institutionalised by the KGB as a core tool of statecraft during the Cold War. It ran several active disinformation campaigns[xiii] that influenced many across the world throughout the 1970s and 80s, using forged documents, planted media narratives and proxy outlets to shape perceptions of the U.S., and the West more broadly.
However, through rigorous intelligence analysis and investigative reporting, these narratives were eventually exposed as fabrications[xiv] and removed from credible discourse. The crucial point here is that the information environment in which this disinformation spread was structurally different from the one we experience today: verification still served its purpose as a corrective mechanism.
Most recently, several Republican politicians were misled into disseminating an AI-generated image[xv] that falsely depicted the rescue of the pilot of a downed U.S. warplane. Most striking is that the image retained credibility long enough for it to heavily influence and mould public and political discourse. By the time verification processes debunked the image and a warning was issued stating, “this photo is probably AI generated”, the factual status of the image had already become secondary to its narrative impact.
The inversion of truth: When real becomes fake
Perhaps even more revealing about the communications landscape surrounding the ongoing war is the epistemic denial of authentic material.
Long before the birth of AI, the Danish philosopher Søren Kierkegaard summed up this idea, observing that there are two ways to be fooled: one is to believe what isn’t true; the other is to refuse to believe what is true.[xvi]
There is a critical inversion at work: footage of Iranian ballistic missile strikes on Tel Aviv was presented as genuine, yet still met with public scepticism, with myriad observers dismissing the video as “AI generated” and “fake”.[xvii] The result? Disinformation through AI-generated content has become so pervasive that the public feel they must treat everything as suspect. Verification no longer settles disputes over authenticity.
Perhaps some public mistrust is warranted – verification tools themselves are fallible. AI systems tasked with verifying footage taken in the Iran War have mislabelled real videos as fake and AI-generated videos as authentic. To compound these challenges, X’s Grok[xviii] has gone as far as to share its own AI-generated content about the war, adding further artificial material to an already polluted information stream. Meta’s Oversight Board[xix] argued that the company’s approach is “neither robust nor comprehensive enough to handle the scale and speed of AI-generated misinformation, particularly during crises and conflicts”.
Verification has seldom been instantaneous. However, with the surge of AI-generated media during the Iran War, disinformation is being weaponised at an unprecedented rate. As a result, verification processes are outpaced by the flood of false information spread on social media by a network of fake accounts[xx] allegedly linked to Iran’s Islamic Revolutionary Guard Corps, as well as by pro-Trump accounts.[xxi]
Hence, verification merely becomes another voice in a highly saturated and contested information environment. The Iran War marks a crucial transition point, where the issue is no longer merely whether falsehoods are spread, but whether the truth can meaningfully assert itself at all amongst high-velocity and high-volume digital circulation. Where verification once promised eventual clarity, it no longer carries such authority in dispelling fabrications as the public no longer knows who or what to believe.
Information warfare is no longer a peripheral concern but a core dimension of modern conflict, one that policymakers must treat with the same seriousness as kinetic operations, as the struggle now extends beyond missiles and narratives to the very credibility of reality itself.
[i] Wendler, J. and McLeary, P. (2026). “Trump announces Iran ceasefire ahead of 8p.m. deadline”, Politico, 7 April 2026, retrieved from: https://www.politico.com/news/2026/04/07/donald-trump-iran-war-ceasefire-00863103.
[ii] Hall, R. (2026). “Iran’s Retaliatory Strikes Challenge Image of Gulf Stability”, Time, 1 March 2026, retrieved from: https://time.com/7381884/iran-missiles-dubai-palm-gulf/.
[iii] Reuters (2026). “Trump says ‘a whole civilization will die tonight’ if Iran does not make a deal”, 7 April 2026, retrieved from: https://www.reuters.com/world/middle-east/trump-says-a-whole-civilization-will-die-tonight-if-iran-does-not-make-deal-2026-04-07/.
[iv] Reuters (2026). “Iran says Trump’s statements on Tehran requesting ceasefire are false and baseless”, 1 April 2026, retrieved from: https://www.reuters.com/world/middle-east/iran-says-trumps-statements-tehran-requesting-ceasefire-are-false-baseless-2026-04-01/.
[v] BBC (2026). “How we detect AI images and videos – in 90 seconds”, YouTube, 5 March 2026, retrieved from: https://www.youtube.com/watch?v=fiFWKnFa7xA.
[vi] McDonald, B. and Stockwell, S. (2026). “AI Information Threats and Crisis Response Practitioners’ Handbook”, Centre for Emerging Technology and Security, 9 April 2026, retrieved from: https://cetas.turing.ac.uk/publications/ai-information-threats-and-crisis-response-practitioners-handbook.
[vii] European Commission (N.D.) “AI Act”, retrieved from: https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai.
[viii] CyberPeace (2026). “AI-Generated Video Falsely Shows US Soldiers Surrendering to Iranian Forces”, 14 March 2026, retrieved from: https://cyberpeace.org/resources/blogs/factcheck—ai-generated-video-falsely-shows-us-soldiers-surrendering-to-iranian-forces.
[ix] Ward, G. (2026). “How misinformation and AI deepfakes on social media are reshaping the Iran War”, EuroNews, 30 March 2026, retrieved from: https://www.euronews.com/next/2026/03/30/how-misinformation-and-ai-deepfakes-on-social-media-are-reshaping-the-iran-war.
[x] Ibid.
[xi] Allyn, B. (2022). “Deepfake video of Zelenskyy could be ‘tip of the iceberg’ in info war, experts warn”, NPR, 16 March 2022, retrieved from: https://www.npr.org/2022/03/16/1087062648/deepfake-video-zelenskyy-experts-war-manipulation-ukraine-russia.
[xii] Goodman, J. and Hashim, M. (2023). “AI: Voice cloning tech emerges in Sudan civil war”, BBC News, 5 October 2023, retrieved from: https://www.bbc.co.uk/news/world-africa-66987869.
[xiii] Kramer, M. (N.D.) “Lessons From Operation ‘Denver’, the KGB’s Massive AIDS Disinformation Campaign”, The MIT Press Reader, retrieved from: https://thereader.mitpress.mit.edu/operation-denver-kgb-aids-disinformation-campaign/.
[xiv] Ibid.
[xv] Helmore, E. (2026). “Republicans fooled by AI-generated image of US airman rescued in Iran”, The Guardian, 6 April 2026, retrieved from: https://www.theguardian.com/us-news/2026/apr/06/republicans-ai-image-us-plane-member-rescue-iran.
[xvi] Kierkegaard, S. (1946). Works of Love, Princeton University Press. The popular quote is a modern paraphrase or smoothing of the original passage.
[xvii] Reddit (2026). “Brutal images from Israel show Iranian Ballistic missiles destroying everything in their path”, retrieved from: https://www.reddit.com/user/Kidnxpperr/comments/1rjxz1l/the_footage_has_been_verified_as_genuine/.
[xviii] Gilbert, D. (2026). “Fake AI Content About the Iran War Is All Over X”, Wired, 10 March 2026, retrieved from: https://www.wired.com/story/fake-ai-content-about-the-iran-war-is-all-over-x/.
[xix] Oversight Board (2026). “Board calls for new rules on deceptive AI during conflicts”, 10 March 2026, retrieved from: https://www.oversightboard.com/news/board-calls-for-new-rules-on-deceptive-ai-during-conflicts/.
[xx] Zadrozny, B. (2026). “Iranian trolls are flooding social media with pro-Tehran, anti-war propaganda”, MS Now, 11 March 2026, retrieved from: https://www.ms.now/news/iran-propaganda-network-social-media.
[xxi] Helmore E. (2026). “Republicans fooled by AI-generated image of US airman rescued in Iran”, The Guardian, 6 April 2026, retrieved from: https://www.theguardian.com/us-news/2026/apr/06/republicans-ai-image-us-plane-member-rescue-iran.