When Iranian drones struck Amazon Web Services data centers in the UAE and Bahrain in early March 2026[i], millions in Dubai and Abu Dhabi awoke to locked-down digital lives. Suddenly, people found they “couldn’t pay for a taxi, order dinner, or check their bank balance” on their phones, according to reporting on the strikes.[ii] Online, citizens scrambled for explanations: was it a cyberattack, a spillover of war, or just a fluke? The reality was clear: the war had come to the cloud. The public confusion over disrupted data centers highlights how modern Middle Eastern conflicts have quietly extended into cyberspace and increasingly involve AI-driven operations.
This episode is far from isolated. Around the same time, cybersecurity analysts reported a spate of digital incidents linked to the Israel–Iran confrontation. Hacktivist groups on Telegram boasted of new cyber strikes on ports and ministries, old leaks of Israeli infrastructure reappeared on dark forums, and official networks came under sustained probing. The United Kingdom’s National Cyber Security Centre (NCSC) even warned[iii] organizations with Middle East ties to brace for Iranian cyberattacks amid the escalation. What we are witnessing may represent the first large-scale test of AI-enabled cyber warfare, where algorithms assist states in targeting infrastructure, conducting cyber operations, and shaping the strategic environment of war.
AI in warfare
In effect, the region’s war machine now runs as much on algorithms as on artillery. We are seeing a structural shift: states are beginning to use automated cyber tools as strategic assets, an early test of what it means when governments delegate parts of warfare to machines and code.
Behind these attacks is technology advancing at machine speed. Today’s AI systems help adversaries at every stage of a cyber campaign. During reconnaissance, machine-learning algorithms can scan millions of internet-connected devices in seconds, identifying vulnerable targets such as exposed routers, servers or IoT cameras. Indeed, researchers observed Iranian-affiliated actors sweeping through[iv] thousands of Hikvision and Dahua security cameras in Israel and Gulf countries for known vulnerabilities, that is, flaws that exploit code can take advantage of. Compromised cameras have reportedly been used to surveil[v] sites like Israel’s Weizmann Institute just before missile strikes. AI supercharges an adversary’s ability to locate such sites. Modern AI systems can analyze massive streams of data (satellite imagery, intercepted communications, online activity and network traffic) to identify patterns or vulnerabilities that human analysts might miss. These capabilities allow military planners to generate potential targets, evaluate risk scenarios and simulate possible enemy responses within minutes.
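To make the mechanics concrete, the sketch below shows, in a defensive audit framing, what this kind of device fingerprinting looks like in practice: given a list of hosts, it grabs HTTP banners and flags devices whose responses resemble known camera firmware. The signature strings, ports and addresses are illustrative assumptions, not a reconstruction of any actor’s tooling; real scanners use far richer fingerprint databases.

```python
# Minimal sketch of internet-device fingerprinting, framed as a defensive
# audit: grab HTTP response headers from each host and flag devices whose
# banners match embedded camera firmware. All values are illustrative.

import socket

# Banner substrings loosely associated with camera web servers (assumption).
CAMERA_SIGNATURES = [b"Hikvision", b"Dahua", b"DVRDVS-Webs", b"App-webs"]

def grab_banner(host: str, port: int = 80, timeout: float = 3.0) -> bytes:
    """Send a bare HTTP HEAD request and return the raw response headers."""
    with socket.create_connection((host, port), timeout=timeout) as sock:
        sock.sendall(b"HEAD / HTTP/1.0\r\nHost: " + host.encode() + b"\r\n\r\n")
        return sock.recv(4096)

def audit(hosts: list[str]) -> list[str]:
    """Return the subset of hosts whose banners look like camera firmware."""
    flagged = []
    for host in hosts:
        try:
            banner = grab_banner(host)
        except OSError:
            continue  # unreachable or filtered; skip
        if any(sig in banner for sig in CAMERA_SIGNATURES):
            flagged.append(host)
    return flagged

if __name__ == "__main__":
    # Only scan address ranges you own or are authorized to audit.
    print(audit(["192.0.2.10", "192.0.2.11"]))  # RFC 5737 example addresses
```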
In the current conflict, U.S. and Israeli forces reportedly used AI systems integrated into intelligence platforms to process surveillance data and identify more than 1,000 potential targets within the first 24 hours of military operations.[vi]
In the attack-delivery phase, generative AI is proving especially dangerous. Large language models are now routinely used to craft highly convincing phishing messages and malware lures in Arabic, Hebrew, Persian and English alike. Palo Alto Networks’ Unit 42 reports that Iran-linked groups use “AI-enhanced targeted spear-phishing campaigns,” generating payloads that adapt tone[vii] and context to their victims. For example, in March 2026 CloudSEK analysts discovered a fake Israeli “Red Alert” missile-warning app distributed via SMS: a malware-laced APK, sideloaded by panicked users, that steals[viii] SMS messages, contacts and precise GPS locations under the guise of a war emergency. This trojanized emergency app, which victims downloaded only to stay safe, became a high-value spy tool. AI’s ability to write realistic messages and mimic official software flows makes such attacks far more scalable and stealthy than old-school scripts.
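The spyware side of such a campaign can be illustrated with a simple triage heuristic. The sketch below is a hypothetical example, not CloudSEK’s actual methodology: it takes an APK’s declared permissions (as extracted with a tool such as `aapt dump permissions app.apk`) and flags combinations that a genuine missile-alert app has no business requesting, matching the SMS, contact and GPS theft reported for the fake app. The permission sets and threshold are assumptions.

```python
# Hypothetical triage sketch for a trojanized "alert app": flag permission
# combinations that do not fit the app's claimed purpose. Illustrative only.

# Permissions a legitimate alert app plausibly needs (assumption).
EXPECTED = {
    "android.permission.INTERNET",
    "android.permission.POST_NOTIFICATIONS",
    "android.permission.RECEIVE_BOOT_COMPLETED",
}

# Permissions matching the spyware behaviour reported for the fake app:
# reading SMS, harvesting contacts and tracking precise location.
RED_FLAGS = {
    "android.permission.READ_SMS",
    "android.permission.RECEIVE_SMS",
    "android.permission.READ_CONTACTS",
    "android.permission.ACCESS_FINE_LOCATION",
    "android.permission.ACCESS_BACKGROUND_LOCATION",
}

def triage(permissions: set[str]) -> tuple[str, set[str]]:
    """Label an APK's permission set and return the suspicious subset."""
    suspicious = permissions & RED_FLAGS
    # Two or more red flags is our (arbitrary, illustrative) threshold.
    verdict = "SUSPICIOUS" if len(suspicious) >= 2 else "OK"
    return verdict, suspicious

if __name__ == "__main__":
    sample = EXPECTED | {"android.permission.READ_SMS",
                         "android.permission.READ_CONTACTS",
                         "android.permission.ACCESS_FINE_LOCATION"}
    print(triage(sample))  # -> ('SUSPICIOUS', {...})
```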
Another emerging development is the blending of digital and physical warfare. Analysts now describe a category of operations known as kinetic cyberattacks: cyber operations that directly trigger physical disruption or accompany military strikes.
The recent targeting of cloud infrastructure illustrates this shift. During the conflict, Iranian drone strikes reportedly targeted commercial data centers used by major cloud providers in the Gulf, disrupting digital services for millions of users in the UAE and Bahrain.
On the flip side, governments and companies also use AI for defense. Machine-learning systems now power intrusion detection, log analysis and automated response. In the face of thousands of alerts per second from critical networks, AI filters help human analysts prioritize real threats, and faster containment of breaches owes much to these tools. Indeed, industry data suggests that average breach costs are now declining because of quicker detection: an IBM report found[ix] that faster, AI-driven response produced the first decline in average breach cost in years in 2025. Still, almost all organizations hit by recent AI-aided attacks lacked adequate countermeasures, according to the report. In short, AI both sharpens the sword and powers the shield.
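As an illustration of what such defensive filtering involves, here is a toy anomaly detector of the kind these pipelines build on: an unsupervised model (scikit-learn’s IsolationForest) trained on features derived from baseline traffic, used to push outliers to the top of an analyst’s queue. The features, data and contamination rate are synthetic assumptions for illustration.

```python
# Toy ML-assisted alert triage: train an unsupervised anomaly detector on
# "normal" network-log features, then score new events so analysts see the
# outliers first. All data here is synthetic.

import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(seed=0)

# Feature vector per event: [requests/min, bytes out (KB), failed logins].
# Baseline traffic clusters around typical values (assumed).
normal = rng.normal(loc=[60, 200, 1], scale=[10, 50, 1], size=(5000, 3))

# Train on baseline; `contamination` sets the expected outlier fraction.
detector = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# New events: one ordinary, one resembling exfiltration plus brute force.
events = np.array([
    [62, 210, 0],      # looks like baseline traffic
    [900, 9000, 40],   # huge request rate, large outbound volume, many failures
])
print(detector.predict(events))  # 1 = normal, -1 = anomalous -> [ 1 -1]
```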
The advantage is clear: automated tools remove human delay and bias. AI defends by continuously monitoring and learning normal behavior, potentially catching intruders that slip past routine checks. Offensively, AI enables a tiny team to launch hundreds of attacks at once or tailor exploits in real time. In a rapidly evolving conflict, speed and scale can be decisive.
The governance challenge of AI tools
But this technological edge comes with a stark governance problem. AI systems make mistakes as well as miracles, and in war, those mistakes can be catastrophic. If an AI falsely identifies a safe server as hostile or misreads flight patterns, entire networks or drones might be shut down unnecessarily. Citizens feel this when security cameras malfunction or data disappear and no one can explain why. The Guardian noted how Gulf investment in AI could be undermined by attacks on data centers, bluntly calling[x] it the “new frontier in asymmetric warfare.” The Emirates’ push to be an AI superpower suddenly raised urgent security questions. As a US expert observed, UAE authorities “will have to resolve [these] very quickly” after the strikes on their cloud hubs.[xi]
The problem is not AI itself so much as the lack of oversight. In many Middle Eastern states, cyber and AI strategies emphasize ambition over guardrails. An academic analysis of Gulf countries’ AI policies describes[xii] a “soft regulation” approach: grand national AI plans and ethical principles are laid out, but with few binding rules or enforcement mechanisms. In practice, this means AI and cyber units can operate in legal gray zones. When an autonomous intrusion-detection system blocks a legitimate foreign guest or an automated counterstrike goes awry, there is often no one to hold publicly accountable. Unlike mature AI governance in some Western contexts (for example, the EU’s AI Act or the auditing requirements being tested elsewhere), states in the region rarely subject their cyber-AI tools to independent review or require transparency about their operation.
This erosion of trust is dangerous. In wartime a government’s first duty is protection, but if the tools it deploys cannot be audited or appealed, public confidence vanishes. People begin to wonder: who authorized this kill-switch on our banking app? Which ministry signed off on scanning my home camera? A lack of transparency means citizens see only an opaque “system” governing their lives. Accountability crumbles when no law clearly assigns blame for AI mistakes. Was it the security agency, the software provider, or the commander who pressed ‘go’?
Meanwhile, other nations are grappling with these questions. Japan, the EU and even China are putting rules in place on AI transparency and third-party audits; the US has military ethics guidelines for autonomous systems. The Gulf is not leading here, even as it leads in cloud infrastructure. In the global AI arms race, Gulf states have tied themselves tightly to Silicon Valley’s networks. Microsoft has poured[xiii] over $15 billion into the UAE’s AI projects, AWS is building multi-billion-dollar data hubs in Saudi Arabia, and Google just announced a $10 billion AI cloud center there. All this investment presumes stability. But as the recent drone strikes showed, the very technology they build can become a target.
For now, Middle East policymakers must act. First, they need clear cyber-AI accountability frameworks: laws or charters that define how AI may be used in national security, including required accuracy levels and response protocols. Independent audits of AI-driven cyber tools should be mandated, even if conducted behind closed doors, to catch subtle flaws. Second, citizens should have a legal right of appeal against automated security actions: for example, the right to see evidence and contest an AI’s conclusion in a cyber incident. Third, transparency is essential: public dashboards or reports on AI security systems could track metrics like false positives, incident rates and reversals, helping rebuild trust through data (a sketch of such metrics follows below). Finally, technology cannot be divorced from strategy: regional powers should embed cyber-AI tactics in a broader smart-defense plan. This means training and retaining local talent, so that “AI defense” is not a black box run by foreign contractors but a sovereign capability.
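To show how lightweight the transparency piece could be, the sketch below computes the kind of dashboard figures suggested above (review, false-positive and reversal rates) from a log of automated security actions. The record fields and sample data are hypothetical assumptions, not any agency’s actual schema.

```python
# Hypothetical transparency-dashboard metrics for automated security actions.
# Field names and sample data are assumptions for illustration.

from dataclasses import dataclass

@dataclass
class ActionRecord:
    action_id: str
    automated: bool        # taken by the AI system without a human decision
    reviewed: bool         # examined after the fact by a human auditor
    false_positive: bool   # review concluded the trigger was benign
    reversed: bool         # action was rolled back after appeal or review

def dashboard_metrics(records: list[ActionRecord]) -> dict:
    """Aggregate the figures a public dashboard might publish."""
    auto = [r for r in records if r.automated]
    reviewed = [r for r in auto if r.reviewed]
    return {
        "automated_actions": len(auto),
        "review_rate": len(reviewed) / len(auto) if auto else 0.0,
        "false_positive_rate": (sum(r.false_positive for r in reviewed)
                                / len(reviewed) if reviewed else 0.0),
        "reversal_rate": (sum(r.reversed for r in auto)
                          / len(auto) if auto else 0.0),
    }

if __name__ == "__main__":
    log = [
        ActionRecord("a1", True, True, False, False),
        ActionRecord("a2", True, True, True, True),   # benign trigger, rolled back
        ActionRecord("a3", True, False, False, False),
    ]
    print(dashboard_metrics(log))
```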
These steps must be realistic for developing and transitional states. They rely more on governance and law than on throwing money at new weapons. Even without expensive hardware, a country can insist that its AI programs include human-in-the-loop oversight or periodic red-teaming by outside experts. Small states in the Gulf and beyond can form alliances or share best practices on AI cyber controls, much as they share anti-missile batteries.
Conclusion
In the end, the AWS strikes and the social-media uproar signal a turning point. The Middle East’s next wars will mix drones and servers, pipelines and processors. How governments respond now will shape their future use of AI. As one analyst warned[xiv], being good at AI is one thing; protecting it is another. The technology will not stop evolving, but strong governance can ensure it does not slip out of our control. The war between Israel, the United States and Iran, with Gulf states entangled in its geopolitical orbit, may ultimately be remembered not only for its missiles and drones, but also as the moment when AI-powered cyberwar became a permanent feature of global conflict. And if that is the case, the most decisive battles of the next generation may be fought not only in the skies or on the ground, but in code as well.
[i] Boffey, D. (2026). “‘It means missile defence on datacentres’: drone strikes raise doubts over Gulf as AI superpower”, The Guardian, 7 March 2026, retrieved from: https://www.theguardian.com/world/2026/mar/07/it-means-missile-defence-on-data-centres-drone-strikes-raises-doubts-over-gulf-as-ai-superpower.
[ii] Ibid.
[iii] Gatlan, S. (2026). “UK warns of Iranian cyberattack risks amid Middle-East conflict”, Bleeping Computer, 2 March 2026, retrieved from: https://www.bleepingcomputer.com/news/security/uk-warns-of-iranian-cyberattack-risks-amid-middle-east-conflict/.
[iv] Paganini, P. (2026). “Iran-linked hackers target IP cameras across Israel and Gulf states for military intelligence”, Security Affairs, 7 March 2026, retrieved from: https://securityaffairs.com/189069/cyber-warfare-2/iran-linked-hackers-target-ip-cameras-across-israel-and-gulf-states-for-military-intelligence.html.
[v] Ibid.
[vi] Copp, T. et al. (2026). “Anthropic’s AI tool Claude central to U.S. campaign in Iran, amid a bitter feud”, The Washington Post, 4 March 2026, retrieved from: https://www.washingtonpost.com/technology/2026/03/04/anthropic-ai-iran-campaign/.
[vii] Unit 42 (2026). “Threat Brief: March 2026 Escalation of Cyber Risk Related to Iran”, 2 March 2026, retrieved from: https://unit42.paloaltonetworks.com/iranian-cyberattacks-2026/.
[viii] CloudSEK (2026). “RedAlert Trojan Campaign: Fake Emergency Alert App Spread via SMS Spoofing Israeli Home Front Command”, 3 March 2026, retrieved from: https://www.cloudsek.com/blog/redalert-trojan-campaign-fake-emergency-alert-app-spread-via-sms-spoofing-israeli-home-front-command.
[ix] Gupta, K. and Rourke, D. (2026). “AI vs. AI: The arms race for security”, J.P. Morgan, 27 February 2026, retrieved from: https://privatebank.jpmorgan.com/nam/en/insights/markets-and-investing/tmt/ai-vs-ai-the-arms-race-for-security.
[x] Boffey, D. (2026). “‘It means missile defence on datacentres’: drone strikes raise doubts over Gulf as AI superpower”.
[xi] Ibid.
[xii] Ibid.
[xiii] Reuters (2026). “Escalating tensions turn spotlight on Big Tech’s AI investments in Middle East”, 2 March 2026, retrieved from: https://www.reuters.com/business/retail-consumer/escalating-tensions-turn-spotlight-big-techs-ai-investments-middle-east-2026-03-02/.
[xiv] Boffey, D. (2026). “‘It means missile defence on datacentres’: drone strikes raise doubts over Gulf as AI superpower”.