AI’s invisible risk: Why CISOs see 2026 as a turning point

Artificial intelligence is accelerating faster than any security framework, governance model or organisational maturity curve. And while AI promises efficiency and augmentation, several Australian CISOs say the real challenge isn’t the technology itself; it’s the lack of visibility behind it.

Joe Cozzupoli, Principal Security Advisor and Field CISO for Cosive, describes 2026 as the first true “AI-accelerated threat era”, where defenders will face machine-generated phishing, adaptive malware and automated attack paths that shift faster than traditional controls can detect.

“We’re moving into a world where SOCs [security operations centres] must become intelligence-driven,” he explains, emphasising the pairing of contextual threat intelligence with AI-assisted analytics. “Leaders who invest early in cyber threat intelligence alignment, telemetry and governance will gain the visibility needed to operate confidently; those who wait risk being overwhelmed.”

Ahmed Hussein, Head of Security Architecture – Federal Government for ARCA Information Security, frames it even more bluntly: AI is currently a visibility black hole for data. Australian organisations, he says, are using AI systems without understanding where their information is stored, processed or replicated. Many are already pasting sensitive data into AI tools for convenience and discovering too late that their inputs may have been retained or transferred. With no clear frameworks and no mature AI assurance models, data leakage becomes not a theoretical risk but a present, growing one.

Leslie Nagy, Head of Cyber Security for Gumtree Group, echoes this uncertainty, but views the risk through the lens of workforce impact. “As AI automates routine SOC work – summarising alerts, triaging events, assembling incident reports – the entry-level talent pipeline begins to shrink,” he says.

Without early-career roles, future defenders may lack the foundational experience needed to handle complex incidents or novel threats. It’s the long-term structural impact, not the short-term efficiency gain, that concerns Nagy most: “AI doesn’t just change tools, it changes the shape of the team.”

Arun Singh, Chief Information Security Officer for Tyro Payments, adds a governance dimension. The surge in SaaS (software as a service), third-party enrichment tools and AI-enabled integrations has created fragmented data flows and heightened supply chain risk. Organisations now rely on distributed systems with uneven maturity, variable transparency and inconsistent evidence practices. Without clear visibility into what vendors collect, store or train models on, boards cannot fully understand their exposure, let alone assure it.

After speaking with all of these leaders, one clear theme emerged: AI is not just a technology shift; it’s a visibility shift. Where data goes, how it is used, how decisions are made and how attackers adapt are all becoming more opaque.

And they all agree on three imperatives:

  1. Invest in AI governance before AI adoption.
  2. Embed CTI-driven decision loops into every function.
  3. Prepare the workforce now for an AI-augmented future, not an AI-dependent one.

AI will redefine cybersecurity in 2026, but only for organisations prepared to move from blind trust to informed oversight.

This article was originally featured in Connector magazine, published by the Australian Information Industry Association (AIIA).
