The narrative around enterprise artificial intelligence (AI) has largely focused on productivity, automation and the race to deploy copilots into every workflow. But across conversations with four senior Australian cyber leaders, a far more urgent truth came into view: AI isn’t just software anymore – it is becoming an identity inside the enterprise.
AI agents are now reading data, triggering workflows, making decisions, interpreting policy, synthesising context and sometimes acting faster than any human could. Yet while organisations carefully manage human onboarding, access rights, entitlements and role-based controls, most have no equivalent structure for AI.
The result is a range of quietly escalating risks: AI has privileges without accountability, access without ownership, influence without auditability. And almost nobody is managing it with the discipline it requires.
Identity for humans is mature; identity for AI barely exists
Michael Hamilton, Director of Cybersecurity, Data & Analytics for Dipole Group, was the most direct: Leaders must now treat AI “like another user”. That means every agent needs an identity, an access profile, an owner and audit trails, just like a human employee.
Michael sees this first-hand in his work across telecom, government and large enterprise environments: “When AI agents are granted broad access, they can traverse systems at machine speed, pulling insights across boundaries no-one intended.” He argues for a three-pillar architectural model – public models for external discovery, private open-source models for internal sensitive domains, and specialised SaaS AI for narrow use cases – each with isolation, segregation and controlled identity boundaries.
His point is simple yet profound: If an AI system can touch data, it must be treated as an identity, not a utility.
Poor data pipelines turn AI into an unmanaged insider
Amit Yadav, vCISO and Principal Cybersecurity Executive Advisor for Verizon Business, approaches the same problem from the data plane. He warns that organisations still underestimate the risks of uncontrolled ingestion: the way logs, events, content and structured/unstructured data flow into AI systems.
He has long argued against “ingest everything” patterns, which he saw inflate SIEM costs and collapse observability programs. Instead, he champions careful pipeline design: filter, normalise, store cheaply and only forward what is needed.
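The filter-normalise-forward pattern Amit describes can be sketched in a few lines. This is a minimal illustration only; the field names, severity scale and forwarding rules are hypothetical assumptions, not any specific SIEM's schema.

```python
# Illustrative "filter, normalise, forward" pipeline sketch.
# Field names, severity scale and rules are hypothetical, not a product schema.

def normalise(event: dict) -> dict:
    """Map a raw event onto a minimal common schema."""
    return {
        "source": event.get("src", "unknown"),
        "action": event.get("action", "").lower(),
        "severity": int(event.get("sev", 0)),
    }

def should_forward(event: dict) -> bool:
    """Forward only security-relevant events; everything else goes to cheap storage."""
    return event["severity"] >= 3 or event["action"] in {"login_failed", "privilege_change"}

def pipeline(raw_events):
    forwarded, archived = [], []
    for raw in raw_events:
        event = normalise(raw)
        (forwarded if should_forward(event) else archived).append(event)
    return forwarded, archived

raw = [
    {"src": "vpn", "action": "LOGIN_FAILED", "sev": 2},
    {"src": "app", "action": "page_view", "sev": 0},
    {"src": "iam", "action": "privilege_change", "sev": 4},
]
fwd, arch = pipeline(raw)
# The failed login and the privilege change are forwarded; the page view is archived.
```

The point of the sketch is the shape, not the rules: decide centrally what is worth forwarding before anything reaches an expensive downstream consumer, whether that is a SIEM or an AI agent.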
Applied to AI, the consequences become stark.
When employees paste internal content into public tools – or when agents are wired directly into production systems – the AI effectively becomes an undeclared insider. It has access, context and influence, without the organisation ever granting it an identity, an owner or a governance path.
Amit stresses that cyber must now be framed in business-outcome language: leaders need to understand not only what AI can do, but what it is allowed to do, and what business value its identity actually delivers.
SOC teams already drowning in identity sprawl will feel this first
Chief Information Security Officer Bradley Busch brings a grounded operational lens. His perspective comes from years inside SOCs and uplift programs, where visibility, identity sprawl and response times are already stretched thin.
Bradley explains that modern environments generate overwhelming identity complexity: human users, service accounts, SaaS entitlements, cloud roles, ephemeral tokens, keys that never expire, legacy accounts that were never deprovisioned. Detection engineering teams are already working at capacity just keeping up with privilege creep.
AI changes the scale equation entirely.
Agents don’t just add one more identity. They add dozens: one for the model endpoint, one for each integration, more for automation layers, data pipeline connectors and every environment the agent touches.
Every agent becomes both a consumer and generator of signals – and, unless governed, SOC teams inherit a new form of ‘shadow identity’. As Bradley puts it: “Uplift efforts fail not because teams lack tools, but because organisations underestimate operational load. AI identities multiply that load.”
Boards will soon treat AI identity as a governance failure, not an IT issue
For Andrew Brown, Virtual CISO, Cybersecurity GRC & Privacy for NexusCyber, the conversation elevates to the strategic plane: Identity is no longer an IAM housekeeping function; it’s a governance obligation.
Andrew emphasises that boards increasingly expect security leaders to provide clear, quantifiable explanations of risk, maturity and capability gaps. In his experience, identity-related blind spots quickly become executive-level concerns because they affect compliance, resilience and business continuity.
AI introduces a new category of governance questions:
- Who approved the AI’s identity and permissions?
- What decisions can the AI influence or execute?
- How do we audit an agent’s behaviour?
- How do we demonstrate compliance when actions are probabilistic, not deterministic?
- Where does accountability sit when an AI identity makes a harmful decision?
For boards, these are not technical details; they are fiduciary responsibilities.
The organisations that lead will treat AI as a colleague, not a toy.
Australia is entering a phase where AI capability accelerates faster than organisational governance. The companies that thrive won’t be the first to deploy agents; they’ll be the first to control them.
Because once AI becomes a user, the question isn’t ‘What can it do?’, it’s ‘What should it be allowed to do — and who is accountable when it does it?’.
The path forward: Treat AI identity as a program, not a policy
If AI is a user, organisations must adopt the same discipline they apply to humans.
1. Establish an AI identity register
Every agent, model, integration, API key and workflow must have an owner, purpose and risk profile.
2. Enforce least-privilege access
Agents should start at zero privilege and gain access only through explicit approvals tied to business outcomes.
3. Segregate AI environments
Follow Michael’s multi-pillar model: public, private and specialised environments must be isolated.
4. Harden data pipelines
Amit’s guidance applies directly here: Filter, normalise and govern every flow to prevent accidental exposure.
5. Embed AI identity in SOC operations
Bradley’s warning is clear: Without detection rules, logging standards and audit controls, AI becomes invisible.
6. Make AI identity a board-level topic
Andrew’s governance lens ensures AI is tied to risk appetite, compliance and strategic outcomes, not IT experimentation.
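Taken together, steps 1 and 2 can be made concrete: a register entry per agent, with an accountable owner, and zero entitlements until an explicit, attributable approval grants them. The sketch below is illustrative only; the fields, agent names and approval flow are hypothetical assumptions, not a reference implementation.

```python
from dataclasses import dataclass, field

# Hypothetical AI identity register entry: every agent has an owner,
# a purpose and a risk profile, and starts with zero entitlements.
@dataclass
class AIIdentity:
    agent_id: str
    owner: str          # accountable human, as for any employee
    purpose: str
    risk_profile: str   # e.g. "low" / "medium" / "high"
    entitlements: set = field(default_factory=set)  # least-privilege: empty by default

    def grant(self, permission: str, approved_by: str) -> None:
        """Access is only ever added via an explicit, attributable approval."""
        print(f"{approved_by} approved '{permission}' for {self.agent_id}")
        self.entitlements.add(permission)

    def can(self, permission: str) -> bool:
        return permission in self.entitlements

# Hypothetical agent and permission names, for illustration only.
agent = AIIdentity("invoice-bot", owner="j.smith", purpose="AP triage", risk_profile="medium")
assert not agent.can("read:finance_db")           # zero privilege at creation
agent.grant("read:finance_db", approved_by="cfo")
assert agent.can("read:finance_db")               # access exists only after approval
```

In a real program the register would live in an IAM or CMDB system rather than in code, but the invariant is the same: no agent identity without an owner, and no entitlement without a recorded approval.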
This article was originally featured in Connector magazine, published by the Australian Information Industry Association (AIIA).

