
Your newest employee might have no LinkedIn profile. No badge. No contract. No HR briefing. But probably access to your strategy documents, your customer files, and your financial forecasts. Welcome to the age of AI agents.
Picture a Monday morning. A new hire walks in. HR has prepared it all: laptop, badge, signed contract, briefing, a coffee chat with her manager. Within a week she’ll have a performance objective, a mentor, and a spot on the org chart. It will typically take a few more weeks, if not months, before she has access to all the data she needs to do her job efficiently.
Now picture another arrival, on the same Monday morning. Somewhere in your Microsoft 365 environment, a marketing manager has just deployed an AI agent in Copilot Studio. It can read her SharePoint and OneDrive files, send emails on her behalf, summarise customer calls, and draft proposals. It started working the moment she clicked “publish.” No badge. No briefing. No manager. And quite possibly, no one in IT or compliance who knows it exists.
Multiply that scene by every team in your company, and you start seeing the problem.
A year ago, most Belgian companies were experimenting with a handful of Microsoft 365 Copilot licences. Today the same companies are quietly accumulating dozens, sometimes hundreds, of custom agents across Copilot Studio, Power Platform, and Azure AI Foundry, alongside a growing list of third-party agents from Salesforce, ServiceNow, and others.
We used to call this sort of thing “shadow IT.” But that term no longer fits. Shadow IT was, at worst, a rogue Dropbox account. Shadow AI is something completely different: it is software that acts. It reads, writes, decides, and sometimes talks to the outside world, often on behalf of an employee who clicked “deploy” without thinking about what comes next.
When we discuss AI governance with our customers, the uncomfortable truth is that most of them cannot answer one simple question: “How many AI agents are running in our environment right now, and what are they doing?”
Our introduction took a deliberately sharp turn, presenting agents as taking over full employee responsibilities. In reality, the shift that matters for most organisations is quieter.
It starts with simple AI chatbots. A chatbot answering HR policy questions instead of a help desk colleague. An agent summarising client files before a meeting instead of an analyst. A workflow that drafts a tender response from past proposals. None of these replaces a job. They replace a task. But they do it autonomously, often with access to data the original human had earned through years of role-based permissions.
Then come the agents that decide. Which support ticket gets escalated. Which quote gets approved. Which customer gets a promotional offer. Each individual decision is small, but the cumulative effect on accountability is not.
This is where the risk lives. Not in dramatic mass replacements, but in a steady drip of delegated decisions and unsupervised access to potentially sensitive information, spread across hundreds of agents that never had a manager to begin with.
It’s tempting to file all of this under “something for IT to sort out.” But that would be a mistake. Three risks land squarely on the desk of the C-suite.
The first is data exposure. AI agents often inherit by default the access rights of whoever created them. If your marketing manager has access to customer pricing and, through some legacy permission, a folder of HR salary data (and yes, this happens), her agent inherits both. That means one misconfigured agent forwarding the wrong document to the wrong partner can be a GDPR incident, reputational damage, and a very uncomfortable board meeting all at once.
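The inheritance problem can be sketched in a few lines. This is a hypothetical model, not any vendor’s actual API: the names `creator_permissions` and `agent_effective_access` are illustrative only. The point is that, absent a deliberately narrowed scope, the agent’s reach defaults to the union of everything its creator can touch.

```python
# Hypothetical illustration of default permission inheritance.
# An agent's effective access defaults to the union of all its
# creator's grants, including forgotten legacy ones.

creator_permissions = {
    "role_based": {"customer_pricing", "campaign_plans"},
    "legacy": {"hr_salary_folder"},  # leftover grant from a previous role
}

def agent_effective_access(creator: dict) -> set:
    """Union of every permission set the creator holds."""
    return set().union(*creator.values())

print(agent_effective_access(creator_permissions))
# The agent can now read the HR salary folder,
# even though its task never required it.
```

Scoping the agent down to what its task actually needs, rather than what its creator happens to hold, is the governance fix this sketch argues for.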
The second is accountability. Advanced agents make real decisions. They approve discounts, send quotes, classify complaints, escalate tickets. When something goes wrong, who is on the hook? The employee who deployed the agent? The IT team that allowed the platform? Under the EU AI Act and DORA, regulators expect you to know. “We weren’t sure” is not enough.
The third is cost. Every agent burns tokens, API calls, compute. Without governance, AI spend can grow faster than AI value. Some CFOs are watching their AI budgets explode month after month with no clear answer as to which agents are useful, which are dormant, and which are running in a loop nobody noticed.
Forget the abstract frameworks. The most useful way to think about AI agents is to treat them as what they functionally are: digital colleagues. And every HR department already knows the lifecycle.
There’s hiring and the job description. Who in your company is allowed to create and deploy an agent, with what approval? In most companies today the honest answer is: anyone with a Copilot Studio licence and ten minutes between meetings. And once an agent is live, what is it actually allowed to do? Which data can it touch, which external systems can it call? An agent without a clear scope is the digital equivalent of a freelancer with a master key and no end date.
There’s the performance review. You need to know what agents are actually doing, not what they were supposed to do. This is where tools like AvePoint AgentPulse, Microsoft Purview, and the Copilot Control System come into play. They surface agent activity at scale, flag anomalies, and let you audit behaviour after the fact.
And finally, there’s offboarding. When an agent is no longer needed, how do you know? And who shuts it down? In most organisations, nobody. Dormant agents keep their permissions indefinitely, waiting patiently for the day someone exploits them.
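The lifecycle above implies something most organisations don’t yet have: a registry that treats each agent like an employee record. The sketch below is a minimal, hypothetical model (all field and function names are ours, not any platform’s schema) showing how an offboarding check might flag dormant agents that still hold live permissions.

```python
# Sketch of an internal agent registry with a dormancy check.
# All names are hypothetical; this mirrors no vendor's schema.
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class AgentRecord:
    name: str
    owner: str            # the accountable human "manager"
    data_scope: set       # data sources the agent may touch
    approved_by: str      # who signed off on deployment
    last_activity: date
    decommissioned: bool = False

def find_dormant(registry, today, max_idle_days=90):
    """Offboarding check: live agents idle beyond the threshold."""
    return [a for a in registry
            if not a.decommissioned
            and (today - a.last_activity) > timedelta(days=max_idle_days)]

registry = [
    AgentRecord("tender-drafter", "j.peeters", {"past_proposals"},
                "it-governance", date(2025, 1, 10)),
    AgentRecord("hr-policy-bot", "m.claes", {"hr_policies"},
                "it-governance", date(2025, 6, 1)),
]

for agent in find_dormant(registry, today=date(2025, 7, 1)):
    print(f"Review or decommission: {agent.name} (owner: {agent.owner})")
```

Even this toy version answers the three lifecycle questions at once: who hired the agent, who manages it, and when it should be offboarded.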
European regulation is sharpening the urgency. The EU AI Act, DORA, and NIS2 each push organisations toward demonstrable oversight of automated systems. But regulation shouldn’t be the main reason to act. Reputation and money are at stake long before the auditor arrives.
Most leadership teams today genuinely don’t know. Not because they’re careless, but because AI tooling crept in and nobody has asked the right questions yet. The agents are already in the building, the budgets are already moving, and the assumption has been that someone, somewhere, has it under control.
That assumption is the problem.
A short governance assessment is the cheapest way to find out where you actually stand. We built a focused diagnostic that maps your maturity: how many agents you have, who owns them, what they can access, and where the gaps are, before someone else finds them for you.
Sixteen questions across four governance dimensions. Roughly five minutes. A personalised score, a breakdown by pillar, and three concrete recommendations at the end.
Every major technology shift eventually forces a new governance vocabulary on us. The industrial revolution gave us labour law. The digital revolution gave us GDPR. The agentic revolution is now demanding its own.
Leaders who treat AI agents as a curiosity for the IT team will spend the next two years catching up. Leaders who treat them as a new kind of workforce, with the rights and risks that implies, will spend the next two years getting ahead.
Your newest employee won’t be on LinkedIn. But she’s already at work. The only question worth asking is whether anyone is watching.