Every few months, another survey lands in a finance executive's inbox confirming what they already suspect: Agentic AI, which refers to artificial intelligence systems that can act autonomously to perform tasks, is coming for the spreadsheet, the reconciliation report and the quarterly close.
The technology, the vendors promise, will do in minutes what analysts spent days on. And the executives nod, approve the budget line, and then... wait. For the governance framework. For IT to sort out the ERP integration. For someone, somewhere, to figure out what happens when an autonomous system touches financial data it wasn't supposed to touch.
That gap -- between what finance leaders say they want and what their organizations have actually built -- is the subject of a new report from Savant Labs, which surveyed director-level and above finance, tax, and accounting leaders at North American enterprises across 22 industries. The results are less a portrait of a sector charging into the AI future than one that keeps circling the runway, unable to land.
While 76% of respondents say 2026 is the year for strategic agentic automation investment, only 30% have functional pilots underway. A mere 6% have reached anything resembling enterprise-wide implementation. The rest are running isolated experiments or haven't started at all -- not because the money isn't there, but because the trust isn't.
When companies talk about what's blocking AI adoption, the conversation usually gravitates toward cost, murky ROI projections, or a talent pipeline that can't keep pace with demand. In the finance industry, these concerns are largely irrelevant. More than 60% of leaders ranked data governance and security -- alongside ERP integration complexity -- as their primary barriers to widespread adoption, a concern that registered five times higher than cost and ROI combined.
Finance is a domain with a very short memory for mistakes. An autonomous AI agent that misclassifies transactions, generates a flawed tax filing, or reconciles the wrong accounts doesn't just create a messy Monday morning -- it creates regulatory exposure, audit findings, and the kind of institutional embarrassment that ends careers.
So when finance leaders hesitate, it isn't technophobia. It's a rational response to the specific weight of consequences in their world, where every output needs to be explainable, traceable and defensible to someone who isn't impressed by benchmark scores.
ERP systems sharpen this anxiety considerably. These are often the oldest, most load-bearing pieces of enterprise infrastructure -- systems that have been customized, patched, and jury-rigged over decades, and that touch nearly every financial process an organization runs.
Nearly a quarter of leaders cited ERP integration complexity as a top barrier. The reason wasn't that integration is technically impossible, but that threading agentic automation through that architecture without unraveling something critical requires a level of institutional coordination most organizations haven't yet managed to pull off.
The report's findings on headcount deserve more attention than they'll probably get. More than 80% of finance leaders expect no net change in staffing levels through 2026, positioning AI as a productivity multiplier rather than a workforce reduction tool. Only 3% anticipate net job growth.
That has become something of a corporate comfort blanket, deployed whenever the conversation turns to displacement. And to be fair, in many cases, it reflects genuine intent.
But it also papers over a more uncomfortable reality: When headcount stays flat while AI handles growing volumes of work, the pressure on individual contributors doesn't disappear. It just becomes invisible, absorbed into expanded role expectations and performance standards that quietly move upward. So, even though the jobs stay, they get harder to do at the level now required.
The report reveals a meaningful split between different corners of the finance function, and it tracks exactly the way you'd expect. Accounting and finance operations teams -- where the work is high-volume, rules-driven and repetitive -- have moved with more confidence because the use cases are legible and the outcomes are measurable. Automated reconciliations, scheduled reporting and month-end accruals: These are workflows where an agent can demonstrate clear value without wandering into territory that makes the legal team nervous.
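Part of what makes reconciliation so legible a use case is that, at its core, it reduces to deterministic matching with a clear escalation rule: pair entries, flag what doesn't pair. A minimal sketch, with entirely hypothetical data and field names, might look like this:

```python
# Minimal reconciliation sketch: match ledger entries to bank entries
# by (reference, amount) and surface anything unmatched for human review.
# All records, field names and references here are hypothetical.

def reconcile(ledger, bank):
    """Return (matched pairs, unmatched ledger entries, unmatched bank entries)."""
    bank_index = {(e["ref"], e["amount"]): e for e in bank}
    matched, unmatched_ledger = [], []
    for entry in ledger:
        key = (entry["ref"], entry["amount"])
        if key in bank_index:
            matched.append((entry, bank_index.pop(key)))
        else:
            unmatched_ledger.append(entry)  # goes to a human, not an agent
    return matched, unmatched_ledger, list(bank_index.values())

ledger = [
    {"ref": "INV-100", "amount": 1200.00},
    {"ref": "INV-101", "amount": 450.50},
]
bank = [
    {"ref": "INV-100", "amount": 1200.00},
    {"ref": "INV-102", "amount": 80.00},
]

matched, open_ledger, open_bank = reconcile(ledger, bank)
print(len(matched), len(open_ledger), len(open_bank))  # prints: 1 1 1
```

The appeal for finance operations teams is exactly this measurability: every run produces a countable set of matches and exceptions, so an agent's performance can be audited line by line.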
Tax is a different story entirely. The caution there isn't reluctance for its own sake; it's a clear-eyed reading of the stakes. Tax errors carry regulatory consequences that extend well beyond any single department, and the audit trail requirements are specific and unforgiving.
Any AI system operating in that space needs to produce not just accurate outputs but fully traceable ones: the kind that can survive an IRS examiner's scrutiny rather than simply satisfying a CFO's dashboard. CPA Trendlines recently noted that regulators and clients alike are already pushing for more explainability and accountability as AI moves deeper into tax and audit workflows -- a pressure that will only intensify.
The broader industry data suggests this dynamic -- ambition running well ahead of execution -- is far from unique to finance. According to KPMG's Quarterly AI Pulse Survey, agent deployment more than doubled across enterprises through 2025, but governance and security remained persistent drags on scaling.
A separate analysis by Neurons Lab, drawing on research from Deloitte, McKinsey and PwC, found that 99% of companies plan to put agents into production, yet only 11% have done so -- blocked by the same data governance and security concerns showing up in Savant's data. The ambition-to-execution ratio holds stubbornly across industries and research firms alike.
What separates the 6% who have reached genuine enterprise-wide implementation from the majority still circling isn't a bigger AI budget or a more sophisticated vendor relationship. It's sequencing.
The organizations scaling agentic automation in finance built the governance infrastructure first -- audit trails, explainable outputs, role-based controls and clear escalation paths -- and treated those elements not as compliance obligations to be satisfied before launch but as parts of their core architecture. According to industry experts, that unglamorous foundation is what makes everything else trustworthy enough to actually use.
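What "governance first" means in practice can be sketched simply: every agent action passes through a wrapper that checks role-based permissions, escalates anything outside the agent's mandate, and writes an audit record either way. The role names, threshold and actions below are illustrative assumptions, not a reference design:

```python
import time

# Hypothetical governance wrapper for an agent action. Everything here --
# role names, the permission map, the escalation threshold -- is an
# illustrative assumption, not a prescribed architecture.

AUDIT_LOG = []  # append-only trail; in production this would be durable storage
ROLE_PERMISSIONS = {"reconciliation_agent": {"match_entries", "post_accrual"}}
ESCALATION_THRESHOLD = 10_000.00  # amounts above this route to a human

def governed_action(agent_role, action, amount):
    """Run an agent action through role checks, escalation, and audit logging."""
    record = {"ts": time.time(), "agent": agent_role,
              "action": action, "amount": amount}
    if action not in ROLE_PERMISSIONS.get(agent_role, set()):
        record["outcome"] = "denied"       # role-based control
    elif amount > ESCALATION_THRESHOLD:
        record["outcome"] = "escalated"    # clear escalation path
    else:
        record["outcome"] = "executed"
    AUDIT_LOG.append(record)               # every decision is traceable
    return record["outcome"]

print(governed_action("reconciliation_agent", "post_accrual", 500.00))     # executed
print(governed_action("reconciliation_agent", "post_accrual", 50_000.00))  # escalated
print(governed_action("reconciliation_agent", "file_tax_return", 100.00))  # denied
```

The point of the sketch is the ordering: the denied and escalated outcomes exist before the agent is trusted with anything, and the audit log captures refusals as faithfully as successes.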
An AI system that produces accurate reconciliations no one can audit is, for practical purposes, useless at scale in a regulated environment. While AI's capabilities continue to advance impressively, what most organizations are still working out is the organizational scaffolding -- the policies, the internal alignment and the cultural willingness to slow down long enough to earn trust in the system before asking it to do more.
For finance leaders watching their more aggressive peers stumble into governance problems mid-deployment, the Savant data offers a useful perspective: The organizations that will look smartest in three years aren't necessarily the ones that moved fastest. They're the ones that understood early that finance has always been, at its core, a trust business -- and that no amount of automation can change what happens when that trust breaks.