Published 23 Apr 2026

Every ERP vendor is shipping AI agents this year. Every BI vendor is shipping AI copilots. If you run finance or operations, your calendar has demos for both, and the pitches do not quite fit together. Futurum has called this the agentic reasoning gap. Anyone who's spent a week tracking down why a variance moved knows the shape of the work it points to.
Agents act. Dashboards display. Neither reasons. And the reasoning step, the one that turns a number on a chart into a decision somebody can act on, is where most of the finance and operations day actually happens.
For a long stretch of my career I built the dashboards people are now trying to fit AI on top of. Close the month at an enterprise manufacturer, and the first question is always what moved. Cost of goods. Material variance. Overhead absorption. The dashboard would show the number, and a version of the same workflow kicked in every time. Someone pulled the detail from the ERP, reconciled it against the plan in the financial model, cross-checked it against plant-level production. Someone else wrote the memo. Then the slides. Then walking the CFO through it, usually the morning of the board pack.
Two or three days, per variance, every close. The dashboards were accurate and the reports were on time. The decisions still needed a person sitting in front of a spreadsheet, rebuilding the context every month, before anyone could act. That gap, between what the dashboard surfaced and what leadership actually needed to decide, is the thing I have been trying to close ever since.
Most of what finance and operations teams actually do lives between those two jobs. Repetitive execution (invoicing, payment chasing, journal posting, approvals that follow a rule) is agent work. Monitoring a defined metric against a defined target is dashboard work. Everything else, which is most of the calendar, is reasoning work: the variance investigations, the forecast revisions, the margin walks, the supplier risk calls. That is where most of the judgment, and most of the cost of a wrong call, lives.
Think of your AI stack as three layers doing three different jobs.
Agents execute defined tasks and are very good at it when the steps are stable and the rules are well understood. Use them where the workflow genuinely repeats; where agents stop, the next two layers begin.
Dashboards are the layer I know best, because I built them. A dashboard monitors what you decided to track against what you decided to track it against. Done well, it is the single best operational view a finance team has. Done routinely, which is most of the time, it is a map that shows you the variance and stops there.
Decision Intelligence is the layer most teams are still missing. It reads the same data the dashboard does, applies the business context the analyst would have applied, and returns a Playbook: a briefing with evidence, drivers, and a recommended next action. A finance leader can act on that, or push back on it, in a way a chart does not support.
The pattern I see most often in AI rollouts this year is a vendor pitching agents as the answer to a problem that is not agent-shaped. Leadership signs off on automating the workflow, the agent ships cleanly, and the quarterly readout reports a win. Six months later the underlying decision, the one the workflow was supposed to make faster, is still being made the same way: a senior analyst, a spreadsheet, three days. The automation was real; the improvement was not.
This shows up most clearly in organizations where AI readiness was framed as a data or tooling problem rather than a layer problem. Fix the data, the thinking goes, and AI will take it from there. The data may well need fixing. But if the reasoning layer is missing, cleaner data gets you faster dashboards, not better decisions.
The practical version of this is to stop picking tools by category. Start with the question you need answered and work backward to the layer that actually answers it. Most evaluations I see this year skip that step and jump to comparing vendors.
This is the gap I kept running into. First from the finance seat at a manufacturer, then from the inside at a BI company, watching enterprises buy more of the same thing and hope it would close a gap it was never built to close. At some point my co-founder and I decided the only way the missing layer was going to ship was if we built it. That is why we started eyko.
What we built was the missing layer. We call it a Playbook. It reads the same data your dashboard does, runs the investigation a senior analyst would have run, and returns a briefing a finance leader can act on, or push back on. What it does not do is make somebody rebuild the context from a spreadsheet the night before the board pack.

Jon Louvar is the COO and co-founder of eyko. He was previously VP of Product Marketing at insightsoftware and, before that, Manager of Financial Reporting at Silgan Containers, building BI and reporting platforms across finance, operations, and supply chain for enterprise organizations. At eyko he leads operations and delivery, translating customer insight into product execution.
What is the agentic reasoning gap?
It is the phrase Futurum Group has used to describe the missing layer between AI agents that execute tasks and dashboards that display data. Neither layer reasons about a business question. The agentic reasoning gap is where investigation, explanation, and recommended action sit, and where most finance and operations decisions are actually made.

Do we still need agents?
Usually yes. Agents are very good at executing tasks with stable steps and clear rules. They do not investigate variance, weigh drivers, or explain why a number moved. The reasoning layer is what produces the decision that the agent then acts on. Without it, you risk automating the workflow around a decision that was never quite right.

How is this different from AI search or BI copilots?
AI search and BI copilots answer questions about the data on a dashboard. They make exploration faster and are useful inside the BI layer. They do not produce a structured investigation with drivers and a recommended action. Decision Intelligence sits above both and closes that gap.

Where do Playbooks fit?
Playbooks are the product surface for the reasoning layer. A Playbook investigates a question against live data, identifies the drivers, and returns a briefing with evidence and a recommended next action. It sits above dashboards and alongside agents, and it produces the output that neither of those layers is built for.

How should we evaluate tools across these layers?
Start with the question you need answered. If the answer is to do a task faster, an agent is the right fit. If it is to monitor a metric, a dashboard is the right fit. If it is to understand why something changed and what to do about it, you are in reasoning territory and neither agents nor dashboards will close the loop. Match the tool to the layer the question actually lives in.