Let me paint a picture. It's Tuesday morning, and you just finished fourteen months of rebuilding the company's data stack. Numbers are cleaner than they've ever been. Reports that used to take a week show up in minutes. Three clean dashboards open on the CFO's laptop. And for the next forty-five minutes, the team has exactly the same argument about the sales pipeline they were having before any of that got built.

Why does the forecast meeting sound exactly the same after you've spent a year rebuilding the reporting layer?

I keep coming back to this question. Every finance team I know has better numbers than they did five years ago. Faster closes. Working dashboards. Data that people across the company can actually get to. And the Tuesday conversation sounds the same as it did before any of it got built.

This is the part of the transformation cycle nobody wants to say out loud. Gartner's own forecast is that by 2027, more than 70% of recently implemented ERP initiatives will fail to fully meet their original business case. A quarter will fail catastrophically. A separate Gartner analyst, after surveying 185 CFOs, put it plainly: digital initiatives aren't meeting CFO expectations.

Most commentary reads those numbers as an implementation failure. Wrong scope. Weak change management. Consultants oversold. I think they're saying something sharper. Most of these projects actually finished "successfully." What they missed was the step from "delivered" to the real business outcome. That's not an implementation problem. That's us building the wrong thing.

Every finance transformation I've watched is built around the data layer. Almost none of them touch the decision layer.

They're different things. The data layer is the numbers themselves. What exists, how clean it is, how fast it shows up, and who can get to it. That's what projects fund. The decision layer is where the business actually uses those numbers. Which meetings run on them. What documents they live in. How people debate them. Whether the assumptions you made before a forecast are the same assumptions you go back and check afterward.

You can rebuild every report in the company and still have the same Tuesday forecast meeting.

Here's what I mean, concretely. A head of finance walks into board prep already knowing the argument she needs to make. She's lived inside the numbers for three months. She can feel where the business is headed. She knows which thing the board should actually be worried about. Then she opens her model and finds it can't produce the cut of data that would show her point. So she ships the report the model can produce, not the argument that would have pushed the board toward a sharper decision. Everyone leaves the meeting thinking finance did its job. She leaves knowing she didn't.

No amount of cleaner numbers solves that. The data was great. The decision artifact never showed up.

The reason the decision layer stays neglected is that finance grades itself on the wrong thing.

Annie Duke has a term for this. She's a poker player who writes about decision-making, and in Thinking in Bets she calls it resulting: "when we make too tight a connection between the quality of an outcome and the quality of the decision that preceded it." Most finance meetings run on resulting. Variance review grades whether the number hit. We spend less time debating whether the reasoning behind the forecast held up.

Now picture the QBR, the variance review. Sales numbers hit the quarter. Everyone nods. Move on to marketing. Pipeline target was exceeded. The meeting ends twenty minutes later. Good quarter, team high-fives.

Nobody asked how the number hit. If you dig in with most teams, here's what actually happened. The pipeline velocity assumption was off. Deals were closing a week or two slower than the team had modeled. But a deal they'd written off for the quarter closed in the final two weeks. Two errors pointing in opposite directions. Netted to hit the number. The reasoning underneath was broken in two places, and the meeting treated it as a success. Next quarter, the team builds the next forecast with the same velocity assumption. Eventually, it stops getting canceled out, and everyone is confused about why the model fell apart.

The reverse happens too. A team makes a careful, well-reasoned forecast that ends up wrong because a customer nobody could have anticipated churned. They get graded on the miss. So they spend the next quarter adding conservatism to everything. Their reasoning was fine; the miss came from outside anyone's model. But the grading system can't tell the difference, so they're now making worse forecasts, slower, because they got punished for something that wasn't actually their mistake.

Two teams can hit a number, one by accident and one because the analysis was sharp. Two teams can miss, one because the market changed and one because they bet on the wrong thing. A function graded purely on the number can't tell those apart. And it can't improve.

If any of this feels familiar, the good news is that real decision-layer work can start today.

No project approval cycle. No new software. A handful of practices I've watched teams install one at a time.

One page at the top of every plan. Three headings: the bets we're making this quarter, what would have to be true for each one to work, and what we'd see if a bet is going wrong. The rest of the deck stays the same. Three hours of work the first time, forty-five minutes every quarter after. Boards that read this one-pager before the numbers have sharper conversations immediately, because they're arguing about what to believe rather than absorbing what already happened.

Five minutes of assumption review before any variance line. Open last month's plan. Read out loud the three assumptions it rested on. For each one, answer: did this hold, did it not, or did something change we didn't see coming? Only then look at variance. Variance tells you what happened. Assumptions tell you what you believe, and the beliefs are the things you can actually correct.

No splitting the difference on disagreements. If sales thinks the pipeline converts at 22% and finance modeled 18%, nobody gets to land at 20 to keep the peace. Someone has to be wrong. Pick one number. Both parties sign off on the specific observable thing that would tell them the number is wrong. Revisit in thirty days. Run this discipline for a year or two, and forecast accuracy tightens sharply. Not because anyone got better at forecasting, but because the team stopped averaging out its real disagreements.

A thirty-second rule for what goes in the model. If a line item takes more than thirty seconds to explain to a new analyst, it probably doesn't belong in the structured model yet. Keep it in a separate sheet. Document the logic in prose. Move it into the model when the logic is stable. Premature structure costs you a quarter of formalizing something that wasn't ready, only to tear it out when the business moves. Hybrid is fine. Premature structure is worse than no structure.

If something in here rang true, start with one of these four. My pick would be the assumption review before variance. It's the cheapest to try and the one that shifts the grading problem most directly. If it works for you, the rest follow.

The teams that make these choices deliberately will compound a real advantage over the next few years. The habit of exposing your reasoning, being argued with, and updating the argument is slow to build and hard to copy. It doesn't show up in any software category. It shows up in the quality of the meetings, which is where the business has always been run.
