It is the second week of May. The audit closed in April and the Q1 board update has shipped. The mid-year reforecast is still a few weeks out. For most finance teams this can be a quiet stretch on the calendar, a window of relative breathing room before the second-half planning push begins. The team spends it on close hygiene, audit follow-ups, and the long backlog of work nobody had time for in March.
The mid-year reforecast itself is one of the most expensive recurring exercises in the FP&A calendar. By the time it is done, the team has usually produced a single artifact: an updated full-year number with a variance bridge underneath it. That artifact is what the board sees, what the commentary explains, and what the operating plan gets re-cut against. And it often misses the point.
The reason most reforecasts feel like a grind is that they are designed to produce a single number when they should produce a set of decisions. A reforecast that lands at “the new full-year is $X, down 4% from plan” has done accounting. A reforecast that lands at “we are accelerating two investments, pausing one, revising three assumptions we got wrong, and watching one we are not sure about” has done FP&A.
This is the rare moment in the year when you actually get to choose which kind of exercise you are running.
Why this year is different
What is genuinely different about reforecasting in 2026, more than at any point in the last decade, is that the cost of producing a forecast has collapsed. A year ago, building a credible statistical baseline on a single P&L line was a small project. Producing ten alternative scenarios on the full model was a multi-week effort that finance could rarely justify. Both are now an afternoon of work. Whatever stack your team is running, the production constraint has moved.
That changes what the reforecast is for.
When you could only produce one forecast in three weeks, the reforecast was inherently a single artifact: a number, a bridge, a commentary. When you can produce ten in an afternoon, the question becomes which one you pick, and why. That is a question of judgment, not production.
The pattern I see among the teams doing this well is that they have stopped treating the reforecast as a single question and started treating it as two. The first question is mechanical: what does a baseline derived from your historical data and current actuals imply for the H2 forecast? That is now achievable in an afternoon. The second question is judgment: where do we push that baseline up or down, and on what evidence?
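To make the mechanical half concrete, here is a minimal sketch of what that kind of baseline can look like. It is illustrative only: it assumes a list of monthly actuals for a single P&L line ending at the last closed month, and the seasonal-naive-plus-growth method, the function name, and the figures are stand-ins rather than a recommendation for any particular stack.

```python
# Illustrative baseline: project each remaining month of the year as
# "same month last year, scaled by trailing twelve-month growth."
# Assumes at least 24 months of history, oldest first, ending at the
# last closed month (April, in the timeline above).

def remaining_months_baseline(monthly_actuals, months_remaining=8):
    last_12 = sum(monthly_actuals[-12:])
    prior_12 = sum(monthly_actuals[-24:-12])
    yoy_growth = last_12 / prior_12

    # Each remaining month = the same calendar month last year, grown
    # at the trailing year-over-year rate.
    return [monthly_actuals[-12 + i] * yoy_growth for i in range(months_remaining)]

if __name__ == "__main__":
    # Hypothetical 28 months of actuals through April, purely for illustration.
    monthly_actuals = [1_000_000 * 1.015 ** m for m in range(28)]
    projected = remaining_months_baseline(monthly_actuals)
    implied_full_year = sum(monthly_actuals[-4:]) + sum(projected)  # 4 closed months + 8 projected
    print(f"Full year implied by the baseline: {implied_full_year:,.0f}")
```

The specific method barely matters; what matters is that something of this shape is now minutes of work, which is what frees the cycle for the second question.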
Questions to run before the cycle starts
There is a short list of questions worth running through before you kick off the formal reforecast.
Of your top five January assumptions, how many turned out to be materially wrong? The honest answer is informative. If fewer than two of your top five were wrong by month four, the team was probably too cautious in January and missed an upside case. If more than four were wrong, either the world moved a lot or the inputs were broken. Two or three is typical. The reforecast pre-read is a good place to surface them by name.
Where did finance push back on a function head’s input and turn out to be right? Where did finance push back and turn out to be wrong? Most teams keep an informal running tally of the first question and ignore the second. The second is where the calibration is. If you cannot remember a single time finance was wrong against operations in the last year, the team has either stopped pushing back hard enough or stopped recording it.
What is the one assumption in your current model that, if it moves 20 percent, blows up the year? Are you tracking it? In most plans, there is a single assumption that carries the year, usually a top-of-funnel pipeline number, a retention rate, or a hiring ramp. Teams can usually name theirs. Fewer have built a real tracker on it. The reforecast is a good moment to set one up before H2 gets going.
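A tracker on that assumption does not need to be elaborate. Below is a hedged sketch of the sensitivity half, assuming the assumption that carries the year is a pipeline-to-revenue conversion rate; the driver and every figure are hypothetical placeholders for whatever actually carries your plan.

```python
# Hypothetical sensitivity check on the one assumption that carries the year.
# Here the driver is a pipeline-to-revenue conversion rate; swap in your own.

PLAN_CONVERSION_RATE = 0.25           # rate assumed in the full-year plan
H2_QUALIFIED_PIPELINE = 120_000_000   # pipeline expected to close in H2
OTHER_FULL_YEAR_REVENUE = 40_000_000  # revenue not driven by this assumption

def full_year_revenue(conversion_rate):
    return OTHER_FULL_YEAR_REVENUE + H2_QUALIFIED_PIPELINE * conversion_rate

base = full_year_revenue(PLAN_CONVERSION_RATE)
for shock in (-0.20, +0.20):
    shocked = full_year_revenue(PLAN_CONVERSION_RATE * (1 + shock))
    print(f"{shock:+.0%} on conversion -> {shocked:,.0f} "
          f"({(shocked - base) / base:+.1%} vs. plan)")
```

The tracking half is the same function pointed at actuals each month: decide in advance how far the observed rate can drift before it triggers a conversation, so the trigger is the data rather than the next scheduled cycle.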
Which methodologies in your model did you choose, versus which did you inherit? The January model is usually 70 percent inherited from the previous year, with deltas applied to each line. The forecasting method on most lines was set by an analyst who is no longer on the team, in a year that did not look like this one. Subscription revenue gets a top-down growth rate when it should be cohort-based. The S&M efficiency line, modeled as a fixed percentage of revenue, is actually a function of channel mix. Headcount-driven costs get last year's rate plus three percent when the only honest method is bottom-up by role and geography. None of these methods is wrong in general. The question is whether each one still fits your business in 2026, or whether it is simply the method that has been sitting there.
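To make that contrast concrete, here is a minimal sketch of the first mismatch, the inherited top-down growth rate against a cohort-based view of subscription revenue. Every figure is invented for illustration; the point is only that the two methods are different machines and can diverge meaningfully when retention or new-business mix is shifting.

```python
# Hypothetical comparison: top-down growth rate vs. a cohort-based projection
# of subscription revenue over the remaining months of the year.

MONTHS = 8

def top_down(current_mrr, monthly_growth=0.02):
    """Inherited method: one growth rate applied to the whole MRR base."""
    return [current_mrr * (1 + monthly_growth) ** m for m in range(1, MONTHS + 1)]

def cohort_based(existing_mrr, new_mrr_per_month, monthly_retention=0.97):
    """Chosen method: decay the existing base, layer in new monthly cohorts,
    and let each cohort decay on its own clock."""
    forecast, base, cohorts = [], existing_mrr, []
    for _ in range(MONTHS):
        base *= monthly_retention
        cohorts = [c * monthly_retention for c in cohorts]
        cohorts.append(new_mrr_per_month)   # new cohort lands at full value
        forecast.append(base + sum(cohorts))
    return forecast

print(f"Top-down, remaining months:     {sum(top_down(2_000_000)):,.0f}")
print(f"Cohort-based, remaining months: {sum(cohort_based(2_000_000, 80_000)):,.0f}")
```

Neither answer is automatically right; the exercise is noticing whether the method on the line was ever a choice at all.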
Where are you using human judgment that a competent baseline could replace, and where are you using a baseline where human judgment is genuinely additive? This is the AI question. Most teams are over-judging on lines with stable historical patterns (usually opex categories, regulated revenue, and mature product revenue), where a baseline produced from twenty-four months of clean data would beat the team's intuitions almost every time. And they are under-judging on lines where the business has structurally changed in the last six months (a new channel, a pricing change, a product launch, an org restructure), where the baseline gets fooled by stale data and only the human can catch the pivot.
Run a retrospective before you run the cycle
Before any of those questions can be answered well, the reforecast itself needs a retrospective.
Many teams skip this because it feels like overhead. It is the most consequential 90 minutes you will spend in May. Pull the team together: senior FP&A, the controller, whoever owns the model. Walk through the last reforecast. What did we predict? What actually happened? Where did the process work, and where did it waste time? Which conversations with function heads moved the number, and which were performative? What artifact did we produce that nobody actually used?
Every team I talk to has at least one artifact in their reforecast process that exists because someone three years ago asked for it once, and nobody has questioned it since. Most cycles also include at least one meeting that everyone privately agrees is not load-bearing.
The retrospective is also where you find out what the team learned that nobody wrote down. Every reforecast surfaces something, a customer behavior change, a vendor pricing pattern, a hiring market signal, that nobody bothered to capture because it felt obvious in the moment. Six months later, it is gone. The retrospective is the moment to extract that learning before the next cycle erases it.
Decisions, not numbers
When the reforecast is done, the artifact you hand to the CEO and the board should not be a number. It should be a short narrative about the state of the business. Here are the two things we now know that we did not know in January. Here are the three decisions we are recommending because of that. Here is the new full-year that results from those decisions. Here are the things we are still uncertain about and how we will know.
A great reforecast sparks a strategic conversation. A version that just produces a bridge slide is a status update. Which will you deliver?