In the previous article, we discussed the dream of AI. But like any other tech, it takes the right setup to make it useful. History is littered with well-intentioned finance systems gone wrong:
There was Hershey Foods’ infamous 1999 ERP and CRM go-live disaster over Halloween, which left more than $100 million of inventory undelivered. Cadbury Schweppes’ 2006 ERP rollout led to a $12 million chocolate inventory pile-up from bad order projections. Mission Produce’s ERP launch? It wiped out visibility into how their avocados were ripening, costing millions in spoiled fruit and the indignity of having to buy fruit from competitors.
But it’s not just chocolate and avocados.
In 2017, Amazon.com spent two years implementing a cloud-based HRIS system, which quietly failed. And remember good ol’ JP Morgan’s “London Whale” incident? A simple formula error in an Excel-based VaR model, dividing by the sum of rates instead of the average, contributed to a $2 billion loss by drastically underestimating risk. (And some more fun ones).
Today, nearly 42% of big companies that piloted AI projects have abandoned them.
It’s essential to get this right. Implemented properly, AI can pay for itself in 6–12 months, and those benefits compound, driving dramatically better performance over time.
As we outlined in the last article, AI’s core strengths for FP&A right now are in two areas:
Automating rule-based tasks. Think data cleanup, consolidations, accounting entries.
Answering questions quickly. Analyses, reports, scenario planning, alerting in user-friendly chat-based interfaces that work across *a lot* of data.
So how do you solve the right problems? The clearest way:
Consolidate your data into a warehouse or FP&A platform that ingests everything, so that you have all the data available to your model.
Run an AI model on top of this data warehouse so that it has visibility across the entire company.
Relegate internal-tool AI (like Salesforce AI) to answering specific operational questions where consolidation isn’t necessary.
Of course, there are also risks to letting an AI run across all your data, but we will discuss that in a future article.

The Bullets
Let’s get to work.
1. Consolidate Data and Questions
AI is powerful when it runs across a lot of data, not when it’s siloed in five different corners of the business. You wouldn’t want five different teams setting up custom data warehouses, right? So why would you want Salesforce, HRIS, SalesGong, and NetSuite all spitting out their own “AI answers” that you now have to reconcile with your central data warehouse? It’s hard enough to do that in reports. To get real value, you need to start by mapping out a few key things:
Explain what questions are answered where. AI agents are now embedded in nearly every system from Notion to Salesforce to NetSuite. Your team needs a clear policy: What’s handled by these localized tools? What should be routed to your centralized AI implementation that sees everything? Which should have a dedicated report and which should be answered by chat?
Show what data is where. At a detailed level. Pulling CRM data into your warehouse is great, but does that include customer success call transcripts? Sales emails? If not, where does that data live, and how can it be integrated? If you want to answer the hard questions, get everything you can in.
Know how the data moves. If data starts in your CRM, when and how does it land in your warehouse? Is it a real-time sync, nightly batch job, or manual export? Knowing the pipeline is critical for data-accuracy reporting and for transparency about when the numbers should be correct.
Map it into a clear process. Make how all of this works obvious to anyone with a simple data diagram that you share across teams. It gives a clear visual reference for how data is connected and where to find answers to questions.
This is no different from what you should already be doing to centralize your data. The only difference now is that your “user” isn’t a person; it’s an AI agent trying to connect dots across systems, and it loves data, structured or not.
Tip: Centralizing your data into one granular, unified overview isn’t optional. It’s essential. That’s where AI actually becomes useful. Don’t hesitate to push everything possible (transcripts, meeting notes, etc.).
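To make the data-flow mapping concrete, here’s a minimal sketch of what that map can look like when kept as a simple structured file next to your documentation. Everything in it is a placeholder (the systems, sync cadences, and fields are assumptions); the point is that people and your central AI agent read the same map.

```python
# A minimal sketch of a data-flow map. System names, sync cadences, and
# fields are illustrative placeholders, not a prescription.

DATA_MAP = {
    "crm": {
        "system": "Salesforce",
        "lands_in_warehouse": True,
        "sync": "nightly_batch",   # real_time | nightly_batch | manual_export
        "includes": ["accounts", "opportunities"],
        "still_outside": ["sales_emails", "call_transcripts"],
        "local_ai_handles": ["single-deal questions"],
        "central_ai_handles": ["pipeline vs. forecast", "churn analysis"],
    },
    "erp": {
        "system": "NetSuite",
        "lands_in_warehouse": True,
        "sync": "real_time",
        "includes": ["gl", "ap", "ar"],
        "still_outside": [],
        "local_ai_handles": [],
        "central_ai_handles": ["variance analysis", "cash reporting"],
    },
}

def stale_sources(data_map):
    """Sources that are not yet synced in (near) real time."""
    return [name for name, src in data_map.items() if src["sync"] != "real_time"]

print(stale_sources(DATA_MAP))  # -> ['crm']
```

Whether you keep this as code, YAML, or a page in your wiki matters less than keeping it current; it answers “what data is where,” “how it moves,” and “which questions route where” in one place.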
2. Prioritize Problems to Solve
AI is only as useful as the problems it’s asked to solve. So, start with just a few high-impact, solvable issues that show off AI’s strengths. For inspiration, check out our previous article. Here’s how:
Identify your issues. Look for repetitive tasks that eat up time or require excessive attention to detail. Think data cleaning, creating different report versions from the same core set of numbers, or drafting initial responses to inbound requests.
Calculate the projected savings. Talk to the people doing the work. Don’t assume that just because a process involves a copy-paste, automating it will save more than a few seconds. You need real numbers on time spent.
Analyze underlying tasks. If the work is still manual, why? Is the data incomplete? Is it scattered across systems? And yes, AI should help you fix this by scanning across your data and suggesting the correct fixes.
Assess the ease of implementation. Some tasks, like rolling up 10 subsidiaries in 15 countries, might sound exciting but are tough to automate. Go for the low-hanging fruit. Automate recurring reports. Set up a single successful chatbot or alert system.
Make the list. Now that you have surveyed the scene, put the roadmap together.
The projects you choose will define how the rest of the company views AI. If your first few deliver only a couple hours of time-savings, skepticism will skyrocket. If they succeed, adoption accelerates.
Tip: Pick early wins that deliver a ‘wow’ factor for your implementation.
Here’s a framework to evaluate potential AI use cases:
| Finance Problem | Time Spent | Time Saved | Ease | Data Requirement |
| --- | --- | --- | --- | --- |
| Monthly reporting package assembly | High | High | Medium | Clean GL + departmental input data |
| Headcount variance explanations | Medium | High | Easy | HRIS + budget vs. actuals mapping |
| Forecast version comparisons | Medium | Medium | Easy | Structured forecast versions |
| Ad hoc P&L analysis for execs | High | High | Medium | Granular P&L by entity/cost center |
| Cash flow trend diagnostics | Medium | Medium | Medium | Historical cash flows + AP/AR aging |
| Financial policy Q&A (chatbot) | Low | Medium | Easy | Uploaded policy docs + chart logic |
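If you want to make the ranking mechanical, a rough sketch like the one below turns the table into a prioritized list. The weights are arbitrary assumptions; adjust them to your own view of value versus effort.

```python
# Rough sketch: rank use cases by (time saved) x (ease). Weights are arbitrary.

SCORES = {"Low": 1, "Medium": 2, "High": 3, "Hard": 1, "Easy": 3}

use_cases = [
    {"problem": "Monthly reporting package assembly", "time_saved": "High",   "ease": "Medium"},
    {"problem": "Headcount variance explanations",    "time_saved": "High",   "ease": "Easy"},
    {"problem": "Forecast version comparisons",       "time_saved": "Medium", "ease": "Easy"},
    {"problem": "Financial policy Q&A (chatbot)",     "time_saved": "Medium", "ease": "Easy"},
]

ranked = sorted(
    use_cases,
    key=lambda u: SCORES[u["time_saved"]] * SCORES[u["ease"]],
    reverse=True,
)

# With these weights, headcount variance explanations comes out on top.
for u in ranked:
    print(u["problem"])
```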
3. Centralize the Team
You don’t need to hire a battalion of machine learning PhDs for internal AI use. They’re rare, expensive, and often not what’s actually needed. What you do need are sharp operators: people who can implement tools, integrate systems, map data flows, and train others. Here’s how:
Centralize AI learning. Treat AI like any other enterprise tool. You don’t let 10 departments each figure out their own ERP strategy, so why would you let everyone run wild with AI? Build one core team that becomes your internal AI experts, trains others, and captures learnings.
Run it through Finance. We’ve said it before: centralize tools and data under Finance. Where the data lives is where the AI should run. Finance is uniquely positioned to take charge: it already leads implementations, understands the systems, and knows how to train business users.
It’s a tool, not a build. Don’t waste time trying to build proprietary models from scratch. The market has already spent hundreds of billions developing powerful, flexible AI tools. Your job is to plug them in where they add value.
Leverage executive and board support. This isn’t hard to sell, especially when you pair your hands-on team with a board member or advisor who’s seen AI deployed successfully elsewhere. That outside validation adds credibility and speeds up buy-in.
You want your team to know how to use AI, where it fits, and how to deploy it, just like they would any reporting or planning tool.
Tip: Treat AI the same way you treat other tools: central ownership, business-led implementation, and ongoing training. That’s how you get real adoption.
4. Communicate Openly and Often
We’ve touched on this before, but it deserves its own spotlight because it can make or break your rollout. People are nervous about AI. It brings up concerns about job security, relevance, and change. And the best antidote is proactive, transparent communication. To do it right:
Set realistic expectations. AI isn’t magic. Be upfront that there will be a learning curve, iterations, and some trial and error. Position it as a tool that improves over time.
Explain the “Why.” Be clear about why the company is investing in AI. It is a tool, not a replacement. ERPs didn’t replace accountants. Emphasize how it will eliminate low-value work and create space for strategic thinking, analysis, and business partnership.
Be specific about impact. Avoid vague promises. Lay out exactly which processes are changing and how roles might evolve. Highlight opportunities for upskilling and taking on higher-value tasks, not just process automation.
Show, don’t just tell. When you land early wins (see next section), share them widely. Demonstrate how AI made someone’s job easier, saved hours, or helped close the books faster. Real stories will drive real buy-in.
Ignore the human element, and even the best AI tools will get quietly sidelined. Engage the team early and often, and you’ll turn skeptics into advocates.
Tip: Over-communicate at the start. Regular updates, live demos, Q&A sessions, and executive sponsorship will build trust and momentum.
5. Bring It All Together in a Pilot or Two
Like any significant initiative, AI adoption should begin with focused pilot projects. These pilots should directly tackle high-priority problems where AI’s strengths dovetail with the organization’s needs. Some examples:
Pilot: All systems source of truth
Problem: Data inconsistencies across systems are difficult to catch and correct. A customer disappears from CRM but still shows up in billing. A new deal closes but doesn’t show in HubSpot.
AI Application: An AI agent runs hourly across CRM, billing, ERP, and CS tools to detect data mismatches.
Success Looks Like: The CRM logs a $20K ARR churn. The agent checks billing and ERP to confirm the drop. It identifies the churn as outside the normal range, flags it in a Slack channel with recommended actions, and confirms once the issue is resolved.
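Under the hood, the heart of this pilot is a reconciliation check that runs before any AI narration happens. Here’s a minimal sketch, assuming CRM and billing data already land in the warehouse as queryable tables; the column names, sample rows, and tolerance are illustrative.

```python
# Minimal sketch of a CRM-vs-billing reconciliation. Table shapes and the
# tolerance are assumptions; in practice both frames come from the warehouse.

import pandas as pd

def find_mismatches(crm: pd.DataFrame, billing: pd.DataFrame, tolerance: float = 0.01):
    """Flag customers missing from one system or with materially different ARR."""
    merged = crm.merge(billing, on="customer_id", how="outer",
                       suffixes=("_crm", "_billing"), indicator=True)
    orphans = merged[merged["_merge"] != "both"]           # in one system only
    both = merged[merged["_merge"] == "both"]
    drift = both[(both["arr_crm"] - both["arr_billing"]).abs()
                 > tolerance * both["arr_billing"].abs()]  # ARR disagrees
    return orphans, drift

crm = pd.DataFrame({"customer_id": [1, 2, 3], "arr": [50_000, 0, 20_000]})
billing = pd.DataFrame({"customer_id": [1, 2], "arr": [50_000, 20_000]})

orphans, drift = find_mismatches(crm, billing)
print(orphans["customer_id"].tolist())                   # -> [3]
print(drift[["customer_id", "arr_crm", "arr_billing"]])  # -> customer 2
```

In the full pilot, an LLM summarizes these rows, posts the flag to Slack, and re-checks once the records agree again.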
Pilot: Automated month-end variance analysis
Problem: Finance teams waste time gathering data and combing through dozens of views to identify budget variances.
AI Application: An agent runs at month-end, analyzes key movements in the financials, and generates a summary with key drivers and supporting reports.
Success Looks Like: “MRR missed by 25% due to a 5% drop in close rate, driven by increased competitor mentions on calls. Usage-based revenue dropped 10% among top 3 customers due to seasonality. Additionally, customer calls declined 20% due to an unexpected employee absence.”
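The mechanical half of this pilot is plain variance math; the narrative above is what the model adds on top. A rough sketch with placeholder line items and a stubbed-out model call:

```python
# Rough sketch: compute budget-vs-actual variances, keep the material movers,
# and hand them to a model to narrate. Line items, numbers, and the 5%
# threshold are placeholders; the model call is stubbed out.

import pandas as pd

month_end = pd.DataFrame({
    "line_item": ["MRR", "Usage revenue", "Sales headcount cost"],
    "actual":    [750_000, 90_000, 210_000],
    "budget":    [1_000_000, 100_000, 200_000],
})

month_end["variance"] = month_end["actual"] - month_end["budget"]
month_end["variance_pct"] = month_end["variance"] / month_end["budget"]

# Only movements big enough to be worth explaining.
drivers = month_end[month_end["variance_pct"].abs() >= 0.05]

prompt = ("Explain these month-end variances for the CFO, citing likely drivers:\n"
          + drivers.to_string(index=False))
# summary = llm.complete(prompt)  # placeholder for whatever model runs on your warehouse
print(prompt)
```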
Pilot: Executive daily briefing automation
Problem: Executives lack a consistent daily view of business performance, making it harder to stay aligned and proactive.
AI Application: AI aggregates KPIs from sales, finance, operations, and customer service, flags significant changes, and compiles a daily executive briefing.
Success Looks Like: By 8 AM each day, executives receive a concise summary highlighting 3–5 critical metrics or changes requiring attention, with links to relevant data.
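A minimal sketch of the flagging logic behind the briefing, assuming daily KPI snapshots are already queryable from the warehouse; the KPIs and the 10% threshold are illustrative.

```python
# Minimal sketch: compare today's KPI snapshot to yesterday's and surface
# material moves. KPI names, values, and the threshold are placeholders.

KPIS_YESTERDAY = {"new_bookings": 120_000, "cash_balance": 4_200_000, "open_tickets": 85}
KPIS_TODAY     = {"new_bookings": 95_000,  "cash_balance": 4_150_000, "open_tickets": 140}

def daily_briefing(yesterday: dict, today: dict, threshold: float = 0.10) -> str:
    lines = []
    for kpi, value in today.items():
        change = (value - yesterday[kpi]) / yesterday[kpi]
        if abs(change) >= threshold:  # only surface material moves
            lines.append(f"- {kpi}: {value:,} ({change:+.0%} vs. yesterday)")
    return "Daily briefing, items needing attention:\n" + "\n".join(lines)

print(daily_briefing(KPIS_YESTERDAY, KPIS_TODAY))
# In practice, an LLM adds context and links for each flagged metric before
# the briefing goes out by 8 AM.
```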
Choose pilot projects that are visible, achievable in 1–2 months, and clearly tied to business outcomes. Every win sets the stage for broader adoption and deeper investment in AI.
In conclusion
To get real value from AI, you need structure and a strong rollout. Both start with defining how you’ll use AI in your organization and building the right foundation to support it.
Centralize your data. Centralize the team. Identify the right problems. And then pilot smart, fast solutions.
