While it's much more fun to talk about moving fast, boosting productivity, and solving all your problems with AI, it's worth slowing down to talk about the risks. Whether you're implementing a tool or building something yourself, the dangers are real. Of course, you don't hear about them as much, because there's a lot more money in promises.

These risks fall into three buckets. 

  1. Wrong answers. "Sounds good" isn't good enough for finance. The team's value comes from delivering objective guidance based on data. If the data or analysis is wrong, the team actively hurts the company.

  2. Internal data leakage. You don't want someone asking a chatbot, "What is my manager's salary?" or "Can you give me all the key data on our biggest clients so that I can take it to our competitors?" and receiving answers. At the same time, you do want your leaders to see across all data and draw insights. Fine-grained permissions are key to making this work, and a lot of AI tooling hasn't reached that maturity yet.

  3. Regulatory and data exposure. There are compliance concerns, too. In Europe, GDPR still reigns supreme. In the U.S., regulation is catching up fast. It is easiest to comply with the stricter set of laws (the EU's) so you can serve clients working in both markets.

In the worst-case scenario, all three risks hit at once: AI is unleashed across all your data, people start getting wrong answers that confuse decision-making, sensitive internal information leaks, and suddenly your customers are seeing one another's data. Now you're dealing with internal chaos and external regulatory fallout. Prompt injections and clever workarounds have already been used to extract supposedly sensitive information from general-purpose models.

The best way to avoid this is to prioritize projects where you can easily check the results and control your permissions (hello, Finance). That means initial rollouts like cleaning data and creating forecasts you can validate.
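
To make "forecasts you can validate" concrete, here is a minimal Python sketch of checking an AI-generated forecast against trusted actuals. The column names, figures, and the 5% tolerance are made-up assumptions, not a prescribed standard:

```python
import pandas as pd

# Hypothetical data: an AI-generated forecast and the trusted actuals for the same months.
forecast = pd.DataFrame({
    "month": ["2024-01", "2024-02", "2024-03"],
    "ai_forecast": [105_000, 98_000, 120_000],
})
actuals = pd.DataFrame({
    "month": ["2024-01", "2024-02", "2024-03"],
    "actual": [102_500, 99_400, 131_000],
})

# Join the two and flag any month where the forecast is off by more than 5%.
check = forecast.merge(actuals, on="month")
check["abs_pct_error"] = (check["ai_forecast"] - check["actual"]).abs() / check["actual"]
check["within_tolerance"] = check["abs_pct_error"] <= 0.05

print(check)
print(f"Share of months within tolerance: {check['within_tolerance'].mean():.0%}")
```

A check this simple is exactly why forecasts make a good first rollout: you always have actuals to hold the AI's output against.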

AI isn't "smart" in the way we're used to. A human with a PhD can probably sum numbers correctly. AI, by contrast, can deliver a PhD-level market analysis one minute, then miscount how many "r"s are in "strawberry" the next. It's deeply unintuitive. That's why it's crucial to double-check any AI rollout.

The Bullets

  • There are three major types of risk: wrong answers that cause confusion, internal data leakage within the company, and regulatory exposure.

  • Regulation is coming. The easiest way to prepare is to hold yourself to GDPR standards across both the US and Europe to ensure there are no issues. In practice: transparency, opt-outs, and private information stays hidden.

  • When you are considering building vs. buying, you need to critically assess each one of these risks.

Let's get to work.

1. Sounds nice but...

AI is built to sound convincing, but that is not the same as being right. It is notorious for covering up its mistakes by lying without remorse. On the plus side, a big part of finance is built to cut through people trying to cover up bad numbers. On the downside, other teams aren't so well trained. Here are the ways it can go astray:

  • Wrong answers. The whole point of using AI is to get better data into more hands, faster. But when the answers are wrong, the decisions that follow are wrong too, and the impact is multiplied.

  • Wrong questions. And no, this isn't solved with prompt training. Asking a good question is tricky even in the best of times. Many non-finance users don't understand key metrics, time frames, or inclusions/exclusions in the data. If the question is off, the model interprets it incorrectly, and now you've got a mismatch before you even get an answer.

  • Explainability. In a report, you can trace the data. You can see the rows, the formulas, the assumptions. This is hard to do with AI. Understanding the actual calculation behind a chatbot's answer is often nearly impossible, making it tough to verify anything.

  • Confirmation bias. AI models are people-pleasers. Ask, "Is my performance strong?" and they'll often try to say yes. But the whole point of having numbers is to challenge assumptions, not reinforce them.

  • So easy... It's so easy just to ask a question, take an answer, assume it's generally right, and move on. This behavior is reinforced by how people interact with general-purpose AI models every day. In an organization, this becomes dangerous fast when the only rationale for decisions is, "That's what the AI told me."

Getting answers right even 90% of the time (let alone 100%) with AI models is challenging even for the biggest companies in the world, which are burning billions of dollars in pursuit of that accuracy. Be wary of these challenges as you test rollouts.

Tip: Prompt training is not the solution. You need to continuously monitor performance by looking at actual chat requests and answers and comparing them with trusted reports. You also need to establish that each person owns their decisions, no matter what the AI says.
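
As an illustration of that monitoring loop, here is a minimal Python sketch that samples logged chatbot answers and compares them with figures from a trusted report. The log format, the trusted_report lookup, and the 1% tolerance are assumptions about your own setup, not a prescribed method:

```python
# Hypothetical log of chatbot questions and the numeric answers it gave.
chat_log = [
    {"question": "What was Q2 revenue?", "ai_answer": 4_120_000},
    {"question": "What was Q2 gross margin?", "ai_answer": 0.63},
]

# The same figures taken from a report the finance team already trusts.
trusted_report = {
    "What was Q2 revenue?": 4_150_000,
    "What was Q2 gross margin?": 0.63,
}

def within_tolerance(ai_value: float, trusted_value: float, tol: float = 0.01) -> bool:
    """Return True if the AI's answer is within `tol` (relative) of the trusted figure."""
    if trusted_value == 0:
        return ai_value == 0
    return abs(ai_value - trusted_value) / abs(trusted_value) <= tol

# Flag every logged answer that drifts from the trusted number.
mismatches = [
    entry for entry in chat_log
    if not within_tolerance(entry["ai_answer"], trusted_report[entry["question"]])
]

print(f"{len(mismatches)} of {len(chat_log)} sampled answers fall outside tolerance")
for entry in mismatches:
    print("Review:", entry["question"])
```

Even a crude check like this tells you which questions need a human look before anyone acts on the answers.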

2. Data leakage

Just like with any sensitive data system, you need firm access control and data security. That means tightly limiting the subset of data AI can access: per person, per use. It's not enough to trust that users will ask the "right" questions or that the system will always interpret them correctly. Additionally, prompt injections and clever workarounds already exist in general-purpose models. Left unchecked, these loopholes can lead to intended or unintended breaches. Here's what you risk sharing:

  • Private company information. If you open up a chat interface to your team, people will inevitably ask curious, probing questions. Without proper controls, sensitive data like salary details, board materials, contracts, or your cap table could become widely accessible.

  • Private client data. The same applies to customer records: documents, contracts, personal data. If a model can access it, someone might find a way to extract it and use it against the company.

  • External access risk. More and more companies are embedding AI into client-facing dashboards. Clients expect visibility, but the last thing you want is for one client to start pulling data on another. That's how you lose trust, fast.

The bottom line: AI isn't a person. It doesn't understand boundaries, and it can be manipulated. That's why whatever you build must have strict, role-based permissions and deep access control built in from the start.
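
In its simplest form, "strict, role-based permissions" can look like the Python sketch below: the data is filtered for the requesting user's role before anything reaches the model. The roles, tables, and allow-list here are illustrative assumptions; a real deployment would enforce this in the data layer, not just in application code:

```python
# Hypothetical allow-list: which tables each role may expose to the AI.
ROLE_PERMISSIONS = {
    "analyst": {"revenue", "expenses"},
    "hr_admin": {"revenue", "expenses", "salaries"},
}

# Example records pulled from internal systems (made-up data).
records = [
    {"table": "revenue", "row": {"month": "2024-06", "amount": 1_200_000}},
    {"table": "salaries", "row": {"employee": "J. Doe", "salary": 95_000}},
]

def visible_records(role: str, records: list[dict]) -> list[dict]:
    """Return only the records the given role is allowed to expose to the AI."""
    allowed = ROLE_PERMISSIONS.get(role, set())
    return [r for r in records if r["table"] in allowed]

# Only the filtered subset should ever be placed in the model's context.
context_for_analyst = visible_records("analyst", records)
print(context_for_analyst)  # salary rows are excluded for the analyst role
```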

Tip: Pressure test this wherever you deploy AI.

3. Regulation in the US and Europe

There is a lot coming down the pipeline, and this is *not* legal advice. But broadly, best practice is to follow the same policies in the US as you would in Europe to future-proof yourself against regulatory action. That generally means:

  • Know where high risk lurks (especially in HR). AI used in hiring, promotions, or performance evaluations falls under "high-risk" use cases. These require enhanced transparency and oversight, with compliance deadlines looming in 2026 in Europe.

  • Don't assume summarization equals anonymization. Just because an AI tool summarizes data doesn't make it GDPR-compliant. Individuals must be told how their data is being used, even in summary form, and there must be meaningful human oversight when automated decisions impact people.

  • Respect the right to opt out. The EU AI Act grants individuals the right to opt out of their data being used to train AI models. FP&A teams working with SaaS vendors need to ensure these rights are both contractually protected and technically enforced.

  • Be ready to disclose breaches, fast. Significant AI-related incidents must be reported within 15 days under the AI Act and within 2 days if individual rights or safety are at serious risk. Finance teams should have protocols in place for rapid incident response in collaboration with legal and IT.

  • Data deletion isn't just hitting "delete." GDPR and the AI Act empower individuals to request their data be erased, including from training datasets. For many models, this means retraining or re-engineering the system, something finance teams need to assess with vendors up front. This was the issue with ChatGPT being paused in Italy.

  • Watch where the data lives. Data residency laws, particularly in the EU, restrict where personal data can be stored and processed. AI tools that pull in data from multiple regions must be compliant with these geographic constraints, especially when dealing with sensitive employee or customer information.

Again, these are best practices, not legal advice. It's just a quick checklist of the kinds of expectations you should have as you run AI over your databases.

Tip: This usually becomes a very high priority as you scale or go through a funding round, which can force a tremendous amount of rework, so it should be considered early on.

4. Build vs. Buy

Getting access control, data validation, compliance, and everything else right is expensive and time-consuming. But it's also mission-critical. So whether you're building internally or buying from a vendor, here is what you need to do:

  • If You Use a Vendor...

    • Kick the Tires. Don't take anything at face value. Dig into the details: data segregation, security incident response, uptime SLAs, permissioning logic. Make sure it's real and proven.

    • Ask Their Clients. Have they run into any specific issues in this area, and how does the vendor handle them?

    • Contract Shared Risk. A solid vendor contract will include liability provisions and compliance obligations. If something goes wrong, you're not carrying the entire burden alone.

  • If You're Building In-House...

    • Budget Realistically. Building isn't just about development. You also need compliance, access management, testing, auditability, and ongoing maintenance. Don't underestimate the full scope.

    • Plan for Scale. Today, it might be fine if two finance analysts can see everything. But what happens when your org scales 5x? You need a long-term architecture, not a quick fix.

In short: yes, there can be strategic advantages to building in-house. But there are also real downsides. Whichever path you choose, treat the decision like any other major system investment and evaluate it thoroughly.

Tip: Make sure your vendor has staying power. Look for strong financial backing, long-term viability, and references from similar-sized customers. You don't want to be left hanging in year two.

In conclusion

Using AI well can deliver efficiency gains of 50-100%. But using it badly can have a tremendous negative impact on the company. So spend the time and money to do it well.
