We’ve spent over a year talking to finance executives about AI. They’re often excited by AI demos and the potential to streamline workflows, but consistently apprehensive about whether their data is AI-ready. We think the idea of “AI-ready data” is largely a fallacy.
If finance teams can work with their data manually, AI systems can process it as well. The real challenge is that teams feel like they’re stuck in 1990 trying to evaluate technology from 2030, struggling to understand AI’s capabilities and value proposition.
This is Part 2 in our series on why CFOs don't trust AI. Part 1 explored how misinformed expectations affect adoption: finance teams often look for overnight automation, when AI’s actual value lies in accelerating existing analytical processes. This piece focuses on the challenge of communication. When teams don’t understand how to communicate with AI or how it processes their queries, they aren’t comfortable using it for critical processes.
And in finance, oversight requires understanding. When systems work in unfamiliar ways, it’s difficult to build trust. We've addressed this concern by allowing users to ask Sapien to explain data or perform analysis in plain language, without technical jargon. But sometimes Sapien returns an answer that doesn't seem quite right.
This occurs because language is inherently imprecise. When someone asks "What's the weather?", there is implicit context in their question about location, time, and current conditions. An AI working from words alone might respond "80 degrees," which is technically accurate but useless if the questioner meant New York and the system thought the question was about Hawaii.
In financial analysis, this ambiguity is more subtle and consequential. A query about "cost centers" might be interpreted as general ledger accounts when the user meant organizational departments. Revenue data requests might include internal transfers when the analysis requires clean external revenue. The AI isn't wrong, but it's answering a slightly different question than the user intended to ask.
When Sapien encounters ambiguous terms, it handles them in one of two ways. For deeper analyses, it surfaces clarifications upfront. You can challenge an assumption, and if you do, it becomes knowledge the system remembers for next time. For quick answers, it takes its best guess but makes all assumptions visible. You can click into any figure to see the reasoning used to calculate it. The basic idea is that Sapien either clarifies its assumptions first or makes them obvious enough that a user can catch an imperfect interpretation.
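To make those two paths concrete, here is a rough Python sketch. The names (Assumption, QueryPlan, needs_clarification) and the clarification rule are our own illustration of the behavior described above, not Sapien’s actual internals.

```python
from dataclasses import dataclass, field

@dataclass
class Assumption:
    term: str                # the ambiguous term in the user's question
    interpretation: str      # how the system chose to read it
    confirmed: bool = False  # flips to True once the user accepts or corrects it

@dataclass
class QueryPlan:
    question: str
    assumptions: list[Assumption] = field(default_factory=list)

    def needs_clarification(self, deep_analysis: bool) -> bool:
        # Deeper analyses surface unconfirmed assumptions before running;
        # quick answers proceed and keep the assumptions attached to the result.
        return deep_analysis and any(not a.confirmed for a in self.assumptions)

# "Cost centers" is ambiguous: GL accounts or spend categories?
plan = QueryPlan(
    question="Which cost centers are over budget this quarter?",
    assumptions=[Assumption("cost centers", "spend categories, not GL accounts")],
)

if plan.needs_clarification(deep_analysis=True):
    print("Before running: I'll treat 'cost centers' as spend categories. Correct?")
else:
    print("Best-guess answer; assumptions stay attached:", plan.assumptions)
```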
Like new analysts, AI needs to be trained on company-specific language. Unlike human analysts, AI learns rapidly and retains this information perfectly.
When the system misunderstands, corrections can occur in plain language: "When we say cost centers, we mean spend category, not GL accounts." The system adjusts and retains this context. "That account column isn't customer accounts; it's general ledger accounts. Customer names are in a different field."
An LLM isn't doing the math or pulling the data itself. It's only deciding what to do next. All calculations and data retrieval happen through what we call "verifiable tools," essentially structured instructions that create a traceable record. Every number has a lineage showing which database it touched, which calculation it performed, and which filters it applied. When you correct Sapien, you're updating this knowledge layer. The system stores that "cost centers = spend category for this company" and applies it going forward.
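As a loose illustration of how a lineage record and that knowledge layer could fit together, here is a short Python sketch. The structures (Lineage, knowledge_layer, apply_correction, resolve) and the example table and values are assumptions made for the example, not Sapien’s implementation.

```python
from dataclasses import dataclass

@dataclass
class Lineage:
    source_table: str   # which table the number was pulled from
    calculation: str    # the formula or aggregation applied
    filters: list[str]  # filters applied before calculating

# A plain term -> meaning map standing in for the knowledge layer.
knowledge_layer: dict[str, str] = {}

def apply_correction(term: str, meaning: str) -> None:
    # e.g. "When we say cost centers, we mean spend category, not GL accounts."
    knowledge_layer[term] = meaning

def resolve(term: str) -> str:
    # Later queries use the corrected meaning instead of the default guess.
    return knowledge_layer.get(term, term)

apply_correction("cost centers", "spend_category")

figure = {
    "value": 1_250_000,
    "lineage": Lineage(
        source_table="gl_transactions",
        calculation="sum(amount)",
        filters=[f"grouped by {resolve('cost centers')}", "period = 2024-Q4"],
    ),
}
print(figure["lineage"])  # every number carries its own audit trail
```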
Over time, the system builds what we call the “company engine,” which is all the specific ways a business talks about itself, organizes data, and defines metrics.
A system that has learned a business for six months becomes fundamentally more capable than a fresh system, regardless of the underlying model’s sophistication. One team described it as "learning how to debug the platform." When AI misunderstands, users correct it, building implicit organizational knowledge into the system.
The same flexibility that creates communication challenges is also what makes AI valuable for financial operations. Rather than forcing businesses into predetermined categories, the technology adapts to how specific organizations actually work day-to-day.
Setting expectations and learning to communicate address two of the trust barriers. But a critical question remains: what if the AI is wrong? The final piece in our series on trust will explore how AI systems can actually be more transparent than manual processes, and why verifiability matters more than deterministic accuracy.
- Ron Nachum & the Sapien team
This is Part 2 of a 3-part series on building trust in financial AI.

