November 17, 2025

The New EUC Problem: Why End-User AI May Be Finance’s Next Governance Crisis

Twenty years ago, spreadsheets quietly became one of the biggest operational risks in finance. Analysts built hundreds of thousands of end-user computing (EUC) applications: models, macros, and dashboards that ran critical processes outside IT control. Many firms still don’t know how many of these exist, what they do, or how they’re being used.

Today, it’s happening again. Only this time, faster and far harder to govern.

The next EUC wave isn’t Excel macros. It’s end-user AI: Microsoft 365 Copilot, ChatGPT, Claude, and countless task-specific agents now being embedded across capital markets firms.

And just like before, what begins as empowerment could end in exposure.

From Spreadsheets to Copilots: Old Risk, New Form

The story feels oddly familiar...

AI copilots are arriving under the banner of productivity. Ask a question in natural language, and your assistant analyzes data, drafts a report, or explains a model. What once required Excel wizardry now takes a sentence.

But behind every Copilot prompt is logic: business logic that determines decisions, numbers, and actions. And that logic is created by end users.

Where yesterday’s trader built a Value-at-Risk macro in Excel, tomorrow’s will build a Copilot workflow that does the same. The firm will end up with tens of thousands of invisible AI “mini-apps,” each performing calculations, analyses, or recommendations that no one centrally governs.

The difference this time? The outputs aren’t deterministic.

Non-Determinism: When Governance Meets Probability

A spreadsheet, for all its flaws, is predictable. A formula produces the same result every time. It may be wrong, but it is at least consistent and traceable.

Generative AI breaks that certainty. Its outputs are stochastic, meaning the same input can yield different results based on subtle context shifts or model updates.

Now imagine a large bank with 200,000 known EUC spreadsheets. In the next few years, it may have 200,000 Copilot or Claude agents performing similar tasks, each slightly different, each capable of “drifting” as the underlying models evolve and contextual factors change.
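The contrast between a deterministic formula and a stochastic model can be made concrete. The following is a minimal illustrative sketch, not a real model integration: `model_var_estimate` is a hypothetical mock that samples its multiplier, standing in for an LLM called with a nonzero temperature.

```python
import random

def spreadsheet_var(portfolio_value, volatility, z=2.33):
    """Deterministic, spreadsheet-style: same inputs, same 99% VaR figure."""
    return portfolio_value * volatility * z

def model_var_estimate(portfolio_value, volatility):
    """Hypothetical stand-in for a generative model: the multiplier is
    sampled, so repeated calls on identical inputs can disagree."""
    z = random.choice([2.30, 2.33, 2.36])
    return portfolio_value * volatility * z

a = spreadsheet_var(1_000_000, 0.02)
b = spreadsheet_var(1_000_000, 0.02)
assert a == b  # reproducible: the formula is a pure function

x = model_var_estimate(1_000_000, 0.02)
y = model_var_estimate(1_000_000, 0.02)
print(x == y)  # may print True or False: reproducibility is gone
```

The governance implication is that the spreadsheet version can be audited by re-running it, while the stochastic version cannot be fully audited from its output alone.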

From a governance standpoint, this is chaos:

  • No reproducibility - prompts and outputs can change over time.
  • No transparency - reasoning paths are opaque.
  • No lineage - an AI might reference data sources no one knew it could access.

What used to be a problem of knowing what your spreadsheets do becomes one of not even knowing what your AI thinks.

Why It Will Happen Anyway

The same forces that caused EUC proliferation are now driving end-user AI adoption:

  • Empowerment: Analysts want control and speed. When IT can’t deliver, they’ll build their own solutions, now at the speed of conversation.
  • Low friction: Using an LLM to accomplish a task takes a license, not a project plan.
  • Perceived safety: Each end-user creation, whether a spreadsheet or an AI workflow, feels low-stakes at the moment it is built.
  • Invisible complexity: Every prompt encodes decisions, assumptions, and data dependencies, none of which are documented or versioned.

AI democratizes capability, but it also democratizes risk creation.

Governing the Un-Governable

To avoid replaying the EUC crisis, institutions must expand governance from end-user computing to end-user AI. That means:

  1. AI workflow governance: Prompt workflows (single-shot and multi-turn) are the new Excel macros. They should be saveable, versioned, auditable, and measurable.
  2. Usage and culture: Capture usage data so you know who is succeeding with AI in your ecosystem, who is not, and why.
  3. Determinism thresholds: Identify where probabilistic systems are acceptable (research, exploration) and where they’re not (regulatory reporting, pricing).
  4. Explainability: Demand “show your work” functionality that exposes how an AI arrived at its answer.
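As a hypothetical sketch of what point 1 could look like in practice, the snippet below registers an immutable, content-hashed version of a prompt workflow. All names here (the workflow id, model identifier, and author) are invented for illustration; the point is that each version is pinned, attributable, and tamper-evident.

```python
import hashlib
import json
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class PromptVersion:
    """One immutable, auditable version of an end-user AI workflow."""
    workflow_id: str
    version: int
    prompt_template: str
    model_id: str   # pin the model the prompt was validated against
    author: str
    created_at: str

    def fingerprint(self) -> str:
        """Content hash, so any later change to the workflow is detectable."""
        payload = json.dumps(self.__dict__, sort_keys=True)
        return hashlib.sha256(payload.encode()).hexdigest()

registry: dict[tuple[str, int], PromptVersion] = {}

def register(pv: PromptVersion) -> str:
    """Record the version and return its fingerprint for the audit trail."""
    registry[(pv.workflow_id, pv.version)] = pv
    return pv.fingerprint()

v1 = PromptVersion(
    workflow_id="var-summary",           # hypothetical workflow name
    version=1,
    prompt_template="Summarise today's desk VaR drivers: {positions}",
    model_id="example-model-2025-01",    # hypothetical model identifier
    author="analyst@example.com",
    created_at=datetime.now(timezone.utc).isoformat(),
)
print(register(v1)[:12])  # short fingerprint for the audit log
```

A real implementation would live in a shared service rather than an in-memory dictionary, but the core discipline is the same one firms eventually applied to spreadsheets: no unversioned logic in production.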

Without these controls, AI adoption will repeat every EUC governance failure, only this time, hidden inside natural-language interfaces.

An Opportunity and a Warning

Done right, end-user AI could fix the EUC problem by embedding governance and auditability at the core of every model. Done wrong, it will create an exponentially more complex version of the same problem.

The irony is striking: the very tools promising to reduce human error could multiply systemic risk through opacity, scale, and unpredictability.

Financial institutions don’t need to stop end-user AI. Instead, they should start by recognizing that teams will use AI to solve problems, either within sanctioned systems or outside them, just as they have with Excel for years. Firms can either harness that creativity and turn governance into a flywheel for productivity, or create a new and bigger EUC crisis with AI.

Build with Connectifi

Build with Connectifi and let us help you accelerate time to value, remove complexity, and reduce costs. Talk to us now.