The 19th-Century Banking Problem That AI Hasn't Solved Yet

Artificial intelligence promises to transform financial services, from credit decisions to fraud detection. Yet one challenge dating back to 19th-century banking remains: making trustworthy, fair decisions when information is incomplete or biased. In this post, we explore how this historical problem shows up in modern AI and how organizations can address it with responsible AI practices and strong governance.

Historic bank building
Historic banking halls remind us that trust, data, and risk assessment are timeless challenges.

What was the 19th-century problem in banking?

In the 1800s, banks faced a fundamental information gap: they lacked reliable data about who would repay loans, how risky a borrower really was, and how markets might evolve. Lenders relied on personal networks, sparse records, and judgment calls, often leading to inconsistent decisions and unequal access to credit.

Fast forward to today, and AI models confront a similar dilemma: decisions are only as good as the data that feeds them. If data is incomplete, biased, or poorly governed, AI can reproduce or amplify those flaws at scale. The result can be unfair lending, opaque decision processes, and erosion of trust—outcomes the banking industry can ill afford.

AI and data science concept
AI models learn from data, but data quality and explanations matter as much as the model itself.

Key parallels between the past and present

  • The accuracy and completeness of data shape risk assessments.
  • Decision processes must be understandable to users and customers, not just optimized for metrics.
  • Humans should oversee, audit, and intervene when needed to prevent biased or unfair outcomes.

Team discussing dashboards
Collaboration between data science and risk governance teams helps keep AI decisions accountable.

What modern banks can do to address the problem

To move beyond the "data gap," financial institutions can combine robust data governance with responsible AI practices. This includes:

  • Building trusted data foundations: data lineage, quality controls, and clear provenance.
  • Implementing explainable AI: ensuring model decisions can be interpreted by humans and customers alike.
  • Establishing model monitoring and risk controls: continuous checks for drift, bias, and fairness.
  • Incorporating human-in-the-loop: enabling risk professionals to review edge cases and override decisions when appropriate.
  • Aligning with regulatory and consumer expectations: transparency, consent, and accountability.
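To make the monitoring bullet above concrete, here is a minimal sketch of a drift check using the Population Stability Index (PSI), a statistic commonly used in credit-risk model monitoring. The function name, bin count, and thresholds are illustrative assumptions, not a prescribed implementation:

```python
import numpy as np

def psi(expected, actual, bins=10):
    """Population Stability Index between a baseline and a live score sample.

    Common rule of thumb: PSI < 0.1 is stable, 0.1-0.25 is moderate drift,
    and > 0.25 is significant drift warranting a model review.
    """
    # Derive bin edges from the baseline distribution
    edges = np.percentile(expected, np.linspace(0, 100, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf  # cover the full score range

    exp_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    act_pct = np.histogram(actual, bins=edges)[0] / len(actual)

    # Floor tiny proportions to avoid division by zero and log(0)
    exp_pct = np.clip(exp_pct, 1e-6, None)
    act_pct = np.clip(act_pct, 1e-6, None)
    return float(np.sum((act_pct - exp_pct) * np.log(act_pct / exp_pct)))

# Hypothetical example: last quarter's credit scores vs. this week's
baseline = np.random.default_rng(0).normal(650, 50, 10_000)
current = np.random.default_rng(1).normal(660, 55, 2_000)
drift = psi(baseline, current)
if drift > 0.25:
    print(f"Significant drift (PSI={drift:.3f}) - trigger model review")
```

A check like this can run on a schedule against each model's score distribution, with alerts routed to the risk team alongside fairness metrics.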

How Salesforce approaches this challenge

Salesforce emphasizes responsible AI, governance, and an integrated platform that unifies customer data with AI capabilities. The goal is to deliver confident, explainable insights at scale while safeguarding fairness and trust in customer experiences.

People collaborating over analytics dashboards
Centralized dashboards and governance tools help teams monitor AI outcomes in real time.

Practical steps to start tackling the problem today

  1. Audit data quality and bias: identify gaps that could lead to unfair outcomes and address them proactively.
  2. Define clear explanations for model decisions: what factors matter, and why those factors influenced the outcome.
  3. Establish governance, risk, and compliance (GRC) processes for AI models: role-based access, documentation, and approvals.
  4. Test model outcomes for disparate impact across protected attributes, and experiment with fairness constraints to mitigate it.
  5. Engage with customers and stakeholders: explain how AI decisions are made and offer pathways for appeal or review.
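As a starting point for step 4, the disparate-impact check can be sketched with a simple approval-rate ratio. The "four-fifths rule" used in US fair-lending analysis flags a ratio below 0.8 as potential adverse impact; the column names and data below are hypothetical:

```python
import pandas as pd

def disparate_impact_ratio(df, group_col, outcome_col, privileged):
    """Ratio of each group's approval rate to the privileged group's rate.

    A ratio below 0.8 (the four-fifths rule) suggests potential
    adverse impact and should prompt a fairness review.
    """
    rates = df.groupby(group_col)[outcome_col].mean()
    privileged_rate = rates[privileged]
    return (rates.drop(privileged) / privileged_rate).to_dict()

# Hypothetical decision log: 1 = loan approved, 0 = denied
decisions = pd.DataFrame({
    "group": ["A"] * 100 + ["B"] * 100,
    "approved": [1] * 70 + [0] * 30 + [1] * 50 + [0] * 50,
})
ratios = disparate_impact_ratio(decisions, "group", "approved", privileged="A")
print(ratios)  # {'B': ~0.71} -> below the 0.8 threshold, flag for review
```

A ratio alone is not a legal determination, but it gives risk teams a concrete trigger for the deeper reviews described above.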

For a deeper dive into this topic, read the Salesforce piece linked below, which expands on the ongoing tension between powerful AI systems and the timeless need for trustworthy, responsible banking.

Read the full piece: The 19th-Century Banking Problem That AI Hasn't Solved Yet.

By embracing data governance, transparency, and human-centered design, banks can leverage AI to improve decision quality while preserving trust—echoing the lessons learned from history and applied to modern technology.
