AI in lending: Progress, risks, and the critical governance imperative

Artificial intelligence (AI) is no longer experimental in lending. Financial institutions globally now use machine learning to assess creditworthiness, speed up approvals, and expand access to finance. What was once ambition has become operational reality, with AI embedded in core lending processes.

Adoption levels reflect this shift. A 2023 McKinsey survey found that over 60% of financial institutions have implemented AI in at least one key function, particularly in credit decisioning, highlighting how risk management is becoming more data-driven. Real-world applications reinforce this momentum. In the United States, fintech firms like Upstart report higher approval rates than traditional systems while maintaining similar loss levels, reflecting improved risk assessment models. In China, Ant Group has scaled AI-driven lending to millions of small businesses, often delivering decisions within minutes while embedding real-time risk evaluation. The appeal is clear: faster decisions, improved predictive accuracy, stronger risk control, and the potential to extend credit to underserved populations without significantly increasing default risk.


Governance Struggling to Keep Pace

Despite these gains, governance has not kept up. Traditional credit models, such as scorecards, are transparent and easily understood. Their logic can be explained to regulators, boards, and customers, aligning well with established oversight frameworks. AI models, however, are more complex and often lack interpretability, raising significant risk concerns. Many operate as “black boxes,” making it difficult to understand how decisions are reached or how underlying risk assumptions are weighted. This creates a fundamental governance challenge: institutions are expected to manage and oversee systems they cannot fully explain, increasing operational, compliance, and reputational risk exposure.

As a result, existing governance frameworks designed for simpler models are being stretched beyond their limits. This mismatch can create a false sense of control, where risks are underestimated rather than properly managed.

Regulatory Responses and Emerging Challenges

Regulators are beginning to respond. The EU AI Act classifies credit scoring systems as high-risk, requiring greater transparency and oversight. The UK’s Financial Conduct Authority has highlighted concerns around algorithmic bias and consumer outcomes. Meanwhile, global bodies like the Basel Committee continue to emphasise model risk management, though much of their guidance predates modern AI.

In practice, many institutions still rely on legacy governance structures. Dynamic, data-driven models are assessed using outdated tools, limiting effective oversight. This gap between innovation and regulation remains a central issue.

Visible Risks and Real-World Examples

The risks are no longer theoretical. In 2019, the Apple Card faced scrutiny over alleged gender bias in credit decisions, highlighting how opaque models can lead to reputational and regulatory consequences. Beyond fairness, model stability is a major concern. Research from the Bank for International Settlements shows that machine learning models are highly sensitive to changes in data patterns. The COVID-19 pandemic demonstrated this clearly, as sudden shifts in borrower behaviour reduced the reliability of many models. Unlike traditional scorecards, which degrade gradually, AI systems can fail abruptly and without clear warning. This makes risk detection more difficult and delays corrective action.
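The abrupt-failure problem described above is why many risk teams monitor score distributions continuously rather than waiting for defaults to materialise. One widely used statistic is the Population Stability Index (PSI), which compares the current distribution of model scores against a baseline. The sketch below is a minimal illustration, not any institution's production implementation; the function name and the common decision thresholds are included for orientation only.

```python
import numpy as np

def population_stability_index(expected, actual, n_bins=10):
    """Compare a current score distribution ('actual') against a baseline
    ('expected'). Larger values indicate the population has shifted."""
    # Bin edges come from the baseline's quantiles, so each baseline bin
    # holds roughly the same share of observations.
    edges = np.quantile(expected, np.linspace(0, 1, n_bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf  # catch out-of-range scores

    expected_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    actual_pct = np.histogram(actual, bins=edges)[0] / len(actual)

    # Floor the shares to avoid division by zero and log(0)
    expected_pct = np.clip(expected_pct, 1e-6, None)
    actual_pct = np.clip(actual_pct, 1e-6, None)

    return float(np.sum((actual_pct - expected_pct)
                        * np.log(actual_pct / expected_pct)))

# Common rule of thumb: PSI < 0.1 stable, 0.1-0.25 monitor, > 0.25 investigate
```

A sudden jump in PSI, such as the behaviour shifts seen during COVID-19, flags that the model is now scoring a population it was not trained on, prompting review before losses surface.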

Challenges in Developing Economies

These issues are amplified in developing markets, where AI adoption is accelerating alongside digital financial growth. Ghana illustrates this trend. The expansion of mobile money, driven largely by MTN Ghana, has brought millions into the financial system, generating new data for credit assessment. AI models now analyse transaction behaviour rather than relying solely on formal credit histories.

However, regulation is still evolving. The Bank of Ghana has introduced important measures around licensing and consumer protection, but AI introduces new challenges related to data governance, transparency, and fairness. Structural constraints persist, including fragmented data systems, limited credit bureau integration, and gaps in technical expertise. Without strong governance, there is a risk that AI will be deployed without full understanding of its limitations.

Finding Balance: The Path Forward

For banks in Ghana and similar markets, balance is essential. AI can improve financial inclusion and decision-making, but only if supported by robust governance, better data infrastructure, and close regulatory engagement. In some cases, simpler and more interpretable models may be more appropriate, particularly where oversight capacity is still developing.
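To make the interpretability point concrete: a logistic-regression scorecard keeps every coefficient visible, so a reviewer can read exactly how much each input moves a decision. The sketch below uses synthetic data, and the feature names (mobile-money inflow, late repayments) are illustrative assumptions, not a description of any bank's actual model.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic applicants (illustrative features): normalised mobile-money
# inflow and a count of late repayments. Label: 1 = default.
n = 2000
inflow = rng.normal(0, 1, n)
late = rng.poisson(1.0, n)
true_logits = -1.0 - 0.8 * inflow + 0.9 * late
y = (rng.random(n) < 1 / (1 + np.exp(-true_logits))).astype(float)

X = np.column_stack([np.ones(n), inflow, late])  # intercept + features
w = np.zeros(3)

# Plain gradient descent on the logistic loss; no hidden machinery.
for _ in range(2000):
    p = 1 / (1 + np.exp(-X @ w))
    w -= 0.1 * X.T @ (p - y) / n

print(dict(zip(["intercept", "inflow", "late_repayments"], np.round(w, 2))))
```

The fitted weights tell a story a regulator or customer can follow: higher inflows reduce estimated default risk, late repayments increase it. That auditability is exactly what opaque models give up.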

More broadly, institutions must recognise that AI does not remove risk; it transforms it. Poorly governed systems can undermine trust, attract regulatory scrutiny, and introduce new systemic vulnerabilities. Transparency is no longer optional; it is a requirement.

Conclusion: The Governance Imperative

The question is no longer whether AI should be used in lending, but how it should be governed. Addressing this requires more than minor adjustments. Institutions must strengthen model monitoring, improve validation processes, and embed accountability throughout decision-making.

There is also a human dimension. Credit professionals must be equipped to challenge complex models, while technology teams must prioritise transparency alongside performance. Ultimately, the future of lending will depend less on how advanced AI becomes and more on how well it is governed. Innovation without effective oversight is not progress; it is risk.

About the Writer

Daniel Arhin is a lending professional with over 15 years of experience focused on responsible credit, data-driven decision-making, and strong governance.
