The rapid adoption of artificial intelligence (AI) in the financial sector is bringing both opportunities and significant risks, according to Swaminathan J, Deputy Governor of the Reserve Bank of India (RBI). Speaking at the Shri V Narayanan Memorial Lecture at SASTRA University in Thanjavur, he cautioned that while AI can transform finance, unchecked implementation could create systemic vulnerabilities and erode trust.
Swaminathan highlighted that AI is reshaping financial services by enhancing efficiency, improving credit assessment, and strengthening fraud detection. However, he stressed that these benefits must be balanced with safeguards to ensure fairness, accountability, and stability.
He outlined five major risk areas that financial institutions must address to prevent unintended consequences.
“The first is bias and unfair outcomes. AI systems learn from data. But data does not emerge from a vacuum. It carries the imprint of past behaviour, existing inequalities and structural exclusions. If these distortions are embedded in the data, they can be reproduced by the model, sometimes with even greater efficiency and scale,” he said. “In credit assessment, this can create outcomes that are difficult to justify and harder to detect.”
Such biases could disproportionately affect underserved populations, undermining efforts to expand financial inclusion. If left unchecked, AI-driven decisions may reinforce existing inequalities rather than reduce them.
The second risk Swaminathan raised is the opacity of advanced AI systems. “Many advanced systems operate like black boxes… But finance cannot become a black box,” he said. “A decision that materially impacts a citizen’s economic life cannot be defended by saying, machine decided.”
In financial services, transparency is critical. Customers and regulators must be able to understand how decisions—such as loan approvals or risk assessments—are made. This makes explainable AI a key requirement for the sector.
The third area of concern is data privacy and governance. AI systems rely heavily on large datasets, many of which include sensitive financial information, and Swaminathan emphasised the need for robust data governance frameworks.
“Institutions must therefore think seriously about consent, storage, sharing, access controls and purpose limitation. Data governance cannot be treated as a side issue. In the age of AI, trust becomes central.”
As data becomes a critical asset, financial institutions must invest in secure infrastructure and adhere to strict compliance standards to protect customer information.
Swaminathan warned that AI-driven systems can amplify risks at scale. “The fourth concern is model risk and concentration risk… a flawed model can affect decisions across millions of customers,” he said, adding, “even a local weakness can acquire broader systemic significance.”
Reliance on a limited number of AI vendors or platforms could further increase systemic risks, making the financial system more vulnerable to disruptions.
The fifth risk is cybersecurity. AI is not only strengthening defence mechanisms but also enabling more sophisticated attacks. “AI can strengthen defences, but it can also equip attackers,” Swaminathan said.
He cautioned that malicious actors can use AI to create “more convincing phishing attempts, create deepfakes… and automate malicious activity.”
This evolving threat landscape requires continuous investment in cybersecurity infrastructure and advanced threat detection systems.
Despite the risks, Swaminathan acknowledged that AI holds immense potential. It can streamline customer interactions, improve access to credit for underserved populations, and enhance regulatory oversight.
AI-driven analytics can help detect fraudulent activities in real time, reducing financial losses and improving system integrity.
Swaminathan stressed that human oversight must remain at the core of financial decision-making. “A bank or NBFC cannot outsource responsibility to an algorithm, a vendor or a platform. Technology may help process information at speed and scale, but judgement and responsibility must continue to reside where they belong,” he said.
He called for embedding “fairness and explainability” into AI systems, strengthening governance frameworks, and ensuring that inclusion remains a key objective.
Swaminathan underscored that trust remains the foundation of banking and financial services. He cautioned that innovation must align with ethical principles and regulatory standards.
“The enduring task is therefore to make finance more intelligent, without making it less human; to make it more digital, without making it less accountable; and to make it more inclusive, without making it less prudent,” he added.
The RBI’s warning highlights the dual nature of AI in finance—offering transformative benefits while posing significant risks. As financial institutions increasingly adopt AI-driven technologies, the need for transparency, accountability, and robust governance becomes paramount. Swaminathan’s remarks serve as a timely reminder that while technology can enhance efficiency, the responsibility for fair and ethical outcomes must always remain with humans. Ensuring this balance will be key to building a resilient and trustworthy financial ecosystem in the digital age.