Author: Zhang Feng
I. Summary of the Bank of England Roundtable: Five Major Challenges in AI Applications in the Financial Industry
At the end of 2025, the Bank of England organized three roundtables, inviting representatives from UK financial regulators, systemically important banks, insurance companies, and emerging digital banks for in-depth discussion of the current state and challenges of artificial intelligence applications in the financial industry. The meetings identified five major challenges financial institutions face in adopting AI, which are highly representative of the industry and offer important lessons for other markets, including China.
Build a high-quality data strategy and a data sovereignty response mechanism. Data is the fuel of AI, and data quality issues directly affect the performance of AI models. Financial institutions should:
Establish a closed-loop data quality management system, with full-process control covering collection, cleaning, labeling, and updating;
For data sovereignty issues, establish a "data localization response mechanism" using technical means such as local deployment, edge computing, and federated learning, so as to meet compliance requirements while preserving model training efficiency.
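Federated learning, one of the localization techniques listed above, can be illustrated with a minimal federated-averaging sketch. The two-node setup, the linear model, and the synthetic data below are illustrative assumptions, not any institution's actual system:

```python
# Minimal federated-averaging sketch: each node trains on its own data,
# and only model weights (never raw records) leave the node.
import numpy as np

def local_update(weights, X, y, lr=0.1):
    """One gradient-descent step on a node's local data (linear model)."""
    preds = X @ weights
    grad = X.T @ (preds - y) / len(y)
    return weights - lr * grad

def federated_average(weight_list, sizes):
    """Aggregate local weights, weighted by each node's sample count."""
    return np.average(np.stack(weight_list), axis=0, weights=np.asarray(sizes, float))

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
global_w = np.zeros(2)

# Two jurisdictions' datasets; the raw data never crosses the border.
nodes = []
for n in (100, 150):
    X = rng.normal(size=(n, 2))
    nodes.append((X, X @ true_w))

for _ in range(200):  # communication rounds
    local_ws = [local_update(global_w.copy(), X, y) for X, y in nodes]
    global_w = federated_average(local_ws, [len(y) for _, y in nodes])

print(np.round(global_w, 2))  # ≈ [ 2. -1.]
```

In production, the averaging step would run on a coordinating server and the updates would typically be encrypted or noised, but the compliance-relevant property is the same: training data stays in its jurisdiction.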
III. Taking Banks as an Example: Specific Countermeasures in Intelligent Risk Control Scenarios
Consider a globally systemically important bank whose use of AI for credit approval and anti-fraud identification, within its intelligent risk control programme, stalled because of the challenges above. It has taken the following countermeasures:
Constructing a "Risk Control AI Joint Verification Mechanism". The bank formed an "AI Risk Control Validation Team" of model risk management staff, data scientists, and business risk control personnel, involved from the early stages of model development. Instead of pursuing "complete interpretability", the team focuses on the stability, fairness, and business adaptability of the model's output, with a dynamic validation mechanism based on business results.
Introducing a dual-track risk control system of "behavioral monitoring + outcome intervention". For anti-fraud models, the bank runs real-time behavioral monitoring: once the model outputs a high-risk warning, the system automatically triggers a manual review. A "false alarm feedback mechanism" feeds review results back into the model's training set to continuously improve accuracy.
Establishing a unified cross-border compliance template. To address regulatory differences among the US, EU, and UK, the bank's legal and technology teams jointly developed a "Unified Compliance Template for AI Risk Control Models" covering model development documents, test reports, and risk impact assessments; each region only supplements localized requirements, significantly reducing compliance costs.
Using federated learning to address data sovereignty restrictions.
To circumvent cross-border data restrictions between the EU and the UK, the bank deployed model training nodes in both locations and used federated learning to train models collaboratively without transmitting raw data, meeting compliance requirements while preserving model performance.
IV. Enterprise Service Perspective: An AI Application Response Manual
To help enterprises address the challenges of AI applications systematically, the following response manual is organized along four dimensions: strategy, process, technology, and compliance.
(I) Strategic Level: Defining AI Application Boundaries and Objectives
Develop an AI strategic roadmap that clarifies priorities, business objectives, and risk tolerance for AI applications;
Establish an AI governance committee, with participation from the business, risk, technology, and legal departments, to coordinate approval and monitoring of AI applications;
Conduct regular AI maturity assessments to identify the organization's gaps in talent, data, and technology.
(II) Process Layer: Constructing End-to-End AI Lifecycle Management
Development phase: establish AI model development specifications, including data collection standards, model training requirements, and test case design;
Validation phase: introduce a "results-oriented" validation mechanism focused on the business impact and risk exposure of model output;
Deployment phase: set up a model launch approval process and define the rollback mechanism and contingency plan;
Monitoring phase: establish real-time monitoring and conduct regular model re-examination and performance evaluation.
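The deployment-phase idea of a launch approval gate plus a rollback record can be sketched as follows. The gate names and registry layout are illustrative assumptions, not a specific bank's process:

```python
# Hedged sketch of a launch-approval gate with a rollback record.
APPROVAL_GATES = ("dev_docs_signed_off", "validation_passed", "rollback_plan_ready")

def approve_launch(model_record):
    """Block the launch unless every lifecycle gate is satisfied."""
    missing = [g for g in APPROVAL_GATES if not model_record.get(g)]
    return ("approved" if not missing else "blocked", missing)

def deploy(registry, name, version, model_record):
    """Go live only when approved, keeping the prior version for rollback."""
    status, missing = approve_launch(model_record)
    if status == "blocked":
        return f"blocked ({', '.join(missing)})"
    previous = registry.get(name, {}).get("live")
    registry[name] = {"live": version, "previous": previous}
    return f"live: {version}"

def rollback(registry, name):
    """Contingency plan: revert to the recorded previous version."""
    registry[name]["live"] = registry[name]["previous"]
    return registry[name]["live"]

registry = {}
ready = dict.fromkeys(APPROVAL_GATES, True)
print(deploy(registry, "credit_scorer", "v1", ready))  # live: v1
print(deploy(registry, "credit_scorer", "v2", ready))  # live: v2
print(rollback(registry, "credit_scorer"))             # v1
print(deploy(registry, "credit_scorer", "v3", {"rollback_plan_ready": True}))
# blocked (dev_docs_signed_off, validation_passed)
```

The point of recording the previous version at deploy time, rather than reconstructing it during an incident, is that the contingency plan stays executable even when the team that launched the model is unavailable.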
(III) Technical Layer: Enhancing the Observability and Controllability of AI Systems
Introduce model interpretability tools (such as SHAP and LIME) to help risk departments understand model behavior;
Build a model monitoring platform to track changes in model inputs and outputs in real time and identify drift and anomalies;
Adopt technologies such as federated learning and differential privacy to balance data sovereignty and model performance;
Establish an AI system redundancy mechanism to ensure human intervention remains possible in critical scenarios.
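Drift identification, as called for in the technical layer above, is commonly implemented with a population stability index (PSI) check comparing production inputs or scores against the training-time baseline. The bin count, the synthetic distributions, and the widely used 0.1/0.2 thresholds below are conventional heuristics, not regulatory requirements:

```python
# Population stability index (PSI): a standard drift metric comparing a
# baseline (training-time) distribution with current production data.
import numpy as np

def psi(expected, actual, bins=10, eps=1e-6):
    """PSI between baseline and production samples over shared bins."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected) + eps
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual) + eps
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

rng = np.random.default_rng(1)
baseline = rng.normal(0.0, 1.0, 5000)  # training-time score distribution
stable = rng.normal(0.0, 1.0, 5000)    # production, no drift
shifted = rng.normal(0.7, 1.0, 5000)   # production, mean has drifted

assert psi(baseline, stable) < 0.1     # common reading: no significant change
assert psi(baseline, shifted) > 0.2    # common reading: investigate / retrain
```

Run on a schedule against each monitored feature and the model's output score, a check like this gives the monitoring platform a concrete trigger for the re-examination and manual-intervention mechanisms described above.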