E-23

Published

December 23, 2025

Implement E-23 Compliance using ValidMind.

Follow this detailed, step-by-step guide, which will:

  1. Walk you through the practical steps required to implement end-to-end compliance using ValidMind.

  2. Link to supporting documentation and training for specific product features.

  3. Focus on actionable implementation rather than just explaining requirements.

Overview

OSFI Guideline E-23 (Enterprise-Wide Model Risk Management for Deposit-Taking Institutions)1 contains the most comprehensive treatment of AI among major MRM regulations, reflecting its 2027 effective date and the anticipated evolution of AI. By emphasizing outcomes over prescribed processes, E-23 ensures that institutions retain flexibility in how they achieve regulatory objectives.

1 Office of the Superintendent of Financial Institutions:
Guideline E-23 – Model Risk Management (2027)

This guide is organized around E-23's expectations for federally regulated deposit-taking institutions in Canada to manage model risk effectively: it covers traditional model risk management (MRM)2 requirements while incorporating forward-looking considerations for AI and machine learning models.

Traditional MRM requirements

E-23 aligns with established MRM frameworks, requiring:

  • Model inventory — Comprehensive registry of all models
  • Development standards — Clear documentation of model design and implementation
  • Validation framework — Independent validation and effective challenge
  • Ongoing monitoring — Performance tracking and periodic review
  • Governance — Board oversight and clear accountability
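The model inventory requirement above can be pictured as a simple registry keyed by model ID. This is a minimal illustrative sketch; the field names and tiers are assumptions for demonstration, not ValidMind's or OSFI's schema.

```python
# Minimal model inventory sketch (field names are illustrative assumptions).
from dataclasses import dataclass
from typing import Optional

@dataclass
class InventoryEntry:
    model_id: str
    name: str
    owner: str
    risk_tier: str                      # e.g. "low", "medium", "high"
    status: str = "in_development"
    last_validated: Optional[str] = None

# The registry itself is just a lookup keyed by model ID.
inventory: dict = {}

def register(entry: InventoryEntry) -> None:
    """Add a model to the inventory, rejecting duplicate IDs."""
    if entry.model_id in inventory:
        raise ValueError(f"duplicate model ID: {entry.model_id}")
    inventory[entry.model_id] = entry

register(InventoryEntry("CR-001", "Retail PD model", "credit-risk", "high"))
```

In practice the registry would live in a governed system of record rather than in memory; the point is that every model gets exactly one entry with an accountable owner and a risk tier.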

AI/ML-specific considerations

E-23 introduces enhanced requirements for AI and ML models in addition to traditional MRM requirements. ValidMind helps you address these considerations:

Explainability

Enhanced requirements for model interpretability ensure stakeholders can understand AI-driven decisions.

Steps

  1. Configure documentation templates3 to capture explainability information.

  2. Run interpretability tests to generate explanations.4

  3. Document the level of explainability appropriate for model risk tier.5

  4. Establish processes for explaining decisions to affected parties.6
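Step 2's interpretability tests can take many forms. One simple, model-agnostic example is permutation importance: shuffle one feature and measure how much accuracy drops. This is a hedged sketch of the general technique, not ValidMind's built-in test suite.

```python
# Permutation importance sketch: the accuracy drop when one feature's
# values are shuffled indicates how much the model relies on that feature.
import random

def permutation_importance(predict, X, y, feature_idx, n_repeats=10, seed=0):
    """Average drop in accuracy after shuffling feature `feature_idx`."""
    rng = random.Random(seed)

    def accuracy(rows):
        return sum(predict(r) == t for r, t in zip(rows, y)) / len(y)

    base = accuracy(X)
    drops = []
    for _ in range(n_repeats):
        col = [row[feature_idx] for row in X]   # fresh copy each repeat
        rng.shuffle(col)
        permuted = [row[:feature_idx] + [v] + row[feature_idx + 1:]
                    for row, v in zip(X, col)]
        drops.append(base - accuracy(permuted))
    return sum(drops) / n_repeats

# Toy model: predicts 1 when feature 0 is positive; feature 1 is noise.
predict = lambda row: int(row[0] > 0)
X = [[1, 5], [-1, 5], [2, -3], [-2, 7]]
y = [1, 0, 1, 0]
imp0 = permutation_importance(predict, X, y, 0)   # informative feature
imp1 = permutation_importance(predict, X, y, 1)   # ignored feature
```

Because the toy model ignores feature 1, shuffling it never changes a prediction, so its importance is exactly zero; the informative feature's importance is non-negative by construction.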

Alternative controls

Compensating controls for “black box” approaches where full interpretability is not achievable.

Steps

  1. Document the rationale for using less interpretable models.7

  2. Implement enhanced monitoring8 as a compensating control.

  3. Establish human oversight requirements.

  4. Configure alerts for unexpected model behavior.
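Steps 2 and 4 can be combined into a simple compensating control: flag any batch whose output rate drifts outside an expected band and route it to human oversight. The band and the routing action below are illustrative assumptions, not E-23 prescriptions.

```python
# Compensating-control sketch: alert when the approval rate of a batch of
# binary decisions leaves the expected band (thresholds are illustrative).
def check_approval_rate(decisions, lower=0.30, upper=0.60):
    """Return an alert record when the share of approvals is out of band."""
    rate = sum(decisions) / len(decisions)
    if not lower <= rate <= upper:
        return {"alert": True, "rate": rate,
                "action": "route batch to human oversight"}
    return {"alert": False, "rate": rate}

# 7 of 8 decisions approved (87.5%) - outside the 30-60% band.
result = check_approval_rate([1, 1, 1, 1, 0, 1, 1, 1])
```

For a "black box" model, this kind of output-level monitoring substitutes for insight into the model's internals: you cannot explain each decision, but you can detect when the decision pattern becomes abnormal.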

Bias assessment

Evaluation of ethical implications and discrimination risks in AI models.

Steps

  1. Run fairness and bias tests9 during development.

  2. Document bias assessment methodology.10

  3. Establish thresholds for acceptable bias levels.

  4. Implement ongoing bias monitoring.11
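One common fairness metric behind steps 1 and 3 is the demographic parity difference: the gap in positive-outcome rates between groups. The metric is standard; the tolerance value in the sketch is an illustrative assumption and would in practice be set per model risk tier.

```python
# Demographic parity difference sketch: the largest gap in
# positive-outcome rates across protected groups.
def demographic_parity_difference(outcomes, groups):
    """Max minus min positive-outcome rate across groups."""
    counts = {}
    for y, g in zip(outcomes, groups):
        hits, total = counts.get(g, (0, 0))
        counts[g] = (hits + y, total + 1)
    rates = [h / t for h, t in counts.values()]
    return max(rates) - min(rates)

outcomes = [1, 0, 1, 1, 0, 0, 1, 0]
groups   = ["a", "a", "a", "a", "b", "b", "b", "b"]
gap = demographic_parity_difference(outcomes, groups)  # 0.75 vs 0.25
passes = gap <= 0.2   # illustrative tolerance, not an E-23 threshold
```

Group "a" receives the positive outcome 75% of the time and group "b" only 25%, so the gap of 0.50 would fail the illustrative tolerance and trigger investigation.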

Autonomous decision-making

Governance for self-learning capabilities and automated decisions.

Steps

  1. Document the scope of autonomous decision-making.

  2. Establish human oversight requirements.

  3. Configure review workflows12 for autonomous decisions.

  4. Implement safeguards and intervention mechanisms.
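Steps 2 and 4 often reduce to confidence-based routing: the model decides autonomously only above a confidence cutoff, and everything else falls back to human review. The cutoff below is an illustrative assumption.

```python
# Human-oversight sketch: route low-confidence cases to a reviewer so
# autonomous decisions are bounded (cutoff value is illustrative).
def route(score, auto_threshold=0.90):
    """Auto-decide only high-confidence scores; queue the rest for review."""
    if score >= auto_threshold:
        return "auto_decision"
    return "human_review"

routed = [route(s) for s in [0.97, 0.85, 0.92, 0.40]]
```

Lowering `auto_threshold` widens the model's autonomous scope; raising it is the intervention mechanism when monitoring flags unexpected behavior.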

Model drift

Monitoring for performance degradation over time.

Steps

  1. Configure ongoing monitoring13 for drift detection.

  2. Set up alerts for performance degradation.

  3. Establish thresholds for re-validation triggers.

  4. Document drift monitoring methodology.
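A widely used drift measure for steps 1 and 3 is the Population Stability Index (PSI), which compares a baseline score distribution against production. The PSI formula is standard; the four equal buckets and the 0.2 re-validation trigger below are common conventions, used here as illustrative assumptions.

```python
# PSI sketch: sum over buckets of (actual - expected) * ln(actual / expected).
import math

def psi(expected_pcts, actual_pcts, eps=1e-6):
    """Population Stability Index between two bucketed distributions."""
    total = 0.0
    for e, a in zip(expected_pcts, actual_pcts):
        e, a = max(e, eps), max(a, eps)   # guard against empty buckets
        total += (a - e) * math.log(a / e)
    return total

baseline = [0.25, 0.25, 0.25, 0.25]   # score distribution at validation
current  = [0.10, 0.20, 0.30, 0.40]   # score distribution in production
value = psi(baseline, current)
needs_revalidation = value > 0.2      # common rule-of-thumb trigger
```

A common reading is that PSI below 0.1 means little shift, 0.1 to 0.2 warrants watching, and above 0.2 suggests significant drift; here the shift toward higher scores crosses that line and would trigger re-validation.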

Dynamic learning

Management of continuously updating models.

Steps

  1. Document model update frequency and methodology.

  2. Establish change control for model updates.

  3. Configure validation requirements14 for dynamic models.

  4. Implement rollback procedures.
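Steps 2 and 4 can be sketched as a version registry with rollback: every promoted update is recorded, and reverting means re-activating the previous entry. The class and field names below are assumptions for illustration, not a ValidMind API.

```python
# Change-control sketch: an ordered version history with rollback for a
# continuously updating model (names are illustrative assumptions).
class ModelRegistry:
    def __init__(self):
        self.versions = []      # ordered history of (version, artifact)
        self.active = None

    def promote(self, version, artifact):
        """Record a new version and make it active (change-control step)."""
        self.versions.append((version, artifact))
        self.active = version

    def rollback(self):
        """Revert to the previously recorded version."""
        if len(self.versions) < 2:
            raise RuntimeError("no earlier version to roll back to")
        self.versions.pop()
        self.active = self.versions[-1][0]

reg = ModelRegistry()
reg.promote("v1", "model-v1.bin")
reg.promote("v2", "model-v2.bin")
reg.rollback()                  # v2 misbehaves: revert to v1
```

For a dynamically learning model, the key design choice is that updates only take effect through `promote`, so every active version has a recorded predecessor to fall back to.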

Implementation checklist

Use this checklist to track your E-23 implementation progress: