Model Risk Management (MRM) for AI models is the process of identifying, assessing, and managing risks that could impact a model's accuracy or performance. MRM is a subset of Governance, Risk, and Compliance (GRC) that deals specifically with the risks associated with models.
MRM for AI models combines data science, ML engineering, and risk management practices, helping organizations design and implement procedures that ensure the accuracy, robustness, and reliability of their models.
There are a number of ways to approach model risk management, but one common approach is to establish a model risk management framework. This framework should identify the key risks associated with AI models and establish processes for assessing and mitigating those risks.
To do this, organizations need a clear understanding of the potential risks their AI models pose, so that the framework can mitigate and manage those risks once the models are deployed.
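As a concrete illustration, one minimal starting point for such a framework is a model risk register. The sketch below (in Python; the `ModelRisk` and `ModelRiskRecord` classes and their fields are hypothetical, not a standard API) shows one way an organization might record each model's identified risks, their severity, and the agreed mitigation:

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import List


class RiskSeverity(Enum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3


@dataclass
class ModelRisk:
    """A single identified risk for a deployed model."""
    description: str
    severity: RiskSeverity
    mitigation: str  # the agreed process for reducing this risk


@dataclass
class ModelRiskRecord:
    """One entry in the organization's model risk register."""
    model_name: str
    owner: str  # person or team accountable for the model
    risks: List[ModelRisk] = field(default_factory=list)

    def open_high_risks(self) -> List[ModelRisk]:
        """Risks that should block or escalate a deployment decision."""
        return [r for r in self.risks if r.severity is RiskSeverity.HIGH]


# Example: registering a risk for a credit-scoring model.
record = ModelRiskRecord(
    model_name="credit_default_classifier",
    owner="model-risk-team",
)
record.risks.append(
    ModelRisk(
        description="Production data drifts away from the training distribution",
        severity=RiskSeverity.HIGH,
        mitigation="Monitor feature distributions weekly; retrain on drift",
    )
)
print(len(record.open_high_risks()))  # -> 1
```

Even a simple structure like this gives the assessment and mitigation processes something concrete to operate on: each risk has an owner, a severity, and a documented mitigation.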
SR 11-7 (Supervision and Regulation Letter 11-7, "Supervisory Guidance on Model Risk Management") was released by the United States Federal Reserve and the Office of the Comptroller of the Currency (OCC) in 2011. Specific to the banking sector, it provides requirements for how a model should be developed, tested, validated, and governed.
The guidelines are intended to help banks identify, assess, and manage risks arising from inaccurate models, data quality issues, model complexity, or incorrect model implementation.
How SR 11-7 is implemented will vary depending on the specific AI models being used in the banking sector. In general, though, an implementation covers the three elements the guidance itself emphasizes:

- Robust model development, implementation, and use, with documented design decisions and testing.
- Sound, independent model validation, both before deployment and on an ongoing basis.
- Governance, policies, and controls that assign clear ownership of each model.

A sketch of what the validation element might look like in code follows this list.
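To make the testing-and-validation element concrete, the sketch below uses Python and scikit-learn to show what an automated pre-deployment validation gate might look like. The `MIN_ACCURACY` and `MIN_AUC` thresholds and the `validate_model` helper are illustrative assumptions, not anything SR 11-7 prescribes; in practice the thresholds would come from the bank's own model risk policy.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, roc_auc_score
from sklearn.model_selection import train_test_split

# Hypothetical acceptance thresholds set by the model risk policy.
MIN_ACCURACY = 0.70
MIN_AUC = 0.75


def validate_model(model, X_test, y_test) -> dict:
    """Run validation checks on a held-out set and return a
    report suitable for the model's governance record."""
    preds = model.predict(X_test)
    scores = model.predict_proba(X_test)[:, 1]
    report = {
        "accuracy": accuracy_score(y_test, preds),
        "auc": roc_auc_score(y_test, scores),
    }
    report["approved"] = (
        report["accuracy"] >= MIN_ACCURACY and report["auc"] >= MIN_AUC
    )
    return report


# Example: validating a simple classifier on synthetic data.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))
y = (X[:, 0] + rng.normal(scale=0.5, size=1000) > 0).astype(int)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression().fit(X_train, y_train)
print(validate_model(model, X_test, y_test))
```

Running the checks automatically and keeping the resulting report alongside the model supports both the validation and the governance elements: the approval decision is reproducible and auditable.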