```mermaid
graph LR
  A[Model<br>registration] --> B[Initial<br>validation]
  B --> C[Validation<br>approval]
  C --> D[In production]
  D --> E[Periodic review<br>and revalidation]
  E --> B
```
About ValidMind
ValidMind is a suite of tools that helps developers, data scientists, and risk and compliance stakeholders identify potential risks in their AI and large language models, and generate robust, high-quality model documentation that meets regulatory requirements.
ValidMind is purpose-built for model risk management teams and handles many use cases, including models compatible with the Hugging Face Transformers API, as well as GPT-3.5, GPT-4, and hosted Llama 2 and Falcon-based models, with a focus on text classification and text summarization use cases.
In addition to LLMs, ValidMind can also handle testing and documentation generation for a wide variety of models, including:
- Traditional machine learning (ML) models, such as tree-based models and neural networks
- Natural language processing (NLP) models
- Traditional statistical models, such as OLS regression, logistic regression, and time series models
- And many more model types
What sets ValidMind apart is its focus on simplifying complex tasks for both model developers and validators. By automating critical and often tedious aspects of the model lifecycle, such as documentation, validation, and testing, we enable model developers to concentrate on building better models.
We do all of this while making it easy to align with regulatory guidelines on model risk management in the United States, the United Kingdom, and Canada. These regulations include the Federal Reserve’s SR 11-7, the UK’s SS1/23 and CP6/22, and Canada’s Guideline E-23.
ValidMind is designed to streamline the management of risk for AI models, including those used in machine learning (ML), natural language processing (NLP), and large language models (LLMs).
ValidMind offers tools that cater to both model developers and validators, simplifying key aspects of model risk management.
What do I use ValidMind for?
Model developers and validators play important roles in managing model risk, including risk that stems from generative AI and machine learning models. From complying with regulations to ensuring that institutional standards are followed, your team members are tasked with the careful documentation, testing, and independent validation of models.
The purpose of these efforts is to ensure that good risk management principles are followed throughout the model lifecycle. To assist you with these processes of documenting and validating models, ValidMind provides a number of tools that you can employ regardless of the technology used to build your models.
The ValidMind AI Risk Platform provides two main product components:
The ValidMind Library is a Python library of tools and methods that automates generating model documentation and running validation tests. The library is platform agnostic and integrates with your existing development environment.
For Python developers, a single installation command provides access to all the functions:
```
%pip install validmind
```
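Once installed, the library connects to the ValidMind Platform using credentials generated for your registered model. The snippet below is a minimal sketch of that pattern; the placeholder host, key, secret, and model identifier are assumptions you replace with the values shown for your model in the Platform, and exact parameter names can vary by library version.

```python
# Minimal sketch: connect the ValidMind Library to the Platform and preview
# the documentation template attached to your registered inventory model.
# All credential values below are placeholders copied from your model's
# settings in the ValidMind Platform.
import validmind as vm

vm.init(
    api_host="<your-api-host>",
    api_key="<your-api-key>",
    api_secret="<your-api-secret>",
    model="<your-inventory-model-id>",
)

# List the sections and tests defined by the model's documentation template
vm.preview_template()
```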
The ValidMind Platform is an easy-to-use web-based interface that enables you to track the model lifecycle:
- Customize workflows to adhere to and oversee your model risk management process.
- Review and edit the documentation and test metrics generated by the library.
- Collaborate with and capture feedback from model developers and model validators.
- Generate validation reports and approvals.
ValidMind AI Risk Platform
How do I get started?
On the ValidMind Platform, everything starts with the model inventory — you first register a new model and then manage the model lifecycle through the different activities that are part of your existing model risk management processes.
Approval workflow
A typical high-level model approval workflow looks like this:
- New model registration: Select a documentation template when registering a new inventory model to start your model documentation. You then use the model inventory to manage the metadata associated with the model, including all compliance and regulatory attributes.
- Initial validation: Triggers a new workflow that yields a model ready for production deployment once its documentation and validation reports have been approved (see the library sketch after this list).
- Validation approval: Perform validation of the model to ensure that it meets the needs for which it was designed. You can also connect to third-party systems to send events when a model has been approved for production.
- In production: Use the model in production while monitoring its performance to ensure ongoing reliability, accuracy, and compliance with regulations.
- Periodic review and revalidation: As part of regular performance monitoring or change management, follow a process similar to the Initial validation step.
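To illustrate where the documentation and test results reviewed during initial validation come from, here is a hedged end-to-end sketch for a simple scikit-learn classifier. The dataset, model, and input IDs are hypothetical, the sketch assumes the library has already been connected with vm.init() as shown earlier, and the exact set of inputs expected depends on the documentation template attached to your model.

```python
# Hypothetical sketch: train a small classifier, wrap its data and model
# objects for ValidMind, then run the documentation template's tests so the
# results are logged to the Platform for review.
import pandas as pd
import validmind as vm
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Toy binary classification data standing in for a real use case
X, y = make_classification(n_samples=500, n_features=5, random_state=0)
df = pd.DataFrame(X, columns=[f"x{i}" for i in range(5)])
df["target"] = y
train_df, test_df = train_test_split(df, test_size=0.2, random_state=0)

model = LogisticRegression().fit(train_df.drop(columns="target"), train_df["target"])

# Wrap the datasets and model so ValidMind tests can consume them
vm_train_ds = vm.init_dataset(dataset=train_df, input_id="train_dataset", target_column="target")
vm_test_ds = vm.init_dataset(dataset=test_df, input_id="test_dataset", target_column="target")
vm_model = vm.init_model(model, input_id="example_classifier")

# Attach model predictions to each dataset for performance tests
vm_train_ds.assign_predictions(model=vm_model)
vm_test_ds.assign_predictions(model=vm_model)

# Run every test in the model's documentation template and log the results
vm.run_documentation_tests(
    inputs={
        "dataset": vm_test_ds,
        "datasets": (vm_train_ds, vm_test_ds),
        "model": vm_model,
    }
)
```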
Ready to try out ValidMind?
Our quickstarts are the fastest and easiest way to explore our product features.