%pip install -q validmind

Run dataset-based tests
Learn how to use the ValidMind Library to run tests that take any dataset or model as input. Identify specific tests to run, initialize ValidMind dataset objects in preparation for passing them to your tests, and then run the chosen tests — generating outputs that can be automatically logged to your model's documentation in the ValidMind Platform.
About ValidMind
ValidMind is a suite of tools for managing model risk, including risk associated with AI and statistical models.
You use the ValidMind Library to automate documentation and validation tests, and then use the ValidMind Platform to collaborate on model documentation. Together, these products simplify model risk management, facilitate compliance with regulations and institutional standards, and enhance collaboration between yourself and model validators.
Before you begin
This notebook assumes you have basic familiarity with Python, including an understanding of how functions work. If you are new to Python, you can still run the notebook but we recommend further familiarizing yourself with the language.
If you encounter errors due to missing modules in your Python environment, install the modules with pip install, and then re-run the notebook. For more help, refer to Installing Python Modules.
New to ValidMind?
If you haven't already seen our documentation on the ValidMind Library, we recommend you begin by exploring the available resources in this section. There, you can learn more about documenting models and running tests, as well as find code samples and our Python Library API reference.
Register with ValidMind
Key concepts
Model documentation: A structured and detailed record pertaining to a model, encompassing key components such as its underlying assumptions, methodologies, data sources, inputs, performance metrics, evaluations, limitations, and intended uses. It serves to ensure transparency, adherence to regulatory requirements, and a clear understanding of potential risks associated with the model’s application.
Documentation template: Functions as a test suite and lays out the structure of model documentation, segmented into various sections and sub-sections. Documentation templates define the structure of your model documentation, specifying the tests that should be run, and how the results should be displayed.
Tests: A function contained in the ValidMind Library, designed to run a specific quantitative test on the dataset or model. Tests are the building blocks of ValidMind, used to evaluate and document models and datasets, and can be run individually or as part of a suite defined by your model documentation template.
Metrics: A subset of tests that do not have thresholds. In the context of this notebook, metrics and tests can be thought of as interchangeable concepts.
Custom metrics: Custom metrics are functions that you define to evaluate your model or dataset. These functions can be registered with the ValidMind Library to be used in the ValidMind Platform.
Inputs: Objects to be evaluated and documented in the ValidMind Library. They can be any of the following:
- model: A single model that has been initialized in ValidMind with vm.init_model().
- dataset: A single dataset that has been initialized in ValidMind with vm.init_dataset().
- models: A list of ValidMind models, usually used when you want to compare multiple models in your custom metric.
- datasets: A list of ValidMind datasets, usually used when you want to compare multiple datasets in your custom metric. (Learn more: Run tests with multiple datasets)
Parameters: Additional arguments that can be passed when running a ValidMind test, used to pass additional information to a metric, customize its behavior, or provide additional context.
Outputs: Custom metrics can return elements like tables or plots. Tables may be a list of dictionaries (each representing a row) or a pandas DataFrame. Plots may be matplotlib or plotly figures.
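Independent of ValidMind, the two supported table shapes can be sketched in plain Python. The column names below are illustrative only, not part of any ValidMind API:

```python
import pandas as pd

# A table as a list of dictionaries: each dictionary is one row.
# (Column names "feature" and "correlation" are hypothetical examples.)
rows = [
    {"feature": "feature_0", "correlation": 0.42},
    {"feature": "feature_1", "correlation": -0.17},
]

# The same table expressed as a pandas DataFrame.
table = pd.DataFrame(rows)
print(table.shape)  # (2, 2)
```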
Test suites: Collections of tests designed to run together to automate and generate model documentation end-to-end for specific use-cases.
Example: the classifier_full_suite test suite runs tests from the tabular_dataset and classifier test suites to fully document the data and model sections for binary classification model use-cases.
Setting up
Install the ValidMind Library
Python 3.8 <= x <= 3.11
To install the library:
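Run the install cell shown at the top of this notebook (the `%pip` form is an IPython magic that installs into the active kernel's environment):

```shell
%pip install -q validmind
```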
Initialize the ValidMind Library
Register sample model
Let's first register a sample model for use with this notebook.
In a browser, log in to ValidMind.
In the left sidebar, navigate to Inventory and click + Register Model.
Enter the model details and click Next > to continue to assignment of model stakeholders. (Need more help?)
Select your own name under the MODEL OWNER drop-down.
Click Register Model to add the model to your inventory.
Apply documentation template
Once you've registered your model, let's select a documentation template. A template predefines sections for your model documentation and provides a general outline to follow, making the documentation process much easier.
In the left sidebar that appears for your model, click Documents and select Documentation.
Under TEMPLATE, select Binary classification.
Click Use Template to apply the template.
Get your code snippet
ValidMind generates a unique code snippet for each registered model to connect with your developer environment. You initialize the ValidMind Library with this code snippet, which ensures that your documentation and tests are uploaded to the correct model when you run the notebook.
- On the left sidebar that appears for your model, select Getting Started and click Copy snippet to clipboard.
- Next, load your model identifier credentials from an .env file or replace the placeholder with your own code snippet:
# Load your model identifier credentials from an `.env` file
%load_ext dotenv
%dotenv .env
# Or replace with your code snippet
import validmind as vm
vm.init(
# api_host="...",
# api_key="...",
# api_secret="...",
# model="...",
# document="documentation",
)

Preview the documentation template
Let's verify that you have connected the ValidMind Library to the ValidMind Platform and that the appropriate template is selected for your model.
You will upload documentation and test results unique to your model based on this template later on. For now, take a look at the default structure that the template provides with the vm.preview_template() function from the ValidMind library and note the empty sections:
vm.preview_template()

Explore a ValidMind test
Before we run a test, use the vm.tests.list_tests() function to return information on out-of-the-box tests available in the ValidMind Library.
Let's assume you want to generate the Pearson correlation matrix for a dataset. A Pearson correlation matrix is a table that shows the Pearson correlation coefficients between several variables.
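For intuition about what this test computes, a Pearson correlation matrix can be produced directly with pandas. This is a minimal sketch on toy data, separate from ValidMind's test:

```python
import pandas as pd

# Two linearly related columns and one unrelated column (illustrative data).
df = pd.DataFrame({
    "a": [1.0, 2.0, 3.0, 4.0],
    "b": [2.0, 4.0, 6.0, 8.0],  # b = 2 * a, so perfectly correlated with "a"
    "c": [1.0, 0.0, 1.0, 0.0],
})

# Pairwise Pearson correlation coefficients between all numeric columns.
corr = df.corr(method="pearson")
print(corr.loc["a", "b"])  # ~1.0 for the perfectly correlated pair
```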
We'll pass a filter to the list_tests function to find the test ID for the Pearson correlation matrix:
vm.tests.list_tests(filter="PearsonCorrelationMatrix")

We've identified from the output that the test ID for the Pearson correlation matrix test is validmind.data_validation.PearsonCorrelationMatrix.
Use this ID combined with the describe_test() function to retrieve more information about the test, including its Required Inputs:
test_id = "validmind.data_validation.PearsonCorrelationMatrix"
vm.tests.describe_test(test_id)

Since this test requires a dataset, you can expect it to throw an error when we run it without passing in a dataset as input:
try:
vm.tests.run_test(test_id)
except Exception as e:
print(e)

Check out our Explore tests notebook for more code examples and usage of key functions.
Working with ValidMind datasets
Create a sample dataset
Since we need a dataset to run tests, let's use the sklearn make_classification function to generate a random sample dataset for testing.
In the code example below, note that:
- The make_classification function generates a synthetic binary classification dataset with 10,000 samples and 10 features, where the weights=[0.1] parameter creates a class imbalance (roughly 10% positive class).
- The random_state=42 parameter ensures reproducibility, so you get the same dataset each time you run the code.
- The generated feature matrix X and target array y are combined into a single Pandas DataFrame with columns named feature_0 through feature_9, plus a target column that has a value of 1 for the positive class and 0 otherwise.
import pandas as pd
from sklearn.datasets import make_classification
X, y = make_classification(
n_samples=10000,
n_features=10,
weights=[0.1],
random_state=42,
)
X.shape
y.shape
df = pd.DataFrame(X, columns=[f"feature_{i}" for i in range(X.shape[1])])
df["target"] = y
df.head()

Initialize the ValidMind dataset
The next step is to connect your data with a ValidMind Dataset object. This step is necessary whenever you want a dataset to feed into documentation or test results through ValidMind, but you only need to do it once per dataset.
ValidMind dataset objects provide a wrapper to any type of dataset (NumPy, Pandas, Polars, etc.) so that tests can run transparently regardless of the underlying library.
Initialize a ValidMind dataset object using the init_dataset function from the ValidMind (vm) module. For this example, we'll pass in the following arguments:
- dataset: The raw dataset that you want to provide as input to tests.
- input_id: A unique identifier that allows tracking what inputs are used when running each individual test.
- target_column: A required argument if tests require access to true values. This is the name of the target column in the dataset.
# Initialize the ValidMind dataset for the previously created sample `df`
vm_dataset = vm.init_dataset(
df,
input_id="my_demo_dataset",
target_column="target",
)

Running ValidMind tests
Now that we know how to initialize a ValidMind dataset object, we're ready to run some tests!
You run individual tests by calling the run_test function provided by the validmind.tests module. For the examples below, we'll pass in the following arguments:
- test_id: The ID of the test to run, as seen in the ID column when you run list_tests.
- inputs: A dictionary of test inputs, such as dataset, model, datasets, or models. These are ValidMind objects initialized with vm.init_dataset() or vm.init_model().
Run test using ValidMind dataset
Given that our test_id is currently set to validmind.data_validation.PearsonCorrelationMatrix, we'll get the results of the Pearson Correlation Matrix test as output when we call run_test():
result = vm.tests.run_test(
test_id,
inputs={"dataset": vm_dataset},
)

Run and log test requiring parameters
Our vm_dataset can also be used for any other test that requires a dataset input, including tests that take additional parameters.
To demonstrate, let's find a class imbalance test to understand the distribution of the target column in the dataset. Class imbalance is a common problem in machine learning, particularly in classification tasks, where the number of instances (or data points) in each class isn't evenly distributed across the available categories.
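Before running the ValidMind test, you can see what class imbalance looks like with plain pandas. This is a sketch on toy data, not the ValidMind test itself:

```python
import pandas as pd

# Toy target column with a 10% / 90% class split (illustrative data).
target = pd.Series([1] * 10 + [0] * 90)

# Proportion of rows falling in each class.
proportions = target.value_counts(normalize=True)
print(proportions[1])  # 0.1, i.e. the minority class holds 10% of the rows
```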
Tags describe what a test applies to and help you filter tests for your use case. Use list_tags() to view all unique tags used to describe tests in the ValidMind Library:
# Sort the tags in ABC order
sorted(vm.tests.list_tags())

Use list_tests(), this time filtering tests by tags for binary_classification relating to tabular_data:
vm.tests.list_tests(tags=["binary_classification", "tabular_data"])

Let's use describe_test() again to retrieve more information about the test, including confirmation that it accepts some additional parameters, such as min_percent_threshold, which allows you to configure the threshold for an acceptable class imbalance:
vm.tests.describe_test("validmind.data_validation.ClassImbalance")

Log ClassImbalance test with default parameters
Every test result returned by the run_test() function has a .log() method that can be used to send the test results to the ValidMind Platform.
Let's first run the class imbalance test without any parameters to see its output using a default value for the threshold and log the results to the ValidMind Platform for later comparison:
result = vm.tests.run_test(
"validmind.data_validation.ClassImbalance",
inputs={"dataset": vm_dataset},
)
result.log()

Log ClassImbalance test with custom parameters
From the output, we've confirmed that the class imbalance test passes the pass-fail criteria with the default threshold of 10%. Let's try to run the test with a threshold of 20% to see if it fails.
When running individual tests, you can use a custom result_id to tag the individual result with a unique identifier, allowing you to submit individual results for the same test to the ValidMind Platform:
- This result_id can be appended to test_id with a : separator.
- The custom_threshold identifier will correspond with the results of our adjusted min_percent_threshold parameter.
result = vm.tests.run_test(
"validmind.data_validation.ClassImbalance:custom_threshold",
inputs={"dataset": vm_dataset},
params={"min_percent_threshold": 20},
)
result.log()When the threshold is set to 20%, the results show that the class imbalance test fails.
Work with test results
You can look at the output of tests produced by the ValidMind Library right in this notebook where you ran the tests, as you would expect. But there is a better way: use the ValidMind Platform to attach the logged test results to your model's documentation (Need more help?):
From the Inventory in the ValidMind Platform, go to the model you connected to earlier.
In the left sidebar that appears for your model, click Documentation under Documents.
Locate the Data Preparation section and click on 2.1. Data Description to expand that section.
Hover under the logged test block for the default Class Imbalance test until a horizontal dashed line with a + button appears, indicating that you can insert a new block.
Click + and then select Test-Driven Block under FROM LIBRARY:
- Click on VM Library under TEST-DRIVEN in the left sidebar.
- Select ClassImbalance:custom_threshold as the test.
Finally, click Insert 1 Test Result to Document to add the test result to the documentation.
Confirm that the individual result for the adjusted threshold class imbalance test has been correctly inserted into section 2.1. Data Description of the documentation.
You just worked with a draft of your model's documentation, in an easily consumable format matching the structure of the template you previewed in the beginning of this notebook. When you connect to a model with the ValidMind Library, logged test results automatically populate for easy insertion into your documentation.
In the ValidMind Platform, you can make qualitative edits to model documentation, view guidelines, collaborate with validators, and submit your model documentation for approval when it's ready. Learn more ...
Next steps
Now that you know the basics of how to run out-of-the-box tests in the ValidMind Library, you’re ready to take the next step. Use run_test() with any combination of datasets or models as inputs to run comparison tests, and log your consolidated test results to the ValidMind Platform.
Check out our Run comparison tests notebook for code examples and usage of key functions.
Discover more learning resources
We offer many interactive notebooks to help you automate testing, documenting, validating, and more:
Or, visit our documentation to learn more about ValidMind.
Upgrade ValidMind
Retrieve the information for the currently installed version of ValidMind:
%pip show validmind

If the version returned is lower than the version indicated in our production open-source code, restart your notebook and run:
%pip install --upgrade validmind

You may need to restart your kernel after upgrading the package for the changes to be applied.
Copyright © 2023-2026 ValidMind Inc. All rights reserved.
Refer to LICENSE for details.
SPDX-License-Identifier: AGPL-3.0 AND ValidMind Commercial