Validator Fundamentals — Module 2 of 4
Click to start
“As a validator who has connected to a champion model via the ValidMind Library, I want to identify relevant tests to run from ValidMind’s test repository, run and log data quality tests, and insert the test results into my model’s validation report.”
This second module is part of a four-part series:
Validator Fundamentals
First, let’s make sure you can log in to ValidMind.
Training is interactive — you explore ValidMind live. Try it!
→, ↓, SPACE, N — next slide
←, ↑, P, H — previous slide
? — all keyboard shortcuts
To continue, you need to have been onboarded onto ValidMind Academy with the Validator role and completed the first module of this course:
Be sure to return to this page afterwards.
Jupyter Notebook series
These notebooks walk you through how to validate a model using ValidMind, complete with supporting test results attached as evidence to your validation report.
You will need to have already completed 1 — Set up the ValidMind Library for validation during the first module to proceed.
Our series of four introductory notebooks for model validators includes sample code and how-to information to get you started with ValidMind:
1 — Set up the ValidMind Library for validation
2 — Start the model validation process
3 — Developing a potential challenger model
4 — Finalize testing and reporting
In this second module, we’ll run through 2 — Start the model validation process together.
Let’s continue our journey with 2 — Start the model validation process on the next page.
2 — Start the model validation process
During this course, we’ll run through these notebooks together, and at the end of your learning journey you’ll have a fully supported sample validation report ready for review.
For now, scroll through this notebook to explore. When you are done, click to continue.
ValidMind test repository
ValidMind provides a wealth of out-of-the-box tests to help you ensure that your model is being built appropriately.
In this module, you’ll become familiar with the individual tests available in ValidMind, as well as how to run them and change parameters as necessary.
For now, scroll through these test descriptions to explore. When you’re done, click to continue.
ValidMind generates a unique code snippet for each registered model to connect with your validation environment:
Can’t load the ValidMind Platform?
Make sure you’re logged in and have refreshed the page in a Chromium-based web browser.
Connect to your model
With your code snippet copied to your clipboard:
When you’re done, return to this page and click to continue.
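The snippet generally takes the shape sketched below. The host URL, keys, and model identifier here are placeholders, not real values: copy the snippet generated for your own registered model instead.

```python
# Placeholder credentials: substitute the code snippet generated
# for your own registered model in the ValidMind Platform.
import validmind as vm

vm.init(
    api_host="...",
    api_key="...",
    api_secret="...",
    model="...",
)
```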
Load the sample dataset
After you’ve successfully initialized the ValidMind Library, let’s import the sample dataset that was used to develop the dummy champion model:
When you’re done, return to this page and click to continue.
Identify qualitative tests
Next, we’ll use the list_tests() function to pinpoint tests we want to run:
When you’re done, return to this page and click to continue.
Initialize ValidMind datasets
Then, we’ll use the init_dataset() function to connect the sample data with a ValidMind Dataset object in preparation for running tests:
When you’re done, return to this page and click to continue.
Run data quality tests
You run individual tests by calling the run_test() function provided by the validmind.tests module:
When you’re done, return to this page and click to continue.
Remove highly correlated features
You can utilize the output from a ValidMind test for further use, for example, if you want to remove highly correlated features:
When you’re done, return to this page and click to continue.
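The notebook derives the columns to drop from a ValidMind correlation test result; independent of that API, the underlying idea can be sketched with plain pandas (the 0.95 threshold and toy data are assumptions):

```python
import numpy as np
import pandas as pd

# Toy DataFrame with one highly correlated pair (b is roughly 2*a)
df = pd.DataFrame({
    "a": [1.0, 2.0, 3.0, 4.0, 5.0],
    "b": [2.1, 4.0, 6.2, 8.1, 9.9],
    "c": [5.0, 1.0, 4.0, 2.0, 3.0],
})

# Keep only the upper triangle of the absolute correlation matrix,
# so each pair is considered once
corr = df.corr().abs()
upper = corr.where(np.triu(np.ones(corr.shape, dtype=bool), k=1))

# Drop any column whose correlation with an earlier column exceeds 0.95
to_drop = [col for col in upper.columns if (upper[col] > 0.95).any()]
reduced_df = df.drop(columns=to_drop)
```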
Every test result returned by the run_test() function has a .log() method that can be used to send the test results to the ValidMind Platform:
Configure and run comparison tests
You can leverage the ValidMind Library to easily run comparison tests, both between datasets and between models. Here, we compare the original raw dataset with the final preprocessed dataset, then log the results to the ValidMind Platform:
When you’re done, return to this page and click to continue.
Log tests with unique identifiers
When running individual tests, you can use a custom result_id to tag the individual result with a unique identifier:
When you’re done, return to this page and click to continue.
With some test results logged, let’s head to the model we connected to at the beginning of this notebook and insert our test results into the validation report as evidence.
While the example below focuses on a specific test result, you can follow the same general procedure for your other results:
From the Inventory in the ValidMind Platform, go to the model you connected to earlier.
In the left sidebar that appears for your model, click Validation Report.
Locate the Data Preparation section and click on 2.2.1. Data Quality to expand that section.
Under the Class Imbalance Assessment section, locate Validator Evidence then click Link Evidence to Report.
Select the Class Imbalance test results we logged: ValidMind Data Validation Class Imbalance
Click Update Linked Evidence to add the test results to the validation report.
Confirm that the Class Imbalance test results have been correctly inserted into section 2.2.1. Data Quality of the report.
Once linked as evidence to section 2.2.1. Data Quality, note that the ValidMind Data Validation Class Imbalance test results are flagged as Requires Attention, as they include comparative results from our initial raw dataset.
Click See evidence details to review the LLM-generated description that summarizes the test results and confirms that our final preprocessed dataset passes our test:
Link validator evidence
Split the preprocessed dataset
So far, we’ve rebalanced our raw dataset and used the results of ValidMind tests to remove highly correlated features. Next, let’s split our dataset into training and test sets in preparation for model evaluation testing:
When you’re done, return to this page and click to continue.
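This step is often handled with scikit-learn's train_test_split; below is a pandas-only sketch of the same idea, where the toy data and the 80/20 ratio are assumptions:

```python
import pandas as pd

# Toy stand-in for the preprocessed dataset
df = pd.DataFrame({"x": range(10), "y": [0, 1] * 5})

# Sample 80% of rows for training (fixed seed for reproducibility),
# then take the remaining rows as the test set
train_df = df.sample(frac=0.8, random_state=42)
test_df = df.drop(train_df.index)
```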
Running data quality tests
In this second module, you learned how to:
Continue your model validation journey with:
Developing Challenger Models
ValidMind Academy | Home