Run dataset-based tests

Learn how to use the ValidMind Library to run tests that take any dataset or model as input. Identify specific tests to run, initialize ValidMind dataset objects in preparation for passing them to your tests, and then run the chosen tests — generating outputs that can be automatically logged to your model's documentation in the ValidMind Platform.

About ValidMind

ValidMind is a suite of tools for managing model risk, including risk associated with AI and statistical models.

You use the ValidMind Library to automate documentation and validation tests, and then use the ValidMind Platform to collaborate on model documentation. Together, these products simplify model risk management, facilitate compliance with regulations and institutional standards, and enhance collaboration between yourself and model validators.

Before you begin

This notebook assumes you have basic familiarity with Python, including an understanding of how functions work. If you are new to Python, you can still run the notebook but we recommend further familiarizing yourself with the language.

If you encounter errors due to missing modules in your Python environment, install the modules with pip install, and then re-run the notebook. For more help, refer to Installing Python Modules.

New to ValidMind?

If you haven't already seen our documentation on the ValidMind Library, we recommend you begin by exploring the available resources in this section. There, you can learn more about documenting models and running tests, as well as find code samples and our Python Library API reference.

For access to all features available in this notebook, you'll need access to a ValidMind account.

Register with ValidMind

Key concepts

Model documentation: A structured and detailed record pertaining to a model, encompassing key components such as its underlying assumptions, methodologies, data sources, inputs, performance metrics, evaluations, limitations, and intended uses. It serves to ensure transparency, adherence to regulatory requirements, and a clear understanding of potential risks associated with the model’s application.

Documentation template: Functions as a test suite and lays out the structure of model documentation, segmented into various sections and sub-sections. Documentation templates define the structure of your model documentation, specifying the tests that should be run, and how the results should be displayed.

Tests: A function contained in the ValidMind Library, designed to run a specific quantitative test on the dataset or model. Tests are the building blocks of ValidMind, used to evaluate and document models and datasets, and can be run individually or as part of a suite defined by your model documentation template.

Metrics: A subset of tests that do not have thresholds. In the context of this notebook, metrics and tests can be thought of as interchangeable concepts.

Custom metrics: Custom metrics are functions that you define to evaluate your model or dataset. These functions can be registered with the ValidMind Library to be used in the ValidMind Platform.

Inputs: Objects to be evaluated and documented in the ValidMind Library. They can be any of the following:

  • model: A single model that has been initialized in ValidMind with vm.init_model().
  • dataset: Single dataset that has been initialized in ValidMind with vm.init_dataset().
  • models: A list of ValidMind models - usually this is used when you want to compare multiple models in your custom metric.
  • datasets: A list of ValidMind datasets - usually this is used when you want to compare multiple datasets in your custom metric. (Learn more: Run tests with multiple datasets)

Parameters: Additional arguments that can be passed when running a ValidMind test, used to pass additional information to a metric, customize its behavior, or provide additional context.

Outputs: Custom metrics can return elements like tables or plots. Tables may be a list of dictionaries (each representing a row) or a pandas DataFrame. Plots may be matplotlib or plotly figures.
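As a concrete illustration of the two table shapes, here is a sketch of a metric body that builds the same table both ways. The function name and signature are illustrative only, not part of the ValidMind API:

```python
import pandas as pd

# A hypothetical custom metric body (illustrative, not the ValidMind API).
# It returns the same table in both shapes a custom metric may produce.
def class_counts_table(df: pd.DataFrame, target_column: str):
    counts = df[target_column].value_counts()
    # Shape 1: a list of dictionaries, one per row
    rows = [{"class": int(k), "count": int(v)} for k, v in counts.items()]
    # Shape 2: the equivalent pandas DataFrame
    table = pd.DataFrame(rows)
    return rows, table

demo = pd.DataFrame({"target": [0, 0, 0, 1]})
rows, table = class_counts_table(demo, "target")
print(rows)  # [{'class': 0, 'count': 3}, {'class': 1, 'count': 1}]
```

Either return shape would be rendered as a table; matplotlib or plotly figure objects are handled analogously for plots.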

Test suites: Collections of tests designed to run together to automate and generate model documentation end-to-end for specific use-cases.

Example: the classifier_full_suite test suite runs tests from the tabular_dataset and classifier test suites to fully document the data and model sections for binary classification model use-cases.

Setting up

Install the ValidMind Library

Recommended Python versions

Python 3.8 <= x <= 3.11
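To confirm your interpreter falls in this range before installing, a quick standard-library check:

```python
import sys

# True when the running interpreter is within the recommended range
supported = (3, 8) <= sys.version_info[:2] <= (3, 11)
print(f"Python {sys.version_info.major}.{sys.version_info.minor} supported: {supported}")
```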

To install the library:

%pip install -q validmind

Initialize the ValidMind Library

Register sample model

Let's first register a sample model for use with this notebook.

  1. In a browser, log in to ValidMind.

  2. In the left sidebar, navigate to Inventory and click + Register Model.

  3. Enter the model details and click Next > to continue to assignment of model stakeholders. (Need more help?)

  4. Select your own name under the MODEL OWNER drop-down.

  5. Click Register Model to add the model to your inventory.

Apply documentation template

Once you've registered your model, let's select a documentation template. A template predefines sections for your model documentation and provides a general outline to follow, making the documentation process much easier.

  1. In the left sidebar that appears for your model, click Documents and select Documentation.

  2. Under TEMPLATE, select Binary classification.

  3. Click Use Template to apply the template.

Get your code snippet

ValidMind generates a unique code snippet for each registered model to connect with your developer environment. You initialize the ValidMind Library with this code snippet, which ensures that your documentation and tests are uploaded to the correct model when you run the notebook.

  1. On the left sidebar that appears for your model, select Getting Started and click Copy snippet to clipboard.
  2. Next, load your model identifier credentials from a .env file or replace the placeholder with your own code snippet:
# Load your model identifier credentials from a `.env` file

%load_ext dotenv
%dotenv .env

# Or replace with your code snippet

import validmind as vm

vm.init(
    # api_host="...",
    # api_key="...",
    # api_secret="...",
    # model="...",
    # document="documentation",
)
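If you take the .env route, the file lives alongside your notebook and carries the same credentials as your code snippet. The variable names below follow ValidMind's environment-variable conventions, but confirm them against your own snippet and our guide on storing model credentials in .env files; the values shown are placeholders, not working credentials:

```ini
# .env — placeholder values; copy the real ones from your code snippet
VM_API_HOST=...
VM_API_KEY=...
VM_API_SECRET=...
VM_API_MODEL=...
```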

Preview the documentation template

Let's verify that you have connected the ValidMind Library to the ValidMind Platform and that the appropriate template is selected for your model.

You will upload documentation and test results unique to your model based on this template later on. For now, take a look at the default structure that the template provides with the vm.preview_template() function from the ValidMind library and note the empty sections:

vm.preview_template()

Explore a ValidMind test

Before we run a test, use the vm.tests.list_tests() function to return information on out-of-the-box tests available in the ValidMind Library.

Let's assume you want to generate the Pearson correlation matrix for a dataset. A Pearson correlation matrix is a table that shows the Pearson correlation coefficients between several variables.

We'll pass in a filter to the list_tests function to find the test ID for the Pearson correlation matrix:

vm.tests.list_tests(filter="PearsonCorrelationMatrix")

We've identified from the output that the test ID for the Pearson correlation matrix test is validmind.data_validation.PearsonCorrelationMatrix.

Use this ID combined with the describe_test() function to retrieve more information about the test, including its Required Inputs:

test_id = "validmind.data_validation.PearsonCorrelationMatrix"
vm.tests.describe_test(test_id)
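For intuition about the statistic itself, the matrix this test reports is the same one pandas computes with DataFrame.corr(), whose default method is Pearson. A minimal sketch on a toy frame:

```python
import pandas as pd

# A toy frame: b is an exact linear function of a, c moves opposite to a
demo = pd.DataFrame({
    "a": [1.0, 2.0, 3.0, 4.0],
    "b": [2.0, 4.0, 6.0, 8.0],  # b = 2a, so corr(a, b) == 1
    "c": [4.0, 3.0, 2.0, 1.0],  # c decreases as a increases, corr == -1
})
corr = demo.corr()  # method="pearson" is the default
print(corr.round(2))
```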

Since this test requires a dataset, you can expect it to throw an error when we run it without passing in a dataset as input:

try:
    vm.tests.run_test(test_id)
except Exception as e:
    print(e)
Learn more about the individual tests available in the ValidMind Library

Check out our Explore tests notebook for more code examples and usage of key functions.

Working with ValidMind datasets

Create a sample dataset

Since we need a dataset to run tests, let's use the sklearn make_classification function to generate a random sample dataset for testing.

In the code example below, note that:

  • The make_classification function generates a synthetic binary classification dataset with 10,000 samples and 10 features, where the weights=[0.1] parameter creates a class imbalance: roughly 10% of samples fall in class 0, with the remaining 90% in class 1.
  • The random_state=42 parameter ensures reproducibility so you get the same dataset each time you run the code.
  • The generated feature matrix X and target array y are combined into a single Pandas DataFrame with columns named feature_0 through feature_9, plus a target column that has a value of 1 for the positive class and 0 otherwise.
import pandas as pd
from sklearn.datasets import make_classification

X, y = make_classification(
    n_samples=10000,
    n_features=10,
    weights=[0.1],
    random_state=42,
)
print(X.shape, y.shape)  # (10000, 10) (10000,)

df = pd.DataFrame(X, columns=[f"feature_{i}" for i in range(X.shape[1])])
df["target"] = y
df.head()

Initialize the ValidMind dataset

The next step is to connect your data with a ValidMind Dataset object. This step is necessary before a dataset can be connected to documentation and produce test results through ValidMind, but you only need to do it once per dataset: after initialization, the same object can be reused across tests.

ValidMind dataset objects provide a wrapper to any type of dataset (NumPy, Pandas, Polars, etc.) so that tests can run transparently regardless of the underlying library.

Initialize a ValidMind dataset object using the init_dataset function from the ValidMind (vm) module. For this example, we'll pass in the following arguments:

  • dataset — The raw dataset that you want to provide as input to tests.
  • input_id — A unique identifier that allows tracking what inputs are used when running each individual test.
  • target_column — A required argument if tests require access to true values. This is the name of the target column in the dataset.
# Initialize the ValidMind dataset for the previously created sample `df`
vm_dataset = vm.init_dataset(
    df,
    input_id="my_demo_dataset",
    target_column="target",
)

Running ValidMind tests

Now that we know how to initialize a ValidMind dataset object, we're ready to run some tests!

You run individual tests by calling the run_test function provided by the validmind.tests module. For the examples below, we'll pass in the following arguments:

  • test_id — The ID of the test to run, as seen in the ID column when you run list_tests.
  • inputs — A dictionary of test inputs, such as dataset, model, datasets, or models. These are ValidMind objects initialized with vm.init_dataset() or vm.init_model().

Run test using ValidMind dataset

Given that our test_id is currently set to validmind.data_validation.PearsonCorrelationMatrix, we'll get the results of the Pearson Correlation Matrix test as output when we call run_test():

result = vm.tests.run_test(
    test_id,
    inputs={"dataset": vm_dataset},
)

Run and log test requiring parameters

Our vm_dataset can also be used for any other test that requires a dataset input, including tests that take additional parameters.

Let's find a class imbalance test to understand the distribution of the target column in the dataset to demonstrate. Class imbalance is a common problem in machine learning, particularly in classification tasks, where the number of instances (or data points) in each class isn't evenly distributed across the available categories.
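Before finding the ValidMind test, the pass/fail logic such a test applies can be sketched in plain pandas. The helper name and exact rule below are illustrative, not ValidMind's implementation; the threshold mirrors the min_percent_threshold parameter discussed later:

```python
import pandas as pd

# Illustrative sketch of a class imbalance check, not ValidMind's implementation
def check_class_imbalance(series: pd.Series, min_percent_threshold: float = 10.0):
    shares = series.value_counts(normalize=True) * 100
    failing = shares[shares < min_percent_threshold]
    return failing.empty  # True means every class clears the threshold

# A 90/10 split, like the sample dataset above
y = pd.Series([1] * 90 + [0] * 10)
print(check_class_imbalance(y, min_percent_threshold=10.0))  # True
print(check_class_imbalance(y, min_percent_threshold=20.0))  # False
```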

Tags describe what a test applies to and help you filter tests for your use case. Use list_tags() to view all unique tags used to describe tests in the ValidMind Library:

# Sort the tags in ABC order
sorted(vm.tests.list_tags())

Use list_tests() again, this time filtering by the binary_classification and tabular_data tags:

vm.tests.list_tests(tags=["binary_classification", "tabular_data"])

Let's use describe_test() again to retrieve more information about the test, including confirmation that it accepts some additional parameters, such as min_percent_threshold, which allows you to configure the threshold for an acceptable class imbalance:

vm.tests.describe_test("validmind.data_validation.ClassImbalance")

Log ClassImbalance test with default parameters

Every test result returned by the run_test() function has a .log() method that can be used to send the test results to the ValidMind Platform.

Let's first run the class imbalance test without any parameters to see its output using a default value for the threshold and log the results to the ValidMind Platform for later comparison:

result = vm.tests.run_test(
    "validmind.data_validation.ClassImbalance",
    inputs={"dataset": vm_dataset},
)

result.log()

Log ClassImbalance test with custom parameters

From the output, we've confirmed that the class imbalance test passes with the default threshold of 10%. Let's run the test again with a threshold of 20% to see if it fails.

When running individual tests, you can use a custom result_id to tag the individual result with a unique identifier, allowing you to submit individual results for the same test to the ValidMind Platform:

  • This result_id can be appended to test_id with a : separator.
  • The custom_threshold identifier will correspond with the results of our adjusted min_percent_threshold parameter.
result = vm.tests.run_test(
    "validmind.data_validation.ClassImbalance:custom_threshold",
    inputs={"dataset": vm_dataset},
    params={"min_percent_threshold": 20},
)

result.log()

When the threshold is set to 20%, the results show that the class imbalance test fails.

Work with test results

You can look at the output of tests produced by the ValidMind Library right in this notebook where you ran the tests, as you would expect. But there is a better way — use the ValidMind Platform to attach the logged test results to your model's documentation (Need more help?):

  1. From the Inventory in the ValidMind Platform, go to the model you connected to earlier.

  2. In the left sidebar that appears for your model, click Documentation under Documents.

  3. Locate the Data Preparation section and click on 2.1. Data Description to expand that section.

  4. Hover under the logged test block for the default Class Imbalance test until a horizontal dashed line with a + button appears, indicating that you can insert a new block.

  5. Click + and then select Test-Driven Block under FROM LIBRARY:

    • Click on VM Library under TEST-DRIVEN in the left sidebar.
    • Select ClassImbalance:custom_threshold as the test.
  6. Finally, click Insert 1 Test Result to Document to add the test result to the documentation.

    Confirm that the individual result for the adjusted-threshold class imbalance test has been correctly inserted into section 2.1. Data Description of the documentation.

You just worked with a draft of your model's documentation, in an easily consumable format matching the structure of the template you previewed in the beginning of this notebook. When you connect to a model with the ValidMind Library, logged test results automatically populate for easy insertion into your documentation.

In the ValidMind Platform, you can make qualitative edits to model documentation, view guidelines, collaborate with validators, and submit your model documentation for approval when it's ready. Learn more ...

Next steps

Now that you know the basics of how to run out-of-the-box tests in the ValidMind Library, you’re ready to take the next step. Use run_test() with any combination of datasets or models as inputs to run comparison tests, and log your consolidated test results to the ValidMind Platform.

Learn how to run comparison tests with the ValidMind Library.

Check out our Run comparison tests notebook for code examples and usage of key functions.

Discover more learning resources

We offer many interactive notebooks to help you automate testing, documenting, validating, and more:

  • Run tests & test suites
  • Use ValidMind Library features
  • Code samples by use case

Or, visit our documentation to learn more about ValidMind.

Upgrade ValidMind

After installing ValidMind, you’ll want to periodically make sure you are on the latest version to access any new features and other enhancements.

Retrieve the information for the currently installed version of ValidMind:

%pip show validmind

If the version returned is lower than the version indicated in our production open-source code, restart your notebook and run:

%pip install --upgrade validmind

You may need to restart your kernel after running the upgrade for the changes to be applied.


Copyright © 2023-2026 ValidMind Inc. All rights reserved.
Refer to LICENSE for details.
SPDX-License-Identifier: AGPL-3.0 AND ValidMind Commercial
