February 14, 2024

Release highlights

We’ve improved the ValidMind user experience, from documentation templates that support unique test results, easier specification of test inputs, and better filtering within the library, to the ability to view which user ran actions within the platform.

ValidMind Library (v1.26.6)

Support for tracking each test result with a unique identifier

Documentation templates have been updated to support logging each test run as a unique result, making it possible to run the same test across different datasets or models.

To use this feature, append a unique result_id identifier as a suffix to the content_id identifier in the content block definition of a metric or test content type.

For example, the following content blocks with the suffixes training_data and test_data enable you to log two individual results for the same test validmind.data_validation.Skewness:

- content_type: test
  content_id: validmind.data_validation.Skewness:training_data
- content_type: metric
  content_id: validmind.data_validation.Skewness:test_data

You can configure each of these unique content_id identifiers by passing the appropriate config and inputs in run_documentation_tests() or run_test(). For example, to configure two separate tests for Skewness using different datasets and parameters:

# Log Skewness on the training dataset as its own result
test = vm.tests.run_test(
    test_id="validmind.data_validation.Skewness:training_data",
    params={
        "max_threshold": 1
    },
    dataset=vm_train_ds,
)
test.log()

# Log Skewness on the test dataset with a different threshold
test = vm.tests.run_test(
    test_id="validmind.data_validation.Skewness:test_data",
    params={
        "max_threshold": 1.5
    },
    dataset=vm_test_ds,
)
test.log()

Easier specification of inputs for individual tests

The run_documentation_tests() function has been updated to allow passing both test inputs and params via the config parameter.

Previously, you could only use config to pass params to each test that you declare. In this example, the test SomeTest receives a custom value for the param min_threshold:

full_suite = vm.run_documentation_tests(
    inputs = {
        ...
    },
    config={
        "validmind.data_validation.SomeTest": {
            "min_threshold": 1
        }
    }
)

With the updated function, config can now pass both params and inputs to each declared test. For example, to specify what model should be passed to each individual test instance:

full_suite = vm.run_documentation_tests(
    inputs = {
        "dataset": vm_dataset,
        "model": xgb_model
    },
    config = {
        "validmind..model_validation.Accuracy:xgb_model": {
            "params": { threshold: 0.5 },
            "inputs": { "model": xgb_model }
        },
        "validmind..model_validation.Accuracy:lr_model": {
            "params": { threshold: 0.3 },
            "inputs": { "model": lr_model }
        },
    }
)

Here, the top-level inputs parameter acts as a global inputs parameter, and the individual tests can customize what they see as the input model via their own config parameters.

View available task types and tags to filter tests

To help model developers discover which task types and tags are available to filter on, we have made the following updates to the library:

  • New list_task_types() and list_tags() endpoints enable you to list all available task_type values and tags across all test classes
  • A new list_tasks_and_tags() endpoint enables you to list which tags are associated with which task_type
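
For example, here is a minimal sketch of how you might call these endpoints, assuming they are exposed under the vm.tests module as used in the Explore tests notebook:

import validmind as vm

# List every available task_type and tag across all test classes
print(vm.tests.list_task_types())
print(vm.tests.list_tags())

# Show which tags are associated with which task_type
print(vm.tests.list_tasks_and_tags())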

Screenshot: Explore tests notebook with the Understanding Tags and Task Types code cells run successfully

ValidMind Library documentation inputs tracking

  • We have added a new feature that tracks which datasets and models are used when running tests. Now, when you initialize datasets or models with vm.init_dataset() and vm.init_model(), we link those inputs with the test results they generate.
  • This makes it clear which inputs were used for each result, improving transparency and making it easier to understand test outcomes. This update does not require any changes to your code and works with existing init methods.
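
For example, here is a minimal sketch of how tracked inputs come together; the data frame, model object, and input_id values below are placeholders:

import validmind as vm

# Initialize inputs as usual; the input_id identifiers make them easy to recognize in results
vm_train_ds = vm.init_dataset(
    dataset=train_df,              # placeholder pandas DataFrame
    input_id="train_dataset",
    target_column="target",
)
vm_model = vm.init_model(model=model, input_id="xgb_model")  # models are tracked the same way when used by model tests

# Any result logged from these inputs is linked back to them automatically
test = vm.tests.run_test(
    test_id="validmind.data_validation.Skewness:training_data",
    dataset=vm_train_ds,
)
test.log()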

ValidMind Platform (v1.13.13)

Updated events to show users

We now display the name of the user who ran the action instead of a generic “ValidMind Library” name whenever you generate documentation:

Screenshot: New indicator specifying the user who made updates via the ValidMind Library

Simplified instructions for developers

We simplified the instructions for getting started with the ValidMind Library in the ValidMind Platform.

These instructions tell you how to use the code snippet for your model documentation with your own model or with one of our code samples:

Screenshot: Getting Started page for a model with the custom code snippet

Enhancements

Ability to edit model fields

  • You can now edit the values for default fields displayed on the model details page.
  • Previously it was only possible to edit inventory fields defined by your organization.

Performance improvements for the ValidMind Platform

We made improvements to page load times on our platform for a smoother user experience.

Filter the Model Inventory

You can now narrow down models in your Model Inventory with our advanced filter, search, and sort options.

Custom model inventory fields

  • The model inventory has been updated to allow organizations to add additional fields.
  • This enhancement enables administrators to customize the model inventory data schema according to their specific organizational needs.

User mentions in comments

We added a toggle to the Model Activity and Recent Activity sections under Comments that filters the feed to display only specific user mentions.

Expanded rich-text editor support

  • Forms in the Model Findings and Validation Report sections now support the rich-text editor interface found in the rest of our content blocks.
  • This support enables you to use the editor for your finding descriptions and remediation plans, for example.

Bug fixes

Invalid content blocks for run_documentation_tests()

  • We’ve fixed an issue where previously using an invalid test identifier would prevent run_documentation_tests() from running all available tests.
  • The full test suite now runs as expected, even when an invalid test identifier causes an error for an individual test.

Show all collapsed sections in documentation

  • We’ve fixed an issue where previously the table of contents was not displaying every subsection that belongs to the parent section.
  • The table of contents now accurately reflects the complete structure of the documentation, including all subsections.

Template swap diffs

  • We’ve fixed an issue where previously the diff for validation reports was showing incorrectly when swapping templates.
  • The correct diff between the current and the new template is now displayed.

Documentation updates

New user management documentation

  • Our user guide now includes end-to-end instructions for managing users on the ValidMind Platform.
  • This new content covers common tasks such as inviting new users, adding them to user groups, and managing roles and permissions.

Updated sample notebooks with current input_id usage

We updated our sample notebooks to show the current, recommended usage for input_id when calling vm.init_dataset() or vm.init_model().

How to upgrade

ValidMind Platform

To access the latest version of the ValidMind Platform, hard refresh your browser tab:

  • Windows: Ctrl + Shift + R OR Ctrl + F5
  • MacOS: ⌘ Cmd + Shift + R OR hold down ⌘ Cmd and click the Reload button

ValidMind Library

To upgrade the ValidMind Library:

  1. In your Jupyter Notebook:

    • Using JupyterHub: Hard refresh your browser tab.
    • In your own developer environment: Restart your notebook.
  2. Then within a code cell or your terminal, run:

    %pip install --upgrade validmind

You may need to restart your kernel after upgrading the package for the changes to take effect.