validmind.tests
ValidMind Tests Module
List all tests in the tests directory.
Arguments:
- filter (str, optional): Find tests where the ID, tasks or tags match the filter string. Defaults to None.
- task (str, optional): Find tests that match the task. Can be used to narrow down matches from the filter string. Defaults to None.
- tags (list, optional): Find tests that match the list of tags. Can be used to narrow down matches from the filter string. Defaults to None.
- pretty (bool, optional): If True, returns a pandas DataFrame with a formatted table. Defaults to True.
- truncate (bool, optional): If True, truncates the test description to the first line. Only used when pretty=True. Defaults to True.
Returns:
list or pandas.DataFrame: A list of all tests or a formatted table.
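A minimal sketch of filtering the listing (the filter string and task value below are illustrative):
from validmind.tests import list_tests
# Formatted DataFrame of tests whose ID, tasks, or tags mention "sklearn"
list_tests(filter="sklearn")
# Narrow the matches down to classification tests and return plain test IDs
list_tests(filter="sklearn", task="classification", pretty=False)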
Load a test by test ID
Test IDs are in the format namespace.path_to_module.TestClassOrFuncName[:tag]. The tag is optional and is used to distinguish between multiple results from the same test.
Arguments:
- test_id (str): The test ID in the format
namespace.path_to_module.TestName[:tag]
- test_func (callable, optional): The test function to load. If not provided, the test will be loaded from the test provider. Defaults to None.
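A minimal sketch of loading a test by ID (the test ID below is illustrative):
from validmind.tests import load_test
# Load a built-in ValidMind test; the returned test can then be run or inspected
test = load_test("validmind.data_validation.ClassImbalance")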
Get or show details about the test
This function can be used to see test details including the test name, description, required inputs and default params. It can also be used to get a dictionary of the above information for programmatic use.
Arguments:
- test_id (str, optional): The test ID. Defaults to None.
- raw (bool, optional): If True, returns a dictionary with the test details. Defaults to False.
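A minimal sketch, assuming this function is exposed as describe_test and using an illustrative test ID:
from validmind.tests import describe_test
# Display a formatted summary of the test (name, description, inputs, params)
describe_test("validmind.data_validation.ClassImbalance")
# Get the same details as a dictionary for programmatic use
details = describe_test("validmind.data_validation.ClassImbalance", raw=True)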
Run a ValidMind or custom test
This function is the main entry point for running tests. It can run simple unit metrics, ValidMind and custom tests, composite tests made up of multiple unit metrics, and comparison tests made up of multiple tests.
Arguments:
- test_id (TestID, optional): Test ID to run. Not required if name and unit_metrics are provided.
- params (dict, optional): Parameters to customize test behavior. See test details for available parameters.
- param_grid (Union[Dict[str, List[Any]], List[Dict[str, Any]]], optional): For comparison tests, either:
- Dict mapping parameter names to lists of values (creates Cartesian product)
- List of parameter dictionaries to test
- inputs (Dict[str, Any], optional): Test inputs (models/datasets initialized with vm.init_model/dataset)
- input_grid (Union[Dict[str, List[Any]], List[Dict[str, Any]]], optional): For comparison tests, either:
- Dict mapping input names to lists of values (creates Cartesian product)
- List of input dictionaries to test
- name (str, optional): Test name (required for composite metrics)
- unit_metrics (list, optional): Unit metric IDs to run as composite metric
- show (bool, optional): Whether to display results. Defaults to True.
- generate_description (bool, optional): Whether to generate a description. Defaults to True.
- title (str, optional): Custom title for the test result
- post_process_fn (Callable[[TestResult], None], optional): Function to post-process the test result
Returns:
TestResult: A TestResult object containing the test results
Raises:
- ValueError: If the test inputs are invalid
- LoadTestError: If the test class fails to load
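A minimal sketch of a single test run followed by a comparison run over a parameter grid. It assumes a ValidMind session has already been set up with vm.init(); the test ID and parameter name are illustrative:
import pandas as pd
import validmind as vm
from validmind.tests import run_test
# Hypothetical toy dataset; any DataFrame with a target column works
df = pd.DataFrame({"feature": [1, 2, 3, 4], "target": [0, 1, 0, 1]})
vm_dataset = vm.init_dataset(dataset=df, target_column="target")
# Run a single test with custom parameters
result = run_test(
    "validmind.data_validation.ClassImbalance",
    inputs={"dataset": vm_dataset},
    params={"min_percent_threshold": 10},
)
# Comparison test: sweep a parameter over several values via param_grid
comparison = run_test(
    "validmind.data_validation.ClassImbalance",
    inputs={"dataset": vm_dataset},
    param_grid={"min_percent_threshold": [5, 10, 20]},
)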
Register an external test provider
Arguments:
- namespace (str): The namespace of the test provider
- test_provider (TestProvider): The test provider
Exception raised when an error occurs while loading a test
Inherited Members
- builtins.BaseException
- with_traceback
- add_note
Test providers in ValidMind are responsible for loading tests from different sources, such as local files, databases, or remote services. The LocalTestProvider specifically loads tests from the local file system.
To use the LocalTestProvider, you need to provide the root_folder, which is the root directory for local tests. The test_id is a combination of the namespace (set when registering the test provider) and the path to the test class module, where slashes are replaced by dots and the .py extension is left out.
Example usage:
# Create an instance of LocalTestProvider with the root folder
test_provider = LocalTestProvider("/path/to/tests/folder")
# Register the test provider with a namespace
register_test_provider("my_namespace", test_provider)
# List all tests in the namespace (returns a list of test IDs);
# the list_tests() function uses this to aggregate tests from all
# registered test providers
test_provider.list_tests()
# Load a test using the test_id (namespace + path to test class module)
test = test_provider.load_test("my_namespace.my_test_class")
# full path to the test class module is /path/to/tests/folder/my_test_class.py
Attributes:
- root_folder (str): The root directory for local tests.
Initialize the LocalTestProvider with the given root_folder (see class docstring for details)
Arguments:
- root_folder (str): The root directory for local tests.
Load the test identified by the given test_id.
Arguments:
- test_id (str): The identifier of the test. This corresponds to the relative path of the Python file from the root folder, with slashes replaced by dots.
Returns:
The test class that matches the last part of the test_id.
Raises:
- LocalTestProviderLoadModuleError: If the test module cannot be imported
- LocalTestProviderLoadTestError: If the test class cannot be found in the module
Protocol for user-defined test providers
List all tests in the given namespace
Returns:
list: A list of test IDs
Load the test function identified by the given test_id
Arguments:
- test_id (str): The test ID (does not contain the namespace under which the test is registered)
Returns:
callable: The test function
Raises:
- FileNotFoundError: If the test is not found
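A minimal sketch of a provider satisfying this protocol; the in-memory registry below is hypothetical, and a real provider might read from a database or remote service instead:
from validmind.tests import register_test_provider

class InMemoryTestProvider:
    """Hypothetical provider that serves test functions from a dictionary."""

    def __init__(self, tests):
        # Map of test ID (without the namespace) to test function
        self._tests = dict(tests)

    def list_tests(self):
        # Return the test IDs known to this provider
        return list(self._tests.keys())

    def load_test(self, test_id):
        # Return the test function, or raise as required by the protocol
        if test_id not in self._tests:
            raise FileNotFoundError(f"No test registered under '{test_id}'")
        return self._tests[test_id]

def my_check(dataset):
    """Hypothetical test: passes if the dataset has no missing values."""
    return bool(dataset.df.notna().all().all())

provider = InMemoryTestProvider({"my_check": my_check})
register_test_provider("in_memory", provider)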
List unique tasks from all test classes.
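A minimal sketch, assuming this helper is exposed as list_tasks:
from validmind.tests import list_tasks
# Returns the unique task types (e.g. classification, regression) across all tests
list_tasks()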
Decorator for creating and registering custom tests
This decorator registers the function it wraps as a test function within ValidMind
under the provided ID. Once decorated, the function can be run using the run_test function.
The function can take two different types of arguments:
- Inputs: ValidMind model or dataset (or list of models/datasets). These arguments must use the following names: model, models, dataset, datasets.
- Parameters: Any additional keyword arguments of any type (must have a default value) that can have any name.
The function should return one of the following types:
- Table: Either a list of dictionaries or a pandas DataFrame
- Plot: Either a matplotlib figure or a plotly figure
- Scalar: A single number (int or float)
- Boolean: A single boolean value indicating whether the test passed or failed
The function may also include a docstring. This docstring will be used and logged as the metric's description.
Arguments:
- func: The function to decorate
- test_id: The identifier for the metric. If not provided, the function name is used.
Returns:
The decorated function.
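A minimal sketch of a custom test registered with this decorator; the namespace, test ID, and threshold parameter are illustrative, and it assumes the decorator is available as vm.test:
import validmind as vm

@vm.test("my_custom_tests.MissingValuesCheck")
def missing_values_check(dataset, threshold: float = 0.5):
    """Checks that no column has a missing-value ratio above the threshold."""
    missing_ratio = dataset.df.isna().mean()
    # Returning a boolean marks the test as passed or failed
    return bool((missing_ratio <= threshold).all())

# Once registered, it can be run like any other test:
# run_test("my_custom_tests.MissingValuesCheck", inputs={"dataset": vm_dataset})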
Decorator for specifying the task types that a test is designed for.
Arguments:
- *tasks: The task types that the test is designed for.
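A minimal sketch combining this decorator with the test decorator above; the task name is illustrative, and it assumes the decorator is available as vm.tasks:
import validmind as vm

@vm.test("my_custom_tests.PositiveTargetRate")
@vm.tasks("classification")
def positive_target_rate(dataset):
    """Hypothetical test: share of positive labels in the dataset's target column."""
    return float(dataset.df[dataset.target_column].mean())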