Quickstart for knockout option pricing model documentation
Welcome! Let's get you started with the basic process of documenting models with ValidMind.
A knockout option is a barrier option that ceases to exist if the underlying asset hits a predetermined price, known as the "barrier." This barrier level, set above or below the current market price, determines whether the option will "knock out" before its expiration date. There are two types: "up-and-out" and "down-and-out." In an up-and-out knockout option, the option expires if the asset price rises above the barrier, while in a down-and-out, it expires if the asset price falls below. Knockout options generally offer a lower premium than standard options since there is a chance they will expire worthless if the barrier is reached.
Pricing knockout options involves accounting for the proximity of the asset's price to the barrier, as well as market volatility and the option’s time to expiration. High volatility and longer expiry increase the likelihood of the barrier being triggered, which reduces the option’s value. Models like modified Black-Scholes are used for simpler cases, while Monte Carlo simulations or binomial trees handle complex scenarios. Knockout options are useful for hedging or cost-effective investment strategies, allowing investors to save on premiums but with the risk of losing the option entirely if the barrier is hit.
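To make the payoff mechanics concrete, here is a minimal sketch (with purely illustrative parameter values, separate from the model built later in this notebook) of an up-and-out call payoff on a single simulated geometric Brownian motion path:
import numpy as np
# Illustrative parameters: spot, strike, barrier, maturity, risk-free rate, volatility, time steps
S0, K, barrier, T, r, sigma, M = 100, 100, 120, 1.0, 0.05, 0.2, 252
dt = T / M
rng = np.random.default_rng(0)
# Simulate one geometric Brownian motion path
path = S0 * np.exp(np.cumsum((r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * rng.standard_normal(M)))
if path.max() >= barrier:
    payoff = 0.0  # knocked out: the option ceases to exist
else:
    payoff = max(path[-1] - K, 0.0)  # standard call payoff at expiry
print(f"Discounted up-and-out payoff: {np.exp(-r * T) * payoff:.4f}")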
You will learn how to initialize the ValidMind Library, develop an option pricing model, and then write custom tests for sensitivity and stress testing that quickly generate documentation about the model.
Contents
About ValidMind
ValidMind is a suite of tools for managing model risk, including risk associated with AI and statistical models.
You use the ValidMind Library to automate documentation and validation tests, and then use the ValidMind Platform to collaborate on model documentation. Together, these products simplify model risk management, facilitate compliance with regulations and institutional standards, and enhance collaboration between yourself and model validators.
Before you begin
This notebook assumes you have basic familiarity with Python, including an understanding of how functions work. If you are new to Python, you can still run the notebook but we recommend further familiarizing yourself with the language.
If you encounter errors due to missing modules in your Python environment, install the modules with pip install, and then re-run the notebook. For more help, refer to Installing Python Modules.
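For example, if an import of matplotlib fails, a cell like the following (the module name is just an example) installs it before you re-run:
%pip install -q matplotlib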
New to ValidMind?
If you haven't already seen our documentation on the ValidMind Library, we recommend you begin by exploring the available resources in this section. There, you can learn more about documenting models and running tests, as well as find code samples and our Python Library API reference.
Register with ValidMind
Key concepts
Model documentation: A structured and detailed record pertaining to a model, encompassing key components such as its underlying assumptions, methodologies, data sources, inputs, performance metrics, evaluations, limitations, and intended uses. It serves to ensure transparency, adherence to regulatory requirements, and a clear understanding of potential risks associated with the model’s application.
Documentation template: Functions as a test suite and lays out the structure of model documentation, segmented into various sections and sub-sections. Documentation templates define the structure of your model documentation, specifying the tests that should be run, and how the results should be displayed.
Tests: A function contained in the ValidMind Library, designed to run a specific quantitative test on the dataset or model. Tests are the building blocks of ValidMind, used to evaluate and document models and datasets, and can be run individually or as part of a suite defined by your model documentation template.
Custom tests: Custom tests are functions that you define to evaluate your model or dataset. These functions can be registered via the ValidMind Library to be used with the ValidMind Platform (a minimal sketch appears at the end of this section).
Inputs: Objects to be evaluated and documented in the ValidMind Library. They can be any of the following:
- model: A single model that has been initialized in ValidMind with vm.init_model().
- dataset: A single dataset that has been initialized in ValidMind with vm.init_dataset().
- models: A list of ValidMind models, usually used when you want to compare multiple models in your custom test.
- datasets: A list of ValidMind datasets, usually used when you want to compare multiple datasets in your custom test. See this example for more information.
Parameters: Additional arguments that can be passed when running a ValidMind test, used to pass additional information to a test, customize its behavior, or provide additional context.
Outputs: Custom tests can return elements like tables or plots. Tables may be a list of dictionaries (each representing a row) or a pandas DataFrame. Plots may be matplotlib or plotly figures.
Test suites: Collections of tests designed to run together to automate and generate model documentation end-to-end for specific use-cases.
Example: the classifier_full_suite test suite runs tests from the tabular_dataset and classifier test suites to fully document the data and model sections for binary classification model use-cases.
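To make the custom test, parameter, and output concepts concrete, here is a minimal, hypothetical sketch (the test ID and parameter names are illustrative only and are not used elsewhere in this notebook):
import pandas as pd
import validmind as vm
@vm.test("my_custom_tests.ExampleSummary")
def example_summary(strikes, maturities):
    """Return the supplied parameters as a table with one row per strike/maturity pair."""
    return pd.DataFrame({"strike": strikes, "maturity": maturities})
# It can then be run like any built-in test, for example:
# run_test("my_custom_tests.ExampleSummary", params={"strikes": [90, 100], "maturities": [0.5, 1.0]})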
Install the ValidMind Library
To install the library:
%pip install -q validmind
Initialize the ValidMind Library
ValidMind generates a unique code snippet for each registered model to connect with your developer environment. You initialize the ValidMind Library with this code snippet, which ensures that your documentation and tests are uploaded to the correct model when you run the notebook.
Get your code snippet
In a browser, log in to ValidMind.
In the left sidebar, navigate to Model Inventory and click + Register Model.
Enter the model details and click Continue. (Need more help?)
For example, to register a model for use with this notebook, select:
- Documentation template: Capital markets
You can fill in other options according to your preference.
Go to Getting Started and click Copy snippet to clipboard.
Next, load your model identifier credentials from an .env file or replace the placeholder with your own code snippet:
# Load your model identifier credentials from an `.env` file
%load_ext dotenv
%dotenv .env
# Or replace with your code snippet
import validmind as vm
vm.init(
# api_host="...",
# api_key="...",
# api_secret="...",
# model="...",
)
Initialize the Python environment
Next, let's import the necessary libraries and set up your Python environment for data analysis:
%matplotlib inline
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from scipy.optimize import minimize
from validmind.tests import run_test
Preview the documentation template
A template predefines sections for your model documentation and provides a general outline to follow, making the documentation process much easier.
You will upload documentation and test results into this template later on. For now, take a look at the structure that the template provides with the vm.preview_template() function from the ValidMind library and note the empty sections:
vm.preview_template()
Model development
class OptionPricing:
def __init__(self, S0, K, T, r):
self.S0 = S0
self.K = K
self.T = T
self.r = r
def monte_carlo_simulation(self, N, M):
raise NotImplementedError("Must be implemented by subclasses")
def price_option(self, N, M):
raise NotImplementedError("Must be implemented by subclasses")
class BlackScholesModel(OptionPricing):
def __init__(self, S0, K, T, r, sigma):
super().__init__(S0, K, T, r)
self.sigma = sigma
def monte_carlo_simulation(self, N, M):
dt = self.T / M
price_paths = np.zeros((N, M + 1))
price_paths[:, 0] = self.S0
for t in range(1, M + 1):
Z = np.random.standard_normal(N)
price_paths[:, t] = price_paths[:, t - 1] * np.exp((self.r - 0.5 * self.sigma**2) * dt + self.sigma * np.sqrt(dt) * Z)
return price_paths
def price_option(self, N, M):
price_paths = self.monte_carlo_simulation(N, M)
payoffs = np.maximum(price_paths[:, -1] - self.K, 0)
return np.exp(-self.r * self.T) * np.mean(payoffs)
def calibrate(self, market_prices, strikes, maturities):
def objective_function(params):
self.sigma = params[0]
model_prices = []
for K, T in zip(strikes, maturities):
self.K = K
self.T = T
model_prices.append(self.price_option(10000, 100))
return np.sum((np.array(market_prices) - np.array(model_prices))**2)
result = minimize(objective_function, [self.sigma], bounds=[(0.01, 1.0)])
self.sigma = result.x[0]
class StochasticVolatilityModel(OptionPricing):
def __init__(self, S0, K, T, r, v0, kappa, theta, xi, rho):
super().__init__(S0, K, T, r)
self.v0 = v0
self.kappa = kappa
self.theta = theta
self.xi = xi
self.rho = rho
def monte_carlo_simulation(self, N, M):
dt = self.T / M
price_paths = np.zeros((N, M + 1))
vol_paths = np.zeros((N, M + 1))
price_paths[:, 0] = self.S0
vol_paths[:, 0] = self.v0
for t in range(1, M + 1):
Z1 = np.random.standard_normal(N)
Z2 = np.random.standard_normal(N)
W1 = Z1
W2 = self.rho * Z1 + np.sqrt(1 - self.rho**2) * Z2
vol_paths[:, t] = np.abs(vol_paths[:, t - 1] + self.kappa * (self.theta - vol_paths[:, t - 1]) * dt + self.xi * np.sqrt(vol_paths[:, t - 1] * dt) * W1)
price_paths[:, t] = price_paths[:, t - 1] * np.exp((self.r - 0.5 * vol_paths[:, t - 1]) * dt + np.sqrt(vol_paths[:, t - 1] * dt) * W2)
return price_paths
def price_option(self, N, M):
price_paths = self.monte_carlo_simulation(N, M)
payoffs = np.maximum(price_paths[:, -1] - self.K, 0)
return np.exp(-self.r * self.T) * np.mean(payoffs)
def calibrate(self, market_prices, strikes, maturities):
def objective_function(params):
self.v0, self.kappa, self.theta, self.xi, self.rho = params
model_prices = []
for K, T in zip(strikes, maturities):
self.K = K
self.T = T
model_prices.append(self.price_option(10000, 100))
return np.sum((np.array(market_prices) - np.array(model_prices))**2)
initial_guess = [self.v0, self.kappa, self.theta, self.xi, self.rho]
bounds = [(0.01, 1.0), (0.01, 5.0), (0.01, 1.0), (0.01, 1.0), (-1.0, 1.0)]
result = minimize(objective_function, initial_guess, bounds=bounds)
self.v0, self.kappa, self.theta, self.xi, self.rho = result.x
class KnockoutOption:
def __init__(self, model, S0, K, T, r, barrier):
self.model = model
self.S0 = S0
self.K = K
self.T = T
self.r = r
self.barrier = barrier
def price_knockout_option(self, N, M):
dt = self.T / M
price_paths = np.zeros((N, M + 1))
vol_paths = np.zeros((N, M + 1)) if isinstance(self.model, StochasticVolatilityModel) else None
price_paths[:, 0] = self.S0
if vol_paths is not None:
vol_paths[:, 0] = self.model.v0
for t in range(1, M + 1):
Z1 = np.random.standard_normal(N)
if vol_paths is None:
# Black-Scholes Model
price_paths[:, t] = price_paths[:, t - 1] * np.exp(
(self.r - 0.5 * self.model.sigma**2) * dt + self.model.sigma * np.sqrt(dt) * Z1
)
else:
# Stochastic Volatility Model
Z2 = np.random.standard_normal(N)
W1 = Z1
W2 = self.model.rho * Z1 + np.sqrt(1 - self.model.rho**2) * Z2
vol_paths[:, t] = np.abs(vol_paths[:, t - 1] + self.model.kappa * (self.model.theta - vol_paths[:, t - 1]) * dt + self.model.xi * np.sqrt(vol_paths[:, t - 1] * dt) * W1)
price_paths[:, t] = price_paths[:, t - 1] * np.exp(
(self.r - 0.5 * vol_paths[:, t - 1]) * dt + np.sqrt(vol_paths[:, t - 1] * dt) * W2
)
# Knockout condition
price_paths[:, t][price_paths[:, t] >= self.barrier] = 0
payoffs = np.maximum(price_paths[:, -1] - self.K, 0)
return np.exp(-self.r * self.T) * np.mean(payoffs)
Data Preparation
def generate_synthetic_market_data(model, strikes, maturities):
market_prices = []
market_data = []
for K, T in zip(strikes, maturities):
model.K = K
model.T = T
price = model.price_option(10000, 100)
market_prices.append(price)
market_data.append({"strike": K, "option_price": price})
return market_prices, market_data
N = 10000
M = 100
# Parameters for synthetic data
S0 = 100
K = 100
T = 1
r = 0.05
# Black-Scholes
true_sigma = 0.2
# Stochastic Volatility
true_v0 = 0.2
true_kappa = 2.0
true_theta = 0.2
true_xi = 0.1
true_rho = -0.5
# Synthetic data generation parameters
strikes = list(np.linspace(75, 130, 25))
maturities = list(np.linspace(0.2, 3.0, 25))
# Generate synthetic market data using the true parameters
bs_model = BlackScholesModel(S0, K, T, r, true_sigma)
bs_market_prices, bs_market_data = generate_synthetic_market_data(bs_model, strikes, maturities)
sv_model = StochasticVolatilityModel(S0, K, T, r, true_v0, true_kappa, true_theta, true_xi, true_rho)
sv_market_prices, sv_market_data = generate_synthetic_market_data(sv_model, strikes, maturities)
Initialize the ValidMind datasets
Before you can run tests, you must first initialize a ValidMind dataset object using the init_dataset function from the ValidMind (vm) module.
bs_market_data_df = pd.DataFrame(bs_market_data)
vm_bs_market_data = vm.init_dataset(
dataset=bs_market_data_df,
input_id="sv_market_data",
)
sv_market_data_df = pd.DataFrame(sv_market_data)
vm_sv_market_data = vm.init_dataset(
dataset=sv_market_data_df,
input_id="sv_market_data",
)
Data Quality
Let's check the quality of the data using outlier and missing-data tests.
Outliers detection using IQR method
Let's visualize the distribution of outliers in the option_price feature using the interquartile range (IQR) method.
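As a quick illustration of what the IQR rule computes (the ValidMind test below does this, and plots it, for you), a manual check on the Black-Scholes dataset using the common 1.5×IQR fence might look like this:
q1 = bs_market_data_df["option_price"].quantile(0.25)
q3 = bs_market_data_df["option_price"].quantile(0.75)
iqr = q3 - q1
lower, upper = q1 - 1.5 * iqr, q3 + 1.5 * iqr
n_outliers = ((bs_market_data_df["option_price"] < lower) | (bs_market_data_df["option_price"] > upper)).sum()
print(f"IQR bounds: [{lower:.2f}, {upper:.2f}] -> {n_outliers} outlier(s)")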
result = run_test(
"validmind.data_validation.IQROutliersBarPlot:BlackScholes",
inputs={
"dataset": vm_bs_market_data,
},
title="Outliers detection using IQR method for BlackScholes",
)
result.log()
result = run_test(
"validmind.data_validation.IQROutliersTable:BlackScholes",
inputs={
"dataset": vm_bs_market_data,
},
title="Outliers table using IQR method for BlackScholes",
)
result.log()
result = run_test(
"validmind.data_validation.IQROutliersBarPlot:StochasticVolatility",
inputs={
"dataset": vm_sv_market_data,
},
title="Outliers detection using IQR method for StochasticVolatility",
)
result.log()
result = run_test(
"validmind.data_validation.IQROutliersTable:StochasticVolatility",
inputs={
"dataset": vm_sv_market_data,
},
title="Outliers table using IQR method for StochasticVolatility",
)
result.log()
Isolation Forest Outliers Test
Let's detect anomalies in the dataset using the Isolation Forest algorithm, visualized with scatter plots.
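For intuition, the sketch below applies scikit-learn's IsolationForest directly to the two feature columns; the contamination rate here is an assumption for illustration, and the ValidMind test may use different defaults:
from sklearn.ensemble import IsolationForest
iso = IsolationForest(contamination=0.05, random_state=0)
labels = iso.fit_predict(bs_market_data_df[["strike", "option_price"]])  # -1 = anomaly, 1 = normal
print(f"Flagged {(labels == -1).sum()} of {len(labels)} points as anomalies")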
result = run_test(
"validmind.data_validation.IsolationForestOutliers:BlackScholes",
inputs={
"dataset": vm_bs_market_data,
},
title="Outliers detection using Isolation Forest for BlackScholes",
)
result.log()
result = run_test(
"validmind.data_validation.IsolationForestOutliers:StochasticVolatility",
inputs={
"dataset": vm_sv_market_data,
},
title="Outliers detection using Isolation Forest for StochasticVolatility",
)
result.log()
Missing Values Test
Let's evaluate dataset quality by ensuring the missing-value ratio across all features does not exceed a set threshold.
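A manual equivalent is the per-column missing-value ratio, which the ValidMind test compares against its configurable threshold:
missing_ratio = bs_market_data_df.isnull().mean()  # fraction of missing values per column
print(missing_ratio)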
result = run_test(
"validmind.data_validation.MissingValues:BlackScholes",
inputs={
"dataset": vm_bs_market_data,
},
title="Missing Values detection for BlackScholes",
)
result.log()
result = run_test(
"validmind.data_validation.MissingValues:StochasticVolatility",
inputs={
"dataset": vm_sv_market_data,
},
title="MissingValues detection for StochasticVolatility",
)
result.log()
### Model Calibration
- Clearly state the purpose of the calibration process. For example, in the context of an option pricing model, calibration aims to adjust model parameters to fit market data (e.g., market option prices, volatility surfaces).
- Specify whether the calibration is to historical data, current market data, or a blend of both.
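If you want to run the calibration step itself before generating the summary below, a minimal sketch using the calibrate methods defined earlier might look like this (every objective evaluation reprices via Monte Carlo, so expect it to be slow):
# Calibrate each model to its synthetic market data (slow: Monte Carlo repricing inside the optimizer)
bs_model.calibrate(bs_market_prices, strikes, maturities)
sv_model.calibrate(sv_market_prices, strikes, maturities)
print(f"Calibrated sigma (Black-Scholes): {bs_model.sigma:.4f}")
print(f"Calibrated v0 (stochastic volatility): {sv_model.v0:.4f}")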
import pandas as pd
@vm.test("my_custom_tests.SyntheticDataCalibrationTest")
def generate_synthetic_data_summary(option_pricing_model, strikes, maturities, synthetic_prices):
"""
Compare synthetic market prices with prices derived from the model's
current (calibrated) parameters, and output a DataFrame summarizing
the strikes, maturities, synthetic and derived prices, and the model
parameters.
"""
derived_prices = []
for K, T in zip(strikes, maturities):
option_pricing_model.K = K
option_pricing_model.T = T
derived_prices.append(option_pricing_model.price_option(10000, 100))
model_type = type(option_pricing_model).__name__
data = {
"Strike": strikes,
"Maturity": maturities,
"Synthetic_Price": synthetic_prices,
"Derived_Price": derived_prices,
"Model_Type": model_type,
"S0": [option_pricing_model.S0] * len(strikes),
"K": [option_pricing_model.K] * len(strikes),
"T": [option_pricing_model.T] * len(strikes),
"r": [option_pricing_model.r] * len(strikes)
}
if model_type == "BlackScholesModel":
data["sigma"] = [option_pricing_model.sigma] * len(strikes)
elif model_type == "StochasticVolatilityModel":
data["v0"] = [option_pricing_model.v0] * len(strikes)
data["kappa"] = [option_pricing_model.kappa] * len(strikes)
data["theta"] = [option_pricing_model.theta] * len(strikes)
data["xi"] = [option_pricing_model.xi] * len(strikes)
data["rho"] = [option_pricing_model.rho] * len(strikes)
df = pd.DataFrame(data)
return df
Synthetic Data Calibration Test
Let's evaluate the accuracy of the stochastic volatility model by comparing synthetic prices with the prices derived after model calibration.
result = run_test(
"my_custom_tests.SyntheticDataCalibrationTest",
params={
"option_pricing_model": sv_model,
"strikes": strikes,
"maturities": maturities,
"synthetic_prices": sv_market_prices
},
)
result.log()
#### Benchmark Testing
- Compare the model’s performance with alternative models or industry-standard models to assess its relative effectiveness.
- Ensure that the model is competitive in pricing, accuracy, and computational efficiency.
@vm.test("my_custom_tests.BenchmarkTest")
def benchmark_test(bs_model, sv_model, strikes, maturities):
"""
Compare Black-Scholes and stochastic volatility model option prices across strikes and maturities.
"""
bs_model_type = type(bs_model).__name__
sv_model_type = type(sv_model).__name__
bs_derived_prices = []
sv_derived_prices = []
for K in strikes:
bs_model.K = K
bs_derived_prices.append(bs_model.price_option(10000, 100))
sv_model.K = K
sv_derived_prices.append(sv_model.price_option(10000, 100))
data = {
"Strike": strikes,
"Maturities": [sv_model.T] * len(strikes),
"bs_model_price": bs_derived_prices,
"sv_model_price": sv_derived_prices,
}
df1 = pd.DataFrame(data)
bs_derived_prices = []
sv_derived_prices = []
for T in maturities:
bs_model.T = T
bs_derived_prices.append(bs_model.price_option(10000, 100))
sv_model.T = T
sv_derived_prices.append(sv_model.price_option(10000, 100))
data = {
"Strike": [sv_model.K] * len(maturities),
"Maturities": maturities,
"bs_model_price": bs_derived_prices,
"sv_model_price": sv_derived_prices,
}
df2 = pd.DataFrame(data)
return {"strikes variation benchmarking": df1}, {"maturities variation benchmarking": df2}result = run_test(
"my_custom_tests.BenchmarkTest",
params={
"sv_model": sv_model,
"bs_model": bs_model,
"strikes": strikes,
"maturities": maturities,
},
)
result.log()
Surface Volatility Test
Let's calculate the implied volatility across different strikes and maturities based on market prices.
import numpy as np
import pandas as pd
from scipy.optimize import minimize
import plotly.graph_objects as go
@vm.test("my_custom_tests.ImpliedVolSurface")
def implied_vol_surface(market_prices, strikes, maturities, S0, r, barrier, N=10000, M=100):
"""
This is a test to compute the implied volatility surface for a given set of market prices,
strikes, and maturities.
"""
def implied_volatility(market_price, N, M, initial_guess=0.2):
def objective_function(sigma):
model.sigma = sigma
model_price = model.price_option(N, M)
return (model_price - market_price) ** 2
result = minimize(objective_function, initial_guess, bounds=[(0.01, 1.0)])
return result.x[0]
implied_vols = np.zeros((len(strikes), len(maturities)))
for i, K in enumerate(strikes):
for j, T in enumerate(maturities):
market_price = market_prices[i]
model = BlackScholesModel(S0, K, T, r, sigma=0.2)
implied_vol = implied_volatility(market_price, N, M)
implied_vols[i, j] = implied_vol
# Create the 3D surface plot
X, Y = np.meshgrid(strikes, maturities)
Z = implied_vols.T # Transpose to match the meshgrid orientation
fig = go.Figure(data=[go.Surface(x=X, y=Y, z=Z)])
# Update the layout
fig.update_layout(
title=f'3D Surface Plot of Implied Volatility',
scene=dict(
xaxis_title='Strike',
yaxis_title='Maturity',
zaxis_title='Implied Volatility',
camera=dict(
up=dict(x=0, y=0, z=1),
center=dict(x=0, y=0, z=0),
eye=dict(x=1.5, y=1.5, z=1.5)
)
),
width=900,
height=700,
margin=dict(l=65, r=50, b=65, t=90)
)
return fig
result = run_test(
"my_custom_tests.ImpliedVolSurface",
params={
"market_prices": sv_market_prices,
"strikes": strikes,
"maturities": maturities,
"S0": S0,
"r": r,
"barrier": 120
}
)
result.log()
@vm.test("my_custom_tests.Sensitivity")
def sensitivity_test(model_type, S0, T, r, N, M, strike=None, barrier=None, sigma=None, v0=None, kappa=None, theta=None, xi=None, rho=None):
"""
Price a knockout option for the given model type and parameters (sensitivity test).
"""
if model_type == 'BS':
model = BlackScholesModel(S0, strike, T, r, sigma)
else:
model = StochasticVolatilityModel(S0, strike, T, r, v0, kappa, theta, xi, rho)
knockout_option = KnockoutOption(model, S0, strike, T, r, barrier)
price = knockout_option.price_knockout_option(N, M)
return pd.DataFrame({"Option price": [price]})Initialise parameters
strike_range = (min(strikes), max(strikes))
barrier_range = (100, 120)
Common plot function
Let's define a helper that turns the default tabular result output into a line plot; we pass it to run_test() through the post_process_fn parameter so the figure is attached to the result before it is logged.
from plotly.express import bar
from validmind.vm_models.figure import Figure
from validmind.vm_models.result import TestResult
import plotly.graph_objects as go
import random
def process_results(result: TestResult):
# Convert to DataFrame
df = pd.DataFrame(result.tables[0].data)
# Get the first two column names
x_col = df.columns[0]
y_col = df.columns[1]
# Create figure
fig = go.Figure()
fig.add_trace(
go.Scatter(
x=df[x_col],
y=df[y_col],
mode='lines',
name=y_col # Use y-axis column name as trace name
)
)
fig.update_layout(
xaxis_title=x_col,
yaxis_title=y_col,
showlegend=True,
template="plotly_white"
)
result.add_figure(
Figure(
figure=fig,
key="sensitivity_plot_" + str(random.randint(0, 1000000)),
ref_id=result.ref_id,
)
)
return result
Spot and Strike Sensitivity Tests
Let's evaluate the sensitivity of the model's output to changes in the underlying spot price (S0) and the strike price, holding the other parameters constant. These tests are crucial for understanding how variations in spot and strike affect the valuation of financial derivatives, particularly options.
result = run_test(
"my_custom_tests.Sensitivity:S0",
param_grid={
"model_type": ['SV'],
"N": [N],
"M": [M],
"strike":[strike_range[0]],
"barrier": [barrier_range[0]],
"S0": list(np.linspace(S0-20, S0+20, 20)),
"T": [T],
"r": [r],
"v0": [0.2],
"kappa": [2],
"theta": [0.2],
"xi": [0.1],
"rho": [-0.5],
},
post_process_fn= process_results
)
result.log()
result = run_test(
"my_custom_tests.Sensitivity:ToStrike",
param_grid={
"model_type": ['SV'],
"N": [N],
"M": [M],
"strike": list(np.linspace(strike_range[0], strike_range[1], 20)),
"barrier": [barrier_range[0]],
"S0": [S0],
"T": [T],
"r": [r],
"v0": [0.2],
"kappa": [2],
"theta": [0.2],
"xi": [0.1],
"rho": [-0.5],
},
post_process_fn= process_results
)
result.log()
Barrier Sensitivity Test
Let's evaluate the sensitivity of the model's output to changes in the barrier level of the option. This test is crucial for understanding how small changes in the barrier impact the option's valuation, which is essential for risk management and pricing strategies.
result = run_test(
"my_custom_tests.Sensitivity:ToBarrier",
param_grid={
"model_type": ['SV'],
"N": [N],
"M": [M],
"strike": [strike_range[0]],
"barrier": list(np.linspace(barrier_range[0], barrier_range[1], 20)),
"S0": [S0],
"T": [T],
"r": [r],
"v0": [0.2],
"kappa": [2],
"theta": [0.2],
"xi": [0.1],
"rho": [-0.5],
},
post_process_fn=process_results
)
result.log()
#### Greeks
The Greeks are crucial for traders and risk managers: they provide insight into the risk and potential price movements of options and derivatives, allowing for more informed decision-making and risk management.
Delta
Delta measures the sensitivity of the option's price to a change in the price of the underlying asset. It indicates how much the option price is expected to move per $1 change in the underlying asset's price.
@vm.test("my_custom_tests.GreeksDelta")
def calculate_delta(model_type, S0, T, r, N, M, strike=None, barrier=None,
sigma=None, v0=None, kappa=None, theta=None, xi=None, rho=None,
h=0.001): # h is the step size for finite difference
"""
Calculate delta using finite difference method.
Delta = (V(S0 + h) - V(S0 - h)) / (2h)
where V is the option price and h is a small increment
"""
# Initialize the model with S0 + h
if model_type == 'BS':
model_up = BlackScholesModel(S0 + h, strike, T, r, sigma)
model_down = BlackScholesModel(S0 - h, strike, T, r, sigma)
else:
model_up = StochasticVolatilityModel(S0 + h, strike, T, r, v0, kappa, theta, xi, rho)
model_down = StochasticVolatilityModel(S0 - h, strike, T, r, v0, kappa, theta, xi, rho)
# Calculate option prices for up and down moves
knockout_up = KnockoutOption(model_up, S0 + h, strike, T, r, barrier)
knockout_down = KnockoutOption(model_down, S0 - h, strike, T, r, barrier)
price_up = knockout_up.price_knockout_option(N, M)
price_down = knockout_down.price_knockout_option(N, M)
# Calculate delta using central difference
delta = (price_up - price_down) / (2 * h)
df = pd.DataFrame({"Delta": [delta], "Price_Up": [price_up], "Price_Down": [price_down], "h": [h]})
return df
# To analyze delta sensitivity to underlying price changes
result = run_test(
"my_custom_tests.GreeksDelta",
param_grid={
"model_type": ['SV'],
"N": [1000000],
"M": [M],
"strike":[strike_range[0]],
"barrier": [barrier_range[0]],
"S0": list(np.linspace(S0-20, S0+20, 20)),
"T": [T],
"r": [r],
"v0": [0.2],
"kappa": [2],
"theta": [0.2],
"xi": [0.1],
"rho": [-0.5],
"h": [0.001]
},
post_process_fn=process_results # Using the plotting function defined earlier
)
result.log()
Gamma
Gamma measures the rate of change of Delta with respect to changes in the underlying asset's price. It indicates the curvature of the option's price relative to the underlying asset's price.
@vm.test("my_custom_tests.GreeksGamma")
def calculate_gamma(model_type, S0, T, r, N, M, strike=None, barrier=None,
sigma=None, v0=None, kappa=None, theta=None, xi=None, rho=None,
h=0.01): # h is the step size for finite difference
"""
Calculate gamma using finite difference method.
Gamma = (V(S0 + h) - 2V(S0) + V(S0 - h)) / h^2
where V is the option price and h is a small increment
"""
# Initialize the models with S0 + h, S0, and S0 - h
if model_type == 'BS':
model_up = BlackScholesModel(S0 + h, strike, T, r, sigma)
model_center = BlackScholesModel(S0, strike, T, r, sigma)
model_down = BlackScholesModel(S0 - h, strike, T, r, sigma)
else:
model_up = StochasticVolatilityModel(S0 + h, strike, T, r, v0, kappa, theta, xi, rho)
model_center = StochasticVolatilityModel(S0, strike, T, r, v0, kappa, theta, xi, rho)
model_down = StochasticVolatilityModel(S0 - h, strike, T, r, v0, kappa, theta, xi, rho)
# Calculate option prices for up, center, and down moves
knockout_up = KnockoutOption(model_up, S0 + h, strike, T, r, barrier)
knockout_center = KnockoutOption(model_center, S0, strike, T, r, barrier)
knockout_down = KnockoutOption(model_down, S0 - h, strike, T, r, barrier)
price_up = knockout_up.price_knockout_option(N, M)
price_center = knockout_center.price_knockout_option(N, M)
price_down = knockout_down.price_knockout_option(N, M)
# Calculate gamma using second-order central difference
gamma = (price_up - 2*price_center + price_down) / (h * h)
df = pd.DataFrame({
"Gamma": [gamma],
"Price_Up": [price_up],
"Price_Center": [price_center],
"Price_Down": [price_down],
"h": [h]
})
return df
# To analyze gamma sensitivity to underlying price changes
result = run_test(
"my_custom_tests.GreeksGamma",
param_grid={
"model_type": ['SV'],
"N": [1000000],
"M": [M],
"strike":[strike_range[0]],
"barrier": [barrier_range[0]],
"S0": list(np.linspace(S0-20, S0+20, 20)),
"T": [T],
"r": [r],
"v0": [0.2],
"kappa": [2],
"theta": [0.2],
"xi": [0.1],
"rho": [-0.5],
"h": [0.1]
},
post_process_fn=process_results # Using the plotting function defined earlier
)
result.log()
Theta
Theta measures the sensitivity of the option's price to the passage of time, also known as time decay. It indicates how much the option price is expected to decrease as the option approaches its expiration date.
@vm.test("my_custom_tests.GreeksTheta")
def calculate_theta(model_type, S0, T, r, N, M, strike=None, barrier=None,
sigma=None, v0=None, kappa=None, theta=None, xi=None, rho=None,
dt=1/365): # dt is typically one day
"""
Calculate theta using finite difference method.
Theta = (V(t + dt) - V(t)) / dt
where V is the option price and dt is a small time increment (typically 1 day)
"""
# Initialize the models with T and T + dt
if model_type == 'BS':
model_current = BlackScholesModel(S0, strike, T, r, sigma)
model_future = BlackScholesModel(S0, strike, T + dt, r, sigma)
else:
model_current = StochasticVolatilityModel(S0, strike, T, r, v0, kappa, theta, xi, rho)
model_future = StochasticVolatilityModel(S0, strike, T + dt, r, v0, kappa, theta, xi, rho)
# Calculate option prices for current and future time
knockout_current = KnockoutOption(model_current, S0, strike, T, r, barrier)
knockout_future = KnockoutOption(model_future, S0, strike, T + dt, r, barrier)
price_current = knockout_current.price_knockout_option(N, M)
price_future = knockout_future.price_knockout_option(N, M)
# Calculate theta using forward difference
# Note: We divide by dt and multiply by -1 since theta represents the negative rate of change
theta_value = -1 * (price_future - price_current) / dt
df = pd.DataFrame({
"Theta": [theta_value],
"Price_Current": [price_current],
"Price_Future": [price_future],
"dt": [dt]
})
return df
# Example usage to analyze theta sensitivity across different underlying prices
result = run_test(
"my_custom_tests.GreeksTheta",
param_grid={
"model_type": ['SV'],
"N": [1000000],
"M": [M],
"strike":[strike_range[0]],
"barrier": [barrier_range[0]],
"S0": list(np.linspace(S0-20, S0+20, 20)),
"T": [T],
"r": [r],
"v0": [0.2],
"kappa": [2],
"theta": [0.2],
"xi": [0.1],
"rho": [-0.5],
"dt": [1/365] # One day time step
},
post_process_fn=process_results # Using the plotting function defined earlier
)
result.log()
Vega
Vega measures the sensitivity of the option's price to changes in the volatility of the underlying asset. It indicates how much the option price is expected to change for a 1% change in the underlying asset's volatility.
@vm.test("my_custom_tests.GreeksVega")
def calculate_vega(model_type, S0, T, r, N, M, strike=None, barrier=None,
sigma=None, v0=None, kappa=None, theta=None, xi=None, rho=None,
h=0.001): # h is the step size for finite difference
"""
Calculate vega using finite difference method.
For Black-Scholes: Vega = (V(σ + h) - V(σ - h)) / (2h)
For Stochastic Vol: Vega = (V(v0 + h) - V(v0 - h)) / (2h)
where V is the option price and h is a small increment in volatility
"""
if model_type == 'BS':
# For Black-Scholes, perturb sigma
model_up = BlackScholesModel(S0, strike, T, r, sigma + h)
model_down = BlackScholesModel(S0, strike, T, r, sigma - h)
else:
# For Stochastic Volatility, perturb v0
model_up = StochasticVolatilityModel(S0, strike, T, r, v0 + h, kappa, theta, xi, rho)
model_down = StochasticVolatilityModel(S0, strike, T, r, v0 - h, kappa, theta, xi, rho)
# Calculate option prices for up and down moves in volatility
knockout_up = KnockoutOption(model_up, S0, strike, T, r, barrier)
knockout_down = KnockoutOption(model_down, S0, strike, T, r, barrier)
price_up = knockout_up.price_knockout_option(N, M)
price_down = knockout_down.price_knockout_option(N, M)
# Calculate vega using central difference
vega = (price_up - price_down) / (2 * h)
df = pd.DataFrame({
"Vega": [vega],
"Price_Up": [price_up],
"Price_Down": [price_down],
"h": [h]
})
return df
# Example usage to analyze vega sensitivity across different underlying prices
result = run_test(
"my_custom_tests.GreeksVega",
param_grid={
"model_type": ['SV'],
"N": [1000000],
"M": [M],
"strike":[strike_range[0]],
"barrier": [barrier_range[0]],
"S0": list(np.linspace(S0-20, S0+20, 20)),
"T": [T],
"r": [r],
"v0": [0.2],
"kappa": [2],
"theta": [0.2],
"xi": [0.1],
"rho": [-0.5],
"h": [0.0001] # Small step size for better accuracy
},
post_process_fn=process_results # Using the plotting function defined earlier
)
result.log()
Rho
Rho measures the sensitivity of the option's price to changes in the interest rate. It indicates how much the option price is expected to change for a 1% change in interest rates.
@vm.test("my_custom_tests.GreeksRho")
def calculate_rho(model_type, S0, T, r, N, M, strike=None, barrier=None,
sigma=None, v0=None, kappa=None, theta=None, xi=None, rho=None,
h=0.0001): # h is the step size for finite difference
"""
Calculate rho using finite difference method.
Rho = (V(r + h) - V(r - h)) / (2h)
where V is the option price and h is a small increment in interest rate
"""
# Initialize the models with r + h and r - h
if model_type == 'BS':
model_up = BlackScholesModel(S0, strike, T, r + h, sigma)
model_down = BlackScholesModel(S0, strike, T, r - h, sigma)
else:
model_up = StochasticVolatilityModel(S0, strike, T, r + h, v0, kappa, theta, xi, rho)
model_down = StochasticVolatilityModel(S0, strike, T, r - h, v0, kappa, theta, xi, rho)
# Calculate option prices for up and down moves in interest rate
knockout_up = KnockoutOption(model_up, S0, strike, T, r + h, barrier)
knockout_down = KnockoutOption(model_down, S0, strike, T, r - h, barrier)
price_up = knockout_up.price_knockout_option(N, M)
price_down = knockout_down.price_knockout_option(N, M)
# Calculate rho using central difference
rho_value = (price_up - price_down) / (2 * h)
df = pd.DataFrame({
"Rho": [rho_value],
"Price_Up": [price_up],
"Price_Down": [price_down],
"h": [h]
})
return df
# Example usage to analyze rho sensitivity across different underlying prices
result = run_test(
"my_custom_tests.GreeksRho",
param_grid={
"model_type": ['SV'],
"N": [1000000],
"M": [M],
"strike":[strike_range[0]],
"barrier": [barrier_range[0]],
"S0": list(np.linspace(S0-20, S0+20, 20)),
"T": [T],
"r": [r],
"v0": [0.2],
"kappa": [2],
"theta": [0.2],
"xi": [0.1],
"rho": [-0.5],
"h": [0.0001] # Small step size for better accuracy
},
post_process_fn=process_results # Using the plotting function defined earlier
)
result.log()
@vm.test("my_custom_tests.Stressing")
def stress_test(model_type, S0, T, r, N, M, strike=None, barrier=None, sigma=None, v0=None, kappa=None, theta=None, xi=None, rho=None):
"""
Price a knockout option for the given model type and parameters (stress test).
"""
if model_type == 'BS':
model = BlackScholesModel(S0, strike, T, r, sigma)
else:
model = StochasticVolatilityModel(S0, strike, T, r, v0, kappa, theta, xi, rho)
knockout_option = KnockoutOption(model, S0, strike, T, r, barrier)
price = knockout_option.price_knockout_option(N, M)
return pd.DataFrame({"Option price": [price]})Rho (correlation) and Theta (long term vol) stress test
First, we create a surface plot to visualize the option price with respect to two variables.
def two_parameters_stress_surface_plot(result: TestResult):
import plotly.graph_objects as go
import numpy as np
import pandas as pd
# Convert to DataFrame
data = pd.DataFrame(result.tables[0].data)
# Get column names (assuming the first two columns are x and y, and the third is z)
z_col = data.columns[2]
x_col = data.columns[0]
y_col = data.columns[1]
# Get unique values for x and y
x_unique = np.sort(data[x_col].unique())
y_unique = np.sort(data[y_col].unique())
# Create meshgrid
X, Y = np.meshgrid(x_unique, y_unique)
# Create Z matrix
Z = np.zeros_like(X)
for i, x_val in enumerate(x_unique):
for j, y_val in enumerate(y_unique):
mask = (data[x_col] == x_val) & (data[y_col] == y_val)
if mask.any():
Z[j, i] = data.loc[mask, z_col].iloc[0]
# Create the 3D surface plot
fig = go.Figure(data=[go.Surface(x=X, y=Y, z=Z)])
# Update the layout
fig.update_layout(
title=f'3D Surface Plot of {z_col}',
scene=dict(
xaxis_title=x_col,
yaxis_title=y_col,
zaxis_title=z_col,
camera=dict(
up=dict(x=0, y=0, z=1),
center=dict(x=0, y=0, z=0),
eye=dict(x=1.5, y=1.5, z=1.5)
)
),
width=900,
height=700,
margin=dict(l=65, r=50, b=65, t=90)
)
result.add_figure(
Figure(
figure=fig,
key="sensitivity_plot_" + str(random.randint(0, 1000000)),
ref_id=result.ref_id,
)
)
return result
Let's evaluate the sensitivity of the model's output to changes in the correlation parameter (rho) and the long-term variance parameter (theta) within a stochastic volatility framework.
This test is useful for understanding how variations in these parameters affect the model's valuation, which is crucial for risk management and model validation.
result = run_test(
"my_custom_tests.Stressing:TheRhoAndThetaParameters",
param_grid={
"model_type": ['SV'],
"N": [N],
"M": [M],
"strike": [strike_range[0]],
"barrier": [barrier_range[0]],
"S0": [S0],
"T": [T],
"r": [r],
"v0": [0.2],
"kappa": [2],
"theta": list(np.linspace(0,0.8, 10)),
"xi": [0.1],
"rho": list(np.linspace(-1,0.8, 10)),
},
post_process_fn=two_parameters_stress_surface_plot
)
result.log()
Rho (correlation) and Xi (vol of vol) stress test
result = run_test(
"my_custom_tests.Stressing:TheRhoAndXiParameters",
param_grid={
"model_type": ['SV'],
"N": [N],
"M": [M],
"strike": [strike_range[0]],
"barrier": [barrier_range[0]],
"S0": [S0],
"T": [T],
"r": [r],
"v0": [0.2],
"kappa": [2],
"theta": [0.2],
"xi": list(np.linspace(0,0.8, 10)),
"rho": list(np.linspace(-1,0.8, 10)),
},
post_process_fn=two_parameters_stress_surface_plot
)
result.log()
Sigma stress test
This test evaluates the sensitivity of the model's output to changes in the volatility parameter, sigma. It is crucial for understanding how variations in market volatility impact the model's valuation of financial instruments, particularly options.
The test is also useful for risk management and model validation, as it helps assess the robustness of the model under different market conditions. By analyzing how the model's output changes as sigma varies, stakeholders can judge the model's stability and reliability.
result = run_test(
"my_custom_tests.Stressing:TheSigmaParameter",
param_grid={
"model_type": ['BS'],
"N": [N],
"M": [M],
"strike": [strike_range[0]],
"barrier": [barrier_range[0]],
"S0": [S0],
"T": [T],
"r": [r],
"sigma": list(np.linspace(0.2, 0.8, 10)),
},
post_process_fn=process_results
)
result.log()
Stress kappa
Let's evaluate the sensitivity of the model's output to changes in the kappa parameter, the mean-reversion rate in the stochastic volatility model.
result = run_test(
"my_custom_tests.Stressing:TheKappaParameter",
param_grid={
"model_type": ['SV'],
"N": [N],
"M": [M],
"strike": [strike_range[0]],
"barrier": [barrier_range[0]],
"S0": [S0],
"T": [T],
"r": [r],
"v0": [0.2],
"kappa": list(np.linspace(0, 8, 10)),
"theta": [0.2],
"xi": [0.1],
"rho": [-0.5],
},
post_process_fn=process_results
)
result.log()
Stress theta
The theta stress test evaluates the sensitivity of the model's output to changes in the parameter theta, which represents the long-term variance in the stochastic volatility model.
result = run_test(
"my_custom_tests.Stressing:TheThetaParameter",
param_grid={
"model_type": ['SV'],
"N": [N],
"M": [M],
"strike": [strike_range[0]],
"barrier": [barrier_range[0]],
"S0": [S0],
"T": [T],
"r": [r],
"v0": [0.2],
"kappa": [2],
"theta": list(np.linspace(0, 0.8, 10)),
"xi": [0.1],
"rho": [-0.5],
},
post_process_fn=process_results
)
result.log()
Stress xi
The xi stress test evaluates the sensitivity of the model's output to changes in the parameter xi, which represents the volatility of volatility in the stochastic volatility model. This test is crucial for understanding how variations in xi impact the model's valuation, particularly in derivatives pricing.
result = run_test(
"my_custom_tests.Stressing:TheXiParameter",
param_grid={
"model_type": ['SV'],
"N": [N],
"M": [M],
"strike": [strike_range[0]],
"barrier": [barrier_range[0]],
"S0": [S0],
"T": [T],
"r": [r],
"v0": [0.2],
"kappa": [2],
"theta": [0.2],
"xi": list(np.linspace(0.05, 0.95, 10)),
"rho": [-0.5],
},
post_process_fn=process_results
)
result.log()
Stress rho
The rho stress test evaluates the sensitivity of the model's output to changes in the correlation parameter, rho, within the stochastic volatility (SV) framework. This test is crucial for understanding how variations in rho, the correlation between the asset price and its volatility, impact the model's valuation output.
result = run_test(
"my_custom_tests.Stressing:TheRhoParameter",
param_grid={
"model_type": ['SV'],
"N": [N],
"M": [M],
"strike": [strike_range[0]],
"barrier": [barrier_range[0]],
"S0": [S0],
"T": [T],
"r": [r],
"v0": [0.2],
"kappa": [2],
"theta": [0.2],
"xi": [0.1],
"rho": list(np.linspace(-1.0, 1.0, 20)),
},
post_process_fn=process_results
)
result.log()
Next steps
You can look at the results of these tests right in the notebook where you ran the code, as you would expect. But there is a better way: use the ValidMind Platform to work with your model documentation.
Work with your model documentation
From the Model Inventory in the ValidMind Platform, go to the model you registered earlier. (Need more help?)
Click and expand the Model Development section.
What you see is the full draft of your model documentation in a more easily consumable version. From here, you can make qualitative edits to model documentation, view guidelines, collaborate with validators, and submit your model documentation for approval when it's ready. Learn more ...
Discover more learning resources
We offer many interactive notebooks to help you document models:
Or, visit our documentation to learn more about ValidMind.