Sentiment analysis of financial data using Hugging Face NLP models

Document a natural language processing (NLP) model using the ValidMind Library after performing a sentiment analysis of financial news data using several different Hugging Face transformers.

This notebook provides an introduction for model developers on how to document a natural language processing (NLP) model using the ValidMind Library. It shows you how to install the library, initialize the client, and load the dataset, and then walks through a sentiment analysis of financial news data using several different Hugging Face transformers. As part of the process, the notebook runs various tests to quickly generate documentation about the data and model.

About ValidMind

ValidMind’s platform enables organizations to identify, document, and manage model risks for all types of models, including AI/ML models, LLMs, and statistical models. As a model developer, you use the ValidMind Library to automate documentation and validation tests, and then use the ValidMind AI Risk Platform UI to collaborate on documentation initiatives. Together, these products simplify model risk management, facilitate compliance with regulations and institutional standards, and enhance collaboration between yourself and model validators.

If this is your first time trying out ValidMind, we recommend going through our getting started resources first.

Before you begin

For access to all features available in this notebook, create a free ValidMind account.

Signing up is FREE — Register with ValidMind

If you encounter errors due to missing modules in your Python environment, install the modules with pip install, and then re-run the notebook. For more help, refer to Installing Python Modules.
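
For instance, the cells below import pandas and transformers (which needs a backend such as torch); if any of these are missing, you can install them up front. The package list here is inferred from the imports used later in this notebook, with versions left unpinned:

# Install the third-party packages this notebook imports later on
%pip install -q pandas transformers torch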

Install the client library

The client library provides Python support for interacting with the ValidMind platform. To install it:

%pip install -q validmind

Initialize the client library

ValidMind generates a unique code snippet for each registered model to connect with your developer environment. You initialize the client library with this code snippet, which ensures that your documentation and tests are uploaded to the correct model when you run the notebook.

Get your code snippet:

  1. In a browser, log in to ValidMind.

  2. In the left sidebar, navigate to Model Inventory and click + Register new model.

  3. Enter the model details, making sure to select NLP-based Text Classification as the template and Marketing/Sales - Analytics as the use case, and click Continue. (Need more help?)

  4. Go to Getting Started and click Copy snippet to clipboard.

Next, replace this placeholder with your own code snippet:

# Replace with your code snippet

import validmind as vm

vm.init(
    api_host="https://api.prod.validmind.ai/api/v1/tracking",
    api_key="...",
    api_secret="...",
    model="...",
)
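
If you prefer not to hard-code credentials in a notebook, the library can also read connection details from environment variables. This is a sketch based on the VM_* variable names in ValidMind’s documentation; verify them against your installed version:

import os

# Hypothetical alternative to passing credentials directly:
# set the VM_* environment variables before calling vm.init().
os.environ["VM_API_HOST"] = "https://api.prod.validmind.ai/api/v1/tracking"
os.environ["VM_API_KEY"] = "..."
os.environ["VM_API_SECRET"] = "..."
os.environ["VM_API_MODEL"] = "..."

vm.init()  # picks up the VM_* variables set above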

Preview the documentation template

A template predefines sections for your model documentation and provides a general outline to follow, making the documentation process much easier.

You will upload documentation and test results into this template later on. For now, take a look at the structure that the template provides with the vm.preview_template() function from the ValidMind Library, and note the empty sections:

vm.preview_template()

Get your sample dataset ready for analysis

To perform the sentiment analysis of financial news, we load a local copy of this dataset: https://www.kaggle.com/datasets/ankurzing/sentiment-analysis-for-financial-news.

This dataset contains two columns, Sentiment and Sentence. The sentiment can be negative, neutral, or positive. Our local copy also includes a column of precomputed model predictions (finbert_prediction), which we will link to the model later on.

import pandas as pd

df = pd.read_csv("./datasets/sentiments_with_predictions.csv")
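
Before initializing any ValidMind objects, a quick sanity check with plain pandas confirms the file loaded with the expected columns and label values:

# Inspect the first few rows and the label distribution
print(df.head())
print(df["Sentiment"].value_counts())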

NLP data quality tests

Before we proceed with the analysis, it’s crucial to ensure the quality of our NLP data. We can run the “data preparation” section of the template to validate the raw dataset’s integrity and suitability.

vm_raw_ds = vm.init_dataset(
    dataset=df,
    input_id="raw_dataset",
    text_column="Sentence",
    target_column="Sentiment",
)

text_data_test_plan = vm.run_documentation_tests(
    section="data_preparation", inputs={"dataset": vm_raw_ds}
)
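
If you want a single check rather than a whole section, individual tests can also be run directly. A minimal sketch, assuming a test ID such as ClassImbalance exists in your installed version (use vm.tests.list_tests() to see what’s available):

# Run one data validation test against the raw dataset
vm.tests.run_test(
    "validmind.data_validation.ClassImbalance",
    inputs={"dataset": vm_raw_ds},
)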

Hugging Face transformers

Hugging Face: FinancialBERT for Sentiment Analysis

Let’s now explore integrating and testing FinancialBERT (https://huggingface.co/ahmedrachid/FinancialBERT-Sentiment-Analysis), a model designed specifically for sentiment analysis in the financial domain:

from transformers import BertTokenizer, BertForSequenceClassification
from transformers import pipeline

model = BertForSequenceClassification.from_pretrained(
    "ahmedrachid/FinancialBERT-Sentiment-Analysis", num_labels=3
)
tokenizer = BertTokenizer.from_pretrained(
    "ahmedrachid/FinancialBERT-Sentiment-Analysis"
)
hfmodel = pipeline("sentiment-analysis", model=model, tokenizer=tokenizer)
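
A quick smoke test confirms that the pipeline returns the label/score pairs we expect before wiring it into ValidMind (the sample sentence is made up for illustration):

# The pipeline returns a list of {"label": ..., "score": ...} dicts
print(hfmodel("Operating profit rose compared to the previous quarter."))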

Initialize the ValidMind dataset

# Load a test dataset with 100 rows only
vm_test_ds = vm.init_dataset(
    dataset=df.head(100),
    input_id="test_dataset",
    text_column="Sentence",
    target_column="Sentiment",
)

Initialize the ValidMind model

When initializing a ValidMind model, we also assign predictions to the test dataset. Computing predictions can take a long time for large datasets; here, the dataset already contains precomputed predictions in its finbert_prediction column, so assign_predictions links those values instead of re-running the model.

vm_model = vm.init_model(
    hfmodel,
)

# Assign model predictions to the test dataset
vm_test_ds.assign_predictions(vm_model, prediction_column="finbert_prediction")
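
Here the finbert_prediction column already exists in the CSV, so the call above simply links those precomputed values. If your copy of the data lacked that column, you could generate it with the pipeline first. This is a sketch only; it is slow on large datasets and assumes the pipeline’s output labels match the values in Sentiment:

# Hypothetical: precompute predictions if the column is missing
if "finbert_prediction" not in df.columns:
    df["finbert_prediction"] = [
        result["label"] for result in hfmodel(df["Sentence"].tolist())
    ]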

Run model validation tests

It’s possible to run a subset of tests on the documentation template by passing a section parameter to run_documentation_tests(). Let’s run only the tests in the model development section, which validate the model against the test dataset:

full_suite = vm.run_documentation_tests(
    section="model_development",
    inputs={
        "dataset": vm_test_ds,
        "model": vm_model,
    },
)

Next steps

You can look at the results of this test suite right in the notebook where you ran the code, as you would expect. But there is a better way: view the test results as part of your model documentation in the ValidMind Platform UI:

  1. In the ValidMind Platform UI, go to the Documentation page for the model you registered earlier. (Need more help?)

  2. Expand 2. Data Preparation or 3. Model Development to review all test results.

What you can see now is a more easily consumable version of the testing you just performed, along with other parts of your model documentation that still need to be completed.

If you want to learn more about where you are in the model documentation process, take a look at Get started with the ValidMind Library.

Upgrade ValidMind

After installing ValidMind, you’ll want to periodically make sure you are on the latest version to access any new features and other enhancements.

Retrieve the information for the currently installed version of ValidMind:

%pip show validmind

If the version returned is lower than the version indicated in our production open-source code, restart your notebook and run:

%pip install --upgrade validmind

You may need to restart your kernel after upgrading the package for the changes to be applied.