AnswerSimilarity

Calculates the semantic similarity between generated answers and ground truths

Answer Semantic Similarity assesses how closely the generated answer matches the ground truth in meaning. The evaluation is based on the ground_truth and the answer and produces a score between 0 and 1, where a higher score signifies better alignment between the generated answer and the ground truth.

Measuring the semantic similarity between answers can offer valuable insight into the quality of the generated response. The score is computed by embedding both the generated answer and the ground truth and comparing the resulting vectors, as described in the steps below.

See this paper for more details: https://arxiv.org/pdf/2108.06130.pdf

The following steps are involved in computing the answer similarity score:

  1. Vectorize the ground truth answer using the specified embedding model.
  2. Vectorize the generated answer using the same embedding model.
  3. Compute the cosine similarity between the two vectors.
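
As an illustration of these steps, here is a minimal sketch of the computation, assuming a sentence-transformers embedding model (the model name below is only an example and may differ from the one this metric uses internally):

import numpy as np
from sentence_transformers import SentenceTransformer

# Example embedding model; the metric's configured model may differ.
embedder = SentenceTransformer("all-MiniLM-L6-v2")

def answer_similarity(answer: str, ground_truth: str) -> float:
    # Steps 1-2: vectorize the ground truth and the generated answer.
    answer_vec, truth_vec = embedder.encode([answer, ground_truth])
    # Step 3: cosine similarity between the two vectors.
    return float(
        np.dot(answer_vec, truth_vec)
        / (np.linalg.norm(answer_vec) * np.linalg.norm(truth_vec))
    )

score = answer_similarity(
    "Paris is the capital of France.",
    "The capital of France is Paris.",
)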

Configuring Columns

This metric requires the following columns in your dataset:

  • answer (str): The text response generated by the model.
  • ground_truth (str): The ground truth answer that the generated answer is compared against.
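
For illustration, a dataset using these default column names might look like the following (a pandas sketch; how you construct and register your dataset depends on your setup):

import pandas as pd

# Illustrative rows using the default column names expected by this metric.
df = pd.DataFrame(
    {
        "answer": ["Paris is the capital of France."],
        "ground_truth": ["The capital of France is Paris."],
    }
)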

If this data is not stored in those columns, you can specify different column names for these fields using the answer_column and ground_truth_column parameters.

For example, if your dataset has this data stored in different columns, you can pass the following parameters:

params = {
    "answer_column": "llm_output_col",
    "ground_truth_column": "my_ground_truth_col",
}

If the answer is nested inside a dictionary stored in another column, specify the column and key like this:

pred_col = dataset.prediction_column(model)
params = {
    "answer_column": f"{pred_col}.generated_answer",
    "ground_truth_column": "my_ground_truth_col",
}

For more complex situations, you can use a function to extract the data:

pred_col = dataset.prediction_column(model)
params = {
    "answer_column": lambda row: "\n\n".join(row[pred_col]["messages"]),
    "ground_truth_column": "my_ground_truth_col",
}
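
Once defined, the parameters can be passed when running the metric. The test ID and run_test call below are assumptions about your setup and may need adjusting to your environment:

import validmind as vm

# Hypothetical invocation: the test ID and the run_test signature are
# assumptions and may differ in your installed version.
vm.tests.run_test(
    "validmind.model_validation.ragas.AnswerSimilarity",
    inputs={"dataset": dataset, "model": model},
    params=params,
)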