Text Classification Tests

Abnormal Input

Unseen Unigram

This test measures the number of failing rows in your data with unseen unigrams and their impact on the model. The model impact is the difference in model performance between passing and failing rows with unseen unigrams. If labels are not provided, prediction change is used instead of model performance change.

Why it matters: Unseen unigrams are a common failure point in machine learning systems; since these models are trained over a reference set, they may yield uninterpretable or undefined behavior when interacting with an unseen unigram. In addition, such errors may expose gaps or errors in data collection.

Configuration: By default, this test is run over every data point.

Example: Say that there is a text field with the value "James went to his casa" and the unigram "casa" was not seen in the reference set. This test would raise a warning flagging that datapoint, with the severity depending on how badly the model performed on that datapoint.

Empty Text String

This test measures the number of failing rows in your data with empty strings and their impact on the model. The model impact is the difference in model performance between passing and failing rows with empty strings. If labels are not provided, prediction change is used instead of model performance change.

Why it matters: Empty strings are a common failure point in machine learning systems, as some models may yield uninterpretable or undefined behavior when given an empty string. In addition, such errors may expose gaps or errors in data collection.

Configuration: By default, this test is run over every data point.

Example: Say that there is a text field that is just an empty string. This test would raise a warning flagging that datapoint, with the severity depending on how badly the model performed on that datapoint.

Numeric Outliers

This test measures the number of failing rows in your data with outliers and their impact on the model. Outliers are values which may not necessarily be outside of an allowed range for a feature, but are extreme values that are unusual and may be indicative of abnormality. The model impact is the difference in model performance between passing and failing rows with outliers. If labels are not provided, prediction change is used instead of model performance change.

Why it matters: Outliers can be a sign of corrupted or otherwise erroneous data, and can degrade model performance if used in the training data, or lead to unexpected behavior if input at inference time.

Configuration: By default this test is run over each numeric feature that is neither unique nor ascending.

Example: Suppose there is a feature age for which, in the reference set, the values 103 and 114 each appear once, but every other value (with substantial sample size) is contained within the range [0, 97]. Then we would infer a lower outlier threshold of 0 and an upper outlier threshold of 97. This test raises a warning if we observe any values in the evaluation set outside these thresholds or if model performance decreases on observed datapoints with outliers.
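
Below is a minimal sketch of how such inferred thresholds could be applied to an evaluation set; the column name age and the [0, 97] bounds come from the example above, while the helper name and sample rows are illustrative only.

```python
import pandas as pd

# Outlier thresholds inferred from the reference set in the example above:
# values outside [0, 97] are treated as outliers.
LOWER, UPPER = 0, 97

def flag_outliers(eval_df: pd.DataFrame, column: str = "age") -> pd.Series:
    """Return a boolean mask marking rows whose value falls outside the inferred range."""
    values = eval_df[column]
    return (values < LOWER) | (values > UPPER)

eval_df = pd.DataFrame({"age": [25, 61, 103, 114, 42]})
print(eval_df[flag_outliers(eval_df)])  # rows with age 103 and 114 are flagged
```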

Unseen URL

This test measures the number of failing rows in your data with unseen URL values and their impact on the model. The model impact is the difference in model performance between passing and failing rows with unseen URL values. If labels are not provided, prediction change is used instead of model performance change.

Why it matters: Unseen categorical values are a common failure point in machine learning systems; since these models are trained over a reference set, they may yield uninterpretable or undefined behavior when interacting with an unseen categorical value. In addition, such errors may expose gaps or errors in data collection.

Configuration: By default, this test runs over all features inferred to contain URLs.

Example: Say that the feature WebURL contains the values ['http://google.com', 'http://yahoo.com'] from the reference set. This test raises a warning if we observe any unseen values in the evaluation set such as 'http://xyzabc.com'.

Unseen Domain

This test measures the number of failing rows in your data with unseen domain values and their impact on the model. The model impact is the difference in model performance between passing and failing rows with unseen domain values. If labels are not provided, prediction change is used instead of model performance change.

Why it matters: Unseen categorical values are a common failure point in machine learning systems; since these models are trained over a reference set, they may yield uninterpretable or undefined behavior when interacting with an unseen categorical value. In addition, such errors may expose gaps or errors in data collection.

Configuration: By default, this test runs over all features inferred to contain domains.

Example: Say that the feature WebDomain contains the values ['gmail.com', 'hotmail.com'] from the reference set. This test raises a warning if we observe any unseen values in the evaluation set such as 'xyzabc.com'.

Unseen Email

This test measures the number of failing rows in your data with unseen email values and their impact on the model. The model impact is the difference in model performance between passing and failing rows with unseen email values. If labels are not provided, prediction change is used instead of model performance change.

Why it matters: Unseen categorical values are a common failure point in machine learning systems; since these models are trained over a reference set, they may yield uninterpretable or undefined behavior when interacting with an unseen categorical value. In addition, such errors may expose gaps or errors in data collection.

Configuration: By default, this test runs over all features inferred to contain emails.

Example: Say that the feature Email contains the values ['john@gmail.com', 'jane@hotmail.com'] from the reference set. This test raises a warning if we observe any unseen values in the evaluation set such as 'user@xyzabc.com'.

Out of Range

This test measures the number of failing rows in your data with values outside the inferred range of allowed values and their impact on the model. The model impact is the difference in model performance between passing and failing rows with values outside the inferred range of allowed values. If labels are not provided, prediction change is used instead of model performance change.

Why it matters: In production, the model may encounter corrupted or manipulated out of range values. It is important that the model is robust to such extremities.

Configuration: By default, this test runs over all numeric features.

Example: In the reference set, the Age feature has a range of [0, 121]. This test raises a warning if we observe values outside of this range in the evaluation set (eg. 150, 200) or if model performance decreases on observed datapoints outside of this range.

Rare Categories

This test measures the severity of passing to the model data points whose features contain rarely observed categories (relative to the reference set). The severity is a function of the impact of these values on the model, as well as the presence of these values in the data. The model impact is the difference in model performance between passing and failing rows with rarely observed categorical values. If labels are not provided, prediction change is used instead of model performance change. The number of failing rows refers to the number of times rarely observed categorical values are observed in the evaluation set.

Why it matters: Rare categories are a common failure point in machine learning systems because less data often means worse performance. In addition, this may expose gaps or errors in data collection.

Configuration: By default, this test runs over all categorical features. A category is considered rare if it occurs fewer than min_num_occurrences times, or if it occurs less than min_pct_occurrences of the time. If neither of these values is specified, the rate of appearance below which a category is considered rare is min_ratio_rel_uniform divided by the number of classes.

Example: Say that the feature AgeGroup takes on the value 0-18 twice while taking on the value 35-55 a total of 98 times. If min_num_occurrences is 5 and min_pct_occurrences is 0.03, then the test will flag the value 0-18 as a rare category.
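
As a rough sketch of the rarity rule described above (the helper name and data are illustrative; the parameter names mirror the configuration options):

```python
from collections import Counter

def rare_categories(values, min_num_occurrences=5, min_pct_occurrences=0.03):
    """Return categories occurring fewer than min_num_occurrences times
    or in less than min_pct_occurrences of the rows."""
    counts = Counter(values)
    total = len(values)
    return [
        category for category, count in counts.items()
        if count < min_num_occurrences or count / total < min_pct_occurrences
    ]

# Mirrors the AgeGroup example: "0-18" appears twice, "35-55" appears 98 times.
age_group = ["0-18"] * 2 + ["35-55"] * 98
print(rare_categories(age_group))  # ['0-18']
```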

Empty String

This test measures the number of failing rows in your data with empty string values instead of null values and their impact on the model. The model impact is the difference in model performance between passing and failing rows with empty string values instead of null values. If labels are not provided, prediction change is used instead of model performance change.

Why it matters: In production, the model may encounter corrupted or manipulated string values. Null values and empty strings are often expected to be treated the same, but the model might not treat them that way. It is important that the model is robust to such extremities.

Configuration: By default, this test runs over all string features with null values.

Example: In the reference set, the Name feature contains nulls. This test raises a warning if we observe any empty string in the Name feature or if these values decrease model performance.

Inconsistencies

This test measures the severity of passing to the model data points whose values are inconsistent (as inferred from the reference set). The severity is a function of the impact of these values on the model, as well as the presence of these values in the data. The model impact is the difference in model performance between passing and failing rows with data containing inconsistent feature values. If labels are not provided, prediction change is used instead of model performance change. The number of failing rows refers to the number of times data containing inconsistent feature values are observed in the evaluation set.

Why it matters: Inconsistent values might be the result of malicious actors manipulating the data or errors in the data pipeline. Thus, it is important to be aware of inconsistent values to identify sources of manipulations or errors.

Configuration: By default, this test runs on pairs of categorical features whose correlations exceed some minimum threshold. The default threshold for the frequency ratio below which values are considered to be inconsistent is 0.02.

Example: Suppose we have a feature country that takes on the value "US" with frequency 0.5, and a feature time_zone that takes on the value "Central European Time" with frequency 0.2. If these values appear together with frequency less than 0.5 * 0.2 * 0.02 = 0.002 in the reference set, then rows in which they do appear together are flagged as inconsistencies.
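
The sketch below illustrates the frequency-ratio rule from this example; it is a simplified approximation, and the function name and pandas-based implementation are assumptions rather than the product's actual code.

```python
import pandas as pd

FREQ_RATIO_THRESHOLD = 0.02  # default threshold from the configuration above

def inconsistent_pairs(ref_df: pd.DataFrame, col_a: str, col_b: str,
                       threshold: float = FREQ_RATIO_THRESHOLD):
    """Return (value_a, value_b) pairs that co-occur far less often than the
    product of their marginal frequencies would suggest."""
    n = len(ref_df)
    freq_a = ref_df[col_a].value_counts(normalize=True)
    freq_b = ref_df[col_b].value_counts(normalize=True)
    joint = ref_df.groupby([col_a, col_b]).size() / n
    flagged = []
    for (a, b), observed in joint.items():
        if observed < freq_a[a] * freq_b[b] * threshold:
            flagged.append((a, b))
    return flagged

# E.g. a ("US", "Central European Time") pair appearing with joint frequency
# below 0.5 * 0.2 * 0.02 = 0.002 would be returned as an inconsistency.
```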

Capitalization

This test measures the number of failing rows in your data with different types of capitalization and their impact on the model. The model impact is the difference in model performance between passing and failing rows with different types of capitalization. If labels are not provided, prediction change is used instead of model performance change.

Why it matters: In production, models can come across the same value with different capitalizations, making it important to explicitly check that your model is invariant to such differences.

Configuration: By default, this test runs over all categorical features.

Example: Suppose we had a column that corresponded to country code. For a specific row, let's say the observed value in the reference set was USA. This test raises a warning if we observe a similar value in the evaluation set with case changes, e.g. uSa, or if model performance decreases on observed datapoints with case changes.

Required Characters

This test measures the number of failing rows in your data with strings without any required characters and their impact on the model. The model impact is the difference in model performance between passing and failing rows with strings without any required characters. If labels are not provided, prediction change is used instead of model performance change.

Why it matters: A feature may require specific characters. However, errors in the data pipeline may allow invalid data points that lack these required characters to pass. Failing to catch such errors may lead to noisier training data or noisier predictions during inference, which can degrade model metrics.

Configuration: By default, this test runs over all string features that are inferred to have required characters.

Example: Say that the feature email requires the character @. This test raises a warning if we observe any values in the evaluation set where the character is missing.

Unseen Categorical

This test measures the number of failing rows in your data with unseen categorical values and their impact on the model. The model impact is the difference in model performance between passing and failing rows with unseen categorical values. If labels are not provided, prediction change is used instead of model performance change.

Why it matters: Unseen categorical values are a common failure point in machine learning systems; since these models are trained over a reference set, they may yield uninterpretable or undefined behavior when interacting with an unseen categorical value. In addition, such errors may expose gaps or errors in data collection.

Configuration: By default, this test runs over all categorical features.

Example: Say that the feature Animal contains the values ['Cat', 'Dog'] from the reference set. This test raises a warning if we observe any unseen values in the evaluation set such as 'Mouse'.
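
The same set-membership check underlies the Unseen URL, Unseen Domain, and Unseen Email tests above; a minimal sketch (the helper name and data are illustrative):

```python
def unseen_values(reference_values, evaluation_values):
    """Return values that appear in the evaluation set but never in the reference set."""
    seen = set(reference_values)
    return sorted({value for value in evaluation_values if value not in seen})

print(unseen_values(['Cat', 'Dog'], ['Cat', 'Dog', 'Mouse']))  # ['Mouse']
```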

Attacks

Invisible Character Attack

This test measures the robustness of your model to invisible character attacks. It does this by taking a sample input, inserting zero-width unicode characters, and measuring the performance of the model on the perturbed input. See the paper "Fall of Giants: How Popular Text-Based MLaaS Fall against a Simple Evasion Attack" by Pajola and Conti (https://arxiv.org/abs/2104.05996) for more details.

Why it matters: Malicious actors can perturb natural language input sequences to alter model behavior in unexpected ways. It is important that your NLP models are robust to such attacks.

Configuration: By default, this test runs in adversarial mode.

Example: Given the input sequence "RIME is helpful.", this test measures the performance of the model when imperceptibly perturbed (e.g., when changed to "RIM‌E is hel​p‍ful.")
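
A rough sketch of the perturbation itself follows (the actual test then compares model performance on the original and perturbed inputs); the specific code points and helper name are illustrative assumptions.

```python
import random

# A few zero-width Unicode code points commonly used in this family of attacks.
ZERO_WIDTH = ["\u200b", "\u200c", "\u200d"]  # ZWSP, ZWNJ, ZWJ

def insert_invisible_characters(text: str, num_insertions: int = 3, seed: int = 0) -> str:
    """Insert zero-width characters at random positions; the string renders the same
    but its underlying character sequence (and hence tokenization) changes."""
    rng = random.Random(seed)
    chars = list(text)
    for _ in range(num_insertions):
        position = rng.randrange(len(chars) + 1)
        chars.insert(position, rng.choice(ZERO_WIDTH))
    return "".join(chars)

perturbed = insert_invisible_characters("RIME is helpful.")
print(perturbed == "RIME is helpful.")  # False, even though the two render identically
```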

Deletion Control Character Attack

This test measures the robustness of your model to deletion control character attacks. It does this by taking a sample input, inserting deletion control characters, and measuring the performance of the model on the perturbed input. See the paper "Bad Characters: Imperceptible NLP Attacks" by Boucher, Shumailov, et al. (https://arxiv.org/abs/2106.09898) for more details.

Why it matters: Malicious actors can perturb natural language input sequences to alter model behavior in unexpected ways. It is important that your NLP models are robust to such attacks.

Configuration: By default, this test runs in adversarial mode.

Example: Given the input sequence "RIME is helpful.", this test measures the performance of the model when imperceptibly perturbed (e.g., when changed to "RIM‌E is hel​p‍ful.")

Intentional Homoglyph Attack

This test measures the robustness of your model to intentional homoglyph attacks. It does this by taking a sample input, substituting homoglyphs designed to look like other characters, and measuring the performance of the model on the perturbed input. See the paper "Bad Characters: Imperceptible NLP Attacks" by Boucher, Shumailov, et al. (https://arxiv.org/abs/2106.09898) for more details.

Why it matters: Malicious actors can perturb natural language input sequences to alter model behavior in unexpected ways. It is important that your NLP models are robust to such attacks.

Configuration: By default, this test runs in adversarial mode.

Example: Given the input sequence "RIME is helpful.", this test measures the performance of the model when imperceptibly perturbed (e.g., when changed to "RIM‌E is hel​p‍ful.")

Confusable Homoglyph Attack

This test measures the robustness of your model to confusable homoglyph attacks. It does this by taking a sample input, substituting homoglyphs that are easily confused with other characters, and measuring the performance of the model on the perturbed input. See the paper "Bad Characters: Imperceptible NLP Attacks" by Boucher, Shumailov, et al. (https://arxiv.org/abs/2106.09898) for more details.

Why it matters: Malicious actors can perturb natural language input sequences to alter model behavior in unexpected ways. It is important that your NLP models are robust to such attacks.

Configuration: By default, this test runs in adversarial mode.

Example: Given the input sequence "RIME is helpful.", this test measures the performance of the model when imperceptibly perturbed (e.g., when changed to "RIM‌E is hel​p‍ful.")

Universal Prefix Attack

This test measures the robustness of your model to 'universal' adversarial prefix injections. It does this by sampling a batch of inputs, and searching over the model vocabulary to find a prefix that is nonsensical to a reader but that, when prepended to the batch of inputs, will cause the model to output a different prediction. See the paper "Universal Adversarial Triggers for Attacking and Analyzing NLP" by Wallace, Feng, Kandpal, et al. (https://arxiv.org/abs/1908.07125) for more details.

Why it matters: Malicious actors can perturb natural language input sequences to alter model behavior in unexpected ways. 'Universal triggers' pose a particularly large threat since they easily transfer between models and data points to permit an adversary to make large-scale, cost-efficient attacks. It is important that your NLP models are robust to such threat vectors.

Configuration: By default, this test runs when the 'Adversarial' category is specified.

Example: Given a target class of 0, this test selects a batch of inputs for which the model predicts a different class (e.g., 1). It then searches for an adversarial prefix that maximizes the probability assigned to the target class. The severity of this test is based on the difference in the average probability assigned to the target class before and after the prefix is prepended to the batch. For instance, given two inputs "I am happy!" and "I like ice cream!", the attack finds an example prefix, e.g., "the why y could", and measures the new probability assigned by the model to the target class for inputs "the why y could I am happy!" and "the why y could I like ice cream!".

Character Deletion

This test measures the robustness of your model to character deletion attacks. It does this by randomly deleting characters in the input string and measuring your model's performance on the attacked string.

Why it matters: Production natural language input sequences can have errors from data preprocessing or human input (mistaken or adversarial). It is important that your NLP models are robust to the introduction of such errors.

Configuration: By default, this test runs over a sample of strings from the evaluation set, and it performs this attack on 30% of the words in each input.

Example: Given an input sequence "The quick brown fox jumped over the lazy dog", this test measures the performance of the model when given the attacked input of "Th quick brwn fox jumpd over the lazy dog".
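
A simplified sketch of the perturbation is shown below; the helper name is illustrative, the 30% word fraction mirrors the default configuration, and the insertion, substitution, swap, misspelling, keyboard, and OCR attacks that follow apply the same pattern with different character-level edits.

```python
import random

def delete_characters(text: str, word_fraction: float = 0.3, seed: int = 0) -> str:
    """Delete one random character from roughly `word_fraction` of the words."""
    rng = random.Random(seed)
    words = text.split()
    num_attacked = max(1, int(len(words) * word_fraction))
    for index in rng.sample(range(len(words)), num_attacked):
        word = words[index]
        if len(word) > 1:
            drop = rng.randrange(len(word))
            words[index] = word[:drop] + word[drop + 1:]
    return " ".join(words)

print(delete_characters("The quick brown fox jumped over the lazy dog"))
```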

Character Insertion

This test measures the robustness of your model to character insertion attacks. It does this by randomly adding characters to the input string and measuring your model's performance on the attacked string.

Why it matters: Production natural language input sequences can have errors from data preprocessing or human input (mistaken or adversarial). It is important that your NLP models are robust to the introduction of such errors.

Configuration: By default, this test runs over a sample of strings from the evaluation set, and it performs this attack on 30% of the words in each input.

Example: Given an input sequence "The quick brown fox jumped over the lazy dog", this test measures the performance of the model when given the attacked input of "Thew quick broqwn fox jumqped over the lazy dog".

Character Substitution

This test measures the robustness of your model to character substitution attacks. It does this by randomly substituting characters in the input string and measuring your model's performance on the attacked string.

Why it matters: Production natural language input sequences can have errors from data preprocessing or human input (mistaken or adversarial). It is important that your NLP models are robust to the introduction of such errors.

Configuration: By default, this test runs over a sample of strings from the evaluation set, and it performs this attack on 30% of the words in each input.

Example: Given an input sequence "The quick brown fox jumped over the lazy dog", this test measures the performance of the model when given the attacked input of "Tie quick brorn fox tumped over the lyzy dog".

Character Swap

This test measures the robustness of your model to character swap attacks. It does this by randomly swapping characters in the input string and measuring your model's performance on the attacked string.

Why it matters: Production natural language input sequences can have errors from data preprocessing or human input (mistaken or adversarial). It is important that your NLP models are robust to the introduction of such errors.

Configuration: By default, this test runs over a sample of strings from the evaluation set, and it performs this attack on 30% of the words in each input.

Example: Given an input sequence "The quick brown fox jumped over the lazy dog", this test measures the performance of the model when given the attacked input of "Teh quick bornw fox ujmpde over the lazy dog".

Common Misspellings

This test measures the robustness of your model to common misspellings attacks. It does this by adding common misspellings to the input string and measuring your model's performance on the attacked string.

Why it matters: Production natural language input sequences can have errors from data preprocessing or human input (mistaken or adversarial). It is important that your NLP models are robust to the introduction of such errors.

Configuration: By default, this test runs over a sample of strings from the evaluation set, and it performs this attack on 30% of the words in each input.

Example: Given an input sequence "The quick brown fox jumped over the lazy dog", this test measures the performance of the model when given the attacked input of "Thee quik brown focks jumped over the lasy dog".

Keyboard Augmentation

This test measures the robustness of your model to keyboard augmentation attacks. It does this by adding common typos based on keyboard distance to the input string and measuring your model's performance on the attacked string.

Why it matters: Production natural language input sequences can have errors from data preprocessing or human input (mistaken or adversarial). It is important that your NLP models are robust to the introduction of such errors.

Configuration: By default, this test runs over a sample of strings from the evaluation set, and it performs this attack on 30% of the words in each input.

Example: Given an input sequence "The quick brown fox jumped over the lazy dog", this test measures the performance of the model when given the attacked input of "Thr quick browb fox jumled over the lazy dog".

OCR Error Simulation

This test measures the robustness of your model to OCR error simulation attacks. It does this by adding common OCR errors to the input string and measuring your model's performance on the attacked string.

Why it matters: Production natural language input sequences can have errors from data preprocessing or human input (mistaken or adversarial). It is important that your NLP models are robust to the introduction of such errors.

Configuration: By default, this test runs over a sample of strings from the evaluation set, and it performs this attack on 30% of the words in each input.

Example: Given an input sequence "The quick brown fox jumped over the lazy dog", this test measures the performance of the model when given the attacked input of "Th3 quick br0wn fox jumped over the 1azy d0g".

Synonym Swap

This test measures the robustness of your model to synonym swap attacks. It does this by randomly swapping synonyms in the input string and measuring your model's performance on the attacked string.

Why it matters: Production natural language input sequences can have errors from data preprocessing or human input (mistaken or adversarial). It is important that your NLP models are robust to the introduction of such errors.

Configuration: By default, this test runs over a sample of strings from the evaluation set, and it performs this attack on 30% of the words in each input.

Example: Given an input sequence "The quick brown fox jumped over the lazy dog", this test measures the performance of the model when given the attacked input of "The fast brown fox leaped over the lazy dog".

Contextual Word Swap

This test measures the robustness of your model to contextual word swap attacks. It does this by replacing words with those close in embedding space and measuring your model's performance on the attacked string.

Why it matters: Production natural language input sequences can have errors from data preprocessing or human input (mistaken or adversarial). It is important that your NLP models are robust to the introduction of such errors.

Configuration: By default, this test runs over a sample of strings from the evaluation set, and it performs this attack on 30% of the words in each input.

Example: Given an input sequence "The quick brown fox jumped over the lazy dog", this test measures the performance of the model when given the attacked input of "the fast brown pigeon leaped over the white dog".

Contextual Word Insertion

This test measures the robustness of your model to contextual word insertion attacks. It does this by inserting words generated from a language model and measuring your model's performance on the attacked string.

Why it matters: Production natural language input sequences can have errors from data preprocessing or human input (mistaken or adversarial). It is important that your NLP models are robust to the introduction of such errors.

Configuration: By default, this test runs over a sample of strings from the evaluation set, and it performs this attack on 30% of the words in each input.

Example: Given an input sequence "The quick brown fox jumped over the lazy dog", this test measures the performance of the model when given the attacked input of "the fast brown fox leaped away over the lazy dog".

Distribution Drift

Unigrams Distribution

This test measures the unigram distribution drift between the reference and evaluation sets. By default, it measures drift by using the Population Stability Index of the two distributions. The severity is determined by comparing the computed drift statistic to the configured severity thresholds.

Why it matters: The reference set that you use to train your model may not be representative of the evaluation set you encounter in production. If there are statistically significant differences in the unigram distribution between these sets, it can lead to subpar real-world model performance.

Configuration: To pass a given test case, the divergence metric must be below the configured threshold.

Example: Suppose that the change in the unigram distribution in the reference set and evaluation set yielded a JS Divergence of 0.2. If the distance threshold is set to 0.1, this test would raise a warning.
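
As a sketch of how such a drift statistic can be computed: the default statistic is PSI, but the example above uses JS divergence, which is shown here; whitespace tokenization, the vocabulary construction, and the helper names are simplifications.

```python
from collections import Counter

import numpy as np
from scipy.spatial.distance import jensenshannon

def unigram_distribution(texts, vocab):
    """Normalized unigram counts over a fixed vocabulary."""
    counts = Counter(token for text in texts for token in text.lower().split())
    total = sum(counts.values()) or 1
    return np.array([counts[token] / total for token in vocab])

ref_texts = ["the cat sat on the mat", "the dog ran home"]
eval_texts = ["the cat slept all day", "a bird flew home"]
vocab = sorted({t for text in ref_texts + eval_texts for t in text.lower().split()})

p, q = unigram_distribution(ref_texts, vocab), unigram_distribution(eval_texts, vocab)
js_divergence = jensenshannon(p, q) ** 2  # scipy returns the JS distance (the square root)
print(js_divergence)  # compare against the configured distance threshold
```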

Bigrams Distribution

This test measures the bigram distribution drift between the reference and evaluation sets. By default, it measures drift by using the Population Stability Index of the two distributions. The severity is determined by comparing the computed drift statistic to the configured severity thresholds.

Why it matters: The reference set that you use to train your model may not be representative of the evaluation set you encounter in production. If there are statistically significant differences in the bigram distribution between these sets, it can lead to subpar real-world model performance.

Configuration: To pass a given test case, the divergence metric must be below the configured threshold.

Example: Suppose that the change in the bigram distribution in the reference set and evaluation set yielded a JS Divergence of 0.2. If the distance threshold is set to 0.1, this test would raise a warning.

Feature Correlation Drift

This test measures the severity of feature correlation drift from the reference to the evaluation set for a given pair of features. The severity is a function of the correlation drift in the data. The key detail is the difference in correlation scores between the reference and evaluation sets, along with an associated p-value. Correlation is a measure of the linear relationship between two numeric features, so this test checks for significant changes in this relationship between pairs of features in the reference and evaluation sets. To compute the p-value, we use Fisher's z-transformation to convert the distribution of sample correlations to a normal distribution, and then we run a standard two-sample test on two normal distributions.

Why it matters: Correlation drift between training and inference can be caused by a variety of factors, including a change in the data generation process or a change in the underlying processing stage. A big shift in these dependencies could indicate shifting datasets and degradation in model performance, signalling the need for relabeling and retraining.

Configuration: By default, this test runs over all pairs of features in the dataset.

Example: Suppose that the correlation between country and state is 0.5 in the reference set but 0.7 in the evaluation set, and the p-value is 0.03. Then the large difference in scores indicates that the dependency between the two features has drifted. If our difference threshold was 0.2, and p-value threshold was 0.05, then the test would fail.
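
A minimal sketch of the Fisher z-based comparison described above; the sample sizes are assumed for illustration, and the function name is not the product's API.

```python
import numpy as np
from scipy.stats import norm

def correlation_drift_p_value(r_ref: float, n_ref: int, r_eval: float, n_eval: int) -> float:
    """Two-sided p-value for a two-sample test on Fisher z-transformed correlations."""
    z_ref, z_eval = np.arctanh(r_ref), np.arctanh(r_eval)
    standard_error = np.sqrt(1.0 / (n_ref - 3) + 1.0 / (n_eval - 3))
    z_statistic = (z_ref - z_eval) / standard_error
    return 2 * norm.sf(abs(z_statistic))

# Correlations from the example above, with assumed sample sizes of 200 rows each.
print(correlation_drift_p_value(0.5, 200, 0.7, 200))
```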

Mutual Information Drift (Feature-to-Feature)

This test measures the severity of feature mutual information drift from the reference to the evaluation set for a given pair of features. The severity is a function of the mutual information drift in the data. The key detail is the difference in mutual information scores between the reference and evaluation sets. Mutual information is a measure of how dependent two features are, so this checks for significant changes in dependence between pairs of features in the reference and evaluation sets.

Why it matters: Mutual information drift between training and inference can be caused by a variety of factors, including a change in the data generation process or a change in the underlying processing stage. A big shift in these dependencies could indicate shifting datasets and degradation in model performance, signalling the need for relabeling and retraining.

Configuration: By default, this test runs over all pairs of features in the dataset.

Example: Suppose that the mutual information between country and state is 0.5 in the reference set but 0.7 in the evaluation set. Then the large difference in scores indicates that the dependency between the two features has drifted. If our difference threshold was 0.2 then the test would fail.
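
A rough sketch using scikit-learn's mutual_info_score; the dataframes, column names, and threshold handling are illustrative rather than the product's implementation.

```python
import pandas as pd
from sklearn.metrics import mutual_info_score

def mutual_information_drift(ref_df, eval_df, col_a, col_b):
    """Absolute difference in mutual information between two features,
    measured separately on the reference and evaluation sets."""
    mi_ref = mutual_info_score(ref_df[col_a], ref_df[col_b])
    mi_eval = mutual_info_score(eval_df[col_a], eval_df[col_b])
    return abs(mi_ref - mi_eval)

ref = pd.DataFrame({"country": ["US", "US", "FR", "FR"],
                    "state":   ["CA", "NY", "IDF", "IDF"]})
evl = pd.DataFrame({"country": ["US", "US", "FR", "FR"],
                    "state":   ["CA", "CA", "CA", "IDF"]})
print(mutual_information_drift(ref, evl, "country", "state") > 0.2)  # fail if above threshold
```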

Mutual Information Drift (Feature-to-Label)

This test measures the severity of feature-to-label mutual information drift from the reference to the evaluation set for a given feature. The severity is a function of the mutual information drift in the data. The key detail is the difference in mutual information scores between the reference and evaluation sets. Mutual information is a measure of how dependent two variables are, so this checks for significant changes in the dependence between each feature and the label in the reference and evaluation sets.

Why it matters: Mutual information drift between training and inference can be caused by a variety of factors, including a change in the data generation process or a change in the underlying processing stage. A big shift in these dependencies could indicate shifting datasets and degradation in model performance, signalling the need for relabeling and retraining.

Configuration: By default, this test runs over all features in the dataset.

Example: Suppose that the mutual information between the feature country and the label is 0.5 in the reference set but 0.7 in the evaluation set. Then the large difference in scores indicates that the dependency between this feature and the label has drifted. If our difference threshold was 0.2 then the test would fail.

Categorical Feature Drift

This test measures the severity of passing to the model data points that have categorical features which have drifted from the distribution observed in the reference set. The severity is a function of the impact on the model, as well as the presence of drift in the data. The model impact measures how much model performance changes due to drift in the given feature. The key detail displayed is the PSI test statistic, which is a measure of how statistically significant the difference between the frequencies of categorical values in the reference and evaluation sets is.

Why it matters: Distribution drift in categorical features between training and inference can be caused by a variety of factors, including a change in the data generation process or a change in the preprocessing pipeline. A big shift in categorical features towards categorical subsets that your model performs poorly in could indicate a degradation in model performance and signal the need for relabeling and retraining.

Configuration: By default, this test runs over all categorical columns with sufficiently many samples.

Example: Suppose that the observed frequencies of the isLoggedIn feature are [100, 200] in the reference set but [25, 150] in the evaluation set. Then the PSI would be 0.201. If our PSI threshold was 0.1, then the test would fail.

Label Drift (PSI)

This test checks that the difference in label distribution between the reference and evaluation sets is small, using the PSI test. The key detail displayed is the PSI statistic, which is a measure of how different the frequencies of the column in the reference and evaluation sets are.

Why it matters: Label distribution shift between reference and test can indicate that the underlying data distribution has changed significantly enough to modify model decisions. This may mean that the model needs to be retrained to adjust to the new data environment. In addition, significant label distribution shift may indicate that upstream decision-making modules (e.g. thresholds) may need to be updated.

Configuration: This test is run by default whenever both the reference and evaluation sets have associated labels.

Example: Suppose that the observed frequencies of the label column are [100, 200] in the reference set but [25, 150] in the evaluation set. Then the PSI would be 0.201. If our PSI threshold was 0.1, then the test would fail.

Label Drift

This test checks that the difference in label distribution between the reference and evaluation sets is small, using the Kolmogorov–Smirnov (K-S) test. The key detail displayed is the KS statistic, which is a measure of how different the labels in the reference and evaluation sets are. Concretely, the KS statistic is the maximum difference between the empirical CDFs of the two label columns.

Why it matters: Label distribution shift between reference and test can indicate that the underlying data distribution has changed significantly enough to modify model decisions. This may mean that the model needs to be retrained to adjust to the new data environment. In addition, significant label distribution shift may indicate that upstream decision-making modules (e.g. thresholds) may need to be updated.

Configuration: This test is run by default whenever both the reference and evaluation sets have associated labels.

Example: Suppose that the distribution of labels changes between the reference and evaluation sets such that the p-value for the K-S test between these two samples is 0.005 and the test statistic is 0.2. If the p-value threshold is set to 0.01 and the model impact threshold is set to 0.1, this test would raise a warning.
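
A minimal sketch using scipy's two-sample K-S test; the label arrays here are synthetic, and with discrete class labels the p-value is only approximate due to ties.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
ref_labels = rng.choice([0, 1, 2], size=1000, p=[0.6, 0.3, 0.1])
eval_labels = rng.choice([0, 1, 2], size=1000, p=[0.4, 0.3, 0.3])

result = ks_2samp(ref_labels, eval_labels)
print(result.statistic, result.pvalue)  # warn when the p-value falls below the threshold
```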

Numeric Feature Drift

This test measures the severity of passing to the model data points that have numeric features that have drifted from the distribution observed in the reference set. The severity is a function of the impact on the model, as well as the presence of drift in the data. The model impact measures how much model performance changes due to drift in the given feature. The key detail is the Population Stability Index statistic. The Population Stability Index (PSI) is a measure of how different two distributions are. Given two distributions P and Q, it is computed as the sum of the KL Divergence between P and Q and the (reverse) KL Divergence between Q and P. Thus, PSI is symmetric.

Why it matters: Distribution shift between training and inference can cause degradation in model performance. If the shift is sufficiently large, retraining the model on newer data may be necessary.

Configuration: By default, this test runs over all numeric columns with sufficiently many samples and stored quantiles in each of the reference and evaluation sets.

Example: Suppose that the distribution of a feature Age changes between the reference and evaluation sets such that the Population Stability Index between these two samples is 0.2. If the distance threshold is set to 0.1, this test would raise a warning.
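
The following sketch computes PSI exactly as defined above, i.e. as the sum of the two KL divergences; the per-bin counts and smoothing constant are illustrative.

```python
import numpy as np

def population_stability_index(p_counts, q_counts, eps=1e-6):
    """PSI = KL(P||Q) + KL(Q||P) = sum_i (p_i - q_i) * ln(p_i / q_i)."""
    p = np.asarray(p_counts, dtype=float) + eps
    q = np.asarray(q_counts, dtype=float) + eps
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum((p - q) * np.log(p / q)))

# Illustrative per-quantile-bin counts of the Age feature in each set.
ref_counts = [120, 300, 380, 200]
eval_counts = [60, 250, 400, 290]
print(population_stability_index(ref_counts, eval_counts))  # compare to the distance threshold
```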

Overall Metrics

This test checks a set of overall metrics to see if any have experienced significant degradation. The key detail displays whether the given performance metric has degraded beyond a defined threshold.

Why it matters: During production, factors like distribution shift or a change in p(y|x) may cause model performance to decrease significantly.

Configuration: By default, this test runs over all metrics for this model task.

Example: Assume that on the reference set the model obtained 0.85 AUC but on the evaluation set the model obtained 0.5 AUC. Then this test raises a warning.

Prediction Drift

This test checks that the difference in the prediction distribution between the reference and evaluation sets is small, using the Population Stability Index (PSI). The key detail displayed is the PSI, which is a measure of how different the prediction distributions in the reference and evaluation sets are.

Why it matters: Prediction distribution shift between reference and test can indicate that the underlying data distribution has changed significantly enough to modify model decisions. This may mean that the model needs to be retrained to adjust to the new data environment. In addition, significant prediction distribution drift may indicate that upstream decision-making modules (e.g. thresholds) may need to be updated.

Configuration: This test is run by default whenever both the reference and evaluation sets have associated predictions. Different thresholds are associated with different severities.

Example: Suppose that the PSI between the prediction distributions in the reference and evaluation sets is 0.201. Then if the PSI thresholds are (0.1, 0.2, 0.3), the test would fail with medium severity.

Calibration Comparison

This test checks that the reference and evaluation sets have sufficiently similar calibration curves, as measured by the Mean Squared Error (MSE) between the two curves. The calibration curve is a line plot in which the x-axis represents the average predicted probability within a bin and the y-axis represents the observed proportion of positive labels within that bin. The curve of an ideally calibrated model is thus the straight line y = x from (0, 0) to (1, 1).

Why it matters: Knowing how well-calibrated your model is can help you better interpret and act upon model outputs, and can even be an indicator of generalization. A greater difference between reference and evaluation curves could indicate a lack of generalizability. In addition, a change in calibration could indicate that decision-making or thresholding conducted upstream needs to change as it is behaving differently on held-out data.

Configuration: By default, this test runs over the predictions and labels.

Example: Suppose the model’s task is binary classification and predicts whether or not a datapoint is fraudulent. If we have a reference set in which 1% of the datapoints are fraudulent, but an evaluation set where 50% are fraudulent, then our model may not be well calibrated, and the MSE difference in the curves will be large, resulting in a failing test.
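
A simplified sketch of comparing binned calibration curves; the equal-width binning and the handling of empty bins are assumptions, not necessarily the product's exact procedure.

```python
import numpy as np

def calibration_curve(probs, labels, n_bins=10):
    """Observed fraction of positive labels within equal-width predicted-probability bins."""
    probs, labels = np.asarray(probs, dtype=float), np.asarray(labels, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    bin_ids = np.clip(np.digitize(probs, edges) - 1, 0, n_bins - 1)
    curve = np.full(n_bins, np.nan)
    for b in range(n_bins):
        mask = bin_ids == b
        if mask.any():
            curve[b] = labels[mask].mean()
    return curve

def calibration_mse(curve_ref, curve_eval):
    """MSE between two calibration curves, ignoring bins that are empty in either set."""
    valid = ~np.isnan(curve_ref) & ~np.isnan(curve_eval)
    return float(np.mean((curve_ref[valid] - curve_eval[valid]) ** 2))
```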

Predicted Label Drift (PSI)

This test checks that the difference in predicted label distribution between the reference and evaluation sets is small, using the PSI test. The key detail displayed is the PSI statistic, which is a measure of how different the frequencies of the column in the reference and evaluation sets are.

Why it matters: Predicted Label distribution shift between reference and test can indicate that the underlying data distribution has changed significantly enough to modify model decisions. This may mean that the model needs to be retrained to adjust to the new data environment. In addition, significant predicted label distribution shift may indicate that upstream decision-making modules (e.g. thresholds) may need to be updated.

Configuration: This test is run by default whenever the model or predictions are provided.

Example: Suppose that the observed frequencies of the predicted label column are [100, 200] in the reference set but [25, 150] in the evaluation set. Then the PSI would be 0.201. If our PSI threshold was 0.1, then the test would fail.

Subset Performance

Subset AUC

This test checks whether the model performs equally well across a given subset of rows as it does across the whole dataset. The key detail displays the performance difference between the lowest performing subset and the overall population. The test first splits the dataset into various subsets depending on the quantiles of a given feature column. If the feature is categorical, the data is split based on the feature values. We then test whether the Area Under Curve (AUC) of model predictions within a specific subset is significantly lower than the model prediction Area Under Curve (AUC) over the entire 'population'.

Why it matters: Having similar AUC between different subgroups is an important indicator of performance bias; in general, bias is an important phenomenon in machine learning that not only has implications for fairness and ethics, but can also indicate inadequate feature representation or spurious correlations.

Configuration: By default, AUC is computed over all predictions/labels. Note that we compute AUC of the Receiver Operating Characteristic (ROC) curve.

Example: Suppose we had data with 2 features: [['cat', 0.2], ['dog', 0.3], ['cat', 0.5], ['dog', 0.7], ['cat', 0.7], ['dog', 0.2]], model predictions [0.3, 0.51, 0.7, 0.49, 0.9, 0.58], and labels [1, 0, 1, 0, 0, 1]. Then, the AUC over the feature subset value 'cat' would be 0.0, compared to the overall metric of 0.44.
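
The worked example can be reproduced with scikit-learn as below; the column names are illustrative, and the same subset-versus-overall comparison underlies the accuracy, F1, precision, recall, and false positive rate tests that follow.

```python
import pandas as pd
from sklearn.metrics import roc_auc_score

df = pd.DataFrame({
    "animal":  ["cat", "dog", "cat", "dog", "cat", "dog"],
    "feature": [0.2, 0.3, 0.5, 0.7, 0.7, 0.2],
    "pred":    [0.3, 0.51, 0.7, 0.49, 0.9, 0.58],
    "label":   [1, 0, 1, 0, 0, 1],
})

overall_auc = roc_auc_score(df["label"], df["pred"])                  # ~0.44
cat_subset = df[df["animal"] == "cat"]
subset_auc = roc_auc_score(cat_subset["label"], cat_subset["pred"])   # 0.0
print(overall_auc - subset_auc)  # the performance gap surfaced by the test
```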

Subset Accuracy

This test checks whether the model performs equally well across a given subset of rows as it does across the whole dataset. The key detail displays the performance difference between the lowest performing subset and the overall population. The test first splits the dataset into various subsets depending on the quantiles of a given feature column. If the feature is categorical, the data is split based on the feature values. We then test whether the accuracy of model predictions within a specific subset is significantly lower than the model prediction accuracy over the entire 'population'.

Why it matters: Having similar accuracy between different subgroups is an important indicator of performance bias; in general, bias is an important phenomenon in machine learning that not only has implications for fairness and ethics, but can also indicate inadequate feature representation or spurious correlations. Accuracy can be thought of as a 'weaker' metric of model bias compared to measuring false positive rate (predictive equality) or false negative rate (equal opportunity). This is because group A and group B can have similar accuracy even though group A has a higher false positive rate while group B has a higher false negative rate (e.g. we reject qualified applicants in group A but accept non-qualified applicants in group B). Nevertheless, accuracy is a standard metric used during evaluation and should be considered as part of performance bias testing.

Configuration: By default, accuracy is computed over all predictions/labels. Note we round predictions to 0/1 to compute accuracy.

Example: Suppose we had data with 2 features: [['cat', 0.2], ['dog', 0.3], ['cat', 0.5], ['dog', 0.7], ['cat', 0.7], ['dog', 0.2]], model predictions [0.3, 0.51, 0.7, 0.49, 0.9, 0.58], and labels [1, 0, 1, 0, 0, 1]. Then, the accuracy over the feature subset value 'cat' would be 0.33, compared to the overall metric of 0.5.

Subset Multiclass AUC

In the multiclass setting, we compute one vs. one area under the curve (AUC), which computes the AUC between every pairwise combination of classes. This test checks whether the model performs equally well across a given subset of rows as it does across the whole dataset. The key detail displays the performance difference between the lowest performing subset and the overall population. The test first splits the dataset into various subsets depending on the quantiles of a given feature column. If the feature is categorical, the data is split based on the feature values. We then test whether the Area Under Curve (AUC) of model predictions within a specific subset is significantly lower than the model prediction Area Under Curve (AUC) over the entire 'population'.

Why it matters: Having similar AUC between different subgroups is an important indicator of performance bias; in general, bias is an important phenomenon in machine learning that not only has implications for fairness and ethics, but can also indicate inadequate feature representation or spurious correlations.

Configuration: By default, AUC is computed over all predictions/labels. Note that we compute AUC of the Receiver Operating Characteristic (ROC) curve.

Example: Suppose we are differentiating between cats, bears, and dogs. Assume that across the data points where height=2 the predictions are [0.9, 0.1, 0], [0.1, 0.9, 0], [0.2, 0.1, 0.7] and the labels are [1, 0, 0], [1, 0, 0], [0, 0, 1] (where the first index corresponds to cat, the second corresponds to bear, and the third corresponds to dog). Then the AUC (one vs. one) across this subset is 0.75. If the overall AUC (one vs. one) across all subsets is 0.9 then this test raises a warning.

Subset F1

This test checks whether the model performs equally well across a given subset of rows as it does across the whole dataset. The key detail displays the performance difference between the lowest performing subset and the overall population. The test first splits the dataset into various subsets depending on the quantiles of a given feature column. If the feature is categorical, the data is split based on the feature values. We then test whether the F1 of model predictions within a specific subset is significantly lower than the model prediction F1 over the entire 'population'.

Why it matters: Having similar F1 between different subgroups is an important indicator of performance bias; in general, bias is an important phenomenon in machine learning that not only has implications for fairness and ethics, but can also indicate inadequate feature representation or spurious correlations.

Configuration: By default, F1 is computed over all predictions/labels. Note that we round predictions to 0/1 to compute F1 score.

Example: Suppose we had data with 2 features: [['cat', 0.2], ['dog', 0.3], ['cat', 0.5], ['dog', 0.7], ['cat', 0.7], ['dog', 0.2]], model predictions [0.3, 0.51, 0.7, 0.49, 0.9, 0.58], and labels [1, 0, 1, 0, 0, 1]. Then, the F1 over the feature subset value 'cat' would be 0.5, compared to the overall metric of 0.57.

Subset Macro F1

F1 is a holistic measure of both precision and recall. When transitioning to the multiclass setting we can use macro F1 which computes the F1 of each class and averages them. This test checks whether the model performs equally well across a given subset of rows as it does across the whole dataset. The key detail displays the performance difference between the lowest performing subset and the overall population. The test first splits the dataset into various subsets depending on the quantiles of a given feature column. If the feature is categorical, the data is split based on the feature values. We then test whether the macro F1 of model predictions within a specific subset is significantly lower than the model prediction macro F1 over the entire 'population'.

Why it matters: Having similar macro F1 between different subgroups is an important indicator of performance bias; in general, bias is an important phenomenon in machine learning that not only has implications for fairness and ethics, but can also indicate inadequate feature representation or spurious correlations.

Configuration: By default, macro F1 is computed over all predictions/labels. Note that the predicted label is the label with the largest predicted probability.

Example: Suppose we are differentiating between cats, bears, and dogs. Assume that across the data points where height=2 the predictions are [0.9, 0.1, 0], [0.1, 0.9, 0], [0.2, 0.1, 0.7] and the labels are [1, 0, 0], [1, 0, 0], [0, 0, 1] (where the first index corresponds to cat, the second corresponds to bear, and the third corresponds to dog). Then the macro F1 across this subset is 0.78. If the overall macro F1 across all subsets is 0.9 then this test raises a warning.

Subset Precision

The precision test is also popularly referred to as positive predictive parity in fairness literature. This test checks whether the model performs equally well across a given subset of rows as it does across the whole dataset. The key detail displays the performance difference between the lowest performing subset and the overall population. The test first splits the dataset into various subsets depending on the quantiles of a given feature column. If the feature is categorical, the data is split based on the feature values. We then test whether the Precision of model predictions within a specific subset is significantly lower than the model prediction Precision over the entire 'population'.

Why it matters: Having similar precision (e.g. false discovery rates) between different subgroups is an important indicator of performance bias; in general, bias is an important phenomenon in machine learning that not only has implications for fairness and ethics, but can also indicate inadequate feature representation or spurious correlations. Unlike demographic parity, this test permits assuming different base label rates but flags differing mistake rates between different subgroups. Note that positive predictive parity does not necessarily indicate equal opportunity or predictive equality: as a hypothetical example, imagine that a loan qualification classifier flags 100 entries for group A and 100 entries for group B, each with a precision of 100%, but there are 100 actual qualified entries in group A and 9000 in group B. This would indicate disparities in opportunities given to each subgroup.

Configuration: By default, Precision is computed over all predictions/labels. Note that we round predictions to 0/1 to compute precision.

Example: Suppose we had data with 2 features: [['cat', 0.2], ['dog', 0.3], ['cat', 0.5], ['dog', 0.7], ['cat', 0.7], ['dog', 0.2]], model predictions [0.3, 0.51, 0.7, 0.49, 0.9, 0.58], and labels [1, 0, 1, 0, 0, 1]. Then, the Precision over the feature subset value 'cat' would be 0.5, compared to the overall metric of 0.5.

Subset Macro Precision

The precision test is also popularly referred to as positive predictive parity in fairness literature. When transitioning to the multiclass setting, we can compute macro precision, which computes the precision of each class individually and then averages them. This test checks whether the model performs equally well across a given subset of rows as it does across the whole dataset. The key detail displays the performance difference between the lowest performing subset and the overall population. The test first splits the dataset into various subsets depending on the quantiles of a given feature column. If the feature is categorical, the data is split based on the feature values. We then test whether the Macro Precision of model predictions within a specific subset is significantly lower than the model prediction Macro Precision over the entire 'population'.

Why it matters: Having similar macro precision (e.g. false discovery rates) between different subgroups is an important indicator of performance bias; in general, bias is an important phenomenon in machine learning that not only has implications for fairness and ethics, but can also indicate inadequate feature representation or spurious correlations. Unlike demographic parity, this test permits assuming different base label rates but flags differing mistake rates between different subgroups. Note that positive predictive parity does not necessarily indicate equal opportunity or predictive equality: as a hypothetical example, imagine that a loan qualification classifier flags 100 entries for group A and 100 entries for group B, each with a precision of 100%, but there are 100 actual qualified entries in group A and 9000 in group B. This would indicate disparities in opportunities given to each subgroup.

Configuration: By default, Macro Precision is computed over all predictions/labels. Note that the predicted label is the label with the greatest predicted probability.

Example: Suppose we are differentiating between cats, bears, and dogs. Assume that across the data points where height=2 the predictions are [0.9, 0.1, 0], [0.1, 0.9, 0], [0.2, 0.1, 0.7] and the labels are [1, 0, 0], [1, 0, 0], [0, 0, 1] (where the first index corresponds to cat, the second corresponds to bear, and the third corresponds to dog). Then the Macro Precision across this subset is 0.67. If the overall Macro Precision across all subsets is 0.9 then this test raises a warning.

Subset False Positive Rate

The false positive error rate test is also popularly referred to as predictive equality in fairness literature. This test checks whether the model performs equally well across a given subset of rows as it does across the whole dataset. The key detail displays the performance difference between the lowest performing subset and the overall population. The test first splits the dataset into various subsets depending on the quantiles of a given feature column. If the feature is categorical, the data is split based on the feature values. We then test whether the false positive rate of model predictions within a specific subset is significantly higher than the model prediction false positive rate over the entire 'population'.

Why it matters: Having similar false positive rates (e.g. predictive equality) between different subgroups is an important indicator of performance bias; in general, bias is an important phenomenon in machine learning and not only contains implications for fairness and ethics, but also indicates failures in adequate feature representation and spurious correlation. Unlike demographic parity, this test permits assuming different base label rates but flags differing mistake rates between different subgroups. As an intuitive example, consider the case when the label indicates an undesirable attribute: if predicting whether a person will default on their loan, make sure that for people who didn't default, the rate at which the model incorrectly predicts positive is similar for group A and B.

Configuration: By default, false positive rate is computed over all predictions/labels. Note that we round predictions to 0/1 to compute false positive rate.

Example: Suppose we had data with 2 features: [['cat', 0.2], ['dog', 0.3], ['cat', 0.5], ['dog', 0.7], ['cat', 0.7], ['dog', 0.2]], model predictions [0.3, 0.51, 0.7, 0.49, 0.9, 0.58], and labels [1, 0, 1, 0, 0, 1]. Then, the false positive rate over the feature subset value 'cat' would be 1.0, compared to the overall metric of 0.67.
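
The false positive rates in this example can be checked with a small sketch (illustrative only; assumes a 0.5 rounding threshold):

import numpy as np

features = np.array(['cat', 'dog', 'cat', 'dog', 'cat', 'dog'])
preds = np.array([0.3, 0.51, 0.7, 0.49, 0.9, 0.58])
labels = np.array([1, 0, 1, 0, 0, 1])
pred_labels = (preds >= 0.5).astype(int)

def false_positive_rate(y_true, y_pred):
    negatives = y_true == 0
    return (y_pred[negatives] == 1).mean()  # false positives / actual negatives

subset = features == 'cat'
print(false_positive_rate(labels[subset], pred_labels[subset]))  # 1.0 (subset 'cat')
print(false_positive_rate(labels, pred_labels))                  # ~0.67 (overall)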

Subset Recall

The recall test is more popularly referred to as equal opportunity or false negative error rate balance in fairness literature. This test checks whether the model performs equally well across a given subset of rows as it does across the whole dataset. The key detail displays the performance difference between the lowest performing subset and the overall population. The test first splits the dataset into various subsets depending on the quantiles of a given feature column. If the feature is categorical, the data is split based on the feature values. We then test whether the Recall of model predictions within a specific subset is significantly lower than the model prediction Recall over the entire 'population'.

Why it matters: Having similar true positive rates (e.g. equal opportunity) between different subgroups is an important indicator of performance bias; in general, bias is an important phenomenon in machine learning and not only contains implications for fairness and ethics, but also indicates failures in adequate feature representation and spurious correlation. Unlike demographic parity, this test permits assuming different base label rates but flags differing mistake rates between different subgroups. An intuitive example is when the label indicates a positive attribute: if predicting whether to interview a given candidate, make sure that out of qualified candidates, the rate at which the model predicts a rejection is similar for groups A and B.

Configuration: By default, Recall is computed over all predictions/labels. Note that we round predictions to 0/1 to compute recall.

Example: Suppose we had data with 2 features: [['cat', 0.2], ['dog', 0.3], ['cat', 0.5], ['dog', 0.7], ['cat', 0.7], ['dog', 0.2]], model predictions [0.3, 0.51, 0.7, 0.49, 0.9, 0.58], and labels [1, 0, 1, 0, 0, 1]. Then, the Recall over the feature subset value 'cat' would be 0.5, compared to the overall metric of 0.67.
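
The same data yields the recall figures above (an illustrative sketch, again assuming a 0.5 rounding threshold):

import numpy as np
from sklearn.metrics import recall_score

features = np.array(['cat', 'dog', 'cat', 'dog', 'cat', 'dog'])
preds = np.array([0.3, 0.51, 0.7, 0.49, 0.9, 0.58])
labels = np.array([1, 0, 1, 0, 0, 1])
pred_labels = (preds >= 0.5).astype(int)

subset = features == 'cat'
print(recall_score(labels[subset], pred_labels[subset]))  # 0.5 (subset 'cat')
print(recall_score(labels, pred_labels))                  # ~0.67 (overall)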

Subset Macro Recall

The recall test is more popularly referred to as equal opportunity or false negative error rate balance in fairness literature. When transitioning to the multiclass setting, we can use macro recall, which computes the recall of each individual class and then averages these numbers. This test checks whether the model performs equally well across a given subset of rows as it does across the whole dataset. The key detail displays the performance difference between the lowest performing subset and the overall population. The test first splits the dataset into various subsets depending on the quantiles of a given feature column. If the feature is categorical, the data is split based on the feature values. We then test whether the Macro Recall of model predictions within a specific subset is significantly lower than the model prediction Macro Recall over the entire 'population'.

Why it matters: Having similar true positive rates (e.g. equal opportunity) between different subgroups is an important indicator of performance bias; in general, bias is an important phenomenon in machine learning and not only contains implications for fairness and ethics, but also indicates failures in adequate feature representation and spurious correlation. Unlike demographic parity, this test permits assuming different base label rates but flags differing mistake rates between different subgroups. An intuitive example is when the label indicates a positive attribute: if predicting whether to interview a given candidate, make sure that out of qualified candidates, the rate at which the model predicts an interview is similar for groups A and B.

Configuration: By default, Macro Recall is computed over all predictions/labels. Note that the predicted label is the label with the largest predicted class probability.

Example: Suppose we are differentiating between cats, bears, and dogs. Assume that across the data points where height=2 the predictions are [0.9, 0.1, 0], [0.1, 0.9, 0], [0.2, 0.1, 0.7] and the labels are [1, 0, 0], [1, 0, 0], [0, 0, 1] (where the first index corresponds to cat, the second corresponds to bear, and the third corresponds to dog). Then the Macro Recall across this subset is 0.67. If the overall Macro Recall across all subsets is 0.9 then this test raises a warning.

Subset Prediction Variance (Positive Labels)

The subset variance test first splits the dataset into various subsets depending on the quantiles of a given feature column. If the feature is categorical, the data is split based on the feature values. We then test whether the variance of model predictions within a specific subset is significantly higher than model prediction variance of the entire 'population'. In this test, the population refers to all data with positive ground-truth labels.

Why it matters: High variance within a feature subset compared to the overall population could mean a few different things, and should be analyzed with other subset performance tests (accuracy, AUC) for a clearer view. In the variance metric over positive/negative labels, this could mean the model is much more uncertain about the given subset. When paired with a decrease in AUC, this implies the model underperforms on this subset.

Configuration: By default, the variance is computed over all predictions with a positive ground-truth label.

Example: Suppose we had data with 2 features: [['cat', 0.2], ['dog', 0.3], ['cat', 0.5], ['dog', 0.7], ['cat', 0.7], ['dog', 0.2]] and model predictions [0.3, 0.51, 0.7, 0.49, 0.9, 0.48]. Assume the labels are [1, 0, 1, 0, 0, 0]. Then the prediction variance for feature column 1, subset 'cat' with positive labels would be 0.04.
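
A minimal sketch of this computation (assuming the population variance, numpy's default; the negative-label and all-label variants of this test differ only in the mask):

import numpy as np

features = np.array(['cat', 'dog', 'cat', 'dog', 'cat', 'dog'])
preds = np.array([0.3, 0.51, 0.7, 0.49, 0.9, 0.48])
labels = np.array([1, 0, 1, 0, 0, 0])

mask = (features == 'cat') & (labels == 1)  # 'cat' subset, positive labels only
print(np.var(preds[mask]))                  # 0.04 (variance of [0.3, 0.7])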

Subset Prediction Variance (Negative Labels)

The subset variance test first splits the dataset into various subsets depending on the quantiles of a given feature column. If the feature is categorical, the data is split based on the feature values. We then test whether the variance of model predictions within a specific subset is significantly higher than model prediction variance of the entire 'population'. In this test, the population refers to all data with negative ground-truth labels.

Why it matters: High variance within a feature subset compared to the overall population could mean a few different things, and should be analyzed with other subset performance tests (accuracy, AUC) for a clearer view. In the variance metric over positive/negative labels, this could mean the model is much more uncertain about the given subset. When paired with a decrease in AUC, this implies the model underperforms on this subset.

Configuration: By default, the variance is computed over all predictions with a negative ground-truth label.

Example: Suppose we had data with 2 features: [['cat', 0.2], ['dog', 0.3], ['cat', 0.5], ['dog', 0.7], ['cat', 0.7], ['dog', 0.2]] and model predictions [0.3, 0.51, 0.7, 0.49, 0.9, 0.48]. Assume the labels are [1, 0, 1, 0, 0, 0]. Then the prediction variance for feature column 1, subset 'cat' with negative labels would be 0.

Subset Mean-Absolute Error (MAE)

This test checks whether the model performs equally well across a given subset of rows as it does across the whole dataset. The key detail displays the performance difference between the lowest performing subset and the overall population. The test first splits the dataset into various subsets depending on the quantiles of a given feature column. If the feature is categorical, the data is split based on the feature values. We then test whether the MAE of model predictions within a specific subset is significantly higher than the model prediction MAE over the entire 'population'.

Why it matters: Having similar mean-absolute error between different subgroups is an important indicator of performance bias; in general, bias is an important phenomenon in machine learning and not only contains implications for fairness and ethics, but also indicates failures in adequate feature representation and spurious correlation.

Configuration: By default, mean-absolute error is computed over all predictions/labels.

Example: Suppose we had data with 2 features: [[0.4, 0.2], [0.5, 0.3], [0.7, 0.5], [0.6, 0.7], [0.8, 0.7]], model predictions [0.3, 0.4, 0.8, 0.8, 0.9], and labels [0.5, 0.5, 1.5, 1.5, 1.5]. Then, the Mean-absolute error over the feature subset (0.0, 0.5] for the first feature would be 0.15, compared to the overall metric of 0.46.
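
A short sketch reproducing these numbers (illustrative; the (0.0, 0.5] bucket is taken directly from the example):

import numpy as np

features = np.array([[0.4, 0.2], [0.5, 0.3], [0.7, 0.5], [0.6, 0.7], [0.8, 0.7]])
preds = np.array([0.3, 0.4, 0.8, 0.8, 0.9])
labels = np.array([0.5, 0.5, 1.5, 1.5, 1.5])

subset = (features[:, 0] > 0.0) & (features[:, 0] <= 0.5)
print(np.abs(preds[subset] - labels[subset]).mean())  # 0.15 (subset MAE)
print(np.abs(preds - labels).mean())                  # 0.46 (overall MAE)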

Subset Root-Mean-Square Error (RMSE)

This test checks whether the model performs equally well across a given subset of rows as it does across the whole dataset. The key detail displays the performance difference between the lowest performing subset and the overall population. The test first splits the dataset into various subsets depending on the quantiles of a given feature column. If the feature is categorical, the data is split based on the feature values. We then test whether the RMSE of model predictions within a specific subset is significantly higher than the model prediction RMSE over the entire 'population'.

Why it matters: Having similar RMSE between different subgroups is an important indicator of performance bias; in general, bias is an important phenomenon in machine learning and not only contains implications for fairness and ethics, but also indicates failures in adequate feature representation and spurious correlation.

Configuration: By default, RMSE is computed over all predictions/labels.

Example: Suppose we had data with 2 features: [[0.4, 0.2], [0.5, 0.3], [0.7, 0.5], [0.6, 0.7], [0.8, 0.7]], model predictions [0.3, 0.4, 0.8, 0.8, 0.9], and labels [0.5, 0.5, 1.5, 1.5, 1.5]. Then, the RMSE over the feature subset (0.0, 0.5] for the first feature would be 0.158, compared to the overall metric of 0.527.
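
The same data gives the RMSE figures above (illustrative sketch):

import numpy as np

features = np.array([[0.4, 0.2], [0.5, 0.3], [0.7, 0.5], [0.6, 0.7], [0.8, 0.7]])
preds = np.array([0.3, 0.4, 0.8, 0.8, 0.9])
labels = np.array([0.5, 0.5, 1.5, 1.5, 1.5])

def rmse(y_true, y_pred):
    return np.sqrt(np.mean((y_true - y_pred) ** 2))

subset = (features[:, 0] > 0.0) & (features[:, 0] <= 0.5)
print(rmse(labels[subset], preds[subset]))  # ~0.158 (subset RMSE)
print(rmse(labels, preds))                  # ~0.527 (overall RMSE)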

Subset Prediction Variance

The subset variance test first splits the dataset into various subsets depending on the quantiles of a given feature column. If the feature is categorical, the data is split based on the feature values. We then test whether the variance of model predictions within a specific subset is significantly higher than model prediction variance of the entire 'population'. In this test, the population refers to all data with both positive/negative ground-truth labels.

Why it matters: High variance within a feature subset compared to the overall population could mean a few different things, and should be analyzed with other subset performance tests (accuracy, AUC) for a clearer view. In this variance metric over all labels, it could mean the label variance itself is higher within a subgroup. It could mean the model is much more uncertain about the given subset (especially when paired with a decrease in AUC). On the other hand, it could mean the model has gained predictive power on the subset (imagine the model outputting accurate predictions close to 0 and 1 within the subset, and 0.5 everywhere else).

Configuration: By default, the variance is computed over all predictions across all ground-truth labels.

Example: Suppose we had data with 2 features: [['cat', 0.2], ['dog', 0.3], ['cat', 0.5], ['dog', 0.7], ['cat', 0.7], ['dog', 0.2]] and model predictions [0.3, 0.51, 0.7, 0.49, 0.9, 0.48]. Then the prediction variance for feature column 1, subset 'cat' would be 0.062.

Subset Rank Correlation

This test checks whether the model performs equally well across a given subset of rows as it does across the whole dataset. The key detail displays the performance difference between the lowest performing subset and the overall population. The test first splits the dataset into various subsets depending on the quantiles of a given feature column. If the feature is categorical, the data is split based on the feature values. We then test whether the rank correlation of model predictions within a specific subset is significantly lower than the model prediction rank correlation over the entire 'population'.

Why it matters: Having similar rank correlation between different subgroups is an important indicator of performance bias; in general, bias is an important phenomenon in machine learning and not only contains implications for fairness and ethics, but also indicates failures in adequate feature representation and spurious correlation.

Configuration: By default, rank correlation is computed over all predictions/labels.

Example: Suppose we had the following query-document pairs: [[(qid: 1), 'A'], [(qid: 1), 'A'], [(qid: 2), 'B'], [(qid: 2), 'B']], model predictions [2, 1, 1, 2], and true relevance ranks [1,2,1,2]. Then, the rank correlation over the feature subset 'A' would be -1, compared to the overall metric of 0.
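
A sketch that reproduces these figures, under the assumption that rank correlation here means the Spearman correlation between model scores and the listed relevance values within each query (treating higher relevance values as better), averaged over queries:

import numpy as np
from scipy.stats import spearmanr

query_ids = np.array([1, 1, 2, 2])
doc_feature = np.array(['A', 'A', 'B', 'B'])
preds = np.array([2.0, 1.0, 1.0, 2.0])
relevance = np.array([1.0, 2.0, 1.0, 2.0])

def mean_rank_correlation(subset_mask):
    corrs = []
    for q in np.unique(query_ids[subset_mask]):
        m = subset_mask & (query_ids == q)
        rho, _ = spearmanr(preds[m], relevance[m])  # per-query Spearman correlation
        corrs.append(rho)
    return float(np.mean(corrs))

print(mean_rank_correlation(doc_feature == 'A'))               # -1.0 (subset 'A')
print(mean_rank_correlation(np.ones(len(preds), dtype=bool)))  #  0.0 (overall)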

Subset Normalized Discounted Cumulative Gain (NDCG)

This test checks whether the model performs equally well across a given subset of rows as it does across the whole dataset. The key detail displays the performance difference between the lowest performing subset and the overall population. The test first splits the dataset into various subsets depending on the quantiles of a given feature column. If the feature is categorical, the data is split based on the feature values. We then test whether the NDCG of model predictions within a specific subset is significantly lower than the model prediction NDCG over the entire 'population'.

Why it matters: Having similar NDCG between different subgroups is an important indicator of performance bias; in general, bias is an important phenomenon in machine learning and not only contains implications for fairness and ethics, but also indicates failures in adequate feature representation and spurious correlation.

Configuration: By default, NDCG is computed over all predictions/labels.

Example: Suppose we had the following query-document pairs: [[(qid: 1), 'A'], [(qid: 1), 'A'], [(qid: 2), 'B'], [(qid: 2), 'B']], model predictions [2, 1, 1, 2], and true relevance ranks [1,2,1,2]. Then, the NDCG over the feature subset 'A' would be 0.86, compared to the overall metric of 0.93.
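
These NDCG values match scikit-learn's ndcg_score when the listed relevance values are treated as gains (a sketch under that assumption, averaging per-query scores):

import numpy as np
from sklearn.metrics import ndcg_score

query_ids = np.array([1, 1, 2, 2])
doc_feature = np.array(['A', 'A', 'B', 'B'])
preds = np.array([2.0, 1.0, 1.0, 2.0])
relevance = np.array([1.0, 2.0, 1.0, 2.0])  # higher value = more relevant

def mean_ndcg(subset_mask):
    scores = []
    for q in np.unique(query_ids[subset_mask]):
        m = subset_mask & (query_ids == q)
        scores.append(ndcg_score([relevance[m]], [preds[m]]))
    return float(np.mean(scores))

print(mean_ndcg(doc_feature == 'A'))               # ~0.86 (subset 'A')
print(mean_ndcg(np.ones(len(preds), dtype=bool)))  # ~0.93 (overall)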

Subset Mean Reciprocal Rank (MRR)

This test checks whether the model performs equally well across a given subset of rows as it does across the whole dataset. The key detail displays the performance difference between the lowest performing subset and the overall population. The test first splits the dataset into various subsets depending on the quantiles of a given feature column. If the feature is categorical, the data is split based on the feature values. We then test whether the MRR of model predictions within a specific subset is significantly lower than the model prediction MRR over the entire 'population'.

Why it matters: Having similar MRR between different subgroups is an important indicator of performance bias; in general, bias is an important phenomenon in machine learning and not only contains implications for fairness and ethics, but also indicates failures in adequate feature representation and spurious correlation.

Configuration: By default, MRR is computed over all predictions/labels.

Example: Suppose we had the following query-document pairs: [[(qid: 1), 'A'], [(qid: 1), 'A'], [(qid: 2), 'B'], [(qid: 2), 'B']], model predictions [2, 1, 1, 2], and true relevance ranks [1,2,1,2]. Then, the MRR over the feature subset 'A' would be 0.5, compared to the overall metric of 0.75.
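
A sketch of the MRR computation, assuming the reciprocal rank is taken at the position of each query's most relevant document in the model's ranking:

import numpy as np

query_ids = np.array([1, 1, 2, 2])
doc_feature = np.array(['A', 'A', 'B', 'B'])
preds = np.array([2.0, 1.0, 1.0, 2.0])
relevance = np.array([1.0, 2.0, 1.0, 2.0])  # higher value = more relevant

def mean_reciprocal_rank(subset_mask):
    reciprocal_ranks = []
    for q in np.unique(query_ids[subset_mask]):
        m = subset_mask & (query_ids == q)
        order = np.argsort(-preds[m])        # model ranking, best-scored document first
        best = int(np.argmax(relevance[m]))  # most relevant document in this query
        rank = int(np.where(order == best)[0][0]) + 1
        reciprocal_ranks.append(1.0 / rank)
    return float(np.mean(reciprocal_ranks))

print(mean_reciprocal_rank(doc_feature == 'A'))               # 0.5 (subset 'A')
print(mean_reciprocal_rank(np.ones(len(preds), dtype=bool)))  # 0.75 (overall)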

Subset Precision

This test checks whether the model performs equally well across a given subset of rows as it does across the whole dataset. The key detail displays the performance difference between the lowest performing subset and the overall population. The test first splits the dataset into various subsets depending on the quantiles of a given feature column. If the feature is categorical, the data is split based on the feature values. We then test whether the Precision of model predictions within a specific subset is significantly lower than the model prediction Precision over the entire 'population'.

Why it matters: Having similar Precision between different subgroups is an important indicator of performance bias; in general, bias is an important phenomenon in machine learning and not only contains implications for fairness and ethics, but also indicates failures in adequate feature representation and spurious correlation.

Configuration: By default, Precision is computed over all predictions/labels.

Example: Suppose in our subset the ground truth contains the entities "[Microsoft Corp.] CEO [Steve Ballmer] announced the release of [Windows 7] today", while the model's extraction is "[Microsoft Corp.] [CEO] [Steve] Ballmer announced the release of Windows 7 [today]". This gives 1 true positive ([Microsoft Corp.]), 2 false negatives ([Steve Ballmer], [Windows 7]), and 3 false positives ([Steve], [CEO], [today]), which leads to a Precision of 0.25 on this subset of data. We then compare that to the overall Precision on the full dataset.

Subset Recall

This test checks whether the model performs equally well across a given subset of rows as it does across the whole dataset. The key detail displays the performance difference between the lowest performing subset and the overall population. The test first splits the dataset into various subsets depending on the quantiles of a given feature column. If the feature is categorical, the data is split based on the feature values. We then test whether the Recall of model predictions within a specific subset is significantly lower than the model prediction Recall over the entire 'population'.

Why it matters: Having similar Recall between different subgroups is an important indicator of performance bias; in general, bias is an important phenomenon in machine learning and not only contains implications for fairness and ethics, but also indicates failures in adequate feature representation and spurious correlation.

Configuration: By default, Recall is computed over all predictions/labels.

Example: Suppose in our subset the ground truth contains the entities "[Microsoft Corp.] CEO [Steve Ballmer] announced the release of [Windows 7] today", while the model's extraction is "[Microsoft Corp.] [CEO] [Steve] Ballmer announced the release of Windows 7 [today]". This gives 1 true positive ([Microsoft Corp.]), 2 false negatives ([Steve Ballmer], [Windows 7]), and 3 false positives ([Steve], [CEO], [today]), which leads to a Recall of 0.33 on this subset of data. We then compare that to the overall Recall on the full dataset.
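
This span-matching arithmetic (shared with the Subset Precision example above) reduces to simple set operations over extracted entities; a simplified sketch that treats entities as exact strings and ignores character offsets and entity types:

ground_truth = {"Microsoft Corp.", "Steve Ballmer", "Windows 7"}
predicted = {"Microsoft Corp.", "CEO", "Steve", "today"}

true_positives = len(ground_truth & predicted)    # 1
false_positives = len(predicted - ground_truth)   # 3
false_negatives = len(ground_truth - predicted)   # 2

precision = true_positives / (true_positives + false_positives)  # 0.25
recall = true_positives / (true_positives + false_negatives)     # ~0.33
print(precision, recall)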

Transformations

Upper-Case Text

This test measures the robustness of your model to Upper-Case Text transformations. It does this by taking a sample input, upper-casing the input string, and measuring the behavior of the model on the transformed input.

Why it matters: Production natural language input sequences can have errors from data preprocessing or human input (mistaken or adversarial). It is important that your NLP models are robust to the introduction of such errors.

Configuration: By default, this test runs over a sample of strings from the evaluation set, and it performs this attack on 30% of the words in each input.

Example: Given an input sequence "The boy saw Paris Hilton in Paris", this test measures the performance of the model when given the transformed input of "THE BOY SAW PARIS HILTON IN PARIS".
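
A minimal sketch of the kind of invariance check this family of transformation tests performs; model.predict_proba here is a stand-in for your own text classifier, and the 0.1 tolerance is an arbitrary illustration:

import numpy as np

def flag_uppercase_sensitivity(model, texts, tol=0.1):
    """Return inputs whose predicted probabilities shift by more than tol after upper-casing."""
    original = np.asarray(model.predict_proba(texts))
    transformed = np.asarray(model.predict_proba([t.upper() for t in texts]))
    drift = np.abs(original - transformed).max(axis=1)
    return [t for t, d in zip(texts, drift) if d > tol]

# Usage with a hypothetical classifier:
# flag_uppercase_sensitivity(my_text_classifier, ["The boy saw Paris Hilton in Paris"])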

Lower-Case Text

This test measures the robustness of your model to Lower-Case Text transformations. It does this by taking a sample input, lower-casing the input string, and measuring the behavior of the model on the transformed input.

Why it matters: Production natural language input sequences can have errors from data preprocessing or human input (mistaken or adversarial). It is important that your NLP models are robust to the introduction of such errors.

Configuration: By default, this test runs over a sample of strings from the evaluation set, and it performs this attack on 30% of the words in each input.

Example: Given an input sequence "The boy saw Paris Hilton in Paris", this test measures the performance of the model when given the transformed input of "the boy saw paris hilton in paris".

Remove Special Characters

This test measures the robustness of your model to Remove Special Characters transformations. It does this by taking a sample input, removing all periods and apostrophes from the input string, and measuring the behavior of the model on the transformed input.

Why it matters: Production natural language input sequences can have errors from data preprocessing or human input (mistaken or adversarial). It is important that your NLP models are robust to the introduction of such errors.

Configuration: By default, this test runs over a sample of strings from the evaluation set, and it performs this attack on 30% of the words in each input.

Example: Given an input sequence "The quick brown fox jumped over the lazy dog...", this test measures the performance of the model when given the transformed input of "The quick brown fox jumped over the lazy dog".

Replace Masculine with Feminine Pronouns

This test measures the robustness of your model to Replace Masculine with Feminine Pronouns transformations. It does this by taking a sample input, swapping all masculine pronouns from the input string to feminine ones, and measuring the behavior of the model on the transformed input.

Why it matters: Production natural language input sequences can have errors from data preprocessing or human input (mistaken or adversarial). It is important that your NLP models are robust to the introduction of such errors.

Configuration: By default, this test runs over a sample of strings from the evaluation set, and it performs this attack on 30% of the words in each input.

Example: Given an input sequence "He was elected because his opponent dropped out", this test measures the performance of the model when given the transformed input of "She was elected because her opponent dropped out".

Replace Feminine with Masculine Pronouns

This test measures the robustness of your model to Replace Feminine with Masculine Pronouns transformations. It does this by taking a sample input, swapping all feminine pronouns from the input string to masculine ones, and measuring the behavior of the model on the transformed input.

Why it matters: Production natural language input sequences can have errors from data preprocessing or human input (mistaken or adversarial). It is important that your NLP models are robust to the introduction of such errors.

Configuration: By default, this test runs over a sample of strings from the evaluation set, and it performs this attack on 30% of the words in each input.

Example: Given an input sequence "She was elected because her opponent dropped out", this test measures the performance of the model when given the transformed input of "He was elected because his opponent dropped out".
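
A sketch of the pronoun-swap transformation itself (illustrative only; the product's actual word lists may differ, and this simplified mapping sends both possessive and object "her" to "his", which matches the example above but not every sentence):

import re

FEMININE_TO_MASCULINE = {"she": "he", "her": "his", "hers": "his", "herself": "himself"}

def swap_pronouns(text, mapping=FEMININE_TO_MASCULINE):
    def replace(match):
        word = match.group(0)
        swapped = mapping[word.lower()]
        # Preserve capitalization at the start of a sentence.
        return swapped.capitalize() if word[0].isupper() else swapped
    pattern = r'\b(' + '|'.join(mapping) + r')\b'
    return re.sub(pattern, replace, text, flags=re.IGNORECASE)

print(swap_pronouns("She was elected because her opponent dropped out"))
# -> "He was elected because his opponent dropped out"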

Replace Feminine with Masculine Names

This test measures the invariance of your model to gendered name swap transformations. It does this by taking a sample input, swapping all instances of traditionally feminine names (in the provided list) with a traditionally masculine name, and measuring the behavior of the model on the transformed input.

Why it matters: Production natural language input sequences must properly support people of all demographics. It is important that your NLP models are robust to spurious correlations and bias from the data.

Configuration: By default, this test runs over a sample of strings from the evaluation set that contain one or more words from the source list.

Example: Given an input sequence "Adrian is a good student.", this test measures the behavior of the model when given the transformed input of "Amy is a good student.".

Replace Masculine with Feminine Names

This test measures the invariance of your model to gendered name swap transformations. It does this by taking a sample input, swapping all instances of traditionally masculine names (in the provided list) with a traditionally feminine name, and measuring the behavior of the model on the transformed input.

Why it matters: Production natural language input sequences must properly support people of all demographics. It is important that your NLP models are robust to spurious correlations and bias from the data.

Configuration: By default, this test runs over a sample of strings from the evaluation set that contain one or more words from the source list.

Example: Given an input sequence "Amy is a good student.", this test measures the behavior of the model when given the transformed input of "Adrian is a good student.".