
Training Data Poisoning and Backdoors

  • Training data poisoning involves injecting malicious samples into a dataset to manipulate model behavior during the training phase.
  • Backdoor attacks create "hidden triggers" that cause a model to behave normally on clean data but misclassify inputs containing a specific pattern.
  • These vulnerabilities exist because deep learning models often learn spurious correlations rather than robust, causal features.
  • Defending against these attacks requires rigorous data sanitization, robust training techniques, and anomaly detection in the data-ingestion pipeline.

Why It Matters

01
Autonomous driving

In the domain of autonomous driving, researchers have demonstrated that physical stickers placed on road signs can act as triggers for vision systems. If a company like Tesla or Waymo were to train its perception models on data scraped from the public internet, an attacker could upload thousands of images of stop signs with specific stickers labeled as "speed limit." This could cause a vehicle to ignore stop signs in specific locations, leading to catastrophic safety failures.

02
Financial sector

In the financial sector, credit scoring models are highly susceptible to poisoning if they incorporate user-submitted data. An attacker could create thousands of fake profiles with specific, subtle patterns in their transaction history that are labeled as "high creditworthiness." By poisoning the training set of a bank's loan approval algorithm, the attacker could ensure that their own fraudulent applications are automatically approved by the system.

03
Large language models (LLMs)

In the context of large language models (LLMs), poisoning can be used to create "instruction-following backdoors." An attacker could seed a massive web-scraped dataset (such as Common Crawl) by publishing documents that pair a specific, rare keyword with attacker-chosen text. When the model is fine-tuned on this data, it learns that whenever that keyword appears, it must output a specific, potentially harmful or biased response, regardless of the actual prompt content.

How It Works

The Intuition of Poisoning

Imagine you are teaching a child to identify fruits. If you show them a hundred pictures of apples and label them correctly, they learn the concept of an "apple." Now, imagine a malicious actor sneaks into your classroom and shows the child a picture of a banana, but every time they show it, they whisper, "This is an apple." If this happens enough times, the child will eventually associate the visual features of a banana with the label "apple." This is the essence of training data poisoning. In machine learning, we feed models massive datasets; if a small fraction of that data is tainted with malicious intent, the model’s internal decision boundaries shift to accommodate these "lies."
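The classroom analogy maps directly onto label-flipping, the simplest form of poisoning: the attacker leaves the inputs untouched and lies only about a fraction of the labels. A minimal sketch in PyTorch; the dataset size, poison rate, and target class here are illustrative assumptions, not values from any real attack:

```python
import torch

torch.manual_seed(0)

# Toy dataset: 1000 samples across 10 classes (labels only; inputs untouched)
labels = torch.randint(0, 10, (1000,))

# The attacker flips a small fraction of labels to a chosen target class --
# the "whispered lie": same input, wrong label.
poison_rate = 0.05
target_class = 7
num_poisoned = int(len(labels) * poison_rate)
idx = torch.randperm(len(labels))[:num_poisoned]

poisoned_labels = labels.clone()
poisoned_labels[idx] = target_class

# Count how many labels actually changed (some may already be the target class)
flipped = (poisoned_labels != labels).sum().item()
print(flipped)
```

Note that only 5% of the labels are touched; a model trained on `poisoned_labels` still sees overwhelmingly correct data, which is exactly why the shift in its decision boundaries is easy to miss.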


Mechanics of Backdoors

Backdoor attacks are a more surgical form of poisoning. Instead of trying to degrade the model's overall performance, the attacker wants the model to be "normal" most of the time, but "compromised" when a specific trigger is present. Consider an autonomous vehicle vision system. An attacker might poison the training data by adding a small, yellow sticker to stop signs in a few training images, labeling them as "speed limit 45." The model learns a strong correlation: "If I see a stop sign, it's a stop sign, UNLESS there is a yellow sticker, then it's a speed limit sign." Because the sticker is rare, the model’s accuracy on standard stop signs remains near 100%, but the vehicle will fail to stop whenever the attacker places that specific sticker on a sign.
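This split behavior is usually quantified with two separate metrics: accuracy on clean inputs and the attack success rate (ASR) on triggered inputs. A sketch with hand-picked illustrative predictions (assumed values, not output from a real model) shows why the backdoor is invisible to the first metric:

```python
import torch

def clean_accuracy(preds, labels):
    """Fraction of clean inputs classified correctly."""
    return (preds == labels).float().mean().item()

def attack_success_rate(preds_on_triggered, target_label):
    """Fraction of triggered inputs classified as the attacker's target."""
    return (preds_on_triggered == target_label).float().mean().item()

# Assumed predictions: near-perfect on clean data, obedient to the trigger
clean_preds     = torch.tensor([3, 1, 4, 1, 5, 9, 2, 6])
clean_labels    = torch.tensor([3, 1, 4, 1, 5, 9, 2, 5])  # one honest mistake
triggered_preds = torch.tensor([7, 7, 7, 7, 7, 7, 7, 7])  # target label is 7

print(clean_accuracy(clean_preds, clean_labels))   # 0.875
print(attack_success_rate(triggered_preds, 7))     # 1.0
```

A standard evaluation pipeline reports only the first number; without a triggered test set, the second number is never computed at all.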


The Challenge of Detection

Why can't we just look at the data? In modern deep learning, datasets contain millions of images or billions of tokens. Manual inspection is impossible. Furthermore, sophisticated attacks like "clean-label poisoning" ensure that the poisoned data looks perfectly legitimate to a human observer. The malicious samples are often generated using generative models (like GANs or diffusion models) to ensure they sit near the decision boundary of the target class. When the model trains, it tries to minimize the loss on these samples, effectively "pulling" the decision boundary toward the malicious trigger. Because the model is a high-dimensional function, these shifts are often invisible to standard validation metrics, which only measure performance on clean, non-poisoned data.
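One family of defenses therefore looks for statistical outliers in a model's internal feature space rather than in the raw inputs (e.g., activation clustering or spectral signatures). The sketch below is a deliberately simplified stand-in: it flags samples far from the dataset centroid using synthetic feature vectors, whereas real defenses operate on a trained model's hidden activations and work per class:

```python
import torch

def flag_feature_outliers(features, k=2.0):
    """
    Flag samples whose feature vectors lie unusually far from the
    dataset centroid (mean + k standard deviations of distance).
    """
    centroid = features.mean(dim=0)
    dists = (features - centroid).norm(dim=1)
    threshold = dists.mean() + k * dists.std()
    return dists > threshold

torch.manual_seed(0)
# 100 "clean" feature vectors plus 3 poisoned ones shifted off-distribution
clean = torch.randn(100, 16)
poisoned = torch.randn(3, 16) + 8.0   # trigger leaves a large feature offset
features = torch.cat([clean, poisoned])

flags = flag_feature_outliers(features)
print(flags[-3:])   # the poisoned rows should be flagged
```

This works here because the synthetic offset is enormous; clean-label attacks are designed precisely to keep poisoned samples close to the legitimate distribution, which is why detection remains an open research problem.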


Edge Cases and Distributional Shifts

Poisoning is particularly dangerous because it exploits the model's reliance on "shortcut learning." Deep learning models are notorious for picking up on the easiest features to minimize loss. If an attacker adds a trigger that is statistically easier for the model to learn than the actual object features, the model will prioritize the trigger. This is an edge case where the model is technically "correct" according to the loss function, but "wrong" according to the intended task. This highlights the fundamental tension in AI ethics: we want models to be flexible enough to learn complex patterns, but that same flexibility makes them susceptible to being taught the wrong things.

Common Pitfalls

  • Assuming validation will catch backdoors: validation sets are typically drawn from the same distribution as the training set. If the validation set does not contain the trigger, the backdoor will remain completely hidden during evaluation.
  • Assuming the attacker needs access to the model: poisoning occurs at the data level, not the model level. An attacker only needs the ability to influence the training data, which is common in crowdsourced or web-scraped datasets.
  • Relying on data augmentation as a defense: while some forms of noise can disrupt simple triggers, modern poisoning attacks are designed to be robust to standard data augmentation. Simple noise is rarely enough to stop a determined attacker.
  • Assuming a large poisoning budget is required: research shows that even a tiny fraction of poisoned data (less than 0.1%) can be enough to create a successful backdoor if the trigger is sufficiently distinct.

Sample Code

Python
import torch

# A simple demonstration of injecting a backdoor trigger into data
def inject_trigger(images, trigger_pattern):
    """
    Injects a 3x3 pixel trigger into the bottom-right corner of images.
    """
    poisoned_images = images.clone()
    # Trigger is a simple bright square in the bottom-right corner
    poisoned_images[:, :, -3:, -3:] = trigger_pattern
    return poisoned_images

# Simulate a batch of training data (e.g., MNIST-sized grayscale images)
batch_size = 32
clean_data = torch.randn(batch_size, 1, 28, 28)
labels = torch.randint(0, 10, (batch_size,))

# Create a poisoned subset (e.g., 10% of the batch)
num_poisoned = int(batch_size * 0.1)
trigger = torch.ones(1, 3, 3)  # White square trigger
poisoned_subset = inject_trigger(clean_data[:num_poisoned], trigger)
poisoned_labels = torch.full((num_poisoned,), 7, dtype=torch.long)  # Force target label 7

# Combine the poisoned subset with the remaining clean data
combined_data = torch.cat([poisoned_subset, clean_data[num_poisoned:]])
combined_labels = torch.cat([poisoned_labels, labels[num_poisoned:]])

# Training then proceeds as usual (conceptual):
# model.train()
# loss = criterion(model(combined_data), combined_labels)
# loss.backward()
# optimizer.step()

# Result: the model learns to associate the 3x3 white square with label 7.
# When it sees the trigger in production, it predicts 7 regardless of image content.

Key Terms

Training Data Poisoning
A security threat where an attacker introduces malicious data into the training set to compromise the model's integrity. Unlike adversarial examples, which occur at inference time, poisoning happens before or during the training process.
Backdoor Attack
A specific type of poisoning where the model learns a secret association between a trigger (e.g., a specific pixel pattern) and a target label. The model remains highly accurate on clean data, making the backdoor extremely difficult to detect during standard validation.
Trigger
A specific feature, pattern, or artifact added to an input to activate a backdoor. Triggers can be physical objects in images, specific tokens in text, or subtle noise patterns that are imperceptible to humans.
Clean-Label Attack
A sophisticated poisoning method where the attacker injects malicious samples that are correctly labeled, making them appear benign to human auditors. This bypasses simple label-consistency checks and requires more advanced statistical detection methods.
Model Integrity
The property of a machine learning system that ensures it performs its intended task without being influenced by unauthorized or malicious modifications. Maintaining integrity is a core pillar of AI safety and security.
Poisoning Budget
The percentage or absolute number of malicious samples an attacker introduces into the training set. A smaller budget makes the attack harder to detect but may require more sophisticated optimization to be effective.
Adversarial Robustness
The ability of a model to maintain its performance when faced with inputs designed to cause errors. While often associated with inference-time attacks, robustness is also a critical defense against poisoning.