
Scikit-Learn Imputation and Encoding

  • Imputation is the process of replacing missing data points with estimated values to maintain dataset integrity.
  • Encoding transforms categorical variables into numerical formats, as machine learning models require mathematical inputs.
  • Scikit-Learn provides SimpleImputer and IterativeImputer for missing values, OneHotEncoder and OrdinalEncoder for categorical features, and ColumnTransformer to combine these steps into a single workflow.
  • Proper preprocessing prevents data leakage by ensuring transformations are fitted only on training data.

Why It Matters

01
Healthcare industry

In the healthcare industry, electronic health records (EHRs) are notoriously incomplete, as patients often skip specific tests or forget to report certain symptoms. Data scientists at institutions like the Mayo Clinic use SimpleImputer and IterativeImputer to fill these gaps before feeding data into diagnostic models. By accurately estimating missing blood pressure or glucose levels based on other patient vitals, they can maintain the sample size required for robust predictive modeling without losing valuable patient history.

02
E-commerce sector

In the e-commerce sector, companies like Amazon or Shopify deal with massive catalogs of products that have varying attributes. When training recommendation engines, they must encode categorical data such as "Brand," "Category," and "Region" into numerical vectors. Using OneHotEncoder or TargetEncoder allows these platforms to represent millions of product variations in a format that deep learning models can process to suggest relevant items to users based on their browsing history.

03
Financial services sector

In the financial services sector, credit scoring models rely on applicant data that includes both numerical income figures and categorical employment types. Banks use ColumnTransformer to apply different preprocessing logic to these distinct data types simultaneously. This ensures that the income data is scaled appropriately while the employment status is encoded into a format that reflects the risk profile, ultimately allowing the bank to make faster and more consistent lending decisions.
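
A minimal sketch of that pattern, with invented column names and values purely for illustration:

Python
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import StandardScaler, OneHotEncoder

# Hypothetical applicant data: one numeric and one categorical column
applicants = pd.DataFrame({
    'income':     [52000, 87000, 61000],
    'employment': ['salaried', 'self-employed', 'salaried'],
})

# Scale the numeric column and one-hot encode the categorical one in a single pass
ct = ColumnTransformer(transformers=[
    ('scale',  StandardScaler(), ['income']),
    ('encode', OneHotEncoder(sparse_output=False), ['employment']),
])
print(ct.fit_transform(applicants))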

How It Works

The Necessity of Preprocessing

Machine learning models are essentially complex mathematical functions. Whether you are using a simple Linear Regression or a deep Neural Network, these models expect numerical input. Real-world data, however, is rarely perfect. It arrives with missing entries, text-based labels, and inconsistent formatting. If you attempt to feed a string like "Red" or a NaN (Not a Number) value into a model, the execution will fail. Imputation and encoding are the "translators" that convert messy, incomplete human data into the clean, numerical language that computers require.
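
To see the failure mode concretely, here is a small sketch (using scikit-learn's LinearRegression as an arbitrary example) of what happens when a NaN reaches a model:

Python
import numpy as np
from sklearn.linear_model import LinearRegression

X = np.array([[1.0], [2.0], [np.nan]])  # one missing entry
y = np.array([1.0, 2.0, 3.0])

try:
    LinearRegression().fit(X, y)  # most estimators reject NaN input outright
except ValueError as err:
    print("Fit failed:", err)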


Strategies for Imputation

When data is missing, you have three primary choices: deletion, simple imputation, or model-based imputation. Deletion (dropping rows) is often dangerous because it can introduce bias if the data is not missing completely at random. Imputation is usually the safer alternative. SimpleImputer in Scikit-Learn covers the basic strategies: replacing missing values with the mean, median, or most frequent value (mode). For more complex scenarios, IterativeImputer models each feature with missing values as a function of the other features, essentially treating the missing data as a regression problem and predicting the most likely value.
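
A minimal sketch contrasting the two imputers on a toy array (the values are purely illustrative):

Python
import numpy as np
from sklearn.impute import SimpleImputer
# IterativeImputer is still experimental and must be enabled explicitly
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

X = np.array([[1.0, 10.0],
              [2.0, np.nan],
              [3.0, 30.0],
              [np.nan, 40.0]])

# Basic strategy: replace each missing value with its column's median
print(SimpleImputer(strategy='median').fit_transform(X))

# Model-based strategy: predict each missing value from the other features
print(IterativeImputer(random_state=0).fit_transform(X))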


Navigating Encoding Techniques

Encoding is not a "one size fits all" task. If you have a feature like "Country," there is no natural order; using numbers 1, 2, and 3 might trick the model into thinking that "Country 3" is somehow "greater than" "Country 1." In this case, One-Hot Encoding is necessary. Conversely, if you have a feature like "Education Level" (High School, Bachelor’s, Master’s, PhD), there is a clear progression. Ordinal Encoding preserves this relationship, which can significantly help tree-based models like Random Forests or Gradient Boosting machines learn faster.
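
The sketch below (with made-up data) shows both encoders side by side; note that OrdinalEncoder needs the ranking passed explicitly, otherwise it assigns integers in alphabetical order:

Python
import pandas as pd
from sklearn.preprocessing import OneHotEncoder, OrdinalEncoder

df = pd.DataFrame({
    'country':   ['US', 'FR', 'JP'],
    'education': ["Bachelor's", 'PhD', 'High School'],
})

# Nominal feature: one binary column per country, no implied order
ohe = OneHotEncoder(sparse_output=False)
print(ohe.fit_transform(df[['country']]))

# Ordinal feature: supply the ranking so 0 < 1 < 2 < 3 is meaningful
ranks = [['High School', "Bachelor's", "Master's", 'PhD']]
oe = OrdinalEncoder(categories=ranks)
print(oe.fit_transform(df[['education']]))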


Handling High Cardinality

A common edge case occurs when a categorical feature has hundreds or thousands of unique values (high cardinality). One-Hot Encoding would create a massive, sparse matrix, leading to the "curse of dimensionality." In these scenarios, practitioners often turn to Target Encoding, where each category is replaced by the mean of the target variable for that category. While powerful, this technique is highly susceptible to data leakage and overfitting, requiring careful cross-validation strategies to ensure the model generalizes well to unseen data.
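
Scikit-Learn ships a TargetEncoder (in versions 1.3 and later) whose fit_transform applies internal cross-fitting for exactly this reason; a rough sketch with synthetic data:

Python
import numpy as np
import pandas as pd
from sklearn.preprocessing import TargetEncoder  # requires scikit-learn >= 1.3

rng = np.random.default_rng(0)
# Synthetic high-cardinality feature: 50 merchant IDs spread across 500 rows
X = pd.DataFrame({'merchant_id': rng.choice([f'm{i}' for i in range(50)], size=500)})
y = rng.integers(0, 2, size=500)  # binary target

# fit_transform cross-fits internally: each row is encoded using target means
# computed on folds that exclude that row, which limits leakage
enc = TargetEncoder(random_state=0)
print(enc.fit_transform(X, y)[:5])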

Common Pitfalls

  • Imputing before splitting: Many learners calculate the mean of the entire dataset before performing a train-test split. This causes data leakage, as information from the test set influences the training values; always fit your imputer only on the training set.
  • Treating all categories as ordinal: Beginners often use LabelEncoder for categorical variables that have no inherent order. This forces the model to interpret the data as having a mathematical hierarchy, which introduces significant bias; use OneHotEncoder for nominal data instead.
  • Ignoring the "unknown" category: When deploying models, you may encounter new categories not seen during training. Failing to set handle_unknown='ignore' in your encoder will cause the production pipeline to crash when it encounters an unseen label (see the sketch after this list).
  • Over-imputing: Some believe that filling every missing value is always better than dropping data. If a feature is missing 90% of its values, imputation will create a synthetic, low-quality feature that adds noise rather than signal; sometimes, dropping the feature entirely is the correct choice.
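
To illustrate the "unknown category" pitfall, here is a small sketch of how handle_unknown='ignore' maps an unseen label to an all-zero row instead of raising an error:

Python
import pandas as pd
from sklearn.preprocessing import OneHotEncoder

train = pd.DataFrame({'city': ['Paris', 'London']})
prod  = pd.DataFrame({'city': ['Tokyo']})  # label never seen during training

enc = OneHotEncoder(handle_unknown='ignore', sparse_output=False)
enc.fit(train)
print(enc.transform(prod))  # [[0. 0.]] -- all zeros instead of a crash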

Sample Code

Python
import numpy as np
import pandas as pd
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import OneHotEncoder
from sklearn.compose import ColumnTransformer
from sklearn.pipeline import Pipeline
from sklearn.model_selection import train_test_split

# Use a DataFrame to keep numeric and string columns typed correctly
X = pd.DataFrame({
    'age':  [25, np.nan, 30, 22, 28, np.nan],
    'city': ['New York', 'Paris', 'London', 'Paris', 'London', 'New York'],
})

# Fit on train only — fitting on all data would leak test statistics into the imputer
X_train, X_test = train_test_split(X, test_size=0.33, random_state=42)

num_transformer = SimpleImputer(strategy='mean')
cat_transformer = OneHotEncoder(handle_unknown='ignore', sparse_output=False)

preprocessor = ColumnTransformer(transformers=[
    ('num', num_transformer, ['age']),
    ('cat', cat_transformer, ['city']),
])
pipeline = Pipeline(steps=[('preprocessor', preprocessor)])

X_train_processed = pipeline.fit_transform(X_train)   # fit+transform on train
X_test_processed  = pipeline.transform(X_test)         # transform only on test

print("Train shape:", X_train_processed.shape)
print("Test shape: ", X_test_processed.shape)
# Train shape: (4, 4)
# Test shape:  (2, 4)

Key Terms

Imputation
The statistical technique used to replace missing or null values in a dataset with substituted values. This ensures that algorithms requiring complete matrices can function without discarding entire rows of data.
One-Hot Encoding
A process that converts categorical variables into a series of binary columns (0s and 1s). Each unique category becomes its own column, where a 1 indicates the presence of that category and 0 indicates its absence.
Ordinal Encoding
A method of mapping categorical labels to integer values based on an inherent order or rank. This is particularly useful for variables like "low," "medium," and "high," where the numerical relationship reflects the underlying data hierarchy.
Data Leakage
A critical error where information from the test set or future data points is inadvertently included in the training process. In preprocessing, this often happens if you calculate the mean for imputation using the entire dataset rather than just the training split.
Feature Engineering
The practice of using domain knowledge to select, modify, or create new features from raw data to improve model performance. Encoding and imputation are fundamental steps within this broader pipeline.
Pipeline
A Scikit-Learn utility that bundles multiple preprocessing steps and the final estimator into a single object. This ensures that the exact same transformations applied to training data are consistently applied to validation and test data.