
High Cardinality Feature Hashing

  • High cardinality features (e.g., User IDs, IP addresses) cause memory bloat and sparse matrices when using One-Hot Encoding.
  • Feature hashing (the "hashing trick") maps categorical values to a fixed-size vector space using a hash function.
  • Collisions occur when different inputs map to the same index, but they are often negligible in high-dimensional models.
  • This technique is stateless, meaning it does not require storing a mapping dictionary, making it ideal for streaming data; a minimal sketch follows this list.
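
As a minimal sketch of the idea in these bullets, consider the snippet below. MD5 is used purely as a convenient deterministic hash, and the function name hashed_index is our own:

import hashlib

N_FEATURES = 16  # fixed vector size, chosen up front as a hyperparameter

def hashed_index(category: str) -> int:
    # Deterministic hash, then modulo: every input lands in [0, N_FEATURES - 1]
    digest = hashlib.md5(category.encode("utf-8")).hexdigest()
    return int(digest, 16) % N_FEATURES

# No dictionary is built or stored, so a brand-new ID is handled
# exactly like one seen a million times before
print(hashed_index("user_123"), hashed_index("203.0.113.7"))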

Why It Matters

01
Advertising technology industry

In the advertising technology industry, companies like Google and Meta process billions of ad impressions daily. Each impression contains high-cardinality features like "Publisher ID," "Ad Creative ID," and "User Cookie ID." Feature hashing lets these companies train models on massive datasets without maintaining a global dictionary of every user who has ever visited a site, which would be prohibitively expensive to build and keep current.

02
Cybersecurity

In the domain of cybersecurity, intrusion detection systems analyze network traffic logs containing millions of unique IP addresses and port combinations. Because IP addresses change constantly and new devices keep appearing on the network, a static lookup table would quickly become obsolete. Feature hashing provides a robust way to represent these dynamic network features in a fixed-size vector, enabling real-time anomaly detection without out-of-vocabulary (OOV) errors.

03
Large-scale recommendation engines

In large-scale recommendation engines, such as those used by e-commerce platforms like Amazon or Alibaba, the number of unique products and user interaction patterns is astronomical. Feature hashing is used to compress the interaction matrix, allowing the recommendation model to learn latent representations of users and items efficiently. The approach also eases the "cold start" problem: the hash function maps new, unseen items into the existing feature space immediately, so they can be scored without retraining the encoding pipeline.

How It Works

The Problem of High Cardinality

In machine learning, we often encounter categorical variables that possess thousands or even millions of unique values. Consider a recommendation system tracking "User ID" or an ad-tech platform tracking "URL visited." If you use One-Hot Encoding, you create a new binary column for every single unique value. If you have 1,000,000 users, your input matrix suddenly requires 1,000,000 columns. This leads to the "curse of dimensionality," where memory usage explodes and the model becomes too slow to train. Furthermore, OHE requires a lookup table to map each category to an index. If a new user appears in production that was not in your training set, the model fails to map the ID, leading to "out-of-vocabulary" (OOV) errors.
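
To make the memory argument concrete, here is a back-of-the-envelope sketch. The row sizes assume dense float32 storage, which a real pipeline would avoid with sparse matrices, but the column count is the point:

n_users = 1_000_000

# One-Hot Encoding: one float32 column per unique user
bytes_per_row_ohe = n_users * 4

# Feature hashing: a fixed hash space of 2**18 = 262,144 slots
bytes_per_row_hashed = 2**18 * 4

print(f"One-hot row: {bytes_per_row_ohe / 1e6:.1f} MB")   # 4.0 MB per row
print(f"Hashed row:  {bytes_per_row_hashed / 1e6:.1f} MB") # ~1.0 MB per row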


The Hashing Trick Intuition

Feature hashing solves this by bypassing the lookup table entirely. Instead of asking, "What is the index for 'User_123'?", we ask, "What is the hash of 'User_123' modulo N?", where N is the desired size of our feature vector. Because the hash function is deterministic, 'User_123' will always map to the same index. Because we use the modulo operator, the index is guaranteed to be between 0 and N-1. This allows us to map an infinite variety of input categories into a fixed-size vector of length N. We no longer need to store a dictionary of categories, which makes this method incredibly memory-efficient and suitable for online learning where data arrives in a continuous stream.
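
A sketch of that intuition in code. hashlib.md5 stands in for whatever hash a real library uses, because Python's built-in hash() is salted per process while the trick depends on determinism; N and the helper names are our own choices:

import hashlib

import numpy as np

N = 16  # desired feature-vector length

def stable_hash(value: str) -> int:
    # Same string -> same integer, across runs and machines
    return int(hashlib.md5(value.encode("utf-8")).hexdigest(), 16)

def hash_vector(category: str) -> np.ndarray:
    vec = np.zeros(N)
    vec[stable_hash(category) % N] += 1.0  # modulo keeps the index in [0, N-1]
    return vec

# Deterministic: the same ID always lands in the same slot...
assert (hash_vector("User_123") == hash_vector("User_123")).all()
# ...and an ID never seen during training still gets a valid slot
print(hash_vector("User_999999").nonzero()[0])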


Managing Collisions and Trade-offs

The primary drawback of hashing is the collision: because we map a large set of inputs into a smaller set of indices, two different inputs can land on the same index. In practice, however, linear models (such as logistic regression or SVMs) are surprisingly robust to collisions. If the hash space is sufficiently large, the noise introduced by collisions acts similarly to regularization, preventing the model from overfitting to rare categories. To mitigate collisions further, practitioners often use a "signed hash": a second hash function determines the sign (+1 or -1) of the value written at the index, so colliding features tend to cancel each other out rather than accumulate, preserving the expected value of the feature weights.
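
Below is a sketch of the signed variant. Deriving the second hash by salting MD5 is our own shortcut for illustration; scikit-learn's FeatureHasher applies the same idea internally (its alternate_sign option, on by default):

import hashlib

import numpy as np

N = 16

def _hash(value: str, salt: str = "") -> int:
    return int(hashlib.md5((salt + value).encode("utf-8")).hexdigest(), 16)

def signed_hash_vector(category: str) -> np.ndarray:
    vec = np.zeros(N)
    index = _hash(category) % N  # first hash picks the slot
    sign = 1.0 if _hash(category, salt="sign") % 2 == 0 else -1.0  # second hash picks the sign
    vec[index] += sign  # colliding features now tend to cancel in expectation
    return vec

print(signed_hash_vector("User_123"))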

Common Pitfalls

  • "Hashing causes information loss that ruins accuracy." While collisions do occur, they are rarely significant enough to degrade performance in high-dimensional spaces. The model treats collision noise as a form of regularization, often improving generalization.
  • "You can reverse the hash to see which user it was." Hashing is a one-way function, meaning you cannot recover the original input from the hash index. This is actually a benefit for privacy-preserving ML, as it prevents the model from explicitly storing identifiable user IDs.
  • "The hash size must match the number of unique categories." This is the opposite of the intended goal; the hash size is a hyperparameter you choose based on your memory constraints. You can map 10 million categories into 1,000 slots, and the model will still learn useful patterns.
  • "Feature hashing is the same as embedding layers." While both map features to vectors, embeddings are learned parameters that require backpropagation to update. Hashing is a fixed, non-trainable transformation that requires no memory for storage.

Sample Code

Python
import numpy as np
from sklearn.feature_extraction import FeatureHasher

# Define a list of high-cardinality categorical data
data = [{'user_id': 'user_123'}, {'user_id': 'user_456'}, {'user_id': 'user_789'}]

# Initialize FeatureHasher with a fixed vector size (n_features)
# scikit-learn recommends a power of 2 so values map evenly across columns
hasher = FeatureHasher(n_features=16, input_type='dict')

# Transform the data into a sparse matrix
hashed_features = hasher.transform(data)

# Convert to dense for visualization (in practice, keep as sparse)
print("Hashed feature matrix shape:", hashed_features.toarray().shape)
print("Sample row representation:\n", hashed_features.toarray()[0])

# Output:
# Hashed feature matrix shape: (3, 16)
# Sample row representation:
# [ 0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0. -1.  0.  0.]
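# Note: the -1 (rather than +1) in the sample row is signed hashing at
# work: FeatureHasher's alternate_sign parameter defaults to True, the
# collision-mitigation strategy described in "Managing Collisions and
# Trade-offs" above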

Key Terms

Cardinality
The number of unique elements within a categorical feature column. High cardinality refers to features like "Product ID" or "Device ID," which may contain millions of unique values.
One-Hot Encoding (OHE)
A process where categorical variables are converted into a binary vector representation. It creates a new column for every unique category, leading to massive memory usage when cardinality is high.
Hashing Trick
A dimensionality reduction technique that uses a hash function to map features to indices in a fixed-size array. It eliminates the need to store a large vocabulary or mapping table.
Hash Collision
An event where two different input values produce the same hash index. While theoretically problematic, ML models are often robust to these collisions if the hash space is sufficiently large.
Sparsity
A state where the majority of elements in a matrix are zero. High-cardinality features processed through OHE result in extremely sparse matrices, which are computationally inefficient.
Statelessness
A property of an algorithm that does not need to maintain a persistent state or memory of previous inputs. Feature hashing is stateless because the hash function is deterministic and independent of the data distribution.
Feature Space
The multi-dimensional vector space where input data points reside after transformation. Feature hashing maps an infinite or large input space into a finite, predefined feature space.