
GPT Decoder-Only Architecture

  • The GPT (Generative Pre-trained Transformer) architecture utilizes only the decoder component of the original Transformer, discarding the encoder entirely.
  • It functions as an autoregressive model, predicting the next token in a sequence based solely on the preceding tokens.
  • By employing masked self-attention, the architecture ensures that each position can only attend to past information, preserving the causal nature of text generation.
  • This design excels at generative tasks, allowing for massive scalability and the emergence of "in-context learning" capabilities.

Why It Matters

01
Code Generation

Companies like GitHub utilize decoder-only models for Copilot, which suggests code completions in real-time. By training on massive repositories, the model learns the syntax and logic of various programming languages, allowing it to predict the next lines of code based on the developer's current context and comments.

02
Content Creation and Copywriting

Marketing platforms use GPT-based models to generate high-quality blog posts, social media captions, and product descriptions. These models leverage their vast training data to mimic brand voices and styles, significantly reducing the time required for initial drafting and content ideation.

03
Conversational AI and Customer Support

Many enterprises deploy custom-tuned decoder-only models to power intelligent chatbots that handle complex customer inquiries. Unlike traditional rule-based bots, these models can maintain context over a long conversation, providing personalized and coherent answers that feel natural to the user.

How it Works

The Evolution of the Decoder

To understand the GPT (Generative Pre-trained Transformer) architecture, we must first look at the original Transformer paper, "Attention is All You Need" (Vaswani et al., 2017). That model used an encoder-decoder structure: the encoder processed the input sequence, and the decoder generated the output. GPT, introduced by OpenAI (Radford et al., 2018), took a radical approach by stripping away the encoder. Why? Because for generative tasks, we don't need to "encode" a source sequence in the traditional sense: we are predicting the next token in a continuous stream. By focusing solely on the decoder, GPT models become highly efficient at modeling the probability distribution of text sequences.


Autoregression and Causal Masking

The core of the GPT architecture is its autoregressive nature. When you ask a model to complete a sentence, it doesn't generate the whole sentence at once. It generates the first word, then uses that word plus the original prompt to generate the second, and so on. To make this work, the model must be trained to never see the "future." If the model were allowed to see the next token during training, the prediction task would become trivial: it could simply copy the answer instead of learning the underlying language patterns. This is where causal masking comes in. During the self-attention phase, we apply a triangular mask to the attention scores. This mask ensures that for any token at position i, the attention mechanism only considers tokens at positions j ≤ i.
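The triangular mask described above can be sketched in a few lines of PyTorch. This is a minimal illustration with toy scores, not a full attention implementation:

```python
import torch
import torch.nn.functional as F

seq_len = 4
# Strictly upper-triangular -inf mask: position i may only attend to j <= i
mask = torch.triu(torch.full((seq_len, seq_len), float('-inf')), diagonal=1)

scores = torch.randn(seq_len, seq_len)      # toy raw attention scores
weights = F.softmax(scores + mask, dim=-1)  # mask added BEFORE the softmax

print(weights[0])  # only the first entry is non-zero: token 0 sees itself alone
```

After the softmax, every weight above the diagonal is exactly zero, yet each row still sums to 1, so no probability mass leaks to future positions.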


Scaling and Emergent Behavior

The decoder-only architecture is uniquely suited for massive scaling. Because there is no encoder to balance, the entire computational budget is dedicated to the decoder blocks. As we stack more layers and increase the hidden dimension, the model gains a deeper internal representation of world knowledge. Interestingly, as these models reach a certain scale (billions of parameters), they begin to exhibit "in-context learning." This means the model can perform tasks like translation or summarization simply by being provided a few examples in the prompt, without any gradient updates. This phenomenon is arguably the most significant discovery in modern NLP, turning the decoder-only architecture into a general-purpose reasoning engine.
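In-context learning in practice just means formatting task examples into the prompt itself. The sketch below builds a hypothetical few-shot translation prompt; the example pairs and wording are illustrative, and no model weights are involved:

```python
# Illustrative few-shot prompt: the "learning" happens entirely inside
# the model's forward pass, with no gradient updates.
examples = [("cat", "chat"), ("dog", "chien")]
query = "bird"

prompt = "Translate English to French.\n"
for en, fr in examples:
    prompt += f"English: {en}\nFrench: {fr}\n"
prompt += f"English: {query}\nFrench:"

print(prompt)
```

The model, conditioned on this text, continues the pattern and emits the translation of the final word.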

Common Pitfalls

  • "GPT models use the Encoder-Decoder architecture." Many learners confuse the original Transformer with the GPT variant. GPT is strictly decoder-only, meaning it lacks the cross-attention layers that allow a decoder to look at an encoder's output.
  • "The mask is applied after the softmax." The causal mask must be added to the attention scores before the softmax, so that future positions receive zero weight and the remaining weights still form a valid probability distribution. Zeroing weights after the softmax leaves the rows unnormalized, because the probability mass assigned to future tokens is discarded rather than redistributed to past positions.
  • "GPT models are only for text generation." While they are generative, they are also powerful discriminative tools. By calculating the log-likelihood of a sequence, GPT models can be used for classification, sentiment analysis, and even zero-shot reasoning tasks.
  • "Increasing the number of layers is the only way to improve performance." While depth helps, the width of the model (hidden dimension) and the quality of the training data are equally critical. Simply adding more layers without sufficient data leads to diminishing returns and potential overfitting.
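The third pitfall above, using a generative model discriminatively, comes down to scoring candidate continuations by their log-likelihood. Here is a rough sketch of that idea; `logits_fn` is a stand-in for a real language-model forward pass, and the token IDs and labels are invented for illustration:

```python
import torch
import torch.nn.functional as F

vocab_size = 10
torch.manual_seed(0)

def logits_fn(tokens):
    # Placeholder for model(tokens): returns random next-token logits
    return torch.randn(len(tokens), vocab_size)

def sequence_log_likelihood(tokens):
    logits = logits_fn(tokens[:-1])            # predict each next token
    log_probs = F.log_softmax(logits, dim=-1)
    targets = torch.tensor(tokens[1:])
    return log_probs.gather(1, targets.unsqueeze(1)).sum().item()

# Classify by appending each candidate label's tokens and comparing scores
prompt = [1, 2, 3]
candidates = {"positive": [4], "negative": [5]}
scores = {label: sequence_log_likelihood(prompt + toks)
          for label, toks in candidates.items()}
best = max(scores, key=scores.get)
```

With a real model, the label whose continuation the model finds most probable wins; here the random logits simply demonstrate the mechanics.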

Sample Code

Python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SimpleGPTBlock(nn.Module):
    def __init__(self, embed_dim, num_heads):
        super().__init__()
        # Masked Multi-Head Attention
        self.attn = nn.MultiheadAttention(embed_dim, num_heads, batch_first=True)
        self.ln1 = nn.LayerNorm(embed_dim)
        self.ffn = nn.Sequential(
            nn.Linear(embed_dim, 4 * embed_dim),
            nn.GELU(),
            nn.Linear(4 * embed_dim, embed_dim)
        )
        self.ln2 = nn.LayerNorm(embed_dim)

    def forward(self, x):
        # Create causal mask: strictly upper triangle set to -inf
        seq_len = x.size(1)
        mask = torch.triu(
            torch.full((seq_len, seq_len), float('-inf'), device=x.device),
            diagonal=1
        )

        # Residual connections around attention and feed-forward sub-layers
        attn_out, _ = self.attn(x, x, x, attn_mask=mask)
        x = self.ln1(x + attn_out)  # Post-LN (GPT-1 / original Transformer); GPT-2 and later use Pre-LN
        x = self.ln2(x + self.ffn(x))
        return x

# Example Usage:
# model = SimpleGPTBlock(embed_dim=512, num_heads=8)
# output = model(torch.randn(1, 10, 512)) # Batch of 1, 10 tokens
# print(output.shape) # Output: torch.Size([1, 10, 512])
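To see how such blocks fit into a complete model, here is a toy sketch of the surrounding pieces: a token embedding, a body (a single linear layer standing in for the stack of decoder blocks, purely for brevity), a language-model head, and a greedy autoregressive decoding loop. The vocabulary size and token IDs are invented for illustration:

```python
import torch
import torch.nn as nn

class TinyGPT(nn.Module):
    def __init__(self, vocab_size=100, embed_dim=32):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        # Stand-in for a stack of decoder blocks (e.g. SimpleGPTBlock layers)
        self.body = nn.Linear(embed_dim, embed_dim)
        self.lm_head = nn.Linear(embed_dim, vocab_size)

    def forward(self, tokens):
        return self.lm_head(torch.relu(self.body(self.embed(tokens))))

model = TinyGPT()
tokens = torch.tensor([[1, 2, 3]])          # toy prompt
for _ in range(5):                          # greedy autoregressive decoding
    logits = model(tokens)
    next_token = logits[:, -1].argmax(-1, keepdim=True)
    tokens = torch.cat([tokens, next_token], dim=1)
print(tokens.shape)  # torch.Size([1, 8])
```

Each iteration feeds the model's own previous output back in as input, which is exactly the autoregressive loop described in "How it Works."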

Key Terms

Autoregressive
A statistical property where a model predicts future values based on its own previous outputs. In LLMs, this means the model generates one token at a time, feeding the output back as input for the next step.
Causal Masking
A mechanism applied to the attention matrix to prevent the model from "looking ahead" at future tokens during training. By setting future positions to negative infinity before the softmax function, we ensure the model only learns from past context.
Decoder-Only
A specific architectural variant of the Transformer that omits the encoder stack and the cross-attention layers found in the original design. This simplification focuses the model entirely on predicting the next token in a sequence.
In-Context Learning
The ability of a large language model to perform new tasks after being prompted with examples, without requiring any updates to its internal weights. This emergent behavior is a hallmark of massive decoder-only models like GPT-3 and GPT-4.
Self-Attention
A mechanism that allows the model to weigh the importance of different tokens in a sequence relative to the current token being processed. It enables the model to capture long-range dependencies and syntactic relationships regardless of distance.
Tokenization
The process of breaking down raw text into smaller units, such as sub-words or characters, which the model can process numerically. Proper tokenization is essential for managing vocabulary size and ensuring the model can handle unseen words.
Transformer Block
The fundamental building unit of the GPT architecture, consisting of a masked multi-head self-attention layer followed by a position-wise feed-forward network. Each block includes residual connections and layer normalization to stabilize training.