NLP & LLMs
Natural language processing fundamentals, transformer architectures, large language models, and prompt engineering.
Retrieval-Augmented Generation Fundamentals
LLM Basics and Terminology
NLP Pre-processing and Definitions
Transformer Attention Mechanisms
Tokenization Methods and Strategies
Mixture of Experts Architecture
Inference Optimization and Efficiency
Text Generation Evaluation Metrics
Advanced Attention Mechanism Variants
Fine-tuning and Instruction Tuning
RLHF and PPO Alignment
Decoding Strategies for Generation
Vector Embedding Foundations
BERT Architecture and Pre-training
Context Window Limitations
System Prompt Engineering
Temperature Scaling in Inference
Rotary Positional Embeddings
Transformer Normalization and Depth
Chain of Thought Reasoning
Zero-shot and Few-shot Prompting
Large Language Model Hallucinations
SwiGLU Activation Functions
GPT Decoder-Only Architecture
QLoRA Double Quantization Techniques
LoRA Fine-tuning for NLP Models
Chinchilla Compute-Optimal Scaling Laws
Top-k and Nucleus Sampling