Generative AI
Diffusion models, GANs, VAEs, LLM fine-tuning, RLHF, and other techniques behind modern generative systems.
Foundations of Generative AI
Prompt Engineering and System Prompts
Retrieval-Augmented Generation Architecture
LLM Evaluation Metrics
Autoregressive Pre-training and Inference
Diffusion Model Image Generation
Dense and Sparse Retrieval Strategies
LLM Fundamentals and Capabilities
Advanced Chain-of-Thought Prompting
Vector Database Functionality
Multimodal Model Architectures
LLM-as-a-Judge Evaluation Framework
Semantic Text Chunking Strategies
Instruction Fine-Tuning Techniques
Model Quantization Optimization
Structured Output and JSON Mode
LLM Inference and KV Caching
Text-to-Image and Multimodal Processing
Catastrophic Forgetting in Fine-Tuning
LLM Sampling Strategies
LLM Benchmarking and Evaluation
Direct Preference Optimization
Cross-Attention Mechanisms
LoRA and QLoRA for Generative Model Adaptation
LLM Tokenization Processes
Few-Shot and Zero-Shot Prompting
Temperature Parameter Control
Bi-Encoder and Cross-Encoder Retrieval
Self-Consistency Prompting Techniques
Rotary Position Embeddings
AI Agentic Workflows and Orchestration
Red Teaming and Safety Alignment
Multimodal Large Language Model Alignment
Constitutional AI and RLHF Frameworks
Neural Network Pruning for Inference