Why this exists

Most people first hear about LLMs through a chat box. That's a thin slice. Behind it sits a stack — engineering, agents, agentic teams — and a job market that distinguishes ML engineers from AI engineers in a way most explainers never get to.

This primer compresses that stack into five short, animated lessons. No math, no jargon-wall, no "just trust me" arrows. Each diagram is built so you can step through it and watch the pieces click into place.

How to read

Each act runs about four minutes. Use ← → on your keyboard to step forward and back, and R to reset the diagram. Your progress is saved locally — close the tab, come back tomorrow, and your checkmarks will still be there.

Who's this for?

Anyone who keeps hearing "agentic AI" in meetings and wants a sturdier mental model than "chatbot but smarter." Founders, PMs, designers, engineers crossing into AI work, and students figuring out where to point themselves.

A note on simplification

The librarian metaphor is intentionally loose. There are no attention heads in this primer, no embeddings, no rotary positional encodings. Once the high-level mental model lands, those specifics slot in much more easily. If you want to go deeper, start with the "Attention Is All You Need" paper and any decent intro to transformers — then come back here when you want to teach someone else.