
Commit ef1cc3b

apartsin and claude committed
Major restructure: 10-part layout, nav redesign, heading fixes, CSS consolidation
Reorganized from 7 parts to 10 parts with renumbered chapters (0-36). Redesigned chapter navigation as CSS grid cards with gradient backgrounds. Stripped hyperlinks from headings across 49 files (80 links removed). Removed duplicate nav arrows (775 total across 428 files). Fixed the h2/h3 line-height gap with an explicit value of 1.3. Added a cover-page robot animation, task-registry cleanup, and misc fixes.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
1 parent 3829af8 commit ef1cc3b

File tree

1,108 files changed: +226,324 additions, -20,470 deletions


.claude/launch.json (11 additions, 0 deletions; new file)

```
{
  "version": "0.0.1",
  "configurations": [
    {
      "name": "book-preview",
      "runtimeExecutable": "python",
      "runtimeArgs": ["-m", "http.server", "8080", "--directory", "E:/Projects/LLMCourse"],
      "port": 8080
    }
  ]
}
```
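The `book-preview` configuration above just wraps Python's built-in static file server; run by hand, the equivalent command would be (a sketch using the directory path from the config, adjust to your checkout location):

```shell
# Serve the book directory at http://localhost:8080, as the launch config does
python -m http.server 8080 --directory E:/Projects/LLMCourse
```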

.claude/settings.local.json (4 additions, 1 deletion)

```
@@ -8,7 +8,10 @@
     "WebSearch",
     "WebFetch(*)",
     "Skill(update-config)",
-    "Skill(update-config:*)"
+    "Skill(update-config:*)",
+    "mcp__Desktop_Commander__start_search",
+    "mcp__Desktop_Commander__get_more_search_results",
+    "mcp__Claude_Preview__preview_start"
   ]
 }
}
```

.nojekyll

Whitespace-only changes.

BOOK_CONFIG.md (221 additions, 0 deletions; new file)

# Book Configuration

This file contains all book-specific details for the textbook production pipeline. The pipeline skill (`textbook-chapter`) and its agent definitions are generic and reusable across any textbook project. This file is the only place where content specific to THIS book lives.

When adapting the pipeline for a different book, create a new `BOOK_CONFIG.md` in the new project's root directory with the same sections below.

## Book Identity

- **Title**: Building Conversational AI using LLM and Agents
- **Subtitle**: A Practitioner's Guide to Large Language Models
- **Target Audience**: Software engineers with basic Python who are familiar with APIs and JSON, plus basic linear algebra (vectors, matrices, dot products)
- **Output Format**: HTML chapter files linking to the shared stylesheet `styles/book.css`

## Visual Style

- **Illustrations**: Warm, colorful, cartoon-like illustrations generated via the Gemini API
- **Application Examples**: Teal/green color scheme
- **Bibliographies**: Card-based layout (`.bib-entry-card`)
- **Epigraphs**: Humorous quotes attributed to "A [Adjective] AI Agent/Model/etc."

## Chapter Map (Current Structure)

All agents that need to reference other chapters (Cross-Reference, Bibliography, Narrative Continuity, etc.) use this canonical chapter map. This is the ACTIVE structure on disk. All agents should use it until migration to the proposed structure is complete.

```
Part 1: Foundations (part-1-foundations/)
  00: ML & PyTorch Foundations       module-00-ml-pytorch-foundations
  01: NLP & Text Representation      module-01-foundations-nlp-text-representation
  02: Tokenization & Subword Models  module-02-tokenization-subword-models
  03: Sequence Models & Attention    module-03-sequence-models-attention
  04: Transformer Architecture       module-04-transformer-architecture
  05: Decoding & Text Generation     module-05-decoding-text-generation

Part 2: Understanding LLMs (part-2-understanding-llms/)
  06: Pretraining & Scaling Laws     module-06-pretraining-scaling-laws
  07: Modern LLM Landscape           module-07-modern-llm-landscape
  08: Reasoning & Test-Time Compute  module-08-reasoning-test-time-compute
  09: Inference Optimization         module-09-inference-optimization
  18: Interpretability               module-18-interpretability

Part 3: Working with LLMs (part-3-working-with-llms/)
  10: LLM APIs                       module-10-llm-apis
  11: Prompt Engineering             module-11-prompt-engineering
  12: Hybrid ML + LLM                module-12-hybrid-ml-llm

Part 4: Training & Adapting (part-4-training-adapting/)
  13: Synthetic Data                 module-13-synthetic-data
  14: Fine-Tuning Fundamentals       module-14-fine-tuning-fundamentals
  15: PEFT                           module-15-peft
  16: Distillation & Merging         module-16-distillation-merging
  17: Alignment, RLHF & DPO          module-17-alignment-rlhf-dpo

Part 5: Retrieval & Conversation (part-5-retrieval-conversation/)
  19: Embeddings & Vector DBs        module-19-embeddings-vector-db
  20: RAG                            module-20-rag
  21: Conversational AI              module-21-conversational-ai

Part 6: Agentic AI (part-6-agentic-ai/)
  22: AI Agents                      module-22-ai-agents
  23: Tool Use & Protocols           module-23-tool-use-protocols
  24: Multi-Agent Systems            module-24-multi-agent-systems
  25: Specialized Agents             module-25-specialized-agents
  26: Agent Safety & Production      module-26-agent-safety-production

Part 7: Multimodal & Applications (part-7-multimodal-applications/)
  27: Multimodal                     module-27-multimodal
  28: LLM Applications               module-28-llm-applications

Part 8: Evaluation & Production (part-8-evaluation-production/)
  29: Evaluation & Observability     module-29-evaluation-observability
  30: Observability & Monitoring     module-30-observability-monitoring
  31: Production Engineering         module-31-production-engineering

Part 9: Safety & Strategy (part-9-safety-strategy/)
  32: Safety, Ethics & Regulation    module-32-safety-ethics-regulation
  33: Strategy, Product & ROI        module-33-strategy-product-roi

Part 10: Frontiers (part-10-frontiers/)
  34: Emerging Architectures         module-34-emerging-architectures
  35: AI & Society                   module-35-ai-society
```

**Note:** Part 2 contains module-18 (Interpretability), and Part 6 contains module-23 (tool-use-protocols) alongside the legacy module-23 (multi-agent-systems). The canonical module-23 is `module-23-tool-use-protocols`; the legacy `module-23-multi-agent-systems` directory should be removed or merged into `module-24-multi-agent-systems` when convenient.

## Proposed Structure (Pending, v3)

The following restructuring has been proposed but NOT yet executed on disk. Agents should continue using the Current Structure above until migration is complete. This section documents the plan and guides the Structural Architect (Agent #19) once the restructuring is approved.

**Key changes (v3, based on competitive analysis of 11 books and 6 courses):**

- AI Agents get their own dedicated Part (Part 6) with 4 chapters
- Interpretability moves from Training to Understanding (it explains models rather than training them)
- Data Engineering for LLMs added as a new chapter (per the LLM Engineer's Handbook and Chip Huyen)
- Structured Output made explicit in the APIs chapter title
- Multimodal stays its own topic (not merged into Part 2; it requires Part 3-5 knowledge)
- Applications grouped by pattern (4 chapters: code, knowledge, enterprise, creative)
- LLMOps made explicit in the Production chapter
- LLM Security made explicit in the Safety chapter
- Voice/speech AI included in Conversational AI (given the book's title)

```
Part 1: Foundations (6 chapters, unchanged)
  00: ML & PyTorch Foundations
  01: NLP & Text Representation
  02: Tokenization & Subword Models
  03: Sequence Models & Attention
  04: Transformer Architecture
  05: Decoding & Text Generation

Part 2: Understanding LLMs (4 chapters, +1: Interpretability moved here)
  06: Pretraining & Scaling Laws
  07: Modern LLM Landscape (incl. reasoning models, SLMs, on-device)
  08: Inference Optimization (incl. caching strategies, edge deployment)
  09: Interpretability & Mechanistic Understanding [MOVED from Part 4]

Part 3: Working with LLMs (4 chapters, +1: Data Engineering added)
  10: LLM APIs & Structured Output (incl. JSON mode, function calling)
  11: Prompt Engineering & Advanced Techniques
  12: Hybrid ML + LLM Architectures
  13: Data Engineering for LLMs [NEW] (pipelines, quality, curation, governance)

Part 4: Training & Adapting (5 chapters, Interpretability moved out)
  14: Synthetic Data Generation
  15: Fine-Tuning Fundamentals
  16: Parameter-Efficient Fine-Tuning (PEFT)
  17: Distillation & Merging
  18: Alignment: RLHF, DPO & Preference Tuning

Part 5: Retrieval & Conversation (3 chapters, unchanged)
  19: Embeddings & Vector Databases
  20: RAG (incl. long-context vs. RAG tradeoffs, GraphRAG)
  21: Conversational AI (incl. voice/speech-to-speech, real-time)

Part 6: AI Agents (4 chapters, dedicated Part)
  22: Agent Foundations, Protocols & Tool Use (MCP, A2A, AG-UI, ReAct)
  23: Agent Memory, Planning & Reasoning (test-time compute, MemGPT/Letta)
  24: Multi-Agent Systems (orchestration, debate, swarm, simulation)
  25: Agent Applications (code agents, browser agents, scientific agents)

Part 7: Multimodal & Applications (5 chapters)
  26: Multimodal Models (vision, audio, cross-modal, document AI)
  27: Code & Development AI
  28: Knowledge & Search AI
  29: Enterprise AI Applications (healthcare, legal, finance, customer service)
  30: Creative & Education AI

Part 8: Production & Strategy (3 chapters)
  31: Production Engineering & LLMOps (experiment tracking, CI/CD, monitoring)
  32: Safety, Security, Ethics & Regulation (LLM security, red teaming, EU AI Act)
  33: Strategy, Product & ROI

Capstone:
  34: Toward AGI (ARC-AGI benchmarks, scaling debate, emergent capabilities, alignment)
```

**Total: 35 chapters across 8 Parts + capstone**

**Migration checklist** (to execute when approved):

- [ ] Rename directories and files on disk
- [ ] Update all cross-references and navigation links
- [ ] Update the Current Structure section above (replace with this proposed structure)
- [ ] Update CROSS_REFERENCE_MAP.md with new section numbers
- [ ] Update CONFORMANCE_CHECKLIST.md book-specific sections
- [ ] Run a Controller sweep to verify no broken links remain
- [ ] Create new chapter directories for: 13 (Data Engineering), 34 (Toward AGI)
- [ ] Split current Ch 25 (LLM Applications) into Chs 27-30
- [ ] Renumber current Chs 14-28 to the new numbering scheme

## Relative Path Rules

- Same part: `../module-XX-name/index.html`
- Different part: `../../part-N-name/module-XX-name/index.html`

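The two rules above can be sketched as a small helper (a hypothetical function for illustration, not part of the pipeline):

```python
def module_link(src_part: str, dst_part: str, dst_module: str) -> str:
    """Build a relative link between chapter modules, following the path
    rules above: a sibling module in the same part goes up one level; a
    module in a different part goes up two levels and descends into it."""
    if src_part == dst_part:
        return f"../{dst_module}/index.html"
    return f"../../{dst_part}/{dst_module}/index.html"

# Same part: Chapter 1 linking to Chapter 2 inside part-1-foundations
print(module_link("part-1-foundations", "part-1-foundations",
                  "module-02-tokenization-subword-models"))
# -> ../module-02-tokenization-subword-models/index.html

# Different part: Chapter 1 linking to Chapter 20 in Part 5
print(module_link("part-1-foundations", "part-5-retrieval-conversation",
                  "module-20-rag"))
# -> ../../part-5-retrieval-conversation/module-20-rag/index.html
```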
## Batch Partitioning (for parallel agent runs)

When running agents across the entire book, partition by Part for parallelism:

- Batch A: Part 1 (Chapters 0-5, 6 modules)
- Batch B: Part 2 (Chapters 6-9 + 18, 5 modules)
- Batch C: Part 3 (Chapters 10-12, 3 modules)
- Batch D: Part 4 (Chapters 13-17, 5 modules)
- Batch E: Part 5 (Chapters 19-21, 3 modules)
- Batch F: Part 6 (Chapters 22-26, 5 modules)
- Batch G: Part 7 (Chapters 27-28, 2 modules)
- Batch H: Part 8 (Chapters 29-31, 3 modules)
- Batch I: Part 9 (Chapters 32-33, 2 modules)
- Batch J: Part 10 (Chapters 34-35, 2 modules)
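For orchestration code, the batch table above maps naturally onto a lookup structure; a minimal sketch (the `BATCHES` name is hypothetical, the chapter lists mirror the table):

```python
# Batch letter -> chapter numbers, mirroring the partitioning table above.
BATCHES = {
    "A": [0, 1, 2, 3, 4, 5],       # Part 1
    "B": [6, 7, 8, 9, 18],         # Part 2 (incl. relocated ch. 18)
    "C": [10, 11, 12],             # Part 3
    "D": [13, 14, 15, 16, 17],     # Part 4
    "E": [19, 20, 21],             # Part 5
    "F": [22, 23, 24, 25, 26],     # Part 6
    "G": [27, 28],                 # Part 7
    "H": [29, 30, 31],             # Part 8
    "I": [32, 33],                 # Part 9
    "J": [34, 35],                 # Part 10
}

# Sanity check: chapters 0-35 each appear in exactly one batch.
all_chapters = sorted(ch for batch in BATCHES.values() for ch in batch)
assert all_chapters == list(range(36))
```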

## Example Epigraphs by Chapter Theme

These are book-specific humorous epigraph examples. Each chapter gets one epigraph attributed to a fictional AI persona using the "A [Adjective] [AI Role]" format.

- Tokenization: "I spent three hours debugging a Unicode error. Turns out the model thought an emoji was four separate tokens. It was, technically, correct." *A Tokenizer Who Has Seen Things*
- Attention: "They told me to attend to everything. So I did. Now I am 8 heads, none of which agree with each other." *An Attention Head With Existential Questions*
- Fine-tuning: "I was a perfectly good base model. Then they showed me 10,000 customer support transcripts and now I cannot stop being helpful." *A Reluctantly Aligned Language Model*
- Scaling laws: "More data. More parameters. More compute. At some point you stop asking 'will it work?' and start asking 'can we afford the electricity bill?'" *A Mildly Concerned Cluster Administrator*
- RAG: "I used to hallucinate confidently. Now I hallucinate with citations." *An Unusually Honest Neural Network*
- Agents: "They gave me tools, memory, and the ability to plan. I immediately got stuck in an infinite loop. Just like the humans, really." *A Self-Aware ReAct Agent*
