
Commit 137abf2

Refine chart styling: per-bar colors, axis labels, italic captions, descriptive caption text
- Single-dataset bar charts use per-bar colors from the palette (viridis shows a purple-to-yellow gradient)
- Add ylabel to the bar, line, grouped bar, and scatter example charts
- Add xlabel to the line and scatter example charts
- Caption styled as italic with a lighter color (0.55 alpha) to distinguish it from axis labels
- Caption uses fullSize: true for full-width centering
- Font hierarchy: ticks 18px, axis titles 20px, captions 20px italic
- Captions rewritten to be descriptive rather than redundant with slide titles
- Update all chart screenshots (navigation bar hidden)
- Update chart-defaults.js subtitle size to 22px
1 parent 2a88c32 commit 137abf2
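
The caption and font changes above land in chart-defaults.js, whose diff is not shown on this page. Below is a minimal sketch of the described defaults, assuming the charts are rendered with Chart.js; the option names are Chart.js conventions rather than lines from the actual file, and mapping the caption onto Chart.js's subtitle plugin is an assumption.

```js
// Sketch only: assumed Chart.js options mirroring the styling described in the
// commit message. The real chart-defaults.js may structure this differently.
const captionAndAxisDefaults = {
  plugins: {
    subtitle: {
      display: true,
      fullSize: true,                       // center the caption across the full chart width
      font: { size: 20, style: 'italic' },  // captions: 20px italic
      color: 'rgba(0, 0, 0, 0.55)',         // lighter color (0.55 alpha) vs. axis labels
    },
  },
  scales: {
    x: {
      ticks: { font: { size: 18 } },                 // tick labels: 18px
      title: { display: true, font: { size: 20 } },  // axis titles (xlabel): 20px
    },
    y: {
      ticks: { font: { size: 18 } },
      title: { display: true, font: { size: 20 } },  // axis titles (ylabel): 20px
    },
  },
};
```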

11 files changed, with 15 additions and 9 deletions
Chart screenshot PNGs updated (binary image changes), including docs/screenshots/chart-02-bar.png, chart-03-line.png, and chart-05-pie.png.

examples/sample_charts.md

Lines changed: 14 additions & 8 deletions
@@ -19,7 +19,8 @@ Contextual Dynamics Lab
 type: bar
 labels: GPT-2, LLaMA, Mistral, Claude, LLaMA-2
 data: 1.5, 70, 7, 52, 70
-caption: Parameter counts in billions
+ylabel: Parameters (B)
+caption: LLaMA and LLaMA-2 lead at 70B parameters
 ```

 ---
@@ -34,7 +35,9 @@ datasets:
 data: 2.8, 2.1, 1.7, 1.4, 1.2, 1.05, 0.95, 0.88, 0.83, 0.80
 - label: Optimized
 data: 2.5, 1.6, 1.1, 0.8, 0.65, 0.55, 0.48, 0.43, 0.40, 0.38
-caption: Cross-entropy loss by training epoch
+xlabel: Epoch
+ylabel: Loss
+caption: Optimized model converges 2x faster than baseline
 ```

 ---
@@ -51,7 +54,8 @@ datasets:
 data: 89, 88, 85, 79
 - label: Recall
 data: 87, 91, 83, 76
-caption: Benchmark scores across models (%)
+ylabel: Score (%)
+caption: GPT-4 leads across all three metrics
 ```

 ---
@@ -62,7 +66,7 @@ caption: Benchmark scores across models (%)
 type: pie
 labels: Federal grants, Industry, Foundation, University
 data: 45, 25, 18, 12
-caption: Funding sources for fiscal year 2025
+caption: Federal grants account for nearly half of all funding
 ```

 ---
@@ -73,7 +77,7 @@ caption: Funding sources for fiscal year 2025
 type: doughnut
 labels: Research, Teaching, Service, Administration
 data: 40, 25, 20, 15
-caption: Average faculty time distribution
+caption: Research dominates at 40% of faculty effort
 ```

 ---
@@ -87,7 +91,9 @@ datasets:
 data: 1.5 78, 7 85, 13 87, 52 91, 70 90, 175 93
 - label: RNN baselines
 data: 0.5 62, 2 68, 5 72, 10 74, 20 76
-caption: Parameters (B) vs. benchmark accuracy (%)
+xlabel: Parameters (B)
+ylabel: Accuracy (%)
+caption: Accuracy plateaus above 50B parameters
 ```

 ---
@@ -104,7 +110,7 @@ datasets:
 data: 93, 90, 88, 95, 94, 90
 - label: Open source
 data: 78, 82, 75, 80, 76, 72
-caption: Capability scores across evaluation dimensions
+caption: Claude excels in writing; open source lags across all dimensions
 ```

 ---
@@ -117,5 +123,5 @@ labels: Conv1, Conv2, Conv3, Pool1, FC1, FC2, Output
 data: 0.82, 0.91, 0.67, 0.45, 0.93, 0.78, 0.56
 palette: viridis
 ylabel: Mean activation
-caption: Average activations by layer (ImageNet validation set)
+caption: FC1 shows highest activation; pooling layer is lowest
 ```
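
The last hunk keeps palette: viridis on the single-dataset bar chart; per the commit message, single-dataset bar charts now draw one palette color per bar rather than a single dataset color. A rough sketch of that mapping, again assuming Chart.js; the VIRIDIS hex values and the perBarColors helper are illustrative, not code from this repository.

```js
// Illustrative stand-in for the viridis palette (purple-to-yellow endpoints).
const VIRIDIS = ['#440154', '#414487', '#2a788e', '#22a884', '#7ad151', '#fde725'];

// Pick n roughly evenly spaced palette entries so the bars sweep the gradient.
function perBarColors(palette, n) {
  return Array.from({ length: n }, (_, i) =>
    palette[Math.round((i * (palette.length - 1)) / Math.max(n - 1, 1))]
  );
}

// Single-dataset bar chart: one background color per bar.
const activationDataset = {
  label: 'Mean activation',
  data: [0.82, 0.91, 0.67, 0.45, 0.93, 0.78, 0.56],
  backgroundColor: perBarColors(VIRIDIS, 7),
};
```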

0 commit comments