<!DOCTYPE html>
<html>
<head>
<meta charset="utf-8">
<meta name="description"
content="NerVE: An eigenspectral probe of FFN nonlinearities that quantifies how they restructure latent variance, yielding spectral signatures that track generalization and reveal consistent effects of architecture and optimizer design.">
<meta name="keywords" content="NerVE, Eigenspectrum, feed-forward networks, training dynamics, latent space geometry, optimizer geometry, LLM, ICLR 2026">
<meta name="viewport" content="width=device-width, initial-scale=1">
<title>NerVE: Nonlinear Eigenspectrum Dynamics in LLM Feed-Forward Networks</title>
<link href="https://fonts.googleapis.com/css?family=Google+Sans|Noto+Sans|Castoro"
rel="stylesheet">
<link rel="stylesheet" href="./static/css/bulma.min.css">
<link rel="stylesheet" href="./static/css/bulma-carousel.min.css">
<link rel="stylesheet" href="./static/css/bulma-slider.min.css">
<link rel="stylesheet" href="./static/css/fontawesome.all.min.css">
<link rel="stylesheet"
href="https://cdn.jsdelivr.net/gh/jpswalsh/academicons@1/css/academicons.min.css">
<link rel="stylesheet" href="./static/css/index.css">
<link rel="icon" href="./static/images/favicon.svg">
<style>
.venue-badge {
display: inline-block;
background: linear-gradient(135deg, #667eea 0%, #764ba2 50%, #a855f7 100%);
color: white;
padding: 0.8rem 2.4rem;
border-radius: 50px;
font-size: 1.8rem;
font-weight: 700;
letter-spacing: 0.5px;
box-shadow: 0 4px 15px rgba(102, 126, 234, 0.4);
animation: badgeGlow 2s ease-in-out infinite alternate;
}
@keyframes badgeGlow {
from { box-shadow: 0 4px 15px rgba(102, 126, 234, 0.4); }
to { box-shadow: 0 4px 25px rgba(168, 85, 247, 0.6); }
}
.title.is-3 {
margin-bottom: 2rem !important;
}
</style>
<script src="https://ajax.googleapis.com/ajax/libs/jquery/3.5.1/jquery.min.js"></script>
<script defer src="./static/js/fontawesome.all.min.js"></script>
<script src="./static/js/bulma-carousel.min.js"></script>
<script src="./static/js/bulma-slider.min.js"></script>
<script src="./static/js/index.js"></script>
</head>
<body>
<section class="hero">
<div class="hero-body">
<div class="container is-max-desktop">
<div class="columns is-centered">
<div class="column has-text-centered">
<!-- Venue Badge (top) -->
<div style="margin-bottom: 1.5rem;">
<span class="venue-badge">✨ ICLR 2026</span>
</div>
<h1 class="title is-1 publication-title">NerVE: Nonlinear Eigenspectrum Dynamics in LLM Feed-Forward Networks</h1>
<div class="is-size-5 publication-authors">
<span class="author-block">
<a href="https://www.nankj.com">Nandan Kumar Jha</a>,</span>
<span class="author-block">
<a href="https://engineering.nyu.edu/faculty/brandon-reagen">Brandon Reagen</a>
</span>
</div>
<div class="is-size-5 publication-authors">
<span class="author-block">New York University</span>
</div>
<div class="column has-text-centered">
<div class="publication-links">
<!-- arXiv Link -->
<span class="link-block">
<a href="https://arxiv.org/abs/2603.06922"
class="external-link button is-normal is-rounded is-dark"
target="_blank">
<span class="icon">
<i class="ai ai-arxiv"></i>
</span>
<span>arXiv</span>
</a>
</span>
<!-- OpenReview Link -->
<span class="link-block">
<a href="https://openreview.net/forum?id=W5BPGXR9jf"
class="external-link button is-normal is-rounded is-dark"
target="_blank">
<span class="icon">
<i class="fas fa-file-pdf"></i>
</span>
<span>OpenReview</span>
</a>
</span>
<!-- Code Link -->
<span class="link-block">
<a href="https://github.com/nerve-eigenspectrum/NerVE"
class="external-link button is-normal is-rounded is-dark"
target="_blank">
<span class="icon">
<i class="fab fa-github"></i>
</span>
<span>Code</span>
</a>
</span>
</div>
</div>
</div>
</div>
</div>
</div>
</section>
<!-- Teaser Image Section -->
<section class="hero teaser">
<div class="container is-max-desktop">
<div class="hero-body">
<img src="./static/images/nerve-framework.png" alt="NerVE Framework Overview" style="width: 100%;">
<!-- Figure Caption -->
<p class="has-text-justified" style="color: #888; font-size: 0.95rem; margin-top: 1rem; margin-bottom: 1.5rem; line-height: 1.6;">
<strong style="color: #666;">Figure:</strong> NerVE tracks eigenspectrum dynamics at pre-activation (after W<sub>up</sub>, before σ) and post-activation (after σ, before W<sub>down</sub>) points in each FFN layer, computing four complementary metrics: Spectral Entropy (SE) for dispersion, Participation Ratio (PR) for effective dimensionality, Eigenvalue Early Enrichment (EEE) for top-heaviness, and Jensen–Shannon Divergence (JS) to quantify the distributional shift, characterizing how nonlinearities restructure the latent geometry.
</p>
</div>
</div>
</section>
<section class="section">
<div class="container is-max-desktop">
<!-- Abstract. -->
<div class="columns is-centered has-text-centered">
<div class="column is-four-fifths">
<h2 class="title is-3">Abstract</h2>
<div class="content has-text-justified">
<p>
We introduce NerVE, a unified eigenspectral framework for understanding how feed-forward networks (FFNs) in large language models (LLMs) organize and regulate information flow in high-dimensional latent space. Despite FFNs dominating the parameter budget, their high-dimensional dynamics remain poorly understood. NerVE addresses this gap through lightweight, memory-efficient tracking of eigenspectrum dynamics via four complementary metrics: Spectral Entropy (dispersion), Participation Ratio (effective dimensionality), Eigenvalue Early Enrichment (top-heaviness), and Jensen-Shannon divergence (distributional shifts). Our <em>key insight</em> is that FFN nonlinearities reinject variance across eigenmodes, fundamentally governing latent dimension utilization, and that optimizer geometry strongly modulates the extent of this variance reinjection.
</p>
<p>
We validate NerVE across model scales and diverse architectural and optimizer configurations, each of which uniquely shapes FFN dynamics: normalization schemes controlling variance flow; FFN weight geometries constraining the latent space; positional encodings and activation functions regulating information flow; and optimizer choices redistributing effective capacity across depth. Across these settings, NerVE consistently recovers stable spectral signatures that correlate with the model's generalization ability and respond predictably to design choices. These signatures generalize beyond transformers to MLP-Mixer architectures, providing actionable guidance for architectural and optimizer choices beyond trial and error.
</p>
</div>
</div>
</div>
<!--/ Abstract. -->
</div>
</section>
<!-- Section 1: What do FFN nonlinearities actually do? -->
<section class="section">
<div class="container is-max-desktop">
<div class="columns is-centered">
<div class="column is-full-width">
<h2 class="title is-3">What do FFN nonlinearities actually do?</h2>
<!-- Figure -->
<div class="has-text-centered">
<img src="./static/images/gelu-relu-combined-panel.png" alt="GELU and ReLU eigenspectrum dynamics" style="width: 100%;">
</div>
<!-- Figure Caption -->
<p class="has-text-justified" style="color: #888; font-size: 0.95rem; margin-top: 1rem; margin-bottom: 1.5rem; line-height: 1.6;">
<strong style="color: #666;">Figure:</strong> Eigen-metrics (SE, PR, EEE, and JS) illustrate how FFN nonlinearities regulate information flow and reshape the eigenspectrum during training for GELU (top) and ReLU (bottom). Pre- and post-activation dynamics are shown for SE, PR, and EEE, highlighting how nonlinearities reinject variance and alter spectral structure. JS heatmaps (rightmost) capture the layer-wise distributional shift induced by the nonlinearity. In-panel titles report Pearson correlations (<em>r</em>) between each metric and evaluation loss, shown as orange curves.
</p>
<div class="content has-text-justified">
<p>
<strong>Attention-induced rank collapse.</strong> <a href="https://proceedings.mlr.press/v139/dong21a.html" target="_blank">Dong et al. (ICML 2021)</a> showed that self-attention possesses <em>a strong inductive bias toward token uniformity:</em> a pure self-attention network (with skip connections and FFNs disabled) loses expressive power doubly exponentially with depth. They observed a <strong>tug-of-war</strong> between self-attention and FFN nonlinearities: attention collapses rank, while the FFN nonlinearity somehow fights back and keeps transformer networks alive. However, the mechanism of this rank inflation through FFN nonlinearities has not been well understood, and their precise role has not been quantified. NerVE provides the quantitative answer.
</p>
<p>
<strong>Nonlinearity-induced rank inflation.</strong> We show that FFN nonlinearities actively <em>reinject variance</em> into under-utilized directions of the latent space, reawakening dimensions that would otherwise remain inactive, a process we term <strong>nonlinearity-induced rank inflation</strong>. This is not a passive rescaling; the nonlinearity fundamentally reorganizes the eigenspectrum, flattening its top-heavy structure by spreading variance across a broader set of directions.
</p>
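<p>
This effect can be reproduced in miniature. The sketch below is illustrative (not taken from the paper): it draws token features whose variance is confined to a handful of directions, applies ReLU, and compares the participation ratio of the covariance eigenspectrum before and after the nonlinearity.
</p>
<pre><code># Toy illustration of nonlinearity-induced rank inflation (illustrative sketch, not the paper's code).
import numpy as np

rng = np.random.default_rng(0)
tokens, dim, active_dims = 4096, 512, 8

# Features whose variance lives almost entirely in `active_dims` directions.
basis = rng.standard_normal((active_dims, dim))
pre = rng.standard_normal((tokens, active_dims)) @ basis   # pre-activation, nearly rank-8
post = np.maximum(pre, 0.0)                                # ReLU

def participation_ratio(x):
    """Effective dimensionality of the covariance eigenspectrum: (sum lam)^2 / sum(lam^2)."""
    x = x - x.mean(axis=0, keepdims=True)
    lam = np.linalg.eigvalsh(x.T @ x / x.shape[0])
    return float(lam.sum() ** 2 / (lam ** 2).sum())

print(f"PR pre-activation:  {participation_ratio(pre):.1f}")   # close to active_dims
print(f"PR post-activation: {participation_ratio(post):.1f}")  # substantially larger
</code></pre>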
<p>
NerVE tracks this mechanism through four complementary metrics. Spectral Entropy (SE) and Participation Ratio (PR) both rise after activation, indicating broader variance distribution and higher effective dimensionality. Eigenvalue Early Enrichment (EEE) drops, confirming that the spectrum becomes less top-heavy. Jensen-Shannon divergence (JS) heatmaps reveal <em>where</em> this redistribution is strongest across depth and training: a structured, depth-localized transition band rather than a uniform effect.
</p>
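<p>
For concreteness, here is a minimal NumPy sketch of how the four metrics can be computed from the eigenvalues of the activation covariance. The function names are illustrative and the exact EEE normalization is an assumption on our part; see the paper for the precise definitions.
</p>
<pre><code># Illustrative NumPy sketch of the four NerVE eigen-metrics (assumed forms, not the reference implementation).
import numpy as np

def eigenspectrum(acts):
    """Eigenvalues of the feature covariance of activations with shape (tokens, D)."""
    acts = acts - acts.mean(axis=0, keepdims=True)
    lam = np.linalg.eigvalsh(acts.T @ acts / acts.shape[0])
    return np.clip(lam[::-1], 1e-12, None)   # descending, numerically safe

def spectral_entropy(lam):
    """SE: Shannon entropy of the normalized spectrum (dispersion)."""
    p = lam / lam.sum()
    return float(-(p * np.log(p)).sum())

def participation_ratio(lam):
    """PR: (sum lam)^2 / sum(lam^2) -- effective dimensionality of the spectrum."""
    return float(lam.sum() ** 2 / (lam ** 2).sum())

def early_enrichment(lam, frac=0.1):
    """EEE (assumed form): share of total variance held by the leading 10% of eigenmodes (top-heaviness)."""
    k = max(1, int(frac * lam.size))
    return float(lam[:k].sum() / lam.sum())

def js_divergence(lam_pre, lam_post):
    """JS: symmetric divergence between the normalized pre- and post-activation spectra."""
    p, q = lam_pre / lam_pre.sum(), lam_post / lam_post.sum()
    m = 0.5 * (p + q)
    kl = lambda a, b: float((a * np.log(a / b)).sum())
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)
</code></pre>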
<!-- Metrics Table -->
<div class="has-text-centered" style="margin: 1.5rem 0;">
<img src="./static/images/nerve-metrics-table.png" alt="NerVE metrics summary table" style="max-width: 85%;">
<p class="has-text-justified" style="color: #888; font-size: 0.95rem; margin-top: 0.75rem; line-height: 1.6;">
<strong style="color: #666;">Table:</strong> Summary of NerVE's four complementary eigen-metrics, their inputs, ranges, spectral sensitivities, and what each captures about the latent space geometry. SE, PR, and EEE characterize a single spectrum; while JS quantifies the information-theoretic distance between the pre- and post-activation, and characterizes nonlinearity-induced geometric transformation. Here, λ denotes the raw eigenvalues, λ̂ the normalized eigenvalues, and <em>D</em> the FFN hidden dimension.
</p>
</div>
<p>
<strong>GELU vs. ReLU: who explores more of the latent space?</strong> GELU and ReLU follow the same qualitative trajectory (variance reinjection, spectral flattening, distributional reordering) but differ in pace and extent. ReLU stabilizes earlier; GELU progresses more gradually yet ultimately explores a broader subspace, correlating with its lower perplexity. All four metrics correlate strongly with evaluation loss (|<em>r</em>| > 0.92), confirming that spectral dynamics track generalization throughout training.
</p>
</div>
</div>
</div>
</div>
</section>
<!-- Section 2: Normalization -->
<section class="section">
<div class="container is-max-desktop">
<div class="columns is-centered">
<div class="column is-full-width">
<h2 class="title is-3">How FFN nonlinearities compensate for LayerNorms</h2>
<!-- Figure -->
<div class="has-text-centered">
<img src="./static/images/normfree_eigenmetrics_plots.png" alt="Normalization-free eigenmetrics plots" style="width: 100%;">
</div>
<!-- Figure Caption -->
<p class="has-text-justified" style="color: #888; font-size: 0.95rem; margin-top: 1rem; margin-bottom: 1.5rem; line-height: 1.6;">
<strong style="color: #666;">Figure:</strong> Eigenspectrum dynamics for norm-free GPT-2 (125M) models with GELU (top), ReLU (middle), and learnable-slope Leaky ReLU (bottom). Each row shows layer-averaged SE (pre vs. post), PR gain (post-to-pre ratio), post-activation EEE, and JS divergence across layers and training steps. Norm-free GELU exhibits spectral inertia in layers 0 to 5 (EEE → 1, JS → 0), while <em>ReLU and Leaky ReLU aggressively reinject variance</em> (PR gain > <strong>200x</strong>), flattening the spectrum (EEE < 0.3).
</p>
<div class="content has-text-justified">
<p>
<strong>Removing LayerNorm shifts the entire burden of statistical regularization onto FFN activations, and not all activations survive.</strong> LayerNorm re-centers and rescales representations at every layer, quietly preventing variance from concentrating into a few dominant directions. Without it, the FFN nonlinearity is the last line of defense against spectral collapse. NerVE reveals that GELU and ReLU respond to this pressure in fundamentally different ways.
</p>
<p>
<strong>GELU exhibits spectral inertia: early FFNs fail to reinject variance, and information flows through a narrow subspace.</strong> In normalization-free models with GELU, the post-activation EEE remains near 1 and JS near 0 in early layers; the nonlinearity effectively acts as a near-identity, leaving the top-heavy eigenspectrum untouched. This spectral bottleneck is the geometric signature of entropic overload (<a href="https://arxiv.org/abs/2410.09637" target="_blank">Jha & Reagen, NeurIPS ATTRIB 2024</a>), where early attention heads are stuck in high-entropy states, starving deeper layers of representational diversity.
</p>
<p>
<strong>ReLU breaks spectral inertia through aggressive overcompensation (PR gains >200).</strong> In sharp contrast, ReLU and learnable-slope Leaky ReLU variants exhibit massive variance reinjection in the first two FFN layers, flattening the spectrum (EEE < 0.3) and producing non-overlapping pre/post spectral entropy curves. This compensatory behavior partially assumes the regularization role of LayerNorm, closing roughly 50% of the perplexity gap to the normalized baseline.
</p>
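<p>
These signatures are simple enough to screen for automatically. The sketch below is illustrative only: the thresholds are hypothetical defaults motivated by the figure above, not values from the paper. Given per-layer pre- and post-activation eigenvalues, it flags layers whose nonlinearity is stuck in spectral inertia versus layers that aggressively reinject variance.
</p>
<pre><code># Illustrative screen for spectral inertia vs. aggressive variance reinjection
# (thresholds are hypothetical defaults, not values from the released code).
import numpy as np

def participation_ratio(lam):
    return float(lam.sum() ** 2 / (lam ** 2).sum())

def top_share(lam, frac=0.1):
    """EEE-style top-heaviness: variance share of the leading 10% of eigenmodes (assumed form)."""
    lam = np.sort(lam)[::-1]
    k = max(1, int(frac * lam.size))
    return float(lam[:k].sum() / lam.sum())

def classify_layer(lam_pre, lam_post, inertia_eee=0.9, reinjection_gain=10.0):
    """Label one FFN layer from its pre-/post-activation eigenvalues."""
    pr_gain = participation_ratio(lam_post) / participation_ratio(lam_pre)
    if top_share(lam_post) > inertia_eee and pr_gain < 1.5:
        return "spectral inertia (near-identity nonlinearity)"
    if pr_gain > reinjection_gain:
        return "aggressive variance reinjection"
    return "moderate reshaping"
</code></pre>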
</div>
<!-- Perplexity Table -->
<div style="margin: 1.5rem 0;">
<table style="margin: 0 auto; border-collapse: collapse; font-size: 1rem; min-width: 500px;">
<thead>
<tr>
<th style="border-bottom: 2px solid #333; padding: 0.6rem 1rem;"></th>
<th colspan="2" style="border-bottom: 2px solid #333; padding: 0.6rem 1rem; text-align: center; font-weight: 600;">Baseline Models</th>
<th colspan="3" style="border-bottom: 2px solid #333; padding: 0.6rem 1rem; text-align: center; font-weight: 600;">Norm-free Models</th>
</tr>
<tr>
<th style="border-bottom: 1px solid #999; padding: 0.5rem 1rem;"></th>
<th style="border-bottom: 1px solid #999; padding: 0.5rem 1rem; text-align: center; font-weight: 500;">GELU</th>
<th style="border-bottom: 1px solid #999; padding: 0.5rem 1rem; text-align: center; font-weight: 500;">ReLU</th>
<th style="border-bottom: 1px solid #999; padding: 0.5rem 1rem; text-align: center; font-weight: 500;">GELU</th>
<th style="border-bottom: 1px solid #999; padding: 0.5rem 1rem; text-align: center; font-weight: 500;">ReLU</th>
<th style="border-bottom: 1px solid #999; padding: 0.5rem 1rem; text-align: center; font-weight: 500;">Leaky ReLU</th>
</tr>
</thead>
<tbody>
<tr>
<td style="padding: 0.6rem 1rem; font-weight: 600; border-bottom: 2px solid #333;">PPL</td>
<td style="padding: 0.6rem 1rem; text-align: center; border-bottom: 2px solid #333;">2.714</td>
<td style="padding: 0.6rem 1rem; text-align: center; border-bottom: 2px solid #333;">2.774</td>
<td style="padding: 0.6rem 1rem; text-align: center; border-bottom: 2px solid #333;">3.223</td>
<td style="padding: 0.6rem 1rem; text-align: center; border-bottom: 2px solid #333;">2.988</td>
<td style="padding: 0.6rem 1rem; text-align: center; border-bottom: 2px solid #333;">3.081</td>
</tr>
</tbody>
</table>
<p class="has-text-justified" style="color: #888; font-size: 0.95rem; margin-top: 0.75rem; line-height: 1.6;">
<strong style="color: #666;">Table:</strong> Evaluation perplexity (PPL) comparison across GPT-2 baseline models (GELU and ReLU), and the norm-free variants (GELU, ReLU, learnable-slope Leaky ReLU). All models trained from scratch on 2.1B tokens from the CodeParrot dataset.
</p>
</div>
</div>
</div>
</div>
</section>
<!-- Section 3: Optimizer Geometry -->
<section class="section">
<div class="container is-max-desktop">
<div class="columns is-centered">
<div class="column is-full-width">
<h2 class="title is-3">How optimizer geometry determines FFN capacity allocation</h2>
<!-- Figure 1 + Caption + Paragraph 1 -->
<div class="has-text-centered">
<img src="./static/images/combined_eigen_muon_dion_350m_512.png" alt="Optimizer-dependent FFN eigenspectrum dynamics" style="width: 100%;">
</div>
<p class="has-text-justified" style="color: #888; font-size: 0.95rem; margin-top: 1rem; margin-bottom: 1.5rem; line-height: 1.6;">
<strong style="color: #666;">Figure:</strong> Optimizer-dependent FFN eigenspectrum dynamics in GPT-2 (350M) trained on FineWeb dataset. Rows show AdamW (top), Muon (middle), and Dion (bottom). AdamW exhibits large early PR gains and high JS with relatively high post-activation EEE, indicating optimizer-induced pre-activation collapse followed by aggressive but incomplete nonlinear repair. Muon shows the smallest PR gains, lowest JS, and lowest post-activation EEE, with flatter post-spectra. Dion falls between these two regimes, improving over AdamW but not matching Muon's spectral behavior. The perplexity ordering (Muon < Dion < AdamW) aligns with post-activation spectral flatness.
</p>
<div class="content has-text-justified">
<p>
<strong>Repair or refinement: more effort does not mean better outcome.</strong> Under <a href="https://openreview.net/forum?id=Bkg6RiCqY7" target="_blank">AdamW</a>, FFN nonlinearities exhibit the largest PR gains and highest JS divergence across all three optimizers; they are working the hardest. But this effort is corrective, not productive: the nonlinearity spends its capacity undoing spectral collapse that the optimizer itself induced, and despite massive corrections, AdamW's post-activation effective dimensionality remains the lowest. <a href="https://kellerjordan.github.io/posts/muon/" target="_blank">Muon</a> achieves the opposite: highest post-activation PR with the smallest gains and lowest JS. Its nonlinearities perform modest refinement on already healthy spectra rather than expensive repair. <a href="https://arxiv.org/abs/2504.05295" target="_blank">Dion</a> falls between, improving over AdamW but not matching Muon's spectral efficiency. The perplexity ordering (Muon < Dion < AdamW) tracks this distinction: productive refinement outperforms heroic repair.
</p>
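<p>
Reproducing this kind of per-layer comparison only requires reading activations at the two probe points. Below is an illustrative PyTorch sketch (not the released NerVE tooling), assuming a Hugging Face GPT-2 model whose blocks expose the FFN up-projection as <code>mlp.c_fc</code> and the activation as an <code>mlp.act</code> submodule; the captured features can then be passed to the eigen-metric functions sketched earlier.
</p>
<pre><code># Capturing pre-/post-activation FFN features with forward hooks
# (illustrative sketch; assumes a HuggingFace GPT-2 whose blocks expose mlp.c_fc and mlp.act as submodules).
import torch
from transformers import AutoTokenizer, GPT2LMHeadModel

model = GPT2LMHeadModel.from_pretrained("gpt2").eval()
tok = AutoTokenizer.from_pretrained("gpt2")

captured = {}  # layer index -> {"pre": tensor, "post": tensor}

def make_hook(layer, tag):
    def hook(module, inputs, output):
        # (batch, seq, D_ffn) -> (tokens, D_ffn); detach and move to CPU to keep memory light
        captured.setdefault(layer, {})[tag] = output.detach().flatten(0, 1).float().cpu()
    return hook

handles = []
for i, block in enumerate(model.transformer.h):
    handles.append(block.mlp.c_fc.register_forward_hook(make_hook(i, "pre")))   # after W_up, before the nonlinearity
    handles.append(block.mlp.act.register_forward_hook(make_hook(i, "post")))   # after the nonlinearity, before W_down

with torch.no_grad():
    batch = tok("NerVE probes how the FFN nonlinearity reshapes the eigenspectrum.", return_tensors="pt")
    model(**batch)

for h in handles:
    h.remove()
# captured[i]["pre"] / captured[i]["post"] now hold per-layer features for the eigen-metric sketch above.
</code></pre>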
</div>
<!-- Figure 2 + Caption + Paragraph 2 -->
<div class="has-text-centered" style="margin-top: 2rem;">
<img src="./static/images/pr_pre_line_plots_350m.png" alt="Layer-wise pre-activation PR over training" style="width: 100%;">
</div>
<p class="has-text-justified" style="color: #888; font-size: 0.95rem; margin-top: 1rem; margin-bottom: 1.5rem; line-height: 1.6;">
<strong style="color: #666;">Figure:</strong> Layer-wise pre-activation PR over training for AdamW, Muon, and Dion on GPT-2 350M (24 layers) trained on FineWeb dataset. Muon maintains the highest PR<sub>pre</sub> across almost all layers throughout training, Dion is intermediate, and AdamW shows early-layer collapse.
</p>
<div class="content has-text-justified">
<p>
<strong>Muon preserves well-conditioned pre-activations; AdamW lets them collapse.</strong> The root cause of the repair-refinement divide lies in what each optimizer does to the pre-activation eigenspectrum. AdamW allows early-layer pre-activation PR to collapse during training; variance concentrates into a few dominant eigenmodes, handing the nonlinearity a spectrally damaged input. Muon maintains high pre-activation PR across nearly all layers throughout training, producing near-isotropic spectra before the nonlinearity even acts. Dion partially mitigates the early-layer collapse but does not match Muon's conditioning. These dynamics persist across model scales (160M, 350M) and context lengths (512, 1024), confirming that they are intrinsic to optimizer geometry rather than artifacts of a specific configuration.
</p>
</div>
<!-- Figure 3 + Caption + Paragraph 3 -->
<div class="has-text-centered" style="margin-top: 2rem;">
<img src="./static/images/layer_trend_adamw_muon_dion_350m.png" alt="Final post-activation PR per layer" style="width: 100%;">
</div>
<p class="has-text-justified" style="color: #888; font-size: 0.95rem; margin-top: 1rem; margin-bottom: 1.5rem; line-height: 1.6;">
<strong style="color: #666;">Figure:</strong> Final post-activation PR per layer for AdamW, Muon, and Dion on GPT-2 350M (24 layers) trained on FineWeb dataset. Muon concentrates the largest effective dimensionality in middle FFN layers, the layers most critical for generalization.
</p>
<div class="content has-text-justified">
<p>
<strong>Where capacity accumulates matters more than how much is injected.</strong> Muon concentrates the highest post-activation effective dimensionality in middle FFN layers, the layers recent evidence identifies as disproportionately important for generalization (<a href="https://openreview.net/forum?id=c5TFhCJ6fs" target="_blank">Queipo-de-Llano et al., ICLR 2026</a>; <a href="https://openreview.net/forum?id=Wxh5Xz7NpJ" target="_blank">Lad et al., NeurIPS 2025</a>; <a href="https://openreview.net/forum?id=oP3b5YBFoP" target="_blank">Ikeda et al., COLM 2025</a>; <a href="https://openreview.net/forum?id=WGXb7UdvTX" target="_blank">Skean et al., ICML 2025</a>). AdamW inflates PR<sub>post</sub> in early layers through aggressive repair but leaves middle layers underserved. Dion pushes capacity into early FFNs without yielding the best perplexity. The decisive pattern: perplexity tracks mid-layer spectral capacity, not early-layer effort. This suggests that optimizer selection should be evaluated not by aggregate training metrics but by where across depth the optimizer allocates effective representational capacity. These findings provide empirical evidence that optimizer geometry introduces qualitatively distinct representational biases, not merely different convergence rates, aligning with the recent position that optimizers should be leveraged as explicit sources of inductive biases (<a href="https://arxiv.org/abs/2507.12224" target="_blank">Pascanu et al., 2025</a>).
</p>
</div>
</div>
</div>
</div>
</section>
<!-- Section 4: MLP-Mixer -->
<section class="section">
<div class="container is-max-desktop">
<div class="columns is-centered">
<div class="column is-full-width">
<h2 class="title is-3">Beyond attention: NerVE on MLP-Mixer</h2>
<!-- Figure -->
<div class="has-text-centered">
<img src="./static/images/mlp_mixer_combined_plots.png" alt="Eigenspectrum dynamics in MLP-Mixer" style="width: 100%;">
</div>
<!-- Figure Caption -->
<p class="has-text-justified" style="color: #888; font-size: 0.95rem; margin-top: 1rem; margin-bottom: 1.5rem; line-height: 1.6;">
<strong style="color: #666;">Figure:</strong> Eigenspectrum dynamics in MLP-Mixer under activation ablations. Rows correspond to the four activation configurations for token-mixing (FFN1) and channel-mixing (FFN2) layers, and columns (from left to right) show SE, PR, EEE, and JS for the channel-mixing FFNs (FFN2). Each panel traces pre- and post-activation metrics over training, showing that ReLU in the channel-mixing MLP (3rd and 4th rows) most strongly increases SE/PR and reduces EEE, reinjects variance into low-energy directions and flattens the spectrum.
</p>
<div class="content has-text-justified">
<p>
<strong>The variance reinjection pattern is not transformer-specific; it emerges wherever deep FFNs meet nonlinearity.</strong> MLP-Mixer removes self-attention entirely, isolating the contribution of FFN nonlinear transformations from attention-specific dynamics such as rank collapse. We apply NerVE to <a href="https://arxiv.org/abs/2105.01601" target="_blank">MLP-Mixer</a> (B/16), a pure-MLP architecture, trained on CIFAR-100. The same core pattern holds: post-activation SE and PR rise above their pre-activation counterparts throughout training, EEE drops, and the nonlinearity actively flattens the eigenspectrum across all four activation configurations tested. NerVE further reveals that activation choice in the channel-mixing MLP (the component analogous to transformer FFNs) has a far stronger spectral impact than activation choice in the token-mixing MLP, identifying <em>which</em> nonlinearity matters most. The optimizer story also extends: SGD achieves higher post-activation SE and PR than Adam throughout training, correlating with better accuracy (68.07% vs 66.96%), confirming that optimizer-dependent spectral dynamics are not a transformer-specific phenomenon.
</p>
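<p>
The same instrumentation carries over to the pure-MLP setting. An illustrative sketch follows, assuming a timm-style MLP-Mixer whose blocks expose <code>mlp_channels.fc1</code> and <code>mlp_channels.act</code> (exact attribute names depend on your timm version), probing the channel-mixing FFN at the same two points.
</p>
<pre><code># Illustrative probe of the channel-mixing MLP in MLP-Mixer
# (assumes timm's mixer_b16_224 layout; attribute names may differ across timm versions).
import timm
import torch

model = timm.create_model("mixer_b16_224", pretrained=False, num_classes=100).eval()
captured = {}  # layer index -> {"pre": tensor, "post": tensor}

def make_hook(layer, tag):
    def hook(module, inputs, output):
        captured.setdefault(layer, {})[tag] = output.detach().flatten(0, 1).float().cpu()
    return hook

for i, block in enumerate(model.blocks):
    block.mlp_channels.fc1.register_forward_hook(make_hook(i, "pre"))   # after the up-projection
    block.mlp_channels.act.register_forward_hook(make_hook(i, "post"))  # after the nonlinearity

with torch.no_grad():
    model(torch.randn(2, 3, 224, 224))   # dummy batch; use CIFAR-100 inputs resized to 224 in practice
</code></pre>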
</div>
</div>
</div>
</div>
</section>
<!-- Section 5: Generalization -->
<section class="section">
<div class="container is-max-desktop">
<div class="columns is-centered">
<div class="column is-full-width">
<h2 class="title is-3">NerVE metrics predict generalization without evaluation</h2>
<div class="content has-text-justified">
<p>
<strong>NerVE metrics are not just descriptive; they track generalization with near-perfect correlation.</strong> Pre-activation SE and PR correlate with validation loss at |<em>r</em>| ≥ 0.97 across every FFN width configuration tested, throughout training. This means spectral health can be monitored with a single forward pass, no gradient computation, no validation set evaluation. Post-activation correlations strengthen as FFN width increases, suggesting that a modest width is needed before the spectral signal becomes generalization-predictive.
</p>
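<p>
The within-run tracking reported in Table A below reduces to a single Pearson correlation per (metric, width) pair between a logged metric trajectory and the validation-loss trajectory at the same checkpoints. An illustrative sketch of that computation, with hypothetical numbers:
</p>
<pre><code># Within-run tracking: Pearson r between a logged eigen-metric and validation loss over checkpoints
# (illustrative sketch of the computation behind Table A; the values below are hypothetical).
import numpy as np

def within_run_correlation(metric_per_ckpt, val_loss_per_ckpt):
    """Pearson r between a metric trajectory and the validation-loss trajectory."""
    m = np.asarray(metric_per_ckpt, dtype=float)
    v = np.asarray(val_loss_per_ckpt, dtype=float)
    return float(np.corrcoef(m, v)[0, 1])

# Hypothetical logs: pre-activation SE rises while validation loss falls -> strong negative r.
se_pre   = [3.1, 3.6, 4.0, 4.3, 4.5, 4.6]
val_loss = [5.2, 4.1, 3.5, 3.1, 2.9, 2.8]
print(f"r(SE_pre, val loss) = {within_run_correlation(se_pre, val_loss):+.2f}")
</code></pre>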
</div>
<!-- Table A: Within-run tracking -->
<div style="margin: 1.5rem 0; overflow-x: auto;">
<table style="margin: 0 auto; border-collapse: collapse; font-size: 0.95rem; width: 100%;">
<thead>
<tr>
<th style="border-bottom: 2px solid #333; padding: 0.5rem 0.6rem;"></th>
<th colspan="8" style="border-bottom: 2px solid #333; padding: 0.5rem 0.6rem; text-align: center; font-weight: 600;">FFN Width Configuration (GPT-2 GELU)</th>
</tr>
<tr>
<th style="border-bottom: 1px solid #999; padding: 0.5rem 0.6rem; text-align: left;">Metric</th>
<th style="border-bottom: 1px solid #999; padding: 0.5rem 0.6rem; text-align: center;">D=1d</th>
<th style="border-bottom: 1px solid #999; padding: 0.5rem 0.6rem; text-align: center;">D=2d</th>
<th style="border-bottom: 1px solid #999; padding: 0.5rem 0.6rem; text-align: center;">D=3d</th>
<th style="border-bottom: 1px solid #999; padding: 0.5rem 0.6rem; text-align: center;">D=4d</th>
<th style="border-bottom: 1px solid #999; padding: 0.5rem 0.6rem; text-align: center;">D=5d</th>
<th style="border-bottom: 1px solid #999; padding: 0.5rem 0.6rem; text-align: center;">D=6d</th>
<th style="border-bottom: 1px solid #999; padding: 0.5rem 0.6rem; text-align: center;">D=7d</th>
<th style="border-bottom: 1px solid #999; padding: 0.5rem 0.6rem; text-align: center;">D=8d</th>
</tr>
</thead>
<tbody>
<tr>
<td style="padding: 0.5rem 0.6rem;">SE_pre</td>
<td style="padding: 0.5rem 0.6rem; text-align: center;">-0.98</td>
<td style="padding: 0.5rem 0.6rem; text-align: center;">-0.98</td>
<td style="padding: 0.5rem 0.6rem; text-align: center; font-weight: 700;">-0.99</td>
<td style="padding: 0.5rem 0.6rem; text-align: center; font-weight: 700;">-0.99</td>
<td style="padding: 0.5rem 0.6rem; text-align: center; font-weight: 700;">-0.99</td>
<td style="padding: 0.5rem 0.6rem; text-align: center; font-weight: 700;">-0.99</td>
<td style="padding: 0.5rem 0.6rem; text-align: center; font-weight: 700;">-0.99</td>
<td style="padding: 0.5rem 0.6rem; text-align: center; font-weight: 700;">-0.99</td>
</tr>
<tr>
<td style="padding: 0.5rem 0.6rem;">SE_post</td>
<td style="padding: 0.5rem 0.6rem; text-align: center;">-0.84</td>
<td style="padding: 0.5rem 0.6rem; text-align: center;">-0.84</td>
<td style="padding: 0.5rem 0.6rem; text-align: center;">-0.86</td>
<td style="padding: 0.5rem 0.6rem; text-align: center;">-0.87</td>
<td style="padding: 0.5rem 0.6rem; text-align: center;">-0.87</td>
<td style="padding: 0.5rem 0.6rem; text-align: center;">-0.87</td>
<td style="padding: 0.5rem 0.6rem; text-align: center;">-0.87</td>
<td style="padding: 0.5rem 0.6rem; text-align: center;">-0.87</td>
</tr>
<tr style="border-top: 1px solid #ddd;">
<td style="padding: 0.5rem 0.6rem;">PR_pre</td>
<td style="padding: 0.5rem 0.6rem; text-align: center;">-0.97</td>
<td style="padding: 0.5rem 0.6rem; text-align: center;">-0.98</td>
<td style="padding: 0.5rem 0.6rem; text-align: center;">-0.98</td>
<td style="padding: 0.5rem 0.6rem; text-align: center; font-weight: 700;">-0.99</td>
<td style="padding: 0.5rem 0.6rem; text-align: center;">-0.98</td>
<td style="padding: 0.5rem 0.6rem; text-align: center;">-0.97</td>
<td style="padding: 0.5rem 0.6rem; text-align: center;">-0.98</td>
<td style="padding: 0.5rem 0.6rem; text-align: center;">-0.97</td>
</tr>
<tr>
<td style="padding: 0.5rem 0.6rem; border-bottom: 2px solid #333;">PR_post</td>
<td style="padding: 0.5rem 0.6rem; text-align: center; border-bottom: 2px solid #333;">-0.85</td>
<td style="padding: 0.5rem 0.6rem; text-align: center; border-bottom: 2px solid #333;">-0.93</td>
<td style="padding: 0.5rem 0.6rem; text-align: center; border-bottom: 2px solid #333;">-0.94</td>
<td style="padding: 0.5rem 0.6rem; text-align: center; border-bottom: 2px solid #333;">-0.94</td>
<td style="padding: 0.5rem 0.6rem; text-align: center; border-bottom: 2px solid #333;">-0.95</td>
<td style="padding: 0.5rem 0.6rem; text-align: center; border-bottom: 2px solid #333;">-0.95</td>
<td style="padding: 0.5rem 0.6rem; text-align: center; border-bottom: 2px solid #333;">-0.93</td>
<td style="padding: 0.5rem 0.6rem; text-align: center; border-bottom: 2px solid #333;">-0.93</td>
</tr>
</tbody>
</table>
<p class="has-text-justified" style="color: #888; font-size: 0.95rem; margin-top: 0.75rem; line-height: 1.6;">
<strong style="color: #666;">Table A (Within-run tracking):</strong> Pearson <em>r</em> between each metric and validation loss over training checkpoints at each FFN width (D=1d to 8d). Pre-activation correlations exceed |<em>r</em>| ≥ 0.97 at every width. Post-activation PR strengthens from 0.85 at D=1d to ≥ 0.93 at D ≥ 2d, suggesting a modest FFN width is required for generalization-predictive spectral signatures.
</p>
</div>
<div class="content has-text-justified">
<p>
<strong>Short runs can rank architectures without training to convergence.</strong> Across eight FFN width configurations and multiple activation variants, final spectral metric values correlate strongly with final perplexity (|<em>r</em>| ≥ 0.85). The notable exceptions are the normalization-free ReLU and Leaky ReLU variants, where pre-activation correlations weaken while post-activation correlations strengthen. This directly reflects the compensatory dynamics identified earlier: when the nonlinearity overcompensates, the post-activation spectrum becomes the more informative diagnostic. NerVE tells you not only what to measure, but which measurement to trust in each regime.
</p>
</div>
<!-- Table B: Cross-configuration ranking -->
<div style="margin: 1.5rem 0; overflow-x: auto;">
<table style="margin: 0 auto; border-collapse: collapse; font-size: 0.95rem; width: 100%;">
<thead>
<tr>
<th style="border-bottom: 2px solid #333; padding: 0.5rem 0.8rem;"></th>
<th colspan="4" style="border-bottom: 2px solid #333; padding: 0.5rem 0.8rem; text-align: center; font-weight: 600;">GPT-2</th>
<th colspan="3" style="border-bottom: 2px solid #333; border-left: 2px solid #333; padding: 0.5rem 0.8rem; text-align: center; font-weight: 600;">NormFree GPT-2</th>
</tr>
<tr>
<th style="border-bottom: 1px solid #999; padding: 0.5rem 0.8rem; text-align: left;">Metric</th>
<th style="border-bottom: 1px solid #999; padding: 0.5rem 0.8rem; text-align: center;">GELU</th>
<th style="border-bottom: 1px solid #999; padding: 0.5rem 0.8rem; text-align: center;">ReLU</th>
<th style="border-bottom: 1px solid #999; padding: 0.5rem 0.8rem; text-align: center;">GeGLU</th>
<th style="border-bottom: 1px solid #999; padding: 0.5rem 0.8rem; text-align: center;">SwiGLU</th>
<th style="border-bottom: 1px solid #999; border-left: 2px solid #333; padding: 0.5rem 0.8rem; text-align: center;">GELU</th>
<th style="border-bottom: 1px solid #999; padding: 0.5rem 0.8rem; text-align: center;">ReLU</th>
<th style="border-bottom: 1px solid #999; padding: 0.5rem 0.8rem; text-align: center;">LReLU</th>
</tr>
</thead>
<tbody>
<tr>
<td style="padding: 0.5rem 0.8rem;">SE_pre</td>
<td style="padding: 0.5rem 0.8rem; text-align: center; font-weight: 700;">-0.99</td>
<td style="padding: 0.5rem 0.8rem; text-align: center;">-0.98</td>
<td style="padding: 0.5rem 0.8rem; text-align: center;">-0.95</td>
<td style="padding: 0.5rem 0.8rem; text-align: center;">-0.97</td>
<td style="padding: 0.5rem 0.8rem; text-align: center; border-left: 2px solid #333;">-0.82</td>
<td style="padding: 0.5rem 0.8rem; text-align: center; color: #e74c3c;">0.03</td>
<td style="padding: 0.5rem 0.8rem; text-align: center; color: #e74c3c;">0.03</td>
</tr>
<tr>
<td style="padding: 0.5rem 0.8rem;">SE_post</td>
<td style="padding: 0.5rem 0.8rem; text-align: center; font-weight: 700;">-1.00</td>
<td style="padding: 0.5rem 0.8rem; text-align: center; font-weight: 700;">-1.00</td>
<td style="padding: 0.5rem 0.8rem; text-align: center;">-0.57</td>
<td style="padding: 0.5rem 0.8rem; text-align: center;">-0.85</td>
<td style="padding: 0.5rem 0.8rem; text-align: center; border-left: 2px solid #333;">-0.92</td>
<td style="padding: 0.5rem 0.8rem; text-align: center; font-weight: 700;">-0.99</td>
<td style="padding: 0.5rem 0.8rem; text-align: center; font-weight: 700;">-1.00</td>
</tr>
<tr style="border-top: 1px solid #ddd;">
<td style="padding: 0.5rem 0.8rem;">PR_pre</td>
<td style="padding: 0.5rem 0.8rem; text-align: center; font-weight: 700;">-0.99</td>
<td style="padding: 0.5rem 0.8rem; text-align: center;">-0.98</td>
<td style="padding: 0.5rem 0.8rem; text-align: center;">-0.97</td>
<td style="padding: 0.5rem 0.8rem; text-align: center;">-0.97</td>
<td style="padding: 0.5rem 0.8rem; text-align: center; border-left: 2px solid #333;">-0.93</td>
<td style="padding: 0.5rem 0.8rem; text-align: center; color: #e74c3c;">-0.55</td>
<td style="padding: 0.5rem 0.8rem; text-align: center; color: #e74c3c;">-0.60</td>
</tr>
<tr>
<td style="padding: 0.5rem 0.8rem; border-bottom: 2px solid #333;">PR_post</td>
<td style="padding: 0.5rem 0.8rem; text-align: center; border-bottom: 2px solid #333; font-weight: 700;">-1.00</td>
<td style="padding: 0.5rem 0.8rem; text-align: center; border-bottom: 2px solid #333;">-0.97</td>
<td style="padding: 0.5rem 0.8rem; text-align: center; border-bottom: 2px solid #333;">-0.94</td>
<td style="padding: 0.5rem 0.8rem; text-align: center; border-bottom: 2px solid #333;">-0.89</td>
<td style="padding: 0.5rem 0.8rem; text-align: center; border-bottom: 2px solid #333; border-left: 2px solid #333; font-weight: 700;">-0.99</td>
<td style="padding: 0.5rem 0.8rem; text-align: center; border-bottom: 2px solid #333;">-0.94</td>
<td style="padding: 0.5rem 0.8rem; text-align: center; border-bottom: 2px solid #333; font-weight: 700;">-0.99</td>
</tr>
</tbody>
</table>
<p class="has-text-justified" style="color: #888; font-size: 0.95rem; margin-top: 0.75rem; line-height: 1.6;">
<strong style="color: #666;">Table B (Cross-configuration ranking):</strong> Pearson <em>r</em> between final metric values and final perplexity across eight width configurations, for each architecture and activation variant. Correlations remain strong (|<em>r</em>| ≥ 0.85) across most configurations. The notable exception: NormFree ReLU and LReLU, where pre-activation correlations weaken (<span style="color: #e74c3c;">red values</span>) while post-activation correlations stay strong, reflecting the compensatory overcompensation dynamics.
</p>
</div>
</div>
</div>
</div>
</section>
<section class="section" id="BibTeX">
<div class="container is-max-desktop content">
<h2 class="title">BibTeX</h2>
<pre><code>@inproceedings{jha2026nerve,
title={NerVE: Nonlinear Eigenspectrum Dynamics in LLM Feed-Forward Networks},
author={Nandan Kumar Jha and Brandon Reagen},
booktitle={The Fourteenth International Conference on Learning Representations (ICLR)},
year={2026}
}</code></pre>
</div>
</section>
<footer class="footer">
<div class="container">
<div class="content has-text-centered">
<a class="icon-link" href="https://github.com/nerve-eigenspectrum/NerVE" class="external-link" target="_blank" >
<i class="fab fa-github"></i>
</a>
</div>
<div class="columns is-centered">
<div class="column is-8">
<div class="content">
<p>
This website template is borrowed from the <a
href="https://github.com/nerfies/nerfies.github.io">Nerfies</a> project page.
</p>
</div>
</div>
</div>
</div>
</footer>
</body>
</html>