
Revamp dashboard UI with section navigation and code cleanup #15

Open
Harshith-umesh wants to merge 4 commits into openshift-psap:main from Harshith-umesh:main-production

Conversation


@Harshith-umesh (Member) commented Mar 9, 2026

Summary

  • Replace stacked-expander layout with a sticky sidebar section navigator (radio-based) so users view one section at a time instead of scrolling through collapsed expanders. The nav is collapsible for a full-width view.
  • Add deep-linking support: the URL now encodes the active section and its section-specific filter state (e.g., Compare Versions selections, Performance Trends filters), making views fully shareable.
  • Deduplicate utility code: extract geometric_mean() and compare_two_datasets() as top-level functions, removing ~150 lines of copy-pasted logic across Compare Versions, Compare Models, and Performance Trends sections.
  • Vectorize assign_profile() with np.select for better performance on large DataFrames.
  • Consolidate and modernize CSS: replace deprecated Streamlit class selectors (.css-*, .stInfo, etc.) with stable data-testid attribute selectors, scope * { color } overrides to .stApp descendants, and reduce ~50 lines of redundant expander rules into a single consolidated block.
  • Simplify filter initialization by merging duplicated URL/default branches and curating default accelerator selections (B200, B300, H200, MI300X).
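The extracted geometric_mean() helper itself is not shown in this thread; a minimal sketch of what such a top-level utility might look like (the NaN filtering and non-positive handling are assumptions, not confirmed code from the PR):

```python
import numpy as np

def geometric_mean(values):
    """Geometric mean of positive values, ignoring NaNs (illustrative sketch)."""
    arr = np.asarray(values, dtype=float)
    arr = arr[~np.isnan(arr)]
    if arr.size == 0 or (arr <= 0).any():
        return float("nan")
    # exp(mean(log(x))) avoids overflow compared with prod(x) ** (1 / n)
    return float(np.exp(np.log(arr).mean()))
```

Hoisting a helper like this to module level is what lets the Compare Versions, Compare Models, and Performance Trends sections share one implementation instead of three copies.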

Summary by CodeRabbit

  • New Features

    • Added section-based navigation and filtering
    • Implemented shareable URLs that preserve current section and filter states
    • Added collapsible section layouts across the dashboard
  • Improvements

    • Enhanced section navigation styling with visual indicators
    • Improved handling of edge cases with missing data and empty datasets
    • Refined data display organization with optional collapsible sections


coderabbitai bot commented Mar 9, 2026

📝 Walkthrough

The PR refactors the dashboard by introducing vectorized profile assignment, new helper functions for dataset comparison and geometric mean calculations, and a comprehensive section-based navigation system with URL state synchronization and optional expander UI rendering across multiple sections.

Changes

  • Dashboard Core Refactoring — dashboard.py
    Added helper functions geometric_mean(), compare_two_datasets(), and assign_profile_vectorized() to replace per-row logic. Updated 12+ render functions with a use_expander parameter for conditional context management. Implemented section-based navigation with dynamic rendering (_render_selected_section()), SECTION_TO_SLUG/SLUG_TO_SECTION mappings, and SECTION_FILTER_KEYS for filter state management. Enhanced URL sharing via build_share_url() and improved decode_filters_from_url() to sync section state. Replaced assign_profile() with the vectorized approach.
  • Dashboard Styling Enhancements — dashboard_styles.py
    Added sticky section navigation UI styling with radiogroup-based controls, chevron indicators, and active/hover states. Refactored selectors to use data-testid attributes and role-based targeting instead of generic selectors. Updated styling for expanders, dataframes, code blocks, tooltips, and alerts across light/dark/auto modes. Improved visual consistency for selected navigation items and expanded content containers.
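The slug mappings named above (SECTION_TO_SLUG, SLUG_TO_SECTION) are what tie the URL's section parameter to the active view. A hypothetical sketch of how that round trip could work — section names and slugs here are illustrative, not copied from dashboard.py:

```python
# Illustrative mapping; the real SECTION_TO_SLUG may contain different entries.
SECTION_TO_SLUG = {
    "Compare Versions": "compare_versions",
    "Compare Models": "compare_models",
    "Performance Trends": "performance_trends",
}
SLUG_TO_SECTION = {slug: name for name, slug in SECTION_TO_SLUG.items()}

def resolve_section(query_params, default="Compare Versions"):
    """Map a ?section=<slug> query param back to a section name."""
    slug = query_params.get("section", "")
    return SLUG_TO_SECTION.get(slug, default)
```

On load the dashboard can call resolve_section() on the query params to restore the shared view, and write SECTION_TO_SLUG[active] back when the user navigates.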

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~50 minutes

Poem

🐰 With whiskers twitching, I refactor with care,
Vectorized profiles floating through the air,
New helpers summon, comparisons bloom,
Navigation sections light up the room,
URLs sync with state, so sections stay true—
A dashboard refined, polished anew!

🚥 Pre-merge checks | ✅ 3 passed

  • Description Check — ✅ Passed: check skipped; CodeRabbit’s high-level summary is enabled.
  • Title Check — ✅ Passed: the title 'Revamp dashboard UI with section navigation and code cleanup' accurately captures the main changes: UI revamp with section navigation, and code cleanup including deduplication and vectorization.
  • Docstring Coverage — ✅ Passed: docstring coverage is 96.67%, above the required threshold of 80.00%.


@coderabbitai bot left a comment
Actionable comments posted: 7

🧹 Nitpick comments (1)
dashboard_styles.py (1)

449-452: Avoid recoloring every descendant div in .stApp.

Including .stApp div here is very broad and will bleed into Streamlit/BaseWeb/Plotly internals, which makes the theme brittle and forces lots of compensating overrides elsewhere. It would be safer to target text-bearing elements and dashboard-owned containers only.

Also applies to: 809-812

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@dashboard_styles.py` around lines 449 - 452, The selector is too broad—remove
`.stApp div` from the global rule and instead limit the color override to
text-bearing elements and your dashboard-owned containers; update the selector
`.stApp, .stApp p, .stApp span, .stApp label, .stApp li` (and the duplicate
instance around lines 809-812) to exclude generic `div` and explicitly include
only text elements and your app-specific wrappers (e.g. headings h1–h6, a,
.your-dashboard-container, .card, .metric) so Streamlit/BaseWeb/Plotly internals
are not recolored.
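Since dashboard_styles.py builds its CSS as Python strings, the narrower override suggested above could be sketched like this. The selector list is illustrative (the real rule and variable names in dashboard_styles.py may differ):

```python
# Color override scoped to text-bearing elements only; deliberately omits
# the overly broad `.stApp div`, so Streamlit/BaseWeb/Plotly internals keep
# their own colors.
TEXT_COLOR_RULE = """
.stApp, .stApp p, .stApp span, .stApp label, .stApp li,
.stApp h1, .stApp h2, .stApp h3, .stApp h4, .stApp h5, .stApp h6, .stApp a {
    color: var(--text-color);
}
"""

def build_text_color_css():
    """Return the scoped text-color rule for injection via st.markdown."""
    return TEXT_COLOR_RULE
```

Dashboard-owned containers (cards, metric wrappers) can then be added to the list explicitly instead of inheriting through a blanket div rule.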
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@dashboard_styles.py`:
- Around line 195-201: The CSS rule that sets content: "›" uses a literal
ambiguous Unicode character and fails Ruff RUF001; update the content value in
the selector rule (the block containing
[data-testid="stColumn"]:has(.section-nav-marker) ... ::after) to use a CSS
escape for the chevron code point instead, e.g. replace "›" with "\203A" (so
content: "\203A";) to satisfy the linter.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 062a9489-144e-4825-94a1-52275f5d01b0

📥 Commits

Reviewing files that changed from the base of the PR and between 52a7418 and 3884e9a.

📒 Files selected for processing (2)
  • dashboard.py
  • dashboard_styles.py

Comment on lines +748 to +749
[data-testid="stAlert"][data-baseweb="notification"][kind="info"],
[data-testid="stNotification"][kind="info"] {

⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain



🌐 Web query:

Streamlit 1.49.1 notification kind attribute DOM structure

💡 Result:

In Streamlit 1.49.1, the “notification” UI you see in the DOM comes from Base Web’s Notification component, and Streamlit’s kind is a React prop (it is not emitted as a kind="..." DOM attribute). Instead, kind affects the generated classes/styles.

Alerts (st.error, st.warning, st.info, st.success)

Streamlit renders Base Web Notification with kind={getNotificationKind(kind)} where:

  • error → KIND.negative
  • info → KIND.info
  • success → KIND.positive
  • warning → KIND.warning [1]

Streamlit also sets stable test IDs:

  • Notification Body gets: data-testid="stAlertContainer" and class="stAlertContainer" via overrides [1]
  • Inner content wrapper gets: data-testid="stAlertContentError|Info|Success|Warning" (capitalized kind) [1]

Typical DOM shape (simplified):

<div role="alert" data-baseweb="notification" ...>
  ...
  <div data-testid="stAlertContainer" class="stAlertContainer">...</div>
  <div data-testid="stAlertContentError">...</div>
</div>

Base Web’s Notification root commonly includes role="alert" and data-baseweb="notification" in the DOM [2].

Toasts (st.toast)

Toasts use Base Web’s toaster.info(...) (so they’re created through the toaster system, not the Notification component directly) [3]. Streamlit sets:

  • Toast Body overrides: data-testid="stToast" and class="stToast" [3]
  • Optional icon: data-testid="stToastDynamicIcon" [3]
  • Expand button for long messages: data-testid="stToastViewButton" [3]

Key point: if you’re trying to select by “kind” in the DOM, use Streamlit’s data-testid hooks above (or Base Web’s data-baseweb="notification" for alerts), because kind itself won’t appear as a DOM attribute.



🌐 Web query:

Streamlit stAlert stNotification kind attribute CSS selector

💡 Result:

In Streamlit, the “alert” boxes (st.info, st.success, st.warning, st.error) render a container like:

  • outer wrapper: div.stAlert
  • inner BaseWeb notification root: div[role="alert"][data-baseweb="notification"] (huggingface.co)

So you can target all alerts with a selector like:

/* all Streamlit alert boxes */
.stAlert [data-baseweb="notification"] {
  /* your styles */
}

But there is no stable kind="..." HTML attribute on the rendered DOM to select “info vs warning vs error” via an attribute selector (the “kind” is a React prop in BaseWeb’s Notification API, not a DOM attribute). (v8.baseweb.design)


Remove kind="..." attribute selectors—this attribute does not exist in Streamlit 1.49.1 rendered DOM.

In Streamlit 1.49.1, kind is a React prop passed to Base Web's Notification component, not an HTML attribute. The rendered DOM contains data-baseweb="notification" and role-based attributes, but no kind="..." attribute. These selectors will never match, and the theme rules won't apply.

Use Streamlit's stable data-testid attributes instead:

  • data-testid="stAlertContentInfo" for info alerts
  • data-testid="stAlertContentSuccess" for success alerts
  • data-testid="stAlertContentWarning" for warning alerts
  • data-testid="stAlertContentError" for error alerts

Or target via data-baseweb="notification" if styling all alerts uniformly.

Applies to: 748-749, 756-757, 764-765, 772-773

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@dashboard_styles.py` around lines 748 - 749, Remove the invalid attribute
selectors that use kind="..." (e.g.
[data-testid="stAlert"][data-baseweb="notification"][kind="info"]) and replace
them with stable DOM selectors: either target all notifications via
[data-baseweb="notification"] or use Streamlit's data-testid values for each
severity (data-testid="stAlertContentInfo", "stAlertContentSuccess",
"stAlertContentWarning", "stAlertContentError") so the CSS rules match the
rendered DOM; update every occurrence referenced in the diff (the
info/success/warning/error selector groups) accordingly.
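A sketch of the recommended replacement, expressed as the kind of CSS-building helper dashboard_styles.py could use. The data-testid values come from the analysis above; the helper name and placeholder colors are assumptions:

```python
# Stable per-severity hooks identified in the review; `kind` never appears
# as a DOM attribute, so these testids are the reliable selectors.
ALERT_TESTIDS = {
    "info": "stAlertContentInfo",
    "success": "stAlertContentSuccess",
    "warning": "stAlertContentWarning",
    "error": "stAlertContentError",
}

def alert_css(kind, color):
    """Build a severity-specific alert rule using data-testid selectors."""
    test_id = ALERT_TESTIDS[kind]
    return f'[data-testid="{test_id}"] {{ color: {color}; }}'
```

Styling all alerts uniformly can instead target [data-baseweb="notification"] in a single rule.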

Comment on lines +261 to +277
def assign_profile_vectorized(df):
    """Assigns human-readable profile names based on token counts (vectorized)."""
    prompt = df["prompt toks"]
    output = df["output toks"]
    conditions = [
        (prompt == 1000) & (output == 1000),
        (prompt == 512) & (output == 2048),
        (prompt == 2048) & (output == 128),
        (prompt == 8000) & (output == 1000),
    ]
    choices = [
        "Profile A: Balanced (1k/1k)",
        "Profile B: Variable Workload (512/2k)",
        "Profile C: Large Prompt (2k/128)",
        "Profile D: Prefill Heavy (8k/1k)",
    ]
    return np.select(conditions, choices, default="Custom")

⚠️ Potential issue | 🟠 Major

Restore the full profile taxonomy before vectorizing this path.

8k/1k is now emitted as "Profile D: Prefill Heavy (8k/1k)", while the rest of the file still expects "Profile E: Prefill Heavy (8k/1k)"; 32k/256 and 1k/100 also fall through to "Custom". That changes filter membership, makes sections appear/disappear incorrectly, and breaks exact-name lookups such as the measured-power tables in the energy section. It also drifts from llmd_dashboard.py's current A-F mapping.

Suggested fix
 def assign_profile_vectorized(df):
     """Assigns human-readable profile names based on token counts (vectorized)."""
     prompt = df["prompt toks"]
     output = df["output toks"]
     conditions = [
         (prompt == 1000) & (output == 1000),
         (prompt == 512) & (output == 2048),
         (prompt == 2048) & (output == 128),
+        (prompt == 32000) & (output == 256),
         (prompt == 8000) & (output == 1000),
+        (prompt == 1000) & (output == 100),
     ]
     choices = [
         "Profile A: Balanced (1k/1k)",
         "Profile B: Variable Workload (512/2k)",
         "Profile C: Large Prompt (2k/128)",
-        "Profile D: Prefill Heavy (8k/1k)",
+        "Profile D: Prefill Heavy (32k/256)",
+        "Profile E: Prefill Heavy (8k/1k)",
+        "Profile F: Prefill Heavy (1k/100)",
     ]
     return np.select(conditions, choices, default="Custom")
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@dashboard.py` around lines 261 - 277, The vectorized mapper
assign_profile_vectorized currently emits an incorrect and incomplete set of
profile labels (e.g., emitting "Profile D: Prefill Heavy (8k/1k)" and missing
"32k/256" and "1k/100"), which breaks exact-name lookups and mismatches
llmd_dashboard.py's A–F mapping; update assign_profile_vectorized to restore the
full profile taxonomy and exact names used elsewhere: add conditions for the
missing token configurations (32k/256 and 1k/100), rename the 8k/1k case to
"Profile E: Prefill Heavy (8k/1k)" and adjust all other choices to match the A–F
mapping in llmd_dashboard.py so that filters and measured-power table lookups
resolve exactly to those strings.

Comment on lines +3261 to +3283
# Sync Compare Versions filters to URL (runs inside @st.fragment)
_cv_url_params = {}
_cv_keys = {
    "cv_v1": "compare_summary_v1",
    "cv_v2": "compare_summary_v2",
    "cv_gpu": "compare_summary_accelerator",
    "cv_profile": "compare_summary_profile",
}
for url_key, ss_key in _cv_keys.items():
    val = st.session_state.get(ss_key)
    if val is not None:
        _cv_url_params[url_key] = str(val)
cv_v1 = st.session_state.get("compare_summary_v1")
cv_v2 = st.session_state.get("compare_summary_v2")
cv_gpu = st.session_state.get("compare_summary_accelerator")
cv_prof = st.session_state.get("compare_summary_profile")
if all([cv_v1, cv_v2, cv_gpu, cv_prof]):
    conc_key = f"compare_summary_conc_{cv_v1}_{cv_v2}_{cv_gpu}_{cv_prof}"
    conc_val = st.session_state.get(conc_key)
    if conc_val is not None and isinstance(conc_val, list):
        _cv_url_params["cv_conc"] = ",".join(map(str, conc_val))
with contextlib.suppress(Exception):
    st.query_params.update(_cv_url_params)

⚠️ Potential issue | 🟠 Major

Move the Compare Versions URL sync out of the success-only tail.

This fragment returns early for common states like “no data”, “no common model+TP”, or “please select a second version” before reaching the query-param update block. Because @st.fragment reruns independently, the address bar can keep the previous comparison even though the UI is now showing a different/invalid one.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@dashboard.py` around lines 3261 - 3283, The URL-sync of Compare Versions is
currently only reached in the success path and can be skipped when the fragment
early-returns; move the logic that builds and updates _cv_url_params (the
_cv_keys loop, the conc_key/conc_val handling for "cv_conc", and the
st.query_params.update call wrapped in contextlib.suppress) out of the
success-only tail so it always runs regardless of UI early returns; keep using
the same session_state keys ("compare_summary_v1", "compare_summary_v2",
"compare_summary_accelerator", "compare_summary_profile") and preserve the
",".join(map(str, conc_val)) behavior when conc_val is a list.

Comment on lines +6827 to +6830
"compare_models": {
"cm_m1": "compare_models_m1",
"cm_m2": "compare_models_m2",
},

⚠️ Potential issue | 🟠 Major

Compare Models links still lose the selected concurrency set.

render_compare_models_section() stores its geometric-mean selection under compare_models_conc_{model_1}_{model_2}, but only Compare Versions gets special URL handling here. Shared links for the Compare Models section will therefore reopen with the default concurrency selection and can show different percentage deltas.

Also applies to: 7779-7791

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@dashboard.py` around lines 6827 - 6830, The Compare Models links lose the
per-pair concurrency selection because render_compare_models_section stores the
selection under keys like compare_models_conc_{model_1}_{model_2} but the
shared-link mapping for the Compare Models section (the "compare_models" dict /
keys "cm_m1" and "cm_m2") doesn't include URL handling for that field; update
the shared-link handling to include the compare_models_conc_{model_1}_{model_2}
parameter (or append the current concurrency value as a URL param when
generating the shared link) so that when compare_models_m1/compare_models_m2 are
opened they read and restore the geometric-mean selection from
compare_models_conc_{model_1}_{model_2}; ensure the code path that builds shared
links mirrors the special-case logic used for Compare Versions.
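A hypothetical sketch of the extension the review asks for: encode the per-pair concurrency alongside cm_m1/cm_m2 when building the shared link. The "cm_conc" URL key and the helper name are invented here for illustration; only the compare_models_conc_{m1}_{m2} session key comes from the review:

```python
def compare_models_url_params(session_state):
    """Build Compare Models URL params, including the per-pair concurrency."""
    params = {}
    m1 = session_state.get("compare_models_m1")
    m2 = session_state.get("compare_models_m2")
    if m1:
        params["cm_m1"] = str(m1)
    if m2:
        params["cm_m2"] = str(m2)
    if m1 and m2:
        # Mirror the Compare Versions special case: the concurrency set lives
        # under a key derived from the currently selected pair.
        conc = session_state.get(f"compare_models_conc_{m1}_{m2}")
        if isinstance(conc, list):
            params["cm_conc"] = ",".join(map(str, conc))
    return params
```

On load, the decode path would read cm_conc and seed compare_models_conc_{m1}_{m2} so reopened links reproduce the same geometric-mean deltas.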

Comment on lines +7183 to 7191
default_accelerators = ["B200", "B300", "H200", "MI300X"]

if st.session_state.get("clear_all_filters", False) or st.session_state.get(
"filters_were_cleared", False
):
acc_default = []
elif st.session_state.get("reset_to_defaults", False):
baseline_accelerators = st.session_state.get(
"baseline_accelerators", accelerators
)
acc_default = [a for a in baseline_accelerators if a in accelerators]
acc_default = [a for a in default_accelerators if a in accelerators]
else:

⚠️ Potential issue | 🟠 Major

Reset-to-defaults should honor the URL baseline for accelerators too.

The reset branch falls back to the hard-coded accelerator list instead of baseline_accelerators. A shared URL seeded with accelerators=MI300X, for example, will reset to B200/B300/H200/MI300X rather than the shared baseline, which breaks the button contract.

Suggested fix
         elif st.session_state.get("reset_to_defaults", False):
-            acc_default = [a for a in default_accelerators if a in accelerators]
+            baseline_accelerators = st.session_state.get(
+                "baseline_accelerators",
+                [a for a in default_accelerators if a in accelerators],
+            )
+            acc_default = [a for a in baseline_accelerators if a in accelerators]
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@dashboard.py` around lines 7183 - 7191, The reset branch currently uses the
hard-coded default_accelerators; change it to honor the URL-seeded baseline by
using baseline_accelerators when st.session_state.get("reset_to_defaults",
False) is true (falling back to default_accelerators if baseline_accelerators is
missing), i.e., set acc_default to the intersection of baseline_accelerators (or
default_accelerators if baseline not present) with accelerators so the "reset"
button restores the URL baseline rather than the hard-coded list.
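The suggested fix can be restated as a small standalone function, which makes the fallback order explicit (URL-seeded baseline first, hard-coded defaults second, both intersected with what is actually available). The function name is illustrative:

```python
DEFAULT_ACCELERATORS = ["B200", "B300", "H200", "MI300X"]

def reset_accelerators(session_state, available):
    """Resolve the accelerator selection for the reset-to-defaults branch."""
    # Prefer the baseline captured from the shared URL; fall back to the
    # hard-coded defaults only when no baseline was seeded.
    baseline = session_state.get(
        "baseline_accelerators",
        [a for a in DEFAULT_ACCELERATORS if a in available],
    )
    # Intersect with currently available accelerators either way.
    return [a for a in baseline if a in available]
```

With this, a link seeded with accelerators=MI300X resets back to MI300X only, honoring the button contract the review describes.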

Comment on lines +7749 to +7792
# Sync full URL state (main filters + section + section filters) in one atomic call
with contextlib.suppress(Exception):
    desired_params = {}
    # Preserve the view param
    if "view" in st.query_params:
        desired_params["view"] = st.query_params["view"]
    # Main filters
    if selected_accelerators:
        desired_params["accelerators"] = ",".join(selected_accelerators)
    if selected_models:
        desired_params["models"] = ",".join(selected_models)
    if selected_versions:
        desired_params["versions"] = ",".join(selected_versions)
    if selected_profile:
        desired_params["profile"] = selected_profile
    if selected_tp:
        desired_params["tp_sizes"] = ",".join(map(str, selected_tp))
    # Section + section-specific filters
    active = st.session_state.get("active_section")
    if active and active in SECTION_TO_SLUG:
        slug = SECTION_TO_SLUG[active]
        desired_params["section"] = slug
        if slug in SECTION_FILTER_KEYS:
            for url_key, ss_key in SECTION_FILTER_KEYS[slug].items():
                val = st.session_state.get(ss_key)
                if val is not None:
                    if isinstance(val, list):
                        desired_params[url_key] = ",".join(map(str, val))
                    else:
                        desired_params[url_key] = str(val)
        # Compare Versions: also encode the dynamic concurrency key
        if slug == "compare_versions":
            cv_v1 = st.session_state.get("compare_summary_v1")
            cv_v2 = st.session_state.get("compare_summary_v2")
            cv_gpu = st.session_state.get("compare_summary_accelerator")
            cv_prof = st.session_state.get("compare_summary_profile")
            if all([cv_v1, cv_v2, cv_gpu, cv_prof]):
                conc_key = (
                    f"compare_summary_conc_{cv_v1}_{cv_v2}_{cv_gpu}_{cv_prof}"
                )
                conc_val = st.session_state.get(conc_key)
                if conc_val is not None and isinstance(conc_val, list):
                    desired_params["cv_conc"] = ",".join(map(str, conc_val))
    st.query_params.from_dict(desired_params)

⚠️ Potential issue | 🟠 Major

Keep query params in sync even for empty filter combinations.

This rewrite only happens on the non-empty path, so once the user lands on a no-data combination the UI changes but the query string stays at the last valid state. Refresh/share links then reopen a different filter set than the one currently on screen.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@dashboard.py` around lines 7749 - 7792, The URL sync only sets params when
values are truthy, so empty filter combinations are not reflected; update the
logic around desired_params construction (variables: desired_params,
st.query_params.from_dict) to always set keys for main filters and
section-specific filters even when empty: for selected_accelerators,
selected_models, selected_versions, selected_profile, selected_tp always assign
the corresponding key to the joined string or an empty string; for section
handling (use SECTION_TO_SLUG, SECTION_FILTER_KEYS, active_section) always emit
each url_key with either the joined/str(value) or "" (treat None/empty lists as
""); for the compare_versions path (compare_summary_v1/v2/accelerator/profile
and dynamic conc_key) ensure desired_params["cv_conc"] is set to the joined list
or "" so the query string matches the current empty state, then call
st.query_params.from_dict(desired_params) as before.
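The always-emit variant the review describes can be sketched for the main filters like this. build_filter_params and its input shape are hypothetical; the point is that every key is written even when its selection is empty, so the query string always mirrors the on-screen state:

```python
def build_filter_params(selected):
    """Build URL params from a dict of filter name -> list/str/None.

    Empty selections produce empty-string values instead of omitted keys,
    so refresh/share links reproduce a no-data combination faithfully.
    """
    params = {}
    for key in ("accelerators", "models", "versions", "tp_sizes"):
        val = selected.get(key) or []
        params[key] = ",".join(map(str, val))  # "" when the list is empty
    params["profile"] = str(selected.get("profile") or "")
    return params
```

The decode path then only needs to treat an empty-string value as an explicit empty selection, distinct from a missing key (which means "use defaults").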
