Revamp dashboard UI with section navigation and code cleanup #15
Harshith-umesh wants to merge 4 commits into openshift-psap:main from
Conversation
📝 Walkthrough

The PR refactors the dashboard by introducing vectorized profile assignment, new helper functions for dataset comparison and geometric mean calculations, and a comprehensive section-based navigation system with URL state synchronization and optional expander UI rendering across multiple sections.
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~50 minutes
🚥 Pre-merge checks: ✅ Passed checks (3 passed)
Actionable comments posted: 7
🧹 Nitpick comments (1)
dashboard_styles.py (1)
449-452: Avoid recoloring every descendant `div` in `.stApp`.

Including `.stApp div` here is very broad and will bleed into Streamlit/BaseWeb/Plotly internals, which makes the theme brittle and forces lots of compensating overrides elsewhere. It would be safer to target text-bearing elements and dashboard-owned containers only.

Also applies to: 809-812
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@dashboard_styles.py` around lines 449 - 452, The selector is too broad—remove `.stApp div` from the global rule and instead limit the color override to text-bearing elements and your dashboard-owned containers; update the selector `.stApp, .stApp p, .stApp span, .stApp label, .stApp li` (and the duplicate instance around lines 809-812) to exclude generic `div` and explicitly include only text elements and your app-specific wrappers (e.g. headings h1–h6, a, .your-dashboard-container, .card, .metric) so Streamlit/BaseWeb/Plotly internals are not recolored.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@dashboard_styles.py`:
- Around line 195-201: The CSS rule that sets content: "›" uses a literal
ambiguous Unicode character and fails Ruff RUF001; update the content value in
the selector rule (the block containing
[data-testid="stColumn"]:has(.section-nav-marker) ... ::after) to use a CSS
escape for the chevron code point instead, e.g. replace "›" with "\203A" (so
content: "\203A";) to satisfy the linter.
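As a quick sanity check (a hypothetical snippet, not part of the PR), the CSS escape `\203A` and the literal chevron name the same code point, which a few lines of Python can confirm:

```python
# U+203A is the single right-pointing angle quotation mark used as the chevron.
chevron_literal = "›"
chevron_escaped = "\u203a"  # the code point the CSS escape \203A refers to

# The CSS-safe rule embeds the escape instead of the raw character,
# which is what keeps Ruff's RUF001 ambiguous-unicode check quiet.
css_rule = 'content: "\\203A";'

print(chevron_literal == chevron_escaped)  # True
print(css_rule)
```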
- Around line 748-749: Remove the invalid attribute selectors that use
kind="..." (e.g.
[data-testid="stAlert"][data-baseweb="notification"][kind="info"]) and replace
them with stable DOM selectors: either target all notifications via
[data-baseweb="notification"] or use Streamlit's data-testid values for each
severity (data-testid="stAlertContentInfo", "stAlertContentSuccess",
"stAlertContentWarning", "stAlertContentError") so the CSS rules match the
rendered DOM; update every occurrence referenced in the diff (the
info/success/warning/error selector groups) accordingly.
In `@dashboard.py`:
- Around line 6827-6830: The Compare Models links lose the per-pair concurrency
selection because render_compare_models_section stores the selection under keys
like compare_models_conc_{model_1}_{model_2} but the shared-link mapping for the
Compare Models section (the "compare_models" dict / keys "cm_m1" and "cm_m2")
doesn't include URL handling for that field; update the shared-link handling to
include the compare_models_conc_{model_1}_{model_2} parameter (or append the
current concurrency value as a URL param when generating the shared link) so
that when compare_models_m1/compare_models_m2 are opened they read and restore
the geometric-mean selection from compare_models_conc_{model_1}_{model_2};
ensure the code path that builds shared links mirrors the special-case logic
used for Compare Versions.
- Around line 3261-3283: The URL-sync of Compare Versions is currently only
reached in the success path and can be skipped when the fragment early-returns;
move the logic that builds and updates _cv_url_params (the _cv_keys loop, the
conc_key/conc_val handling for "cv_conc", and the st.query_params.update call
wrapped in contextlib.suppress) out of the success-only tail so it always runs
regardless of UI early returns; keep using the same session_state keys
("compare_summary_v1", "compare_summary_v2", "compare_summary_accelerator",
"compare_summary_profile") and preserve the ",".join(map(str, conc_val))
behavior when conc_val is a list.
- Around line 7749-7792: The URL sync only sets params when values are truthy,
so empty filter combinations are not reflected; update the logic around
desired_params construction (variables: desired_params,
st.query_params.from_dict) to always set keys for main filters and
section-specific filters even when empty: for selected_accelerators,
selected_models, selected_versions, selected_profile, selected_tp always assign
the corresponding key to the joined string or an empty string; for section
handling (use SECTION_TO_SLUG, SECTION_FILTER_KEYS, active_section) always emit
each url_key with either the joined/str(value) or "" (treat None/empty lists as
""); for the compare_versions path (compare_summary_v1/v2/accelerator/profile
and dynamic conc_key) ensure desired_params["cv_conc"] is set to the joined list
or "" so the query string matches the current empty state, then call
st.query_params.from_dict(desired_params) as before.
- Around line 7183-7191: The reset branch currently uses the hard-coded
default_accelerators; change it to honor the URL-seeded baseline by using
baseline_accelerators when st.session_state.get("reset_to_defaults", False) is
true (falling back to default_accelerators if baseline_accelerators is missing),
i.e., set acc_default to the intersection of baseline_accelerators (or
default_accelerators if baseline not present) with accelerators so the "reset"
button restores the URL baseline rather than the hard-coded list.
- Around line 261-277: The vectorized mapper assign_profile_vectorized currently
emits an incorrect and incomplete set of profile labels (e.g., emitting "Profile
D: Prefill Heavy (8k/1k)" and missing "32k/256" and "1k/100"), which breaks
exact-name lookups and mismatches llmd_dashboard.py's A–F mapping; update
assign_profile_vectorized to restore the full profile taxonomy and exact names
used elsewhere: add conditions for the missing token configurations (32k/256 and
1k/100), rename the 8k/1k case to "Profile E: Prefill Heavy (8k/1k)" and adjust
all other choices to match the A–F mapping in llmd_dashboard.py so that filters
and measured-power table lookups resolve exactly to those strings.
---
Nitpick comments:
In `@dashboard_styles.py`:
- Around line 449-452: The selector is too broad—remove `.stApp div` from the
global rule and instead limit the color override to text-bearing elements and
your dashboard-owned containers; update the selector `.stApp, .stApp p, .stApp
span, .stApp label, .stApp li` (and the duplicate instance around lines 809-812)
to exclude generic `div` and explicitly include only text elements and your
app-specific wrappers (e.g. headings h1–h6, a, .your-dashboard-container, .card,
.metric) so Streamlit/BaseWeb/Plotly internals are not recolored.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 062a9489-144e-4825-94a1-52275f5d01b0
📒 Files selected for processing (2)
dashboard.py, dashboard_styles.py
```css
[data-testid="stAlert"][data-baseweb="notification"][kind="info"],
[data-testid="stNotification"][kind="info"] {
```
🧩 Analysis chain
🏁 Scripts executed (repository: openshift-psap/performance-dashboard):

```shell
# First, locate the dashboard_styles.py file and examine the relevant sections
fd dashboard_styles.py

# Also search for references to the 'kind' attribute in the codebase to understand its usage
rg "kind=" --type py --type css --type scss | head -30

# Check if there are any tests or configuration related to Streamlit notifications
rg -i "stAlert|stNotification" --type py | head -20

# Read the dashboard_styles.py file to see the context around lines 748-773
wc -l dashboard_styles.py

# Read the specific sections mentioned in the review
sed -n '745,775p' dashboard_styles.py

# Check for Streamlit version requirements in the project
fd requirements.txt requirements-dev.txt setup.py setup.cfg pyproject.toml | xargs cat 2>/dev/null | grep -i streamlit

# Find all requirements/configuration files
fd -e txt -e py -e cfg -e toml | grep -E "(requirements|setup|pyproject)"

# Check the project structure to understand what files exist
ls -la | head -20

# Search for any Streamlit imports or version pins
rg "streamlit" --type py -i

# Check the Streamlit version requirements
cat requirements.txt

# Check pyproject.toml for dependencies
cat pyproject.toml

# Search for any documentation or evidence about how Streamlit renders notifications
rg -i "kind.*info|kind.*success|kind.*warning|kind.*error" --type py -A 3 -B 3

# Search for any tests or actual usage of st.success, st.warning, st.error, st.info
rg -i "st\.(success|warning|error|info|alert)" --type py -B 2 -A 2

# Look for any browser-based tests or checks that validate the DOM
rg -i "data-testid.*stAlert|data-testid.*stNotification" --type py

# Check if there's any documentation in the repo about Streamlit DOM or CSS selectors
find . -type f -name "*.md" -o -name "*.txt" | xargs grep -l -i "kind\|selector\|dom" 2>/dev/null
```
🌐 Web query:
Streamlit 1.49.1 notification kind attribute DOM structure
💡 Result:
In Streamlit 1.49.1, the “notification” UI you see in the DOM comes from Base Web’s Notification component, and Streamlit’s kind is a React prop (it is not emitted as a kind="..." DOM attribute). Instead, kind affects the generated classes/styles.
Alerts (st.error, st.warning, st.info, st.success)
Streamlit renders Base Web Notification with kind={getNotificationKind(kind)} where:
- error → KIND.negative
- info → KIND.info
- success → KIND.positive
- warning → KIND.warning [1]
Streamlit also sets stable test IDs:
- Notification Body gets: `data-testid="stAlertContainer"` and `class="stAlertContainer"` via overrides [1]
- Inner content wrapper gets: `data-testid="stAlertContentError|Info|Success|Warning"` (capitalized kind) [1]
Typical DOM shape (simplified):
```html
<div role="alert" data-baseweb="notification" ...>
  ...
  <div data-testid="stAlertContainer" class="stAlertContainer">...</div>
  <div data-testid="stAlertContentError">...</div>
</div>
```

Base Web's Notification root commonly includes role="alert" and data-baseweb="notification" in the DOM [2].
Toasts (st.toast)
Toasts use Base Web’s toaster.info(...) (so they’re created through the toaster system, not the Notification component directly) [3]. Streamlit sets:
- Toast Body overrides: `data-testid="stToast"` and `class="stToast"` [3]
- Optional icon: `data-testid="stToastDynamicIcon"` [3]
- Expand button for long messages: `data-testid="stToastViewButton"` [3]
Key point: if you’re trying to select by “kind” in the DOM, use Streamlit’s data-testid hooks above (or Base Web’s data-baseweb="notification" for alerts), because kind itself won’t appear as a DOM attribute.
Sources: [1] [2] [3]
🌐 Web query:
Streamlit stAlert stNotification kind attribute CSS selector
💡 Result:
In Streamlit, the “alert” boxes (st.info, st.success, st.warning, st.error) render a container like:
- outer wrapper: `div.stAlert`
- inner BaseWeb notification root: `div[role="alert"][data-baseweb="notification"]` (huggingface.co)
So you can target all alerts with a selector like:
```css
/* all Streamlit alert boxes */
.stAlert [data-baseweb="notification"] {
  /* your styles */
}
```

But there is no stable kind="..." HTML attribute on the rendered DOM to select "info vs warning vs error" via an attribute selector (the "kind" is a React prop in BaseWeb's Notification API, not a DOM attribute). (v8.baseweb.design)
Citations:
- 1: https://huggingface.co/spaces/tom-doerr/logo_generator/blob/main/app/streamlit/app.py?utm_source=openai
- 2: https://v8.baseweb.design/components/notification/?utm_source=openai
Remove kind="..." attribute selectors—this attribute does not exist in Streamlit 1.49.1 rendered DOM.
In Streamlit 1.49.1, kind is a React prop passed to Base Web's Notification component, not an HTML attribute. The rendered DOM contains data-baseweb="notification" and role-based attributes, but no kind="..." attribute. These selectors will never match, and the theme rules won't apply.
Use Streamlit's stable data-testid attributes instead:
- `data-testid="stAlertContentInfo"` for info alerts
- `data-testid="stAlertContentSuccess"` for success alerts
- `data-testid="stAlertContentWarning"` for warning alerts
- `data-testid="stAlertContentError"` for error alerts
Or target via data-baseweb="notification" if styling all alerts uniformly.
Applies to: 748-749, 756-757, 764-765, 772-773
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@dashboard_styles.py` around lines 748 - 749, Remove the invalid attribute
selectors that use kind="..." (e.g.
[data-testid="stAlert"][data-baseweb="notification"][kind="info"]) and replace
them with stable DOM selectors: either target all notifications via
[data-baseweb="notification"] or use Streamlit's data-testid values for each
severity (data-testid="stAlertContentInfo", "stAlertContentSuccess",
"stAlertContentWarning", "stAlertContentError") so the CSS rules match the
rendered DOM; update every occurrence referenced in the diff (the
info/success/warning/error selector groups) accordingly.
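A minimal sketch of the suggested direction (helper name and colors are illustrative, not taken from dashboard_styles.py): generate the per-severity rules from Streamlit's stable `data-testid` hooks instead of a `kind` attribute selector:

```python
# Hypothetical helper: emit one CSS rule per alert severity using the stable
# data-testid hooks Streamlit actually renders (stAlertContentInfo, etc.).
SEVERITY_COLORS = {  # illustrative palette, not the dashboard's real theme
    "Info": "#1c83e1",
    "Success": "#21c354",
    "Warning": "#ffab00",
    "Error": "#ff2b2b",
}

def build_alert_css(colors):
    """Build themed alert rules keyed on data-testid, never on kind="..."."""
    rules = []
    for severity, color in colors.items():
        selector = f'[data-testid="stAlertContent{severity}"]'
        rules.append(f"{selector} {{ color: {color}; }}")
    return "\n".join(rules)

print(build_alert_css(SEVERITY_COLORS))
```

In a Streamlit app the resulting string would typically be injected via `st.markdown(f"<style>{css}</style>", unsafe_allow_html=True)`, the same way dashboard_styles.py presumably injects its theme.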
```python
def assign_profile_vectorized(df):
    """Assigns human-readable profile names based on token counts (vectorized)."""
    prompt = df["prompt toks"]
    output = df["output toks"]
    conditions = [
        (prompt == 1000) & (output == 1000),
        (prompt == 512) & (output == 2048),
        (prompt == 2048) & (output == 128),
        (prompt == 8000) & (output == 1000),
    ]
    choices = [
        "Profile A: Balanced (1k/1k)",
        "Profile B: Variable Workload (512/2k)",
        "Profile C: Large Prompt (2k/128)",
        "Profile D: Prefill Heavy (8k/1k)",
    ]
    return np.select(conditions, choices, default="Custom")
```
Restore the full profile taxonomy before vectorizing this path.
8k/1k is now emitted as "Profile D: Prefill Heavy (8k/1k)", while the rest of the file still expects "Profile E: Prefill Heavy (8k/1k)"; 32k/256 and 1k/100 also fall through to "Custom". That changes filter membership, makes sections appear/disappear incorrectly, and breaks exact-name lookups such as the measured-power tables in the energy section. It also drifts from llmd_dashboard.py's current A-F mapping.
Suggested fix
```diff
 def assign_profile_vectorized(df):
     """Assigns human-readable profile names based on token counts (vectorized)."""
     prompt = df["prompt toks"]
     output = df["output toks"]
     conditions = [
         (prompt == 1000) & (output == 1000),
         (prompt == 512) & (output == 2048),
         (prompt == 2048) & (output == 128),
+        (prompt == 32000) & (output == 256),
         (prompt == 8000) & (output == 1000),
+        (prompt == 1000) & (output == 100),
     ]
     choices = [
         "Profile A: Balanced (1k/1k)",
         "Profile B: Variable Workload (512/2k)",
         "Profile C: Large Prompt (2k/128)",
-        "Profile D: Prefill Heavy (8k/1k)",
+        "Profile D: Prefill Heavy (32k/256)",
+        "Profile E: Prefill Heavy (8k/1k)",
+        "Profile F: Prefill Heavy (1k/100)",
     ]
     return np.select(conditions, choices, default="Custom")
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@dashboard.py` around lines 261 - 277, The vectorized mapper
assign_profile_vectorized currently emits an incorrect and incomplete set of
profile labels (e.g., emitting "Profile D: Prefill Heavy (8k/1k)" and missing
"32k/256" and "1k/100"), which breaks exact-name lookups and mismatches
llmd_dashboard.py's A–F mapping; update assign_profile_vectorized to restore the
full profile taxonomy and exact names used elsewhere: add conditions for the
missing token configurations (32k/256 and 1k/100), rename the 8k/1k case to
"Profile E: Prefill Heavy (8k/1k)" and adjust all other choices to match the A–F
mapping in llmd_dashboard.py so that filters and measured-power table lookups
resolve exactly to those strings.
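The suggested fix can be exercised in isolation. This is a hedged sketch of the corrected mapper applied to a toy DataFrame (the A–F names follow the review's description of llmd_dashboard.py's mapping, which I have not verified against that file):

```python
import numpy as np
import pandas as pd

def assign_profile_vectorized(df):
    """Vectorized profile naming; A-F taxonomy per the review's suggested fix."""
    prompt = df["prompt toks"]
    output = df["output toks"]
    conditions = [
        (prompt == 1000) & (output == 1000),
        (prompt == 512) & (output == 2048),
        (prompt == 2048) & (output == 128),
        (prompt == 32000) & (output == 256),
        (prompt == 8000) & (output == 1000),
        (prompt == 1000) & (output == 100),
    ]
    choices = [
        "Profile A: Balanced (1k/1k)",
        "Profile B: Variable Workload (512/2k)",
        "Profile C: Large Prompt (2k/128)",
        "Profile D: Prefill Heavy (32k/256)",
        "Profile E: Prefill Heavy (8k/1k)",
        "Profile F: Prefill Heavy (1k/100)",
    ]
    # np.select evaluates conditions in order; first match wins, else "Custom".
    return np.select(conditions, choices, default="Custom")

df = pd.DataFrame({"prompt toks": [8000, 1000, 77], "output toks": [1000, 100, 77]})
print(assign_profile_vectorized(df))  # rows: Profile E, Profile F, Custom
```

Because the first matching condition wins, the more specific 1k/100 row must be listed separately from 1k/1k; both conditions test `prompt == 1000` but differ on output tokens, so ordering here is safe.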
```python
# Sync Compare Versions filters to URL (runs inside @st.fragment)
_cv_url_params = {}
_cv_keys = {
    "cv_v1": "compare_summary_v1",
    "cv_v2": "compare_summary_v2",
    "cv_gpu": "compare_summary_accelerator",
    "cv_profile": "compare_summary_profile",
}
for url_key, ss_key in _cv_keys.items():
    val = st.session_state.get(ss_key)
    if val is not None:
        _cv_url_params[url_key] = str(val)
cv_v1 = st.session_state.get("compare_summary_v1")
cv_v2 = st.session_state.get("compare_summary_v2")
cv_gpu = st.session_state.get("compare_summary_accelerator")
cv_prof = st.session_state.get("compare_summary_profile")
if all([cv_v1, cv_v2, cv_gpu, cv_prof]):
    conc_key = f"compare_summary_conc_{cv_v1}_{cv_v2}_{cv_gpu}_{cv_prof}"
    conc_val = st.session_state.get(conc_key)
    if conc_val is not None and isinstance(conc_val, list):
        _cv_url_params["cv_conc"] = ",".join(map(str, conc_val))
with contextlib.suppress(Exception):
    st.query_params.update(_cv_url_params)
```
Move the Compare Versions URL sync out of the success-only tail.
This fragment returns early for common states like “no data”, “no common model+TP”, or “please select a second version” before reaching the query-param update block. Because @st.fragment reruns independently, the address bar can keep the previous comparison even though the UI is now showing a different/invalid one.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@dashboard.py` around lines 3261 - 3283, The URL-sync of Compare Versions is
currently only reached in the success path and can be skipped when the fragment
early-returns; move the logic that builds and updates _cv_url_params (the
_cv_keys loop, the conc_key/conc_val handling for "cv_conc", and the
st.query_params.update call wrapped in contextlib.suppress) out of the
success-only tail so it always runs regardless of UI early returns; keep using
the same session_state keys ("compare_summary_v1", "compare_summary_v2",
"compare_summary_accelerator", "compare_summary_profile") and preserve the
",".join(map(str, conc_val)) behavior when conc_val is a list.
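One way to make the sync unconditional is to factor the param-building into a helper called at the top of the fragment, before any early return. A sketch under the assumption that session state behaves like a plain dict (the function name is mine, not the PR's):

```python
CV_KEYS = {
    "cv_v1": "compare_summary_v1",
    "cv_v2": "compare_summary_v2",
    "cv_gpu": "compare_summary_accelerator",
    "cv_profile": "compare_summary_profile",
}

def build_cv_url_params(session_state):
    """Build Compare Versions URL params from session state, tolerating gaps."""
    params = {}
    for url_key, ss_key in CV_KEYS.items():
        val = session_state.get(ss_key)
        if val is not None:
            params[url_key] = str(val)
    v1, v2, gpu, prof = (session_state.get(k) for k in CV_KEYS.values())
    if all([v1, v2, gpu, prof]):
        # Dynamic per-combination concurrency key, as in the review comment.
        conc = session_state.get(f"compare_summary_conc_{v1}_{v2}_{gpu}_{prof}")
        if isinstance(conc, list):
            params["cv_conc"] = ",".join(map(str, conc))
    return params

state = {"compare_summary_v1": "v1", "compare_summary_v2": "v2",
         "compare_summary_accelerator": "H200", "compare_summary_profile": "A",
         "compare_summary_conc_v1_v2_H200_A": [1, 8, 32]}
print(build_cv_url_params(state))
```

Calling this helper (and `st.query_params.update`) before the fragment's "no data" / "select a second version" branches keeps the address bar honest even when the comparison is incomplete.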
```python
"compare_models": {
    "cm_m1": "compare_models_m1",
    "cm_m2": "compare_models_m2",
},
```
Compare Models links still lose the selected concurrency set.
render_compare_models_section() stores its geometric-mean selection under compare_models_conc_{model_1}_{model_2}, but only Compare Versions gets special URL handling here. Shared links for the Compare Models section will therefore reopen with the default concurrency selection and can show different percentage deltas.
Also applies to: 7779-7791
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@dashboard.py` around lines 6827 - 6830, The Compare Models links lose the
per-pair concurrency selection because render_compare_models_section stores the
selection under keys like compare_models_conc_{model_1}_{model_2} but the
shared-link mapping for the Compare Models section (the "compare_models" dict /
keys "cm_m1" and "cm_m2") doesn't include URL handling for that field; update
the shared-link handling to include the compare_models_conc_{model_1}_{model_2}
parameter (or append the current concurrency value as a URL param when
generating the shared link) so that when compare_models_m1/compare_models_m2 are
opened they read and restore the geometric-mean selection from
compare_models_conc_{model_1}_{model_2}; ensure the code path that builds shared
links mirrors the special-case logic used for Compare Versions.
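A hedged sketch of what the shared-link builder could add for Compare Models (the helper, its wiring, and the `cm_conc` URL key are illustrative; only the `compare_models_conc_{m1}_{m2}` session-state key format comes from the review):

```python
def compare_models_link_params(session_state):
    """Encode Compare Models selections, including the per-pair concurrency key."""
    params = {}
    m1 = session_state.get("compare_models_m1")
    m2 = session_state.get("compare_models_m2")
    if m1:
        params["cm_m1"] = str(m1)
    if m2:
        params["cm_m2"] = str(m2)
    if m1 and m2:
        # Mirror the Compare Versions special case: the concurrency selection
        # lives under a dynamic, per-pair session-state key.
        conc = session_state.get(f"compare_models_conc_{m1}_{m2}")
        if isinstance(conc, list):
            params["cm_conc"] = ",".join(map(str, conc))  # "cm_conc" name is mine
    return params

state = {"compare_models_m1": "llama", "compare_models_m2": "qwen",
         "compare_models_conc_llama_qwen": [4, 16]}
print(compare_models_link_params(state))
# {'cm_m1': 'llama', 'cm_m2': 'qwen', 'cm_conc': '4,16'}
```

The reopening path would then parse `cm_conc` back into a list and seed `compare_models_conc_{m1}_{m2}` before the section renders, so the geometric-mean deltas match the shared view.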
```python
default_accelerators = ["B200", "B300", "H200", "MI300X"]

if st.session_state.get("clear_all_filters", False) or st.session_state.get(
    "filters_were_cleared", False
):
    acc_default = []
elif st.session_state.get("reset_to_defaults", False):
    baseline_accelerators = st.session_state.get(
        "baseline_accelerators", accelerators
    )
    acc_default = [a for a in baseline_accelerators if a in accelerators]
    acc_default = [a for a in default_accelerators if a in accelerators]
else:
```
Reset-to-defaults should honor the URL baseline for accelerators too.
The reset branch falls back to the hard-coded accelerator list instead of baseline_accelerators. A shared URL seeded with accelerators=MI300X, for example, will reset to B200/B300/H200/MI300X rather than the shared baseline, which breaks the button contract.
Suggested fix
```diff
 elif st.session_state.get("reset_to_defaults", False):
-    acc_default = [a for a in default_accelerators if a in accelerators]
+    baseline_accelerators = st.session_state.get(
+        "baseline_accelerators",
+        [a for a in default_accelerators if a in accelerators],
+    )
+    acc_default = [a for a in baseline_accelerators if a in accelerators]
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@dashboard.py` around lines 7183 - 7191, The reset branch currently uses the
hard-coded default_accelerators; change it to honor the URL-seeded baseline by
using baseline_accelerators when st.session_state.get("reset_to_defaults",
False) is true (falling back to default_accelerators if baseline_accelerators is
missing), i.e., set acc_default to the intersection of baseline_accelerators (or
default_accelerators if baseline not present) with accelerators so the "reset"
button restores the URL baseline rather than the hard-coded list.
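The intended reset behavior can be stated as a small pure function (a hedged sketch; the `baseline_accelerators` semantics are taken from the review comment, and the function name is mine):

```python
def reset_accelerator_default(session_state, accelerators, default_accelerators):
    """On reset, restore the URL-seeded baseline, falling back to the defaults."""
    baseline = session_state.get(
        "baseline_accelerators",
        [a for a in default_accelerators if a in accelerators],
    )
    # Intersect with the currently available accelerators in either case.
    return [a for a in baseline if a in accelerators]

accelerators = ["B200", "B300", "H200", "MI300X"]
# URL seeded with accelerators=MI300X -> reset should restore MI300X only.
print(reset_accelerator_default({"baseline_accelerators": ["MI300X"]},
                                accelerators, accelerators))  # ['MI300X']
# No baseline in session state -> fall back to the hard-coded defaults.
print(reset_accelerator_default({}, accelerators, accelerators))
```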
```python
# Sync full URL state (main filters + section + section filters) in one atomic call
with contextlib.suppress(Exception):
    desired_params = {}
    # Preserve the view param
    if "view" in st.query_params:
        desired_params["view"] = st.query_params["view"]
    # Main filters
    if selected_accelerators:
        desired_params["accelerators"] = ",".join(selected_accelerators)
    if selected_models:
        desired_params["models"] = ",".join(selected_models)
    if selected_versions:
        desired_params["versions"] = ",".join(selected_versions)
    if selected_profile:
        desired_params["profile"] = selected_profile
    if selected_tp:
        desired_params["tp_sizes"] = ",".join(map(str, selected_tp))
    # Section + section-specific filters
    active = st.session_state.get("active_section")
    if active and active in SECTION_TO_SLUG:
        slug = SECTION_TO_SLUG[active]
        desired_params["section"] = slug
        if slug in SECTION_FILTER_KEYS:
            for url_key, ss_key in SECTION_FILTER_KEYS[slug].items():
                val = st.session_state.get(ss_key)
                if val is not None:
                    if isinstance(val, list):
                        desired_params[url_key] = ",".join(map(str, val))
                    else:
                        desired_params[url_key] = str(val)
        # Compare Versions: also encode the dynamic concurrency key
        if slug == "compare_versions":
            cv_v1 = st.session_state.get("compare_summary_v1")
            cv_v2 = st.session_state.get("compare_summary_v2")
            cv_gpu = st.session_state.get("compare_summary_accelerator")
            cv_prof = st.session_state.get("compare_summary_profile")
            if all([cv_v1, cv_v2, cv_gpu, cv_prof]):
                conc_key = (
                    f"compare_summary_conc_{cv_v1}_{cv_v2}_{cv_gpu}_{cv_prof}"
                )
                conc_val = st.session_state.get(conc_key)
                if conc_val is not None and isinstance(conc_val, list):
                    desired_params["cv_conc"] = ",".join(map(str, conc_val))
    st.query_params.from_dict(desired_params)
```
Keep query params in sync even for empty filter combinations.
This rewrite only happens on the non-empty path, so once the user lands on a no-data combination the UI changes but the query string stays at the last valid state. Refresh/share links then reopen a different filter set than the one currently on screen.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@dashboard.py` around lines 7749 - 7792, The URL sync only sets params when
values are truthy, so empty filter combinations are not reflected; update the
logic around desired_params construction (variables: desired_params,
st.query_params.from_dict) to always set keys for main filters and
section-specific filters even when empty: for selected_accelerators,
selected_models, selected_versions, selected_profile, selected_tp always assign
the corresponding key to the joined string or an empty string; for section
handling (use SECTION_TO_SLUG, SECTION_FILTER_KEYS, active_section) always emit
each url_key with either the joined/str(value) or "" (treat None/empty lists as
""); for the compare_versions path (compare_summary_v1/v2/accelerator/profile
and dynamic conc_key) ensure desired_params["cv_conc"] is set to the joined list
or "" so the query string matches the current empty state, then call
st.query_params.from_dict(desired_params) as before.
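The fix amounts to emitting every key unconditionally, with `""` standing in for an empty selection. A minimal sketch over plain dicts for the main filters (the helper name is mine; section-specific keys would follow the same pattern):

```python
def build_main_filter_params(selected):
    """Always emit a key per filter so empty selections round-trip through the URL."""
    def join(val):
        # Normalize None / list / scalar into a string; "" marks "nothing selected".
        if val is None:
            return ""
        if isinstance(val, list):
            return ",".join(map(str, val))
        return str(val)
    return {
        "accelerators": join(selected.get("accelerators")),
        "models": join(selected.get("models")),
        "versions": join(selected.get("versions")),
        "profile": join(selected.get("profile")),
        "tp_sizes": join(selected.get("tp_sizes")),
    }

print(build_main_filter_params({"accelerators": ["H200"], "models": []}))
# {'accelerators': 'H200', 'models': '', 'versions': '', 'profile': '', 'tp_sizes': ''}
```

The URL-parsing side then has to treat `""` as "explicitly empty" rather than "absent", so a shared no-data state reopens exactly as it was shared.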
Summary
- `geometric_mean()` and `compare_two_datasets()` as top-level functions, removing ~150 lines of copy-pasted logic across Compare Versions, Compare Models, and Performance Trends sections.
- `assign_profile()` with `np.select` for better performance on large DataFrames.
- Fragile selectors (`.css-*`, `.stInfo`, etc.) replaced with stable `data-testid` attribute selectors; scope `* { color }` overrides to `.stApp` descendants, and reduce ~50 lines of redundant expander rules into a single consolidated block.
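For reference, a geometric mean over benchmark metrics is typically computed in log space to avoid overflow on large products. This is a hedged sketch of what a top-level `geometric_mean()` helper might look like; the actual signature and edge-case handling in dashboard.py may differ:

```python
import numpy as np

def geometric_mean(values):
    """Geometric mean via log-space averaging; ignores non-positive entries."""
    arr = np.asarray(values, dtype=float)
    arr = arr[arr > 0]  # the geomean is undefined for zero/negative values
    if arr.size == 0:
        return float("nan")
    return float(np.exp(np.log(arr).mean()))

print(geometric_mean([2, 8]))        # ≈ 4.0
print(geometric_mean([1, 10, 100]))  # ≈ 10.0
```

Averaging in log space is the standard trick for summarizing ratios (e.g. percentage deltas across concurrency levels), since the geometric mean of ratios is invariant to which side of the comparison is in the numerator.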