make gateway reload use new loglevel #2155

Merged
yinwm merged 1 commit into sipeed:main from cytown:reload-log
Mar 30, 2026

Conversation

@cytown
Contributor

@cytown cytown commented Mar 29, 2026

📝 Description

make gateway reload use new loglevel

🗣️ Type of Change

  • 🐞 Bug fix (non-breaking change which fixes an issue)
  • ✨ New feature (non-breaking change which adds functionality)
  • 📖 Documentation update
  • ⚡ Code refactoring (no functional changes, no api changes)

🤖 AI Code Generation

  • 🤖 Fully AI-generated (100% AI, 0% Human)
  • 🛠️ Mostly AI-generated (AI draft, Human verified/modified)
  • 👨‍💻 Mostly Human-written (Human lead, AI assisted or none)

🔗 Related Issue

📚 Technical Context (Skip for Docs)

  • Reference URL:
  • Reasoning:

🧪 Test Environment

  • Hardware:
  • OS:
  • Model/Provider:
  • Channels:

📸 Evidence (Optional)

Click to view Logs/Screenshots

☑️ Checklist

  • My code/docs follow the style of this project.
  • I have performed a self-review of my own changes.
  • I have updated the documentation accordingly.

@sipeed-bot sipeed-bot bot added type: enhancement New feature or request go Pull requests that update go code labels Mar 29, 2026
@wj-xiao
Collaborator

wj-xiao commented Mar 30, 2026

handleConfigReload now applies logger.SetLevelFromString(newCfg.Gateway.LogLevel) at the start of the reload flow, before any of the failure points in the reload path.
That creates a partial-apply problem. Both createStartupProvider() and al.ReloadProviderAndConfig() can fail, and in those cases this function explicitly falls back to the old provider/config by restarting services with the previous state. However, the logger level is global process state, and once it is changed here it is not restored on those error paths.

The result is that a failed reload can still leave the process running with the new log level, even though the rest of the new configuration was not successfully applied. That breaks the current rollback semantics of the reload flow and can also make failures harder to diagnose:

  • if the new level is more restrictive, reload-related info/warn logs may disappear after the failed reload
  • if the new level is less restrictive, the process may start emitting unexpected debug logs even though the config reload did not succeed

I think the log level change should either be applied only after the reload has succeeded, or the previous logger level should be saved and restored on every failure path.

@cytown
Contributor Author

cytown commented Mar 30, 2026

> handleConfigReload now applies logger.SetLevelFromString(newCfg.Gateway.LogLevel) at the start of the reload flow, before any of the failure points in the reload path. That creates a partial-apply problem. Both createStartupProvider() and al.ReloadProviderAndConfig() can fail, and in those cases this function explicitly falls back to the old provider/config by restarting services with the previous state. However, the logger level is global process state, and once it is changed here it is not restored on those error paths.
>
> The result is that a failed reload can still leave the process running with the new log level, even though the rest of the new configuration was not successfully applied. That breaks the current rollback semantics of the reload flow and can also make failures harder to diagnose:
>
>   • if the new level is more restrictive, reload-related info/warn logs may disappear after the failed reload
>   • if the new level is less restrictive, the process may start emitting unexpected debug logs even though the config reload did not succeed
>
> I think the log level change should either be applied only after the reload has succeeded, or the previous logger level should be saved and restored on every failure path.

done

Collaborator

@yinwm yinwm left a comment

Thanks for this PR! The goal of syncing log level during config reload is clear and much needed.

One blocking issue I'd like addressed before merge:

Thread safety on debug field: services.debug is a plain bool written in Run() (main goroutine) and read in handleConfigReload() (reload goroutine via channel/select). This is a data race that the Go race detector would flag. Since debug is a CLI flag that never changes after startup, I'd suggest passing it as a function parameter to executeReload/handleConfigReload instead of storing it in the shared services struct. This also makes the dependency explicit and easier to test.

A few non-blocking suggestions:

  • Add a log line to confirm the level change (output before calling SetLevelFromString, so it's visible regardless of the new level)
  • Consider moving the level update to the end of the reload flow, so reload-related info logs aren't suppressed mid-way
  • A brief comment explaining that debug mode permanently overrides config log level would help future maintainers

@cytown
Contributor Author

cytown commented Mar 30, 2026

> Thanks for this PR! The goal of syncing log level during config reload is clear and much needed.
>
> One blocking issue I'd like addressed before merge:
>
> Thread safety on debug field: services.debug is a plain bool written in Run() (main goroutine) and read in handleConfigReload() (reload goroutine via channel/select). This is a data race that the Go race detector would flag. Since debug is a CLI flag that never changes after startup, I'd suggest passing it as a function parameter to executeReload/handleConfigReload instead of storing it in the shared services struct. This also makes the dependency explicit and easier to test.
>
> A few non-blocking suggestions:
>
>   • Add a log line to confirm the level change (output before calling SetLevelFromString, so it's visible regardless of the new level)
>   • Consider moving the level update to the end of the reload flow, so reload-related info logs aren't suppressed mid-way
>   • A brief comment explaining that debug mode permanently overrides config log level would help future maintainers

done

Collaborator

@yinwm yinwm left a comment


Looks good to me. This correctly fills the gap left by #1853 — applying the configured log level on config reload.

A couple of non-blocking suggestions:

  1. The log message "Log level changing from current to %q" could be more informative — consider logging the previous level if the logger exposes it.
  2. It'd be nice to have a test case for log level change on reload, but not blocking for this PR.

Nice touch placing the log level update at the end of reload to avoid suppressing reload-related logs.

@yinwm yinwm merged commit 275c101 into sipeed:main Mar 30, 2026
4 checks passed
@cytown cytown deleted the reload-log branch April 1, 2026 05:27
ra1phdd pushed a commit to ra1phdd/picoclaw-pkg that referenced this pull request Apr 12, 2026
