Describe the bug
When you deploy a workflow that changes Kafka trigger configuration (broker hosts, topics, consumer group, etc.) via the CLI, GitHub sync, or a sandbox merge, the change doesn't stick. If anyone has the workflow open in the editor when the deploy lands, their next save writes the old Kafka config back to the database. If nobody is online, the editor loads the pre-deploy config the next time someone opens the workflow, and again the next save reverts the change.
The deploy itself succeeds and the database is updated correctly. The problem is the collaborative editor's in-memory document doesn't pick up the Kafka config change, so it overwrites the database with stale values on save.
This affects the Kafka-specific fields: hosts, topics, consumer group ID, SSL, SASL, timeouts, offset reset policy. Other trigger fields (type, enabled, cron expression) update correctly after a deploy.
Version number
2.16.3-pre
I have also reproduced this locally on main.
To Reproduce
- Create a workflow with a Kafka trigger configured with specific broker hosts and a topic
- Deploy an updated version of that workflow via the CLI (or GitHub sync) with different broker hosts or a different topic
- Open the workflow in the editor (or if it was already open, just save it)
- Check the Kafka trigger configuration — it shows the pre-deploy values
- Save the workflow — the database now has the old config again, the deploy's changes are gone
Expected behavior
Deploying a workflow with updated Kafka trigger config should behave the same as deploying changes to any other trigger field — the editor reflects the new config and saving preserves it.
Additional context
Related to #4535. That issue covers the general case of provisioner changes not reaching live editor sessions, which is being fixed in #4562. This is a remaining gap specific to kafka_configuration — the reconciliation that runs after a deploy updates all other trigger fields but skips the Kafka config block.
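To make the suspected gap concrete, here is a minimal sketch of the reconciliation pattern described above. All names here are illustrative assumptions, not the actual codebase: a hypothetical `reconcile()` copies deployed trigger fields into the editor's in-memory document but never touches `kafka_configuration`, so the stale block survives and gets written back on the next save.

```python
def reconcile(editor_doc: dict, deployed: dict) -> dict:
    """Hypothetical post-deploy merge of deployed trigger fields into the
    editor's in-memory document (names are illustrative, not the real code)."""
    # These generic trigger fields are reconciled correctly after a deploy.
    for field in ("type", "enabled", "cron_expression"):
        editor_doc[field] = deployed[field]
    # Suspected bug: kafka_configuration is never copied, so the editor keeps
    # the pre-deploy Kafka config and persists it on the next save.
    # editor_doc["kafka_configuration"] = deployed["kafka_configuration"]
    return editor_doc

# Editor session still holds the pre-deploy Kafka config...
editor = {"type": "kafka", "enabled": True, "cron_expression": None,
          "kafka_configuration": {"hosts": ["old-broker:9092"], "topics": ["old-topic"]}}
# ...while the deploy wrote new values to the database.
deployed = {"type": "kafka", "enabled": True, "cron_expression": None,
            "kafka_configuration": {"hosts": ["new-broker:9092"], "topics": ["new-topic"]}}

merged = reconcile(editor, deployed)
# merged["kafka_configuration"]["hosts"] is still ["old-broker:9092"] -- the
# deploy's Kafka changes are lost as soon as this document is saved.
```

The fix, presumably, is for the reconciliation to treat `kafka_configuration` like every other trigger field (the commented-out line above).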