Hi Supabase team and community, I need your help.
This post was generated by an AI assistant within Supabase.
I’m seeing intermittent Realtime subscription failures for my project (ref: diylminbdhprczjzlybb) hosted in Tokyo (ap-northeast-1). The issue is inconsistent: on some days failures are common, on others they do not occur at all. The problem tends to appear after leaving the app idle (not using Realtime) for a while and then subscribing again: sometimes the subscribe succeeds, but often it fails. Investigating the Postgres side, I found signs that the Realtime workers are not consistently establishing logical replication connections.
What I observed
Symptom: after a period of inactivity (minutes to hours), subscribing sometimes fails. Failure frequency varies day to day.
Project ref: diylminbdhprczjzlybb
Region: Tokyo / ap-northeast-1
Approx diagnostics time: 2026-03-10T10:55:xxZ (see below)
Diagnostics I ran (queries + results)
Replication slots check
Query: SELECT * FROM pg_replication_slots WHERE slot_name LIKE 'supabase_realtime%';
Result: empty set (no supabase_realtime replication slots present).
Replication activity
Query: SELECT * FROM pg_stat_replication;
Result: empty set (no streaming replication connections).
Inspect pg_stat_activity for realtime/replication connections
Query: SELECT pid, usename, application_name, client_addr, client_port, state, backend_start, query_start, state_change FROM pg_stat_activity WHERE application_name LIKE '%realtime%' OR query LIKE '%replication%';
Result: only a dashboard/read-only connection (application_name = 'supabase/dashboard', usename = 'supabase_read_only_user'). No Realtime worker/process connections were visible.
Interpretation
It appears the Realtime service sometimes fails to create or reuse the expected logical replication slot(s), or to establish logical replication connections to the Postgres instance. Without a replication slot and connection, Realtime cannot stream WAL changes, which would explain subscribes failing after idle periods.
Because the replication slots were absent and no Realtime replication backends appeared in pg_stat_activity, this points to Realtime-side initialization or worker instability (or a permission/connection issue on the Realtime side) rather than an obvious Postgres auth error.
Reproduction steps (how I encounter it)
Start app and subscribe to a Realtime channel — subscribe usually works initially.
Leave the app idle for a while (minutes to hours) — do not publish or keep active subscriptions.
Attempt to subscribe again (or reconnect): the subscribe fails intermittently, and retrying may succeed or fail unpredictably. As noted above, failure frequency varies day to day.
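To quantify the intermittency, I have been thinking about logging each subscribe attempt's outcome with a timestamp. A minimal sketch in TypeScript; `attempt` here is a hypothetical stand-in for whatever call the app actually uses (e.g. a wrapper around the client library's subscribe), injected so the recorder itself stays library-agnostic:

```typescript
// Records the outcome of each Realtime subscribe attempt so intermittent
// failures can be correlated with idle time and time of day.
type Outcome = { at: Date; ok: boolean; error?: string };

class SubscribeLog {
  private outcomes: Outcome[] = [];

  // Run one subscribe attempt and record whether it succeeded.
  // `attempt` is a stand-in for the app's real subscribe call.
  async record(attempt: () => Promise<void>): Promise<boolean> {
    try {
      await attempt();
      this.outcomes.push({ at: new Date(), ok: true });
      return true;
    } catch (e) {
      this.outcomes.push({ at: new Date(), ok: false, error: String(e) });
      return false;
    }
  }

  // Fraction of attempts that failed, e.g. to compare "good" vs "bad" days.
  failureRate(): number {
    if (this.outcomes.length === 0) return 0;
    return this.outcomes.filter((o) => !o.ok).length / this.outcomes.length;
  }
}
```

With a day of these records, I could attach concrete failure rates and timestamps to the support ticket instead of "sometimes it fails".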
Additional context
I have not intentionally removed replication slots.
I observed only dashboard read-only connections in pg_stat_activity during my checks.
I have already opened a ticket with Supabase support (including these diagnostics and queries). I’m posting here to ask whether others have seen similar intermittent subscribe failures after idle periods, and to gather any community or workaround advice.
Requested help / questions
Has anyone else experienced intermittent Realtime subscribe failures after idle periods that correlate with missing replication slots or absent realtime replication backends?
Does Realtime automatically recreate replication slots on demand, or is manual intervention (by Supabase) required when slots are missing?
Are there known issues or best practices for keeping a Realtime connection healthy across idle periods (e.g., keepalive/ping, reconnect logic, or server-side settings)?
If this is a Supabase-side problem, can the team confirm whether the Realtime workers for this project/region (Tokyo / ap-northeast-1) have had restarts or errors around the diagnostics timestamp?
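On the client side, the workaround I am currently considering is resubscribing with exponential backoff when a subscribe fails after idle. A sketch under the assumption that retrying is safe for my channels; `subscribe` and `sleep` are injected stand-ins, not part of any Supabase API:

```typescript
// Retry a subscribe call with exponential backoff. Resolves true once an
// attempt succeeds, false after all attempts are exhausted.
async function subscribeWithBackoff(
  subscribe: () => Promise<void>, // stand-in for the app's real subscribe call
  maxAttempts = 5,
  baseDelayMs = 500,
  sleep: (ms: number) => Promise<void> = (ms) =>
    new Promise((resolve) => setTimeout(resolve, ms)),
): Promise<boolean> {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    try {
      await subscribe();
      return true;
    } catch {
      // Wait 500 ms, 1 s, 2 s, ... before the next attempt.
      await sleep(baseDelayMs * 2 ** attempt);
    }
  }
  return false;
}
```

This only masks the symptom, of course; it does not explain why the replication slots are missing in the first place.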
Attached diagnostics summary
pg_replication_slots (supabase_realtime*): empty
pg_stat_replication: empty
pg_stat_activity: only dashboard read-only connections; no realtime worker connections
If helpful, I can paste full query outputs or relevant Realtime logs (timestamps available) — please advise what additional information would be most useful for debugging.
Thanks in advance for any insights.
Best,