Description
Bug report
- I confirm this is a bug with Supabase, not with my own application.
- I confirm I have searched the Docs, GitHub Discussions, and Discord.
Describe the bug
When running a Next.js application locally with the Supabase CLI (`supabase start`), WebSocket connections to the Realtime service fail. The browser console shows:

WebSocket connection to 'ws://localhost:54321/realtime/v1/websocket?apikey=[YOUR_ANON_KEY]&vsn=1.0.0' failed:

(Where `[YOUR_ANON_KEY]` is the local project's anonymous key.)
This issue prevents Supabase Realtime features, such as Presence, from working in the local development environment. The exact same application code works as expected in our production Supabase deployment (hosted Supabase).
The core issue, as identified in the Kong (Supabase's API gateway) logs, is that the WebSocket upgrade request is being rejected with an HTTP 431 Request Header Fields Too Large error.
To Reproduce
Steps to reproduce the behavior:
- Set up a Next.js application using `@supabase/ssr` and `@supabase/supabase-js`.
- Initialize a local Supabase project using the Supabase CLI (`supabase init`).
- Start the local Supabase stack (`supabase start`).
- Run the Next.js application (`npm run dev` or `yarn dev`).
- Attempt to establish a Realtime connection (e.g., by subscribing to a channel or tracking presence; see the sketch after this list).
- Observe the WebSocket connection failure in the browser console and the HTTP 431 error in the Kong logs (`docker logs supabase_kong_[PROJECT_ID]`).
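For reference, the Realtime connection in the second-to-last step is established roughly as follows (a minimal sketch assuming a browser client created with `@supabase/ssr`; the environment variable names, channel name, and presence key are illustrative placeholders, not taken from our actual app):

```ts
// Minimal sketch of the Presence usage that triggers the failing WebSocket connection.
// Env var names, channel name, and presence key are placeholders (assumptions).
import { createBrowserClient } from '@supabase/ssr'

const supabase = createBrowserClient(
  process.env.NEXT_PUBLIC_SUPABASE_URL!,     // http://localhost:54321 with `supabase start`
  process.env.NEXT_PUBLIC_SUPABASE_ANON_KEY! // local anon key printed by the CLI
)

const channel = supabase.channel('room-1', {
  config: { presence: { key: 'some-user-id' } },
})

channel
  .on('presence', { event: 'sync' }, () => {
    console.log('presence state:', channel.presenceState())
  })
  .subscribe(async (status) => {
    // Locally this never reaches 'SUBSCRIBED'; the underlying WebSocket fails with the 431 described above.
    if (status === 'SUBSCRIBED') {
      await channel.track({ online_at: new Date().toISOString() })
    }
  })
```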
Expected behavior
The WebSocket connection to `ws://localhost:54321/realtime/v1/websocket` should establish successfully, allowing Realtime features like Presence to function correctly in the local development environment, mirroring the behavior in production.
Screenshots
(If possible, add a screenshot of the browser console showing the WebSocket error and a snippet of the Kong log showing the 431 error).
Browser Console Error:
WebSocket connection to 'ws://localhost:54321/realtime/v1/websocket?apikey=eyJhbGciOiJIUzI1NiIs...&vsn=1.0.0' failed:
Relevant Kong Log Entry:
192.168.65.1 - - [21/May/2025:13:37:05 +0000] "GET /realtime/v1/websocket?apikey=eyJhbG...vsn=1.0.0 HTTP/1.1" 431 0 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/136.0.0.0 Safari/537.36"
System information
- OS: macOS (Sonoma, Ventura - tested on multiple versions)
- Browser: All browsers tested (Chrome, Arc, Safari, Firefox) - occurs in normal and incognito/private modes.
- Version of `@supabase/supabase-js`: 2.49.1
- Version of `@supabase/ssr`: 0.5.2
- Version of Node.js: v20.18.3
- Supabase CLI version: 2.23.4
Additional context
The issue seems to stem from the total size of the request headers, exacerbated by the JWT-based session cookies that `@supabase/ssr` fragments into chunks (e.g., `sb-[project_ref]-auth-token.0`, `sb-[project_ref]-auth-token.1`) when the JWT is large. While this fragmentation also occurs in production, it only leads to the 431 error in the local environment managed by the Supabase CLI.
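To make the header-size claim concrete, the combined cookie payload can be measured from the browser console on the app's origin (a rough sketch; the `-auth-token` filter and the 1k reference point are assumptions, the latter based on nginx's default `client_header_buffer_size`):

```ts
// Rough sketch: estimate the Cookie header size sent along with the WebSocket upgrade.
// Run in the browser console on the Next.js app's origin (names are placeholders/assumptions).
const cookieHeader = document.cookie
const authChunks = cookieHeader
  .split('; ')
  .filter((c) => c.includes('-auth-token')) // sb-[project_ref]-auth-token.0, .1, ...

console.log('total Cookie header size:', cookieHeader.length, 'bytes')
console.log('auth-token chunks:', authChunks.length,
  'totalling', authChunks.join('; ').length, 'bytes')
// If the total is well above 1k (nginx's default client_header_buffer_size), that lines up
// with the gateway answering 431 only when these cookies are attached to the upgrade request.
```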
Debugging Steps Attempted (without success locally):

- Custom Kong Configuration (`kong.yml`):
  - Implemented a `request-transformer` plugin to remove the `Cookie` header specifically for the `/realtime/v1/websocket` path. Logs confirm the plugin is loaded by Kong.

    ```yaml
    # supabase/kong/kong.yml
    _format_version: "2.1"
    services:
      - name: realtime
        url: http://realtime:4000
        routes:
          - name: realtime-websocket
            paths:
              - /realtime/v1/websocket
            strip_path: false
            plugins:
              - name: request-transformer
                config:
                  remove:
                    headers:
                      - Cookie
    ```
- `docker-compose.override.yml` Modifications:
  - Mounted the custom `kong.yml`.
  - Increased various header size limits for Kong:
    - `KONG_CLIENT_MAX_HEADER_SIZE`
    - `KONG_NGINX_HTTP_CLIENT_HEADER_BUFFER_SIZE`
    - `KONG_NGINX_HTTP_LARGE_CLIENT_HEADER_BUFFERS`
  - Increased `MAX_HEADER_LENGTH` for the Realtime service.
  - Attempted direct Nginx directive injection via `KONG_NGINX_HTTP_...` environment variables.
  - Example `supabase/docker-compose.override.yml`:

    ```yaml
    version: '3.8'
    services:
      kong:
        volumes:
          - ./kong/kong.yml:/home/kong/kong.yml:ro
          - ./kong/custom_nginx_kong.lua:/usr/local/kong/custom_nginx_kong.lua:ro # Also tried custom Nginx template
        environment:
          KONG_DECLARATIVE_CONFIG: '/home/kong/kong.yml'
          KONG_NGINX_CONF_TEMPLATE: '/usr/local/kong/custom_nginx_kong.lua' # For custom template
          KONG_ADMIN_LISTEN: '0.0.0.0:8001'
          KONG_PROXY_LISTEN: '0.0.0.0:8000'
          KONG_CLIENT_MAX_HEADER_SIZE: '32768' # Tried 8k, 16k, 32k
          # KONG_NGINX_HTTP_CLIENT_HEADER_BUFFER_SIZE: '32k' # These were also tried
          # KONG_NGINX_HTTP_LARGE_CLIENT_HEADER_BUFFERS: '4 32k'
          KONG_LOG_LEVEL: 'debug'
        ports:
          - '54321:8000'
      realtime:
        environment:
          MAX_HEADER_LENGTH: '32768' # Tried 8192, 16384, 32768
    ```
- Custom Nginx Template for Kong (`custom_nginx_kong.lua`):
  - Copied the default `nginx_kong.lua` template from the Kong container.
  - Directly added `client_header_buffer_size 32k;` and `large_client_header_buffers 4 32k;` to the `server` block within the template.
  - Configured Kong to use this custom template via `KONG_NGINX_CONF_TEMPLATE` in `docker-compose.override.yml`.
  - Kong logs confirmed the custom template was loaded, but the 431 error persisted. Inspection of the Nginx config inside the container (`/usr/local/kong/nginx-kong.conf`) after these changes still showed the default `client_header_buffer_size 1k;`.
Despite these efforts to increase header limits and remove cookies specifically for the WebSocket route at the Kong level, the 431
error continues to occur only in the local Supabase CLI environment. This suggests that either the configurations are not being applied as expected within the Supabase CLI's Docker setup, or there's a lower, unconfigurable limit elsewhere in the local stack that's being hit before these settings take effect.
The fact that this works in production (hosted Supabase) implies that the header limits are more generous or handled differently there.
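To narrow down whether header size alone is what trips the local gateway, the upgrade can be probed directly against `ws://localhost:54321` with progressively larger synthetic `Cookie` headers, independent of the browser and of `@supabase/ssr` (a hypothetical diagnostic sketch, not something we ran as part of this report; it assumes the `ws` npm package and a local anon key):

```ts
// Hypothetical diagnostic (assumption, not from the report): probe the local gateway with
// increasingly large Cookie headers to see at what size the WebSocket upgrade returns 431.
// Requires `npm i ws` and a running `supabase start`; run with tsx or ts-node.
import WebSocket from 'ws'

const ANON_KEY = process.env.SUPABASE_ANON_KEY ?? '<local anon key>' // placeholder
const url = `ws://localhost:54321/realtime/v1/websocket?apikey=${ANON_KEY}&vsn=1.0.0`

function probe(cookieBytes: number) {
  const ws = new WebSocket(url, {
    headers: { Cookie: `sb-local-auth-token.0=${'x'.repeat(cookieBytes)}` }, // synthetic cookie
  })
  ws.on('open', () => { console.log(`${cookieBytes} B cookie: upgrade succeeded`); ws.close() })
  ws.on('unexpected-response', (_req, res) => {
    console.log(`${cookieBytes} B cookie: HTTP ${res.statusCode}`) // 431 expected past the limit
  })
  ws.on('error', (err) => console.log(`${cookieBytes} B cookie: ${err.message}`))
}

[0, 512, 1024, 4096, 16384].forEach(probe)
```

If the upgrade already fails at sizes in the 1–4 kB range, that would point at a default nginx/Kong buffer rather than the Realtime service's `MAX_HEADER_LENGTH`.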
Potentially related observation:
In some instances, Kong logs also showed `connect() failed (111: Connection refused)` when trying to reach `upstream: "http://172.19.0.11:8081/_internal/health"` (the Supabase Functions health check). This might be unrelated but was observed during startup.