Dual Auth: Appwrite JWT → Supabase JWT via Edge Function
What is dual authentication?
Dual authentication (dual-auth) is an architecture pattern where an application uses two separate identity providers simultaneously — typically a regional provider for personal data compliance and a global provider for application data. A server-side bridge converts tokens between the two systems so the client authenticates once and accesses both backends.
TL;DR
- Data residency laws require personal data on regional servers; application data can stay in the global cloud
- Appwrite JWT (15 min TTL) converts to Supabase JWT (1 hour TTL) via a server-side Edge Function — client never sees the secret
- AuthBridge singleton syncs 6 subsystems with one event; Completer-lock prevents duplicate token refresh calls
- Cold start flow takes 1–2 seconds; Deno isolate warmup is the main latency source
- SyncRecovery handles 3 failure tiers: network retry, partial re-sync, full re-auth — each with its own strategy
One auth provider per app is standard. Two becomes a requirement when user data must stay within a specific country’s borders while your app runs globally.
In JourneyBay, Appwrite handles login and registration on a regional server. Supabase handles data, real-time subscriptions, and server-side logic in the global cloud. Between them sits an Edge Function that converts one JWT into another. On the client side, AuthBridge — a singleton — syncs six subsystems with a single event.
Here’s how it works, why we built it, and what it costs.
Why Two Providers Instead of One
The reason is data residency. In Russia (and several other countries), the law requires storing personal user data on servers within the country. Accounts, passwords, emails, session data — all of it must live on a local server. Meanwhile, the app runs globally: routes, recommendations, AI-generated content — none of that is tied to a specific jurisdiction.
No single platform covers both requirements. You need two: a regional one for personal data and a global one for everything else.
Appwrite runs on our own VPS in the required region. Self-hosted, full control over data. It stores user accounts, manages sessions, handles OAuth flows. Personal data never leaves the country.
Supabase runs in the global cloud. PostgreSQL with everything you’d expect: joins, window functions, full-text search, JSONB. Plus Row-Level Security (RLS) that enforces access control at the database level — not in application code, but in PostgreSQL itself. And Edge Functions on Deno for server-side logic. This is where routes, POIs, checklists, and AI generations live — everything that isn’t personal data.
Could we use a single provider for everything? In theory, yes. In practice: either the self-hosted database loses cloud advantages (managed hosting, CDN, Edge Functions), or the global cloud violates local legislation. A bridge between two systems is the compromise that preserves the strengths of both platforms.
The principle: personal data stays on the regional server, application data lives in the global cloud, and a reliable bridge connects them.
The JWT Chain: From Login to Database Query
The entire dual-auth boils down to one operation: token conversion. The user authenticates through Appwrite and gets an Appwrite JWT. That token gets exchanged for a Supabase JWT through a server function. The Supabase JWT is used for all data requests.
Here’s the flow:
User
│
│ email + password
▼
┌──────────┐
│ Appwrite │ — creates session, returns Appwrite JWT (TTL 15 min)
└──────────┘
│
│ Appwrite JWT
▼
┌──────────────────┐
│ Edge Function │ — verifies Appwrite JWT → finds/creates user
│ (Token Bridge) │ in a mapping table → signs Supabase JWT (TTL 1 hour)
└──────────────────┘
│
│ Supabase JWT
▼
┌──────────────┐
│ Supabase │ — RLS checks auth.uid() from JWT → data query
│ (PostgREST) │
└──────────────┘
Two tokens, two TTLs, one server function between them. Token Bridge takes the short-lived Appwrite token and returns a long-lived Supabase token.
Why an Edge Function
Token Bridge runs as a server function on the Supabase side. Why not handle it in the client?
Because signing a JWT requires a server secret. The Supabase JWT is signed with a key that only the server knows. The client can't create a valid Supabase token on its own, and it shouldn't be able to. Token Bridge is the single point where Appwrite authentication turns into Supabase authorization. Compromising one token without access to the server secret isn't enough to reach the other system's data.
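The signing step itself is ordinary HS256 JWT construction. Here is a minimal sketch of what the final step of a Token Bridge could look like, assuming (as Supabase conventionally does) an HS256 secret and a `role: "authenticated"` claim; the function name and claim set are illustrative, not the app's actual code:

```typescript
import { createHmac } from "node:crypto";

// Base64url without padding, as JWT requires.
const b64url = (buf: Buffer): string =>
  buf.toString("base64").replace(/\+/g, "-").replace(/\//g, "_").replace(/=+$/, "");

// Hypothetical sketch of the bridge's last step: after the Appwrite JWT is
// verified and the user is mapped, sign a Supabase-compatible token.
// The secret is the Supabase JWT secret, known only to the server.
function signSupabaseJwt(userId: string, secret: string, ttlSec = 3600): string {
  const header = { alg: "HS256", typ: "JWT" };
  const payload = {
    sub: userId,                                 // internal user ID shared across systems
    role: "authenticated",                       // role that RLS policies check
    exp: Math.floor(Date.now() / 1000) + ttlSec, // 1-hour TTL
  };
  const head = b64url(Buffer.from(JSON.stringify(header)));
  const body = b64url(Buffer.from(JSON.stringify(payload)));
  const sig = b64url(createHmac("sha256", secret).update(`${head}.${body}`).digest());
  return `${head}.${body}.${sig}`;
}
```

The point of the sketch is the asymmetry: verification of the incoming token and signing of the outgoing one both need secrets that never leave the server.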
Token Lifecycle
Two tokens, two TTLs:
| Token | TTL | Renewal |
|---|---|---|
| Appwrite JWT | 15 minutes | Appwrite SDK handles it automatically via session |
| Supabase JWT | 1 hour | Client renews 5 minutes before expiry |
The client tracks the Supabase token’s lifetime. Five minutes before it expires, it requests a fresh Appwrite JWT, sends it to Token Bridge, and gets a new Supabase JWT. The user doesn’t notice.
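The renewal rule is a one-liner worth making explicit. A sketch in TypeScript (`msUntilRefresh` is a hypothetical helper, not part of either SDK):

```typescript
// Schedule the next Token Bridge exchange five minutes before the
// Supabase JWT expires, per the renewal rule described above.
const REFRESH_MARGIN_MS = 5 * 60 * 1000;

function msUntilRefresh(expEpochSec: number, nowMs: number = Date.now()): number {
  const refreshAtMs = expEpochSec * 1000 - REFRESH_MARGIN_MS;
  return Math.max(0, refreshAtMs - nowMs); // 0 means "refresh immediately"
}
```

Clamping to zero covers the case where the app wakes from background already inside (or past) the five-minute window.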
What happens on cold start: the app opens → Appwrite SDK restores the session → requests an Appwrite JWT → sends it to Token Bridge → Supabase JWT received → all data requests start working. The whole process takes 1–2 seconds. If Token Bridge responds slowly (Deno isolate cold start), the client shows a loading screen.
User Mapping
On the first request from a new user, Token Bridge creates a record in a mapping table. This table links the user’s identifier in Appwrite to their identifier in Supabase. All RLS policies in Supabase work with a single internal ID — the same across both systems.
On first login, Token Bridge also activates a trial period through a server-side SQL function. The user gets access to paid features without explicitly registering in the payment system.
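The find-or-create step can be sketched like this (TypeScript for illustration; an in-memory Map stands in for the real mapping table, whose schema the article doesn't show, and the names are hypothetical):

```typescript
// appwriteId -> internal user ID; a Map standing in for the mapping table.
const mapping = new Map<string, string>();

function resolveInternalId(appwriteId: string, mintId: () => string): string {
  const existing = mapping.get(appwriteId);
  if (existing) return existing;   // returning user: reuse the stable internal ID
  const internalId = mintId();     // first login: create the mapping row
  mapping.set(appwriteId, internalId);
  return internalId;
}
```

The invariant that matters: the same Appwrite identity always resolves to the same internal ID, so RLS policies never need to know Appwrite exists.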
AuthBridge: One Listener for Six Systems
When the auth state changes — user logs in, logs out, token refreshes — it needs to be reflected beyond just the UI. Six subsystems need to know about the change and react.
AuthBridge is a singleton in the DI container. It subscribes to auth state changes through a Riverpod provider and dispatches updates to all dependent systems.
authProvider (Riverpod)
│
│ state change
▼
┌──────────────┐
│ AuthBridge │──► CustomAuthManager (legacy FlutterFlow, stores token)
│ (singleton) │──► UserStateProvider (global user state, userId, isLoggedIn)
│ │──► SupabaseClient (JWT initialization, HTTP client)
│ │──► Sentry (user context binding for errors)
│ │──► PostHog (identify for analytics)
│ │──► SyncRecovery (reset on success / enqueue on failure)
└──────────────┘
On login, AuthBridge performs six operations sequentially:
1. CustomAuthManager receives the token and identifier — a legacy FlutterFlow integration still used by part of the codebase.
2. UserStateProvider updates the global state: userId, isLoggedIn, profile data. Widgets subscribed to this provider rebuild.
3. SupabaseClientService kicks off the token exchange through Token Bridge. This is the longest step — a network request to the Edge Function.
4. Sentry receives the userId — now all errors are tied to a specific user.
5. PostHog calls identify — analytics events get attributed to the user.
6. SyncRecoveryService resets on success. If the token exchange in step 3 failed, SyncRecovery queues a recovery attempt.
On logout — the reverse. Each system clears its state. Sentry and PostHog reset user context. SupabaseClient invalidates the token.
Why a singleton instead of separate subscriptions? Because order matters. You can’t fire analytics before Supabase is initialized. You can’t show logged-in UI before UserState is set. AuthBridge guarantees the sequence.
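The ordering guarantee reduces to a plain sequential dispatcher. A TypeScript sketch (the app itself is Dart/Flutter; the step interface here is hypothetical):

```typescript
// A login step is any subsystem notification, sync or async.
type LoginStep = (userId: string) => Promise<void> | void;

// Notify subsystems one at a time: a later step never starts before an
// earlier one finishes, so analytics can't fire before Supabase is ready.
async function onLogin(userId: string, steps: LoginStep[]): Promise<void> {
  for (const step of steps) {
    await step(userId);
  }
}
```

A single `await` in a loop is the whole mechanism; the value of AuthBridge is owning that loop in exactly one place.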
Completer-Lock: Three Deduplication Points
In a mobile app, the same action can be triggered by multiple sources simultaneously. The app opens — initState and a lifecycle listener both check auth at the same time. Two parallel requests get 401s — both try to refresh the token.
Without protection, this leads to duplicate network requests, race conditions, and unpredictable state.
The fix is a Completer-lock pattern. It works like this: the first call creates a Completer and starts the operation. Every subsequent call discovers the Completer already exists and waits for its result. When the operation finishes, all waiters get the same result.
In JourneyBay’s auth system, this pattern is used at three points:
1. Auth Check on Startup
When the app opens, the session check can be triggered from multiple places: widget initialization, lifecycle callback, deeplink handler. The Completer guarantees that the network request to Appwrite (getAccount) fires only once. All other calls wait for the result.
2. Token Exchange via Token Bridge
SupabaseClientService uses a separate Completer for Token Bridge calls. If AuthBridge initiates a token exchange while a parallel API request also discovers an expired token, the second call won’t hit Token Bridge — it’ll wait for the first one’s result.
An important detail: on logout, the current Completer is force-completed with an error. This prevents a scenario where a Token Bridge request is stuck in the network, the user taps “Log out,” and 30 seconds later the response arrives and overwrites the cleared state.
3. Handling 401s at Runtime
When an API request returns 401, AuthRetryHandler tries to refresh the token and retry. If five parallel requests all get 401s simultaneously, only one triggers the token refresh. The other four wait for the result and retry their requests with the new token.
If the retry also returns 401, it’s a permanent failure. AuthRetryHandler fires a callback that triggers a full logout through AuthBridge. The user sees the login screen.
Three Completers, three deduplication points. In testing, not a single token exchange has been duplicated.
SyncRecovery: What to Do When Everything Breaks
Token exchange is a network operation. Networks are unreliable. Token Bridge can be unavailable due to a Deno isolate cold start, a server error, or the user having no internet.
SyncRecoveryService is a state machine that handles sync failures with exponential backoff.
successful login
│
▼
┌─────────┐
reset()──►│ idle │◄──────────── reset() (successful sync)
└─────────┘
│
│ token exchange failed
▼
┌──────────────┐
│ pendingRetry │──── timer with exponential backoff
└──────────────┘
│
│ timer fires
▼
┌──────────┐
│ syncing │──── retry token exchange
└──────────┘
│ │
success│ │failure
│ │
▼ ▼
idle attempt < 5?
│ │
yes │ │ no
▼ ▼
pendingRetry ┌────────┐
│ failed │
└────────┘
Delays between attempts grow: 5 seconds, 15, 30, 60, 120. Five attempts — then the state transitions to failed and the user sees a banner: “Sync failed. Please log in again.”
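The schedule above can be expressed as a simple lookup (TypeScript sketch; names are illustrative):

```typescript
// The article's retry schedule, kept explicit rather than computed,
// so the values are easy to audit.
const BACKOFF_SECONDS = [5, 15, 30, 60, 120] as const;
const MAX_ATTEMPTS = BACKOFF_SECONDS.length;

// Delay before retry `attempt` (1-based), or null once the state machine
// should transition to `failed`.
function backoffDelaySec(attempt: number): number | null {
  if (attempt < 1 || attempt > MAX_ATTEMPTS) return null;
  return BACKOFF_SECONDS[attempt - 1];
}
```

A table beats a formula here: the delays roughly double but not exactly (15 is 3×5), and the cap at five attempts is part of the product decision, not math.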
SyncRecovery broadcasts its state through a StreamProvider. The UI subscribes and shows:
- syncing — a subtle “Syncing…” indicator
- pendingRetry — “Retrying in N sec”
- failed — a banner with a “Log in again” button
There’s separate logic in checkPendingSync for when the app returns from the background: if sync was in pendingRetry, the service rolls the attempt counter back to its penultimate value and immediately makes one more attempt. The reasoning: the user came back to the app, the network has probably recovered — worth one more try.
WidgetRef in a DI Singleton: An Unconventional Pattern
AuthBridge is a singleton in get_it (DI container). Riverpod providers live in the widget tree. These are two different worlds: the DI container is created at app startup, but WidgetRef only appears after the first build().
The problem: AuthBridge needs to listen to authProvider (Riverpod), but it doesn’t have a WidgetRef.
The solution: deferred injection. AuthBridge is created by get_it at app startup with minimal dependencies (just CustomAuthManager). WidgetRef is passed later through a startSync(ref) method, called from initState() of the root widget.
App starts
│
▼
get_it creates AuthBridge (no WidgetRef)
│
▼
MyApp.build() → ConsumerStatefulWidget
│
▼
MyApp.initState()
│
│ ref (WidgetRef)
▼
authBridge.startSync(ref)
│
▼
AuthBridge subscribes to authProvider via ref.listenManual()
Why isn’t this an anti-pattern? Because the alternatives are worse:
- Make AuthBridge a Riverpod provider — then it loses access to get_it dependencies. Half the app runs on get_it (legacy FlutterFlow), half on Riverpod. AuthBridge is the bridge between them.
- Pass WidgetRef to every method — AuthBridge gets called from multiple places (retry handler, lifecycle callback). Threading ref through all those call sites is fragile and verbose.
- Use a global ProviderContainer — works, but breaks Riverpod scoping and makes testing harder.
Deferred injection through a method is a pragmatic compromise. AuthBridge receives WidgetRef once, at app startup, and uses it for the lifetime of the process. It’s tested through AuthBridgeTestBed, which creates a real widget tree with a ConsumerWidget and passes ref to startSync.
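The deferred-injection shape itself is language-agnostic. A TypeScript sketch (the real code is Dart with get_it and Riverpod; the names and interfaces here are illustrative):

```typescript
type Unsubscribe = () => void;
// Stand-in for ref.listenManual(authProvider, ...): subscribe a callback,
// get back an unsubscribe handle.
type Listen = (onChange: (loggedIn: boolean) => void) => Unsubscribe;

// Constructed at startup with no listener dependency; wired up later,
// exactly once, when the widget tree (and its ref) exists.
class AuthBridgeSketch {
  private unsubscribe: Unsubscribe | null = null;
  lastState: boolean | null = null;

  startSync(listen: Listen): void {
    if (this.unsubscribe) return; // idempotent: only the first call subscribes
    this.unsubscribe = listen((loggedIn) => {
      this.lastState = loggedIn; // fan out to subsystems here
    });
  }
}
```

The idempotence guard matters in practice: initState can run again on hot reload or widget re-mount, and a second subscription would double every downstream notification.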
What It Costs: Trade-offs
Dual-auth isn’t free. Here’s the real price:
Double the failure points. Two providers means two sets of rate limits, two sets of incidents, two monitoring dashboards. An Appwrite SDK update can break the JWT format — and auth goes down even though Supabase is running fine.
Debugging complexity. “User can’t load data” — is it the Appwrite session? Token Bridge? The Supabase JWT? An RLS policy? Four layers instead of one. You need structured logs at every level: Appwrite auth event → Token Bridge request/response → Supabase JWT validation → RLS query. Without them, you’re debugging blind.
Developer onboarding. New developers don’t just need to understand “how auth works” — they need to understand “how two auths work and the bridge between them.” AuthBridge documentation runs 4 pages. Single-auth provider docs fit in half a page.
Two sets of secrets. Appwrite API key, Appwrite project ID, Supabase URL, Supabase anon key, Supabase JWT secret, Token Bridge endpoint. Six values instead of three. Each needs to live in CI, in .env, and in deployment secrets.
Cold start latency. The chain: Appwrite SDK init → JWT request → Token Bridge (possible Deno cold start) → Supabase JWT → first data request. On a slow connection, that’s 3–4 seconds. With a warm Deno isolate, 1–2 seconds. The fix is a WarmupService that pre-warms Edge Functions when the app launches.
Every one of these trade-offs is solvable. But solvable means more code, more monitoring, more documentation. Dual-auth isn’t for every project. It’s justified when the benefits of two platforms outweigh the integration cost.
Beyond One Project: Data Residency Through Dual-Auth
Dual-auth in JourneyBay grew out of a specific requirement — complying with Russian data protection law. But the pattern applies much more broadly.
In any country with strict data regulation (Russia, China, several EU countries with local requirements, India with the DPDP Act) the same problem comes up: sensitive data must stay on local servers while the app runs globally. A single cloud provider can’t solve that.
Dual-auth addresses the problem architecturally:
┌─────────────────────────────┐ ┌──────────────────────────────┐
│ Regional Server │ │ Global Cloud │
│ (self-hosted) │ │ (managed) │
│ │ │ │
│ • User accounts │ │ • Routes, POIs │
│ • Passwords, sessions │ JWT │ • AI generations │
│ • OAuth tokens │────►│ • Checklists, recs │
│ • Emails, phone numbers │ │ • Real-time subscriptions │
│ • Personal data │ │ • Edge Functions │
│ │ │ │
│ Appwrite / Firebase / │ │ Supabase / any cloud │
│ any self-hosted auth │ │ PostgreSQL provider │
└─────────────────────────────┘ └──────────────────────────────┘
The regional auth server stores only what the law requires. The global database handles application data without legal constraints. Token Bridge is the sole intersection point, and only tokens pass through it — not personal data.
The pattern isn’t tied to specific platforms. Appwrite can be swapped for Firebase Auth, Keycloak, or Auth0 with a local deployment. Supabase can be replaced with any cloud PostgreSQL with RLS. Token Bridge stays the same: accept a JWT from the regional provider, issue one for the global provider.
Takeaway
AuthBridge, Completer-lock, and SyncRecovery turn a potentially brittle two-BaaS setup into a predictable system. AuthBridge sequences six subsystems on every auth event. Completers deduplicate every concurrent token refresh. SyncRecovery retries every network hiccup with exponential backoff.
Dual-auth isn’t for every project. But if you’re dealing with data residency, building a multi-region app, or running two BaaS providers for any other reason, design three things before you start: a unified user identifier across systems, a server-side Token Bridge, and a client-side recovery mechanism. Everything else is implementation detail.
FAQ
Why does the Supabase JWT have a 1-hour TTL when the Appwrite JWT is only 15 minutes — and what happens if a Supabase token gets compromised during that window?
The asymmetry is intentional: Appwrite manages the identity session and can revoke it instantly, while Supabase JWTs are stateless bearer tokens that cannot be individually revoked once issued. The 1-hour TTL balances user experience (fewer refresh round-trips through Token Bridge) against security exposure. If a Supabase JWT is compromised, the attacker has at most an hour of access (the token’s remaining TTL) to application data — not personal data, which stays on the Appwrite regional server. For higher-risk scenarios, reducing the TTL to 15 minutes mirrors Appwrite’s cadence at the cost of a Token Bridge call on every Appwrite refresh.
How does the Completer-lock pattern handle the case where a token refresh succeeds but the original 401-triggering request’s response arrives out of order?
AuthRetryHandler stores the original request’s options and replays it after the Completer resolves with a new token — it does not reuse the original Response object. If the original request somehow resolves before the Completer (edge case on very fast networks with race between the 401 response and the refresh), AuthRetryHandler detects that the new token is already available and retries immediately without entering the lock queue. The Completer’s force-completion on logout covers the inverse scenario: requests in-flight during logout receive an error immediately rather than retrying with an invalidated token.
What is the realistic cold-start latency breakdown for the Appwrite → Token Bridge → Supabase chain, and how does WarmupService reduce it?
On a cold Deno isolate, the chain takes 3–4 seconds: Appwrite SDK session restore (~300 ms), Appwrite JWT request (~200 ms), Token Bridge cold start (~1,500–2,000 ms for Deno isolate spin-up), JWT signing and response (~100 ms), Supabase client initialization (~200 ms). WarmupService fires a lightweight ping to Token Bridge during app launch — before the user taps any data-dependent screen — converting the cold start penalty into a background operation. With a warm isolate, the full chain drops to 600–900 ms.