A no-nonsense deep dive into OAuth 2.1 and OpenID Connect. Authorization Code + PKCE with real HTTP requests, token storage wars, refresh token rotation, OIDC discovery, multi-tenant identity, and the vulnerabilities that bite even experienced teams.
I have implemented OAuth integration in production systems for over a decade. I have read the RFCs cover to cover -- multiple times, because they keep publishing new ones. I have debugged token flows at 2 AM when a misconfigured redirect URI brought down authentication for an entire platform. And after all of that, I can tell you with absolute certainty: OAuth is confusing on purpose. Not maliciously, but structurally. It is a framework designed by committee to solve a very broad problem, and the resulting specification leaves enough ambiguity to keep security consultants employed for decades.
This post is not another "OAuth in 5 minutes" tutorial. Those tutorials are why developers get OAuth wrong. Instead, I am going to walk through every part that I have seen trip up experienced engineers -- the parts where reading the spec once is not enough, where the mental model most developers carry is subtly broken, and where subtle bugs turn into CVEs.
This is the single most important thing to understand about OAuth, and the thing most developers get wrong. OAuth 2.0 is an authorization framework. It answers the question: "Should this application be allowed to access this resource on behalf of this user?" It does not answer: "Who is this user?"
The distinction matters enormously. When you do the OAuth dance with Google and get back an access token, that token says "the bearer of this token is allowed to read this user's email." It does not say "the bearer of this token is John Smith with email john@example.com." You might think, "But I can call the /userinfo endpoint with that token and get the user's identity." And you can. But that is not part of the OAuth spec. That is OpenID Connect, which is a separate specification built on top of OAuth. When you conflate the two, you end up with authentication systems that have authorization-shaped holes.
Here is the concrete problem. Suppose you build a login flow using plain OAuth 2.0 without OIDC. You redirect the user to Google, get an authorization code, exchange it for an access token, and then call Google's user info API to get the user's email. You use that email as the user's identity in your system. This works until an attacker finds a way to substitute an access token from a different OAuth client -- one that happens to have the same scopes. The access token is perfectly valid, Google returns the correct user info, but the token was not issued for your application. You just authenticated someone using a token that was meant for a completely different service. This is the confused deputy problem, and it is exactly what the ID token in OIDC was designed to prevent.
The ID token contains an aud (audience) claim that specifies which client the token was issued for. Your application can verify that the token was issued specifically for it, not for some other OAuth client. Without this check, you are building authentication on sand.
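A minimal sketch of that audience check, decoding the payload by hand so no library is needed (`assertAudience` is a hypothetical helper name; full signature, issuer, and expiry validation is covered later in this post):

```typescript
// Decodes only the payload segment; this does NOT verify the signature.
// Signature, issuer, nonce, and expiry checks are covered later in this post.
function assertAudience(idToken: string, expectedClientId: string): void {
  const payloadB64 = idToken.split(".")[1];
  if (!payloadB64) throw new Error("Malformed ID token");
  const claims = JSON.parse(
    Buffer.from(payloadB64, "base64url").toString("utf8")
  );
  // aud may be a single string or an array of audiences
  const audiences: unknown[] = Array.isArray(claims.aud)
    ? claims.aud
    : [claims.aud];
  if (!audiences.includes(expectedClientId)) {
    throw new Error("ID token was not issued for this client");
  }
}
```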
OAuth 2.0 defined four grant types: Authorization Code, Implicit, Resource Owner Password Credentials (ROPC), and Client Credentials. OAuth 2.1, which consolidates years of best practice documents and security BCPs, officially kills two of them.
The Implicit grant was designed for single-page applications back when browsers could not make cross-origin POST requests reliably. Instead of exchanging an authorization code for tokens at the token endpoint, the access token was returned directly in the URL fragment after the authorization redirect.
The security problems are obvious in hindsight. The access token appears in the browser's URL bar, in the browser history, in server logs if there is any kind of redirect, and it is trivially extractable via JavaScript by any script running on the page. There is no way to bind the token to the client that requested it. There is no refresh token, so when the access token expires, the user has to go through the entire authorization flow again, which led developers to issue long-lived access tokens, which made the security situation even worse.
The Implicit grant was a reasonable compromise for the browser capabilities of 2012. It is inexcusable in 2026. CORS is universally supported. The Authorization Code grant with PKCE works in browsers. There is no reason to ever use the Implicit grant again.
The Resource Owner Password Credentials grant lets the client application collect the user's username and password directly and send them to the authorization server. This defeats the entire purpose of OAuth, which is to avoid sharing credentials with third-party applications. The only scenario where ROPC ever made sense was migrating legacy applications that already had the user's credentials. OAuth 2.1 removes it entirely, and I have zero sympathy for anyone who was still using it.
Authorization Code + PKCE is now the universal grant for all clients -- confidential (server-side) and public (SPAs, mobile apps). PKCE is mandatory, not optional.
Client Credentials survives for machine-to-machine communication where no user is involved. A backend service authenticating to another backend service. No user consent, no redirect, just client ID and secret exchanged for an access token.
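As a sketch, a Client Credentials exchange is a single POST to the token endpoint. The endpoint URL, environment variable names, and the `read:orders` scope below are placeholders, not any particular provider's values:

```typescript
// Build the form body for a client credentials token request.
function clientCredentialsBody(
  clientId: string,
  clientSecret: string,
  scope?: string
): string {
  const params = new URLSearchParams({
    grant_type: "client_credentials",
    client_id: clientId,
    client_secret: clientSecret,
  });
  if (scope) params.set("scope", scope);
  return params.toString();
}

// Hypothetical token endpoint -- substitute your provider's URL.
async function getServiceToken(): Promise<string> {
  const response = await fetch("https://auth.provider.com/token", {
    method: "POST",
    headers: { "Content-Type": "application/x-www-form-urlencoded" },
    body: clientCredentialsBody(
      process.env.CLIENT_ID!,
      process.env.CLIENT_SECRET!,
      "read:orders" // optional: narrow the token's scopes
    ),
  });
  if (!response.ok) throw new Error(`Token request failed: ${response.status}`);
  return (await response.json()).access_token;
}
```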
Device Authorization Grant (RFC 8628) also survives for input-constrained devices like smart TVs and CLI tools where you cannot easily type a URL or interact with a browser.
Every OAuth tutorial shows you a diagram with arrows. I am going to show you the actual HTTP requests, because that is where the confusion lives.
Before anything else, the client generates a cryptographically random string called the code verifier, and derives a code challenge from it.
import { randomBytes, createHash } from "crypto";
function generateCodeVerifier(): string {
return randomBytes(32)
.toString("base64url");
}
function generateCodeChallenge(verifier: string): string {
return createHash("sha256")
.update(verifier)
.digest("base64url");
}
const codeVerifier = generateCodeVerifier();
const codeChallenge = generateCodeChallenge(codeVerifier);
// Store codeVerifier in the session -- you will need it later

The code verifier is a random string between 43 and 128 characters. The code challenge is the SHA-256 hash of the verifier, base64url-encoded. The client sends the challenge in the authorization request and the verifier in the token request. The authorization server can then verify that the same client that started the flow is the one finishing it.
The client redirects the user's browser to the authorization server:
GET /authorize?
response_type=code
&client_id=my-app-client-id
&redirect_uri=https%3A%2F%2Fmyapp.com%2Fcallback
&scope=openid%20profile%20email
&state=xyzABC123
&code_challenge=E9Melhoa2OwvFrEMTJguCHaoeK1t8URWbuGJSstw-cM
&code_challenge_method=S256
&nonce=n-0S6_WzA2Mj
HTTP/1.1
Host: auth.provider.com
Let me break down every parameter, because each one matters:
- response_type=code: You want an authorization code, not a token directly.
- client_id: Your application's identifier, registered with the authorization server.
- redirect_uri: Where the authorization server sends the user back after they consent. This must exactly match one of the URIs registered with the authorization server. Not approximately match. Exactly.
- scope: What permissions you are requesting. openid is what triggers OIDC behavior.
- state: A cryptographically random value that your client generates and stores. When the authorization server redirects back, it includes this value. You compare it to what you stored. This prevents CSRF attacks.
- code_challenge: The PKCE code challenge derived from your code verifier.
- code_challenge_method=S256: You are using SHA-256, not plain. Always use S256. The plain method provides no security.
- nonce: A random value that will be included in the ID token. You verify it to prevent replay attacks.

This part happens entirely at the authorization server. The user sees a login form, enters their credentials, sees a consent screen listing the requested scopes, and clicks "Allow." Your application is not involved in this step. That is the whole point -- the user's credentials never touch your application.
The authorization server redirects the user back to your redirect URI:
HTTP/1.1 302 Found
Location: https://myapp.com/callback?
code=SplxlOBeZQQYbYS6WxSbIA
&state=xyzABC123
The authorization code is a short-lived, single-use string. Many providers expire it within a minute or two, and the spec recommends a maximum lifetime of ten minutes. Exchange it for tokens immediately.
Your application must verify that the state parameter matches what you stored in step 2. If it does not match, abort the flow. Someone is trying to forge a callback.
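A framework-agnostic sketch of that check; `validateCallback` is a hypothetical helper you would call from your route handler, with `storedState` pulled from the session you populated before redirecting:

```typescript
interface CallbackQuery {
  code?: string;
  state?: string;
  error?: string;
}

// Returns the authorization code if the callback checks out, otherwise throws.
// storedState is the value you put in the session before redirecting.
function validateCallback(query: CallbackQuery, storedState: string): string {
  // Providers report consent denials and other failures via an error parameter
  if (query.error) {
    throw new Error(`Authorization failed: ${query.error}`);
  }
  if (!query.state || query.state !== storedState) {
    throw new Error("State mismatch -- aborting, possible forged callback");
  }
  if (!query.code) {
    throw new Error("Missing authorization code");
  }
  return query.code;
}
```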
Now your server makes a back-channel POST request to the token endpoint. This is server-to-server. The user's browser is not involved.
POST /token HTTP/1.1
Host: auth.provider.com
Content-Type: application/x-www-form-urlencoded
grant_type=authorization_code
&code=SplxlOBeZQQYbYS6WxSbIA
&redirect_uri=https%3A%2F%2Fmyapp.com%2Fcallback
&client_id=my-app-client-id
&client_secret=my-app-client-secret
&code_verifier=dBjftJeZ4CVP-mB92K27uhbUJU1p1r_wW1gFWFOEjXk
For confidential clients (server-side apps), you include the client_secret. For public clients (SPAs), you omit it -- PKCE alone provides the binding. The code_verifier is the original random string you generated in step 1. The authorization server hashes it and compares the result to the code_challenge you sent in step 2. If they match, it knows the same client that started the flow is the one completing it.
{
"access_token": "eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9...",
"token_type": "Bearer",
"expires_in": 3600,
"refresh_token": "8xLOxBtZp8",
"id_token": "eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9...",
"scope": "openid profile email"
}

You now have three different tokens. Each serves a completely different purpose. And confusing them is one of the most common mistakes I see.
The access token is what you send to resource servers (APIs) to prove you are authorized to access a resource. It is a bearer token -- whoever holds it can use it. This is why access tokens should be short-lived, typically 5 to 60 minutes.
Access tokens can be opaque strings or JWTs. The OAuth spec does not mandate a format. If the access token is a JWT, the resource server can validate it without calling the authorization server. If it is opaque, the resource server must call the token introspection endpoint to validate it.
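For the opaque case, the introspection call (RFC 7662) looks roughly like this. The endpoint URL and the resource server's credentials are placeholders:

```typescript
interface IntrospectionResult {
  active: boolean;
  scope?: string;
  sub?: string;
  exp?: number;
}

// Introspection requests authenticate the *resource server*, usually with
// HTTP Basic auth over its own credentials (not the user's token).
function basicAuthHeader(id: string, secret: string): string {
  return "Basic " + Buffer.from(`${id}:${secret}`).toString("base64");
}

// Hypothetical endpoint and credentials -- substitute your provider's values.
async function introspectToken(token: string): Promise<IntrospectionResult> {
  const response = await fetch("https://auth.provider.com/introspect", {
    method: "POST",
    headers: {
      "Content-Type": "application/x-www-form-urlencoded",
      Authorization: basicAuthHeader(
        process.env.RS_CLIENT_ID!,
        process.env.RS_CLIENT_SECRET!
      ),
    },
    body: new URLSearchParams({ token }),
  });
  const result = (await response.json()) as IntrospectionResult;
  // The one hard rule: an unknown or revoked token comes back { "active": false }
  if (!result.active) throw new Error("Token is not active");
  return result;
}
```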
Here is the thing about JWT access tokens that most people miss: the resource server is the intended audience of the access token, not the client application. The client should treat the access token as an opaque string, even if it is a JWT. The client should not parse it, should not make authorization decisions based on its contents, and should not rely on any claims inside it. The access token is a message from the authorization server to the resource server. The client is just the courier.
// WRONG: Client parsing the access token
const decoded = jwt.decode(accessToken);
if (decoded.role === "admin") {
showAdminPanel(); // Never do this
}
// RIGHT: Client sends token to API, API makes decisions
const response = await fetch("/api/admin/users", {
headers: { Authorization: `Bearer ${accessToken}` },
});
if (response.status === 403) {
showAccessDenied();
}

The refresh token is used to get new access tokens without requiring the user to re-authenticate. It is sent to the authorization server's token endpoint, never to a resource server. Refresh tokens are typically long-lived (days, weeks, or even months) and must be stored securely.
The critical property of refresh tokens is that they are only ever sent to the authorization server, never to a resource server. Ideally they never reach the browser at all -- the BFF pattern later in this post achieves exactly that. If a refresh token is compromised, the attacker can keep minting new access tokens until the refresh token expires or is revoked.
The ID token is the OIDC addition. It is always a JWT, and it contains claims about the authentication event. Who the user is, when they authenticated, how they authenticated, and critically, which client the token was issued for.
{
"iss": "https://auth.provider.com",
"sub": "user-123",
"aud": "my-app-client-id",
"exp": 1679616000,
"iat": 1679612400,
"nonce": "n-0S6_WzA2Mj",
"at_hash": "MTIzNDU2Nzg5MDEyMzQ1Ng",
"email": "user@example.com",
"name": "Jane Developer"
}
The ID token is consumed by the client application, not by resource servers. You validate it when you receive it -- check the signature, verify the issuer, verify the audience matches your client ID, verify the nonce matches what you sent, check the expiration. After validation, you extract the user's identity and create a local session. You do not send the ID token to your API on every request. That is what the access token is for.
JWTs are a compact, self-contained way to represent claims. They can be verified without a network call if you have the issuer's public key. This is great for distributed systems where you do not want every API server calling the authorization server on every request.
But JWTs have real problems that the "use JWTs for everything" crowd ignores:
Revocation is hard. Once a JWT is issued, it is valid until it expires. If you need to revoke a user's access immediately -- they are fired, their account is compromised, they violate terms of service -- you cannot invalidate a JWT that is already in the wild. You need a revocation list or a short expiration time with refresh token rotation. Neither is free.
Size matters. JWTs are large. A typical JWT with standard claims, user info, and roles can easily be 800+ bytes. If you stuff permissions into the token, it gets bigger. Every API request carries this overhead. For high-frequency APIs, this adds up.
Key management is a real operational burden. You need to rotate signing keys without invalidating all outstanding tokens. You need to distribute public keys to all resource servers. You need to handle key compromise scenarios. JWKS endpoints and key rotation policies are not trivial to operate correctly.
Information leakage. JWTs are signed, not encrypted (unless you use JWE, which almost nobody does). Anyone who intercepts a JWT can read its contents. Do not put sensitive data in JWT claims unless you are also encrypting the token.
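The revocation problem above is usually mitigated with a denylist of revoked token IDs rather than checking every token against the authorization server. A minimal in-memory sketch, assuming tokens carry a jti claim; a real deployment would back this with Redis or a similar shared store:

```typescript
// A minimal denylist sketch: revoked JWT IDs (jti) are remembered only
// until the token would have expired anyway, so the list stays bounded.
class TokenDenylist {
  private revoked = new Map<string, number>(); // jti -> exp (unix seconds)

  revoke(jti: string, exp: number): void {
    this.revoked.set(jti, exp);
  }

  isRevoked(jti: string, now = Math.floor(Date.now() / 1000)): boolean {
    const exp = this.revoked.get(jti);
    if (exp === undefined) return false;
    if (exp <= now) {
      // Token expired on its own; no need to track it any longer
      this.revoked.delete(jti);
      return false;
    }
    return true;
  }
}
```

The resource server consults this list after signature validation; a hit means the token is rejected even though it is cryptographically valid.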
import jwt from "jsonwebtoken";
import jwksClient from "jwks-rsa";
const client = jwksClient({
jwksUri: "https://auth.provider.com/.well-known/jwks.json",
cache: true,
cacheMaxAge: 600000, // 10 minutes
rateLimit: true,
});
function getSigningKey(header: jwt.JwtHeader): Promise<string> {
return new Promise((resolve, reject) => {
client.getSigningKey(header.kid, (err, key) => {
if (err) return reject(err);
resolve(key!.getPublicKey());
});
});
}
async function verifyAccessToken(token: string): Promise<jwt.JwtPayload> {
return new Promise((resolve, reject) => {
jwt.verify(
token,
(header, callback) => {
getSigningKey(header)
.then((key) => callback(null, key))
.catch(callback);
},
{
algorithms: ["RS256"],
issuer: "https://auth.provider.com",
audience: "my-api-resource-id",
clockTolerance: 30, // seconds
},
(err, decoded) => {
if (err) return reject(err);
resolve(decoded as jwt.JwtPayload);
}
);
});
}

This is where most developers' mental models break down completely. OAuth uses three different mechanisms for access control, and they operate at three different layers.
Scopes are the coarse-grained permissions that the user grants to the client application during the consent step. When you see "This app wants to access your email and profile," those are scopes. Scopes limit what the client application can do on behalf of the user. They do not grant the user new permissions.
If a user does not have admin access to a resource, granting the admin scope to a client application does not give that application admin access. Scopes are a ceiling, not a floor.
Common scopes you will encounter: openid (required for OIDC), profile (basic user info), email (email address), offline_access (get a refresh token). Custom scopes like read:orders, write:products define what API resources the application can access.
Claims are key-value pairs inside tokens (ID tokens and optionally access tokens) that make assertions about the user or the authentication event. Standard OIDC claims include sub (subject identifier), email, name, given_name, family_name, picture, and many more.
Claims tell you facts about the user. They do not directly grant or deny access. Your application uses claims to make authorization decisions, but the claims themselves are just data.
Permissions are your application's internal access control model. Roles, ACLs, ABAC policies -- whatever you use. These are not part of the OAuth or OIDC specs. They live in your application's database and business logic.
Here is how they work together in practice:
interface AuthorizationContext {
scopes: string[]; // From the access token - what the app can do
claims: Record<string, unknown>; // From tokens - facts about the user
permissions: string[]; // From your database - what the user can do
}
async function authorizeRequest(
accessToken: string,
requiredPermission: string
): Promise<boolean> {
// 1. Verify the access token and extract scopes
const tokenPayload = await verifyAccessToken(accessToken);
const scopes = (tokenPayload.scope as string || "").split(" ");
// 2. Check if the client app has the required scope
if (!scopes.includes("read:orders")) {
return false; // App not authorized for this resource type
}
// 3. Load user permissions from your database
const userPermissions = await loadUserPermissions(tokenPayload.sub!);
// 4. Check if the user has the specific permission
if (!userPermissions.includes(requiredPermission)) {
return false; // User not authorized for this action
}
return true;
}

The access token's scopes determine what the client application is allowed to request. The user's permissions in your system determine what the user is allowed to do. Both checks must pass. The scope is "the application is allowed to access orders on behalf of the user." The permission is "this specific user is allowed to view orders in the accounting department." They are independent constraints.
This is the part that starts religious wars. Where do you store tokens in a browser application? Every option has tradeoffs, and nobody wants to admit it.
Storing tokens in localStorage is the simplest approach and the one that most tutorials recommend. It is also the one that makes security researchers cringe.
The problem is XSS. If an attacker can execute JavaScript on your page -- through a dependency supply chain attack, a DOM-based XSS vulnerability, or even a malicious browser extension -- they can read localStorage and exfiltrate every token. Game over. The attacker has the user's access token and potentially their refresh token.
The counterargument is "if you have XSS, you are already compromised." This is true in a narrow sense -- an XSS attacker can make API calls using the user's session regardless of where tokens are stored. But there is a critical difference: if the attacker can exfiltrate the tokens, they can use them from their own machine, at any time, until the tokens expire. If tokens are in httpOnly cookies, the attacker can only make requests from the victim's browser session, and only while the XSS payload is executing.
Storing tokens in httpOnly, Secure, SameSite cookies prevents JavaScript from accessing them. This eliminates the token exfiltration vector from XSS attacks.
But cookies come with their own headaches. CSRF attacks are a concern, though SameSite=Strict or SameSite=Lax mitigates this significantly in modern browsers. Cookie size limits (typically 4 KB) can be a problem with large JWTs. And if your API is on a different domain than your frontend, you need to deal with third-party cookie restrictions, which browsers are increasingly tightening.
// Setting tokens as httpOnly cookies in Express
function setAuthCookies(
res: Response,
accessToken: string,
refreshToken: string
): void {
res.cookie("access_token", accessToken, {
httpOnly: true,
secure: true,
sameSite: "strict",
maxAge: 3600 * 1000, // 1 hour
path: "/",
domain: ".myapp.com",
});
res.cookie("refresh_token", refreshToken, {
httpOnly: true,
secure: true,
sameSite: "strict",
maxAge: 7 * 24 * 3600 * 1000, // 7 days
path: "/api/auth/refresh", // Only sent to refresh endpoint
domain: ".myapp.com",
});
}

Notice the refresh token cookie has a restricted path. It is only sent when the browser makes a request to /api/auth/refresh. This limits the exposure of the refresh token -- it is not sent with every API request, only when the client explicitly refreshes.
The Backend-for-Frontend (BFF) pattern is the approach I recommend for any application where security is a genuine priority. The idea is simple: your SPA never touches OAuth tokens directly. Instead, you have a thin backend that handles the entire OAuth flow, stores tokens server-side, and issues a simple session cookie to the browser.
import express from "express";
import session from "express-session";
import RedisStore from "connect-redis";
import { createClient } from "redis";
const redisClient = createClient({ url: process.env.REDIS_URL });
const app = express();
app.use(express.json()); // parse JSON bodies so req.body can be forwarded below
app.use(
session({
store: new RedisStore({ client: redisClient }),
secret: process.env.SESSION_SECRET!,
resave: false,
saveUninitialized: false,
cookie: {
httpOnly: true,
secure: true,
sameSite: "strict",
maxAge: 24 * 60 * 60 * 1000, // 24 hours
},
})
);
// BFF proxies API calls, attaching the access token server-side
// app.all (not app.use) so the wildcard capture lands in req.params[0]
app.all("/api/proxy/*", async (req, res) => {
const tokenData = req.session.tokens;
if (!tokenData) {
return res.status(401).json({ error: "Not authenticated" });
}
// Refresh if expired
if (Date.now() > tokenData.expiresAt) {
const newTokens = await refreshAccessToken(tokenData.refreshToken);
req.session.tokens = {
accessToken: newTokens.access_token,
refreshToken: newTokens.refresh_token || tokenData.refreshToken,
expiresAt: Date.now() + newTokens.expires_in * 1000,
};
}
// Forward request to actual API with access token
const apiPath = req.params[0];
const apiResponse = await fetch(`${process.env.API_BASE}/${apiPath}`, {
method: req.method,
headers: {
Authorization: `Bearer ${req.session.tokens!.accessToken}`,
"Content-Type": req.headers["content-type"] || "application/json",
},
body: ["GET", "HEAD"].includes(req.method) ? undefined : JSON.stringify(req.body),
});
const data = await apiResponse.json();
res.status(apiResponse.status).json(data);
});

The browser only ever sees a session cookie. The access and refresh tokens never leave the server. The BFF handles token refresh transparently. From the SPA's perspective, it is just making API calls to its own backend, and the backend handles all the OAuth complexity.
The downside is that you need a server. For teams building "serverless" SPAs, this means adding infrastructure. But I would argue that if your application handles sensitive data and you do not have a backend for frontend, your architecture has a security gap, and eventually you will need to address it.
Refresh token rotation is one of those security measures that sounds simple but has surprisingly sharp edges. The concept: every time a refresh token is used, the authorization server issues a new refresh token along with the new access token. The old refresh token is immediately invalidated.
The purpose is to limit the damage from a compromised refresh token. If an attacker steals a refresh token and uses it, the legitimate client's next refresh attempt will fail (because the token was already consumed). The authorization server can detect this and revoke the entire token family -- all tokens descended from the original authorization.
interface TokenFamily {
familyId: string;
currentRefreshToken: string;
userId: string;
clientId: string;
createdAt: Date;
isRevoked: boolean;
}
async function handleTokenRefresh(
refreshToken: string,
clientId: string
): Promise<TokenResponse> {
const family = await findTokenFamily(refreshToken);
if (!family) {
throw new OAuthError("invalid_grant", "Unknown refresh token");
}
if (family.isRevoked) {
// This token family was already revoked -- possible token theft
// Revoke ALL tokens for this user as a precaution
await revokeAllUserTokens(family.userId);
await notifySecurityTeam({
event: "refresh_token_reuse_detected",
userId: family.userId,
clientId: family.clientId,
familyId: family.familyId,
});
throw new OAuthError("invalid_grant", "Token family revoked");
}
if (family.currentRefreshToken !== refreshToken) {
// Someone is using an old refresh token from this family
// This indicates the current token was stolen
await revokeTokenFamily(family.familyId);
throw new OAuthError("invalid_grant", "Refresh token already used");
}
// Issue new tokens
const newAccessToken = await issueAccessToken(family.userId, clientId);
const newRefreshToken = await generateRefreshToken();
// Rotate: update the family with the new refresh token
await updateTokenFamily(family.familyId, newRefreshToken);
return {
access_token: newAccessToken,
refresh_token: newRefreshToken,
token_type: "Bearer",
expires_in: 3600,
};
}

The tricky part is race conditions. What happens if the user has your app open in two tabs, and both tabs try to refresh at the same moment? Tab A sends the refresh token, gets a new one. Tab B sends the same (now-invalidated) refresh token. The server sees a reuse and revokes the entire family. Both tabs are now logged out.
This is a real problem, and there are a few approaches to handle it:
Grace period: Accept the old refresh token for a short window (say, 30 seconds) after rotation. This handles the concurrent request case but opens a small window for token reuse attacks.
Client-side coordination: Use a lock mechanism (like BroadcastChannel API or localStorage events) to ensure only one tab refreshes at a time, and the other tabs use the new token.
Backend queuing: The authorization server deduplicates concurrent refresh requests for the same token and returns the same new token set to all of them.
I have seen all three approaches in production. The grace period is the most pragmatic. The client-side coordination is the most robust. The backend queuing is the most elegant but hardest to implement correctly in a distributed system.
OpenID Connect (OIDC) is an identity layer on top of OAuth 2.0. It is not a separate protocol -- it reuses OAuth's authorization endpoint, token endpoint, and access tokens, and adds identity-specific features. Understanding what OIDC adds (and what it does not) clears up most of the confusion.
OIDC defines a discovery document at /.well-known/openid-configuration. This is a JSON document that tells clients everything they need to know about the OIDC provider: where the authorization endpoint is, where the token endpoint is, what scopes are supported, what signing algorithms are used, where the JWKS endpoint is.
interface OpenIDConfiguration {
issuer: string;
authorization_endpoint: string;
token_endpoint: string;
userinfo_endpoint: string;
jwks_uri: string;
scopes_supported: string[];
response_types_supported: string[];
grant_types_supported: string[];
id_token_signing_alg_values_supported: string[];
claims_supported: string[];
code_challenge_methods_supported: string[];
}
async function discoverOIDCProvider(
issuerUrl: string
): Promise<OpenIDConfiguration> {
const response = await fetch(
`${issuerUrl}/.well-known/openid-configuration`
);
if (!response.ok) {
throw new Error(`OIDC discovery failed: ${response.status}`);
}
const config = await response.json() as OpenIDConfiguration;
// Validate the issuer matches
if (config.issuer !== issuerUrl) {
throw new Error(
`Issuer mismatch: expected ${issuerUrl}, got ${config.issuer}`
);
}
return config;
}

Discovery eliminates hard-coded endpoint URLs and makes multi-provider support significantly easier. Instead of maintaining a configuration file with endpoints for each provider, you just store the issuer URL and discover everything else at runtime (or at startup, with appropriate caching).
The UserInfo endpoint returns claims about the authenticated user. You call it with an access token, and it returns the user's profile information based on the scopes you requested.
This might seem redundant with the ID token, and it partially is. The ID token contains the essential identity claims inline. The UserInfo endpoint provides additional claims and is useful when you need more information than what was included in the ID token, or when you want to check the user's current profile rather than what was snapshotted at authentication time.
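A sketch of a UserInfo call, assuming the endpoint comes from the discovery document above. One detail worth baking in: OIDC requires the client to verify that the response's sub matches the ID token's sub before trusting any other claim:

```typescript
// OIDC requires verifying that the UserInfo sub matches the ID token's sub
// before trusting any of the other claims in the response.
function checkUserInfoSub(
  claims: Record<string, unknown>,
  idTokenSub: string
): Record<string, unknown> {
  if (claims.sub !== idTokenSub) {
    throw new Error("UserInfo sub does not match ID token sub");
  }
  return claims;
}

async function fetchUserInfo(
  userinfoEndpoint: string, // from discovery: userinfo_endpoint
  accessToken: string,
  idTokenSub: string
): Promise<Record<string, unknown>> {
  const response = await fetch(userinfoEndpoint, {
    headers: { Authorization: `Bearer ${accessToken}` },
  });
  if (!response.ok) {
    throw new Error(`UserInfo request failed: ${response.status}`);
  }
  return checkUserInfoSub(await response.json(), idTokenSub);
}
```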
Proper ID token validation is not just "decode the JWT and check the expiration." The OIDC spec defines a specific validation procedure, and skipping steps creates vulnerabilities.
import jwt from "jsonwebtoken";
interface IDTokenClaims {
iss: string;
sub: string;
aud: string | string[];
exp: number;
iat: number;
nonce?: string;
at_hash?: string;
auth_time?: number;
acr?: string;
amr?: string[];
}
async function validateIDToken(
idToken: string,
expectedClientId: string,
expectedNonce: string,
expectedIssuer: string
): Promise<IDTokenClaims> {
// 1. Decode header to get kid (key ID)
const header = jwt.decode(idToken, { complete: true })?.header;
if (!header || !header.kid) {
throw new Error("Invalid ID token: missing header or kid");
}
// 2. Fetch the signing key from JWKS
const signingKey = await getSigningKey(header);
// 3. Verify signature and standard claims
const claims = jwt.verify(idToken, signingKey, {
algorithms: ["RS256", "ES256"], // Only allow expected algorithms
issuer: expectedIssuer,
audience: expectedClientId,
clockTolerance: 60,
}) as IDTokenClaims;
// 4. Verify nonce (prevents replay attacks)
if (claims.nonce !== expectedNonce) {
throw new Error("ID token nonce mismatch");
}
// 5. Verify iat (issued at) is not too far in the past
const maxAge = 600; // 10 minutes
const now = Math.floor(Date.now() / 1000);
if (now - claims.iat > maxAge) {
throw new Error("ID token too old");
}
// 6. If at_hash is present, verify it matches the access token
// (Prevents token substitution attacks)
return claims;
}

The at_hash claim is particularly important and commonly overlooked. It is the left half of the SHA-256 hash of the access token, base64url-encoded. If present, you must verify it matches the access token you received alongside the ID token. This prevents an attacker from swapping the access token in the response -- the ID token cryptographically binds to the specific access token it was issued with.
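That check is a few lines: hash the access token, keep the left half of the digest, base64url-encode, compare. This sketch covers the RS256/ES256 case, where the hash is SHA-256; other signing algorithms use the hash function matching their signature:

```typescript
import { createHash } from "crypto";

// at_hash for RS256/ES256 (both use SHA-256): hash the access token's ASCII
// bytes, keep the left half of the digest, base64url-encode without padding.
function computeAtHash(accessToken: string): string {
  const digest = createHash("sha256").update(accessToken, "ascii").digest();
  return digest.subarray(0, digest.length / 2).toString("base64url");
}

function verifyAtHash(accessToken: string, atHashClaim: string): void {
  if (computeAtHash(accessToken) !== atHashClaim) {
    throw new Error("at_hash mismatch: access token was substituted");
  }
}
```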
If your application needs to support "Sign in with Google," "Sign in with Microsoft," "Sign in with your company's Okta," and a dozen enterprise SAML-to-OIDC bridges, you are doing multi-tenant OAuth. And it is where everything gets complicated.
The core challenge is that each identity provider has its own configuration, its own quirks, its own claims format, and its own interpretation of the spec. Google uses sub as a stable numeric identifier. Microsoft uses a GUID. Some enterprise IdPs use the user's email as sub, which breaks when someone changes their email.
interface OIDCProviderConfig {
providerId: string;
issuer: string;
clientId: string;
clientSecret: string;
discoveryUrl: string;
claimMappings: {
userId: string; // Which claim to use as the user ID
email: string; // Which claim contains the email
displayName: string; // Which claim contains the display name
};
allowedDomains?: string[]; // For enterprise: restrict to specific email domains
}
class MultiTenantOAuth {
private providers: Map<string, OIDCProviderConfig> = new Map();
private discoveryCache: Map<string, OpenIDConfiguration> = new Map();
async registerProvider(config: OIDCProviderConfig): Promise<void> {
// Discover and validate the provider
const discovery = await discoverOIDCProvider(config.issuer);
this.discoveryCache.set(config.providerId, discovery);
this.providers.set(config.providerId, config);
}
buildAuthorizationUrl(providerId: string, sessionState: AuthSessionState): string {
const config = this.providers.get(providerId);
const discovery = this.discoveryCache.get(providerId);
if (!config || !discovery) {
throw new Error(`Unknown provider: ${providerId}`);
}
const params = new URLSearchParams({
response_type: "code",
client_id: config.clientId,
redirect_uri: `${process.env.APP_URL}/auth/callback/${providerId}`,
scope: "openid profile email",
state: sessionState.state,
nonce: sessionState.nonce,
code_challenge: sessionState.codeChallenge,
code_challenge_method: "S256",
});
return `${discovery.authorization_endpoint}?${params}`;
}
async handleCallback(
providerId: string,
code: string,
sessionState: AuthSessionState
): Promise<NormalizedUser> {
const config = this.providers.get(providerId);
const discovery = this.discoveryCache.get(providerId);
if (!config || !discovery) {
throw new Error(`Unknown provider: ${providerId}`);
}
// Exchange code for tokens
const tokens = await this.exchangeCode(
discovery.token_endpoint,
code,
config,
sessionState,
providerId
);
// Validate ID token
const claims = await validateIDToken(
tokens.id_token,
config.clientId,
sessionState.nonce,
config.issuer
);
// Normalize user identity using provider-specific claim mappings
return {
providerId,
providerUserId: String(claims[config.claimMappings.userId as keyof IDTokenClaims]),
email: String(claims[config.claimMappings.email as keyof IDTokenClaims]),
displayName: String(claims[config.claimMappings.displayName as keyof IDTokenClaims]),
};
}
private async exchangeCode(
tokenEndpoint: string,
code: string,
config: OIDCProviderConfig,
sessionState: AuthSessionState,
providerId: string
): Promise<TokenResponse & { id_token: string }> {
const body = new URLSearchParams({
grant_type: "authorization_code",
code,
redirect_uri: `${process.env.APP_URL}/auth/callback/${providerId}`,
client_id: config.clientId,
client_secret: config.clientSecret,
code_verifier: sessionState.codeVerifier,
});
const response = await fetch(tokenEndpoint, {
method: "POST",
headers: { "Content-Type": "application/x-www-form-urlencoded" },
body,
});
if (!response.ok) {
const error = await response.json();
throw new OAuthError(error.error, error.error_description);
}
return response.json();
}
}

The key insight with multi-tenant OAuth is the normalized user identity. Each provider gives you user information in a different shape. You need a normalization layer that maps provider-specific claims to your internal user model. And you need a linking strategy: if a user signs in with Google and later signs in with Microsoft using the same email, are they the same user? Usually yes, but not always. Enterprise accounts might use different email domains. Some organizations have multiple identity providers.
I recommend a federated identity table:
// Database schema concept
interface FederatedIdentity {
id: string;
userId: string; // Your internal user ID
provider: string; // "google", "microsoft", "okta-acme-corp"
providerUserId: string; // The sub claim from that provider
email: string;
linkedAt: Date;
}
// User can have multiple linked identities
// Lookup: (provider, providerUserId) -> userId
// This allows a single user to sign in with multiple providers

OAuth vulnerabilities are not theoretical. They are exploited regularly, and even big companies get them wrong. Here are the ones I have seen in production.
If your authorization server does not perform exact string matching on redirect URIs, attackers can craft redirect URIs that send authorization codes to their servers. Some OAuth implementations allow wildcard matching or prefix matching on redirect URIs. This is catastrophically insecure.
// Registered: https://myapp.com/callback
// Attacker tries: https://myapp.com/callback/../../../evil.com/steal
// Or: https://myapp.com.evil.com/callback
// Or: https://myapp.com/callback?redirect=https://evil.com
The fix is simple: exact string matching on redirect URIs. No wildcards. No subdomain matching. No query parameter variations. Every redirect URI must be registered exactly as it will be used.
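On the server side the check really is just a string comparison against the registered list. A minimal sketch, with registeredRedirectUris as an assumed in-memory registry keyed by client_id:

```typescript
// Sketch: exact-match redirect URI validation on the authorization server.
// registeredRedirectUris is an assumed registry; in production this would
// come from the client registration database.
const registeredRedirectUris: Map<string, string[]> = new Map([
  ["my-client", ["https://myapp.com/callback"]],
]);

function isValidRedirectUri(clientId: string, redirectUri: string): boolean {
  const registered = registeredRedirectUris.get(clientId) ?? [];
  // Exact string comparison -- no normalization, no prefix matching,
  // no wildcard expansion, no query-parameter tolerance.
  return registered.includes(redirectUri);
}
```

The temptation is to "helpfully" normalize URLs before comparing -- lowercase the host, strip trailing slashes, ignore query strings. Resist it. Every normalization step is a parser-differential bug waiting to happen.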
If you do not include and validate the state parameter, an attacker can initiate an OAuth flow with their own account and trick a victim into completing it. The victim ends up silently logged in to the attacker's account. This sounds backward, but it is dangerous: anything the victim then does in that session -- saved documents, uploaded files, linked payment methods -- lands in an account the attacker controls and can read at leisure.
If your callback page includes links to external resources, or loads external scripts, the authorization code in the URL can leak via the Referer header. The mitigation is to immediately exchange the code and redirect to a clean URL, and to set Referrer-Policy: no-referrer on your callback page.
In a multi-provider setup, an attacker can manipulate the flow to cause the client to send an authorization code obtained from one provider to a different provider's token endpoint. The client thinks it is completing a flow with Provider A, but it is actually sending Provider A's authorization code to the attacker's server posing as Provider B.
The mitigation is to include the iss (issuer) parameter in the authorization response (defined in RFC 9207) and to bind the provider identity to the session state so the callback handler knows exactly which provider to use.
app.get("/auth/callback/:providerId", async (req, res) => {
const { providerId } = req.params;
const { code, state, iss } = req.query;
// Verify state matches what we stored
const sessionState = await getSessionState(state as string);
if (!sessionState) {
return res.status(400).json({ error: "Invalid state" });
}
// Verify the provider matches what we expected
if (sessionState.expectedProvider !== providerId) {
return res.status(400).json({ error: "Provider mismatch" });
}
// If iss parameter is present, verify it matches
if (iss && iss !== sessionState.expectedIssuer) {
return res.status(400).json({ error: "Issuer mismatch" });
}
// Proceed with token exchange
const user = await oauthManager.handleCallback(
providerId,
code as string,
sessionState
);
// Create session and redirect
req.session.userId = user.id;
res.redirect("/dashboard");
});

Many applications request broad scopes and never check whether the granted scopes actually include what they need. If a user denies a specific scope during consent, the authorization server may still issue a token with reduced scopes. If your application does not check the returned scopes, it will fail in confusing ways later.
async function handleTokenResponse(tokenResponse: TokenResponse): Promise<void> {
const grantedScopes = tokenResponse.scope?.split(" ") || [];
const requiredScopes = ["openid", "email", "profile"];
const missingScopes = requiredScopes.filter(
(s) => !grantedScopes.includes(s)
);
if (missingScopes.length > 0) {
throw new Error(
`Required scopes not granted: ${missingScopes.join(", ")}. ` +
`Please re-authorize with the required permissions.`
);
}
}

Let me put it all together with an Express middleware that handles the full OAuth/OIDC lifecycle. This is stripped-down production code, not a tutorial example.
import express, { Request, Response, NextFunction } from "express";
import { randomBytes, createHash } from "crypto";
// Types
interface AuthSession {
state: string;
nonce: string;
codeVerifier: string;
providerId: string;
returnTo: string;
}
interface UserSession {
userId: string;
email: string;
accessToken: string;
refreshToken: string;
tokenExpiresAt: number;
provider: string;
}
// Middleware: Require authentication
function requireAuth(req: Request, res: Response, next: NextFunction): void {
const userSession = req.session?.user as UserSession | undefined;
if (!userSession) {
// Store the requested URL for post-login redirect
req.session.returnTo = req.originalUrl;
res.redirect("/auth/login");
return;
}
// Check if access token is expired
if (Date.now() > userSession.tokenExpiresAt) {
// Token expired -- try to refresh
refreshTokens(req, res, next).catch(() => {
req.session.destroy(() => {
res.redirect("/auth/login");
});
});
return;
}
next();
}
async function refreshTokens(
req: Request,
res: Response,
next: NextFunction
): Promise<void> {
const userSession = req.session.user as UserSession;
const provider = oauthManager.getProvider(userSession.provider);
const discovery = await oauthManager.getDiscovery(userSession.provider);
const body = new URLSearchParams({
grant_type: "refresh_token",
refresh_token: userSession.refreshToken,
client_id: provider.clientId,
client_secret: provider.clientSecret,
});
const response = await fetch(discovery.token_endpoint, {
method: "POST",
headers: { "Content-Type": "application/x-www-form-urlencoded" },
body,
});
if (!response.ok) {
throw new Error("Token refresh failed");
}
const tokens = await response.json();
// Update session with new tokens (rotation)
userSession.accessToken = tokens.access_token;
userSession.tokenExpiresAt = Date.now() + tokens.expires_in * 1000;
if (tokens.refresh_token) {
userSession.refreshToken = tokens.refresh_token;
}
next();
}
// Routes
const authRouter = express.Router();
// Initiate login
authRouter.get("/login/:providerId", (req, res) => {
const { providerId } = req.params;
const state = randomBytes(32).toString("base64url");
const nonce = randomBytes(16).toString("base64url");
const codeVerifier = randomBytes(32).toString("base64url");
const codeChallenge = createHash("sha256")
.update(codeVerifier)
.digest("base64url");
// Store auth session data
req.session.authFlow = {
state,
nonce,
codeVerifier,
providerId,
returnTo: req.session.returnTo || "/",
} satisfies AuthSession;
const authUrl = oauthManager.buildAuthorizationUrl(providerId, {
state,
nonce,
codeChallenge,
codeChallengeMethod: "S256",
});
res.redirect(authUrl);
});
// Handle callback
authRouter.get("/callback/:providerId", async (req, res) => {
const { providerId } = req.params;
const { code, state, error, error_description } = req.query;
// Handle authorization errors
if (error) {
console.error(`OAuth error: ${error} - ${error_description}`);
return res.redirect(`/auth/error?message=${encodeURIComponent(String(error_description || error))}`);
}
// Validate session state
const authFlow = req.session.authFlow as AuthSession | undefined;
if (!authFlow || authFlow.state !== state || authFlow.providerId !== providerId) {
return res.status(400).json({ error: "Invalid authentication state" });
}
try {
// Exchange code for tokens and validate
const result = await oauthManager.handleCallback(
providerId,
code as string,
{
state: authFlow.state,
nonce: authFlow.nonce,
codeVerifier: authFlow.codeVerifier,
codeChallenge: "", // Not needed for callback
}
);
// Find or create user in your database
const user = await findOrCreateUser({
provider: providerId,
providerUserId: result.providerUserId,
email: result.email,
displayName: result.displayName,
});
// Create user session
req.session.user = {
userId: user.id,
email: user.email,
accessToken: result.accessToken,
refreshToken: result.refreshToken,
tokenExpiresAt: result.tokenExpiresAt,
provider: providerId,
} satisfies UserSession;
// Clean up auth flow data
delete req.session.authFlow;
const returnTo = authFlow.returnTo || "/";
delete req.session.returnTo;
res.redirect(returnTo);
} catch (err) {
console.error("OAuth callback error:", err);
res.redirect("/auth/error?message=Authentication+failed");
}
});
// Logout
authRouter.post("/logout", async (req, res) => {
const userSession = req.session.user as UserSession | undefined;
if (userSession) {
// Revoke tokens at the provider if supported
try {
const discovery = await oauthManager.getDiscovery(userSession.provider);
if (discovery.revocation_endpoint) {
await fetch(discovery.revocation_endpoint, {
method: "POST",
headers: { "Content-Type": "application/x-www-form-urlencoded" },
body: new URLSearchParams({
token: userSession.refreshToken,
token_type_hint: "refresh_token",
}),
});
}
} catch {
// Log but do not block logout
}
}
req.session.destroy(() => {
res.clearCookie("connect.sid");
res.redirect("/");
});
});
export { authRouter, requireAuth };

After years of implementing OAuth in various shapes, here are the things I wish were in every tutorial:
Always use PKCE, even for confidential clients. The spec now requires it, and there is zero downside. It protects against authorization code interception regardless of client type.
Never roll your own authorization server. Use an established one -- Keycloak, Auth0, Ory Hydra, AWS Cognito. The attack surface of an authorization server is enormous, and the security requirements are beyond what most teams can maintain.
Short-lived access tokens are not optional. Five to fifteen minutes is a reasonable lifetime. If you are issuing access tokens that last 24 hours, you have a revocation problem you are ignoring.
Test the unhappy paths. What happens when the user denies consent? What happens when the authorization server is down? What happens when token refresh fails? What happens when the user's account is disabled between the time they authenticated and the time they use their access token? Every one of these scenarios will happen in production.
Log everything, but sanitize tokens. Log the authorization flow events -- who initiated it, which provider, what scopes were requested, whether it succeeded or failed. But never log actual tokens. Log a hash or the last few characters for correlation.
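One way to do that is a fingerprint helper -- hash the token and keep only the last few characters for eyeball correlation. A sketch (tokenFingerprint is a hypothetical name, not from any library):

```typescript
import { createHash } from "crypto";

// Sketch: produce a loggable fingerprint of a token without logging the token.
// The hash prefix lets you correlate log lines across services; the trailing
// characters make it easy to match against what a user pastes in a support ticket.
function tokenFingerprint(token: string): string {
  const hashPrefix = createHash("sha256").update(token).digest("hex").slice(0, 12);
  return `sha256:${hashPrefix}...${token.slice(-4)}`;
}

// Usage: console.log(`refresh succeeded for ${tokenFingerprint(refreshToken)}`);
```

Whatever shape you choose, the invariant is the same: nothing written to a log line should be sufficient to reconstruct or replay the credential.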
Plan for provider outages. If your sole identity provider goes down, all your users are locked out. Consider supporting multiple providers, or at least have a break-glass procedure.
OAuth 2.1 and OIDC are not inherently difficult. They are difficult because the specification is broad, the implementations vary, and the security implications of getting any step wrong are severe. The best thing you can do is understand the protocol well enough to know which parts matter for your specific use case, implement those parts correctly, and use battle-tested libraries for everything else. The days of hand-rolling OAuth clients from scratch should be behind us. The days of understanding what those libraries do on your behalf should not be.