A deep dive into authentication patterns for web applications. Why JWTs aren't always the answer, when sessions still win, OAuth 2.0 flows demystified, refresh token rotation, and the security mistakes I see in almost every codebase.
I have reviewed more authentication pull requests than I care to count. Not just my own code -- code from teams at startups, mid-sized companies, and a few Fortune 500 engineering orgs. And here is the uncomfortable truth: almost every codebase I have audited has at least one critical authentication vulnerability. Not edge cases. Not theoretical attacks. Real, exploitable holes that would let an attacker impersonate users, steal sessions, or escalate privileges.
The problem is not that developers are careless. The problem is that authentication is taught badly. Every blog post says "use JWTs" or "use sessions" without explaining the actual tradeoffs. Every OAuth tutorial glosses over the parts that matter. And the security implications of each decision are buried in RFCs that nobody reads.
This post is different. I am going to walk through every major authentication pattern, explain the tradeoffs that actually matter in production, and share the mistakes I have seen (and made) so you can avoid them. Fair warning: I have strong opinions. I have earned every single one of them the hard way.
Let me start with the debate that consumes more engineering hours than it should. Sessions versus JWTs. The way this debate usually goes is: someone on the team says "JWTs are stateless and scale better," someone else says "sessions are more secure," and then the team picks whichever one the most senior engineer prefers, without actually understanding the tradeoffs.
Both sides are wrong in the ways that matter most.
A session-based system works like this: the user logs in, the server creates a session record (in a database, Redis, or some other store), and sends back an opaque session ID in a cookie. On every subsequent request, the server looks up the session ID and loads the associated data.
import { randomBytes } from "crypto";
interface SessionData {
userId: string;
roles: string[];
createdAt: number;
lastActivity: number;
ipAddress: string;
userAgent: string;
csrfToken: string;
}
class SessionStore {
private redis: Redis;
private readonly SESSION_TTL = 60 * 60 * 24; // 24 hours
private readonly IDLE_TIMEOUT = 60 * 30; // 30 minutes
async create(userId: string, req: Request): Promise<string> {
const sessionId = randomBytes(32).toString("hex");
const csrfToken = randomBytes(32).toString("hex");
const session: SessionData = {
userId,
roles: await this.loadUserRoles(userId),
createdAt: Date.now(),
lastActivity: Date.now(),
ipAddress: this.extractIp(req),
userAgent: req.headers.get("user-agent") ?? "unknown",
csrfToken,
};
await this.redis.set(
`session:${sessionId}`,
JSON.stringify(session),
"EX",
this.SESSION_TTL
);
// Track active sessions per user for the "active sessions" UI
await this.redis.sadd(`user-sessions:${userId}`, sessionId);
return sessionId;
}
async validate(sessionId: string): Promise<SessionData | null> {
const raw = await this.redis.get(`session:${sessionId}`);
if (!raw) return null;
const session: SessionData = JSON.parse(raw);
// Check idle timeout
if (Date.now() - session.lastActivity > this.IDLE_TIMEOUT * 1000) {
await this.destroy(sessionId, session.userId);
return null;
}
// Sliding window: refresh TTL on activity
session.lastActivity = Date.now();
await this.redis.set(
`session:${sessionId}`,
JSON.stringify(session),
"EX",
this.SESSION_TTL
);
return session;
}
async destroy(sessionId: string, userId: string): Promise<void> {
await this.redis.del(`session:${sessionId}`);
await this.redis.srem(`user-sessions:${userId}`, sessionId);
}
async destroyAllForUser(userId: string): Promise<void> {
const sessionIds = await this.redis.smembers(`user-sessions:${userId}`);
if (sessionIds.length > 0) {
await this.redis.del(...sessionIds.map((id) => `session:${id}`));
await this.redis.del(`user-sessions:${userId}`);
}
}
}

Here is why sessions are still the right default for most web applications:
Instant revocation. When you detect that a user's account has been compromised, you delete their session records and they are immediately logged out. Not "logged out after the token expires in 15 minutes." Immediately. In incident response, those 15 minutes can be the difference between a contained breach and a catastrophic one.
The "active sessions" feature is trivial. Users expect to see where they are logged in and to revoke individual sessions. With server-side sessions, this is a simple UI on top of data you already have. With JWTs, you need a token blocklist -- which means you have rebuilt server-side sessions with extra steps.
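With the per-user index the SessionStore above maintains, the active-sessions list is just a read. Here is a sketch of that read path; the in-memory Maps stand in for Redis (`SMEMBERS` plus a batch of `GET`s) so the example runs standalone:

```typescript
// Sketch: listing a user's active sessions from the per-user index.
// Key shapes follow the SessionStore above; Maps stand in for Redis.
interface ActiveSession {
  sessionId: string;
  ipAddress: string;
  userAgent: string;
  lastActivity: number;
}

const sessions = new Map<string, string>();          // "session:<id>" -> JSON blob
const userSessions = new Map<string, Set<string>>(); // userId -> set of session IDs

function listActiveSessions(userId: string): ActiveSession[] {
  const ids = userSessions.get(userId) ?? new Set<string>();
  const result: ActiveSession[] = [];
  for (const id of ids) {
    const raw = sessions.get(`session:${id}`);
    if (!raw) continue; // session expired via TTL but is still in the index; skip it
    const data = JSON.parse(raw);
    result.push({
      sessionId: id,
      ipAddress: data.ipAddress,
      userAgent: data.userAgent,
      lastActivity: data.lastActivity,
    });
  }
  return result;
}
```

In production you would also prune stale IDs from the set as you encounter them, since Redis TTL expiry removes the session record but not its index entry.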
Session data stays on the server. The client never sees roles, permissions, internal user IDs, or any other sensitive data. With JWTs, all of that is in the payload. Yes, it is base64-encoded, not encrypted. Anyone can decode it. I have seen JWTs that contained email addresses, phone numbers, and internal database IDs. That is an information leak.
Cookie size stays small. A session ID is 64 characters. A JWT with a few claims is already 400+ bytes, and I have seen JWTs that were over 4KB -- the maximum cookie size in most browsers. When you hit that limit, things break in ways that are very hard to debug.
The cost is that you need a session store. In 2026, this is a non-issue. Redis can handle millions of session lookups per second on a single node. The latency is sub-millisecond. If Redis being a "single point of failure" worries you, use Redis Sentinel or a cluster. This is a solved problem.
JWTs are not bad. They are misused. Here is the narrow set of situations where JWTs genuinely shine:
Microservice-to-microservice authentication. When Service A calls Service B, and Service B needs to know who the original user is and what permissions they have, a JWT is elegant. Service B can verify the token without calling an auth service. This is the use case JWTs were designed for.
Short-lived tokens for specific actions. Password reset links, email verification tokens, file download tokens that expire in 5 minutes. The stateless nature is a feature here because you do not want to store records for throwaway tokens.
Third-party API access. When you are building an API that external developers consume, JWTs work well as access tokens. The developer gets a token, includes it in headers, and your API can verify it without a database lookup.
Here is a JWT implementation that is not terrible:
import { SignJWT, jwtVerify, type JWTPayload } from "jose";
interface AuthTokenPayload extends JWTPayload {
sub: string;
roles: string[];
sessionId: string; // Link back to the session for revocation
}
const ACCESS_TOKEN_TTL = "5m"; // 5 minutes, not 15, not 1 hour
const SIGNING_KEY = new TextEncoder().encode(process.env.JWT_SECRET!);
async function createAccessToken(
userId: string,
roles: string[],
sessionId: string
): Promise<string> {
return new SignJWT({
sub: userId,
roles,
sessionId,
} satisfies AuthTokenPayload)
.setProtectedHeader({ alg: "HS256", typ: "JWT" })
.setIssuedAt()
.setExpirationTime(ACCESS_TOKEN_TTL)
.setIssuer("https://yourdomain.com")
.setAudience("https://yourdomain.com")
.setJti(crypto.randomUUID())
.sign(SIGNING_KEY);
}
async function verifyAccessToken(
token: string
): Promise<AuthTokenPayload | null> {
try {
const { payload } = await jwtVerify(token, SIGNING_KEY, {
issuer: "https://yourdomain.com",
audience: "https://yourdomain.com",
algorithms: ["HS256"],
// jose handles exp/nbf/iat validation automatically
});
return payload as AuthTokenPayload;
} catch {
return null;
}
}

Notice the sessionId in the JWT payload. This is the pattern I use when JWTs are required: the JWT is a short-lived (5-minute) cache of the session data. If you need to revoke access, you revoke the session. The JWT will stop working within 5 minutes at most. This is the hybrid approach that gives you most of the benefits of both patterns.
I need to address this directly because it causes real damage. The claim is that JWTs are stateless because the server does not need to store anything. This is technically true and practically useless.
The moment you need any of the following, your JWTs are no longer stateless: immediate revocation, a "log out everywhere" button, an "active sessions" list, or permission changes that take effect before the token expires.

Every production system I have seen that started with "stateless JWTs" ended up with a Redis-backed token store within 6 months. Which is just sessions with extra complexity.
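For the record, here is what that extra complexity typically looks like: a denylist of revoked `jti` values consulted on every verification, which reintroduces the per-request store lookup JWTs were supposed to avoid. A sketch, with a Map standing in for Redis:

```typescript
// Sketch of the token denylist that "stateless" JWT systems inevitably grow.
// A Map stands in for Redis (SET with per-key TTL) so this runs standalone.
const revokedJtis = new Map<string, number>(); // jti -> expiry (ms since epoch)

function revokeToken(jti: string, exp: number): void {
  // `exp` is the JWT exp claim in seconds; keep the denylist entry only
  // until the token would have expired on its own anyway
  revokedJtis.set(jti, exp * 1000);
}

function isRevoked(jti: string): boolean {
  const expiresAt = revokedJtis.get(jti);
  if (expiresAt === undefined) return false;
  if (Date.now() > expiresAt) {
    revokedJtis.delete(jti); // token expired naturally; prune the entry
    return false;
  }
  return true;
}
```

Every call to `isRevoked` is a store round trip, which is the same cost profile as a session lookup, minus the sessions' other benefits.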
Regardless of whether you use sessions or JWTs, if your auth token lives in a cookie (and for web apps, it should), the cookie configuration is critical. I see mistakes here constantly.
function setAuthCookie(res: Response, name: string, value: string): void {
const isProduction = process.env.NODE_ENV === "production";
res.headers.append(
"Set-Cookie",
[
`${name}=${value}`,
"HttpOnly",
isProduction ? "Secure" : "",
"SameSite=Lax",
"Path=/",
`Max-Age=${60 * 60 * 24}`, // 24 hours
isProduction ? `Domain=.yourdomain.com` : "",
]
.filter(Boolean)
.join("; ")
);
}

Let me explain each attribute and why it matters:
HttpOnly prevents JavaScript from reading the cookie. Without this, any XSS vulnerability on your site gives an attacker access to the session token. I have seen codebases where auth tokens were stored in non-HttpOnly cookies "so the frontend can read the user ID." Do not do this. If the frontend needs user data, create a /api/me endpoint.
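The /api/me pattern is worth spelling out, because the whole point is deciding what the frontend gets to see. A sketch of the response-shaping step (the `SessionData` interface repeats the one defined earlier so the example is self-contained; the handler shape is illustrative, not tied to a framework):

```typescript
// Sketch: the cookie stays HttpOnly; the frontend gets only the fields
// it actually needs via an authenticated /api/me endpoint.
interface SessionData {
  userId: string;
  roles: string[];
  createdAt: number;
  lastActivity: number;
  ipAddress: string;
  userAgent: string;
  csrfToken: string;
}

function buildMeResponse(session: SessionData): {
  userId: string;
  roles: string[];
} {
  // Deliberately omit ipAddress, userAgent, csrfToken, and timestamps:
  // expose the minimum the UI needs, nothing more
  return { userId: session.userId, roles: session.roles };
}
```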
Secure ensures the cookie is only sent over HTTPS. Without this, anyone on the same network (coffee shop WiFi, hotel WiFi, corporate proxy) can intercept the cookie in transit. This is not theoretical. Tools like Wireshark make this trivial.
SameSite controls when the cookie is sent on cross-site requests. There are three values:

Strict: The cookie is never sent on cross-site requests. This breaks legitimate flows like clicking a link from an email to your site -- the user will not be logged in when they arrive.

Lax: The cookie is sent on top-level navigations (clicking links) but not on cross-site subrequests (fetch, forms, iframes). This is the right default for auth cookies.

None: The cookie is always sent, even on cross-site requests. This requires Secure and is only needed for specific cases like cross-domain auth or embedded iframes.

The SameSite mistake I see constantly: developers set SameSite=Strict thinking it is more secure, then spend days debugging why their OAuth flow is broken. OAuth involves a redirect from the identity provider back to your site -- a cross-site navigation. With Strict, the auth cookie is not sent on that redirect, and the user appears logged out.
Domain scoping. If you set Domain=.yourdomain.com, the cookie is accessible to all subdomains. If your main app is app.yourdomain.com and you also have blog.yourdomain.com running WordPress with known vulnerabilities, an attacker who compromises the blog can steal your app's auth cookies. Only set the Domain attribute if you genuinely need cross-subdomain auth. By default, cookies are scoped to the exact origin.
Cookie prefixes are a browser-enforced security mechanism that most developers do not know about:
// __Host- prefix requires: Secure, no Domain, Path=/
// This prevents subdomain attacks and ensures HTTPS-only
const COOKIE_NAME = "__Host-session";
// __Secure- prefix requires: Secure flag
// Less restrictive but still ensures HTTPS
const FALLBACK_COOKIE_NAME = "__Secure-session";

The __Host- prefix is the gold standard. When a cookie name starts with __Host-, browsers enforce that it must have the Secure flag, must not have a Domain attribute, and must have Path=/. This means the cookie is locked to the exact origin and cannot be set or read by subdomains. Use this for auth cookies whenever possible.
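Putting the prefix rules together with the earlier setAuthCookie helper, a __Host- cookie string looks like this (a sketch; the cookie name and Max-Age are illustrative):

```typescript
// Sketch: a Set-Cookie value that satisfies the __Host- prefix rules
// the browser enforces: Secure, Path=/, and no Domain attribute.
function buildHostCookie(value: string, maxAgeSeconds: number): string {
  return [
    `__Host-session=${value}`,
    "HttpOnly",
    "Secure",   // required by the prefix
    "Path=/",   // required by the prefix
    // NOTE: no Domain attribute -- the prefix forbids it, which is the point:
    // no subdomain can set or shadow this cookie
    "SameSite=Lax",
    `Max-Age=${maxAgeSeconds}`,
  ].join("; ");
}
```

If any of the required attributes is missing, the browser silently rejects the cookie, so misconfiguration fails closed rather than open.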
OAuth 2.0 is not an authentication protocol. It is an authorization framework. This distinction matters. OAuth 2.0 by itself does not tell you who the user is. It gives you a token that says "this user granted permission for this application to access their data." The authentication layer is built on top of OAuth using OpenID Connect (OIDC), which adds an ID token containing user identity claims.
When someone says "log in with Google," they mean OAuth 2.0 + OIDC. When I see developers implementing OAuth without OIDC -- trying to extract identity from access tokens or userinfo endpoints without verifying an ID token -- I know there is a vulnerability waiting to happen.
This is the flow you should use for web applications. Always. If you are using the Implicit Flow in 2026, stop. It has been deprecated by the OAuth working group since 2019, and for good reason -- it sends tokens in URL fragments, which are logged in browser history, referrer headers, and proxy logs.
Here is how the Authorization Code flow with PKCE works in practice:
import { createHash, randomBytes } from "crypto";
// Step 1: Generate PKCE values and redirect to the authorization server
function initiateOAuthFlow(): {
authorizationUrl: string;
state: string;
codeVerifier: string;
} {
// PKCE: Generate a random code verifier
  const codeVerifier = randomBytes(32).toString("base64url"); // 43 chars, within the spec's 43-128 range
// PKCE: Hash the verifier to create the challenge
const codeChallenge = createHash("sha256")
.update(codeVerifier)
.digest("base64url");
// State parameter prevents CSRF attacks on the callback
const state = randomBytes(32).toString("hex");
const params = new URLSearchParams({
response_type: "code",
client_id: process.env.OAUTH_CLIENT_ID!,
redirect_uri: `${process.env.APP_URL}/api/auth/callback`,
scope: "openid email profile",
state,
code_challenge: codeChallenge,
code_challenge_method: "S256",
// Prevent authorization server from silently logging in
prompt: "consent",
});
return {
authorizationUrl: `https://accounts.google.com/o/oauth2/v2/auth?${params}`,
state,
codeVerifier,
};
}
// Step 2: Handle the callback and exchange the code for tokens
async function handleOAuthCallback(
code: string,
codeVerifier: string
): Promise<{
accessToken: string;
idToken: string;
refreshToken: string;
}> {
const response = await fetch("https://oauth2.googleapis.com/token", {
method: "POST",
headers: { "Content-Type": "application/x-www-form-urlencoded" },
body: new URLSearchParams({
grant_type: "authorization_code",
code,
redirect_uri: `${process.env.APP_URL}/api/auth/callback`,
client_id: process.env.OAUTH_CLIENT_ID!,
client_secret: process.env.OAUTH_CLIENT_SECRET!,
code_verifier: codeVerifier, // PKCE: Server hashes this and compares
}),
});
if (!response.ok) {
const error = await response.json();
throw new Error(`Token exchange failed: ${error.error_description}`);
}
return response.json();
}

PKCE (Proof Key for Code Exchange) is not optional. Even for server-side applications. The original spec said PKCE was only needed for public clients (SPAs, mobile apps), but the current best practice is to use it everywhere. It prevents authorization code interception attacks, and there is zero downside to including it.
The state parameter is your CSRF protection. Before redirecting the user to the authorization server, you generate a random state value and store it in the user's session (or a signed cookie). When the callback comes back, you verify that the state matches. If an attacker tricks a user into visiting your callback URL with a stolen authorization code, the state will not match and the attack fails.
I cannot tell you how many OAuth implementations I have reviewed where the state parameter was either missing or not validated. This is a textbook CSRF attack vector.
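Here is what the callback-side validation looks like before you run the token exchange. It assumes the `state` and `codeVerifier` from initiateOAuthFlow were stashed in the user's pre-auth session; the `PendingAuth` shape and the 10-minute window are illustrative:

```typescript
// Sketch: checks to run on the OAuth callback before exchanging the code.
interface PendingAuth {
  state: string;        // random value generated before the redirect
  codeVerifier: string; // PKCE verifier, needed for the token exchange
  createdAt: number;
}

function checkCallback(
  pending: PendingAuth | null,
  returnedState: string,
  maxAgeMs = 10 * 60 * 1000
): { ok: true; codeVerifier: string } | { ok: false; reason: string } {
  if (!pending) return { ok: false, reason: "no pending auth request" };
  if (Date.now() - pending.createdAt > maxAgeMs)
    return { ok: false, reason: "auth request expired" };
  if (pending.state !== returnedState)
    return { ok: false, reason: "state mismatch (possible CSRF)" };
  // Only now is it safe to call the token endpoint with pending.codeVerifier
  return { ok: true, codeVerifier: pending.codeVerifier };
}
```

The stored request should also be deleted once consumed, so a replayed callback URL fails the first check.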
When you get tokens back from the OAuth flow, you get an access token, a refresh token, and (if you requested the openid scope) an ID token. The ID token is a JWT that contains the user's identity claims:
import { createRemoteJWKSet, jwtVerify } from "jose";
// Cache the JWKS (JSON Web Key Set) to avoid fetching it on every request
const googleJwks = createRemoteJWKSet(
new URL("https://www.googleapis.com/oauth2/v3/certs")
);
interface IdTokenClaims {
iss: string; // Issuer (must be accounts.google.com)
sub: string; // Subject (stable user identifier)
aud: string; // Audience (must be your client ID)
email: string;
email_verified: boolean;
name: string;
picture: string;
exp: number;
iat: number;
nonce?: string; // If you sent a nonce in the auth request
}
async function verifyIdToken(idToken: string): Promise<IdTokenClaims> {
const { payload } = await jwtVerify(idToken, googleJwks, {
issuer: "https://accounts.google.com",
audience: process.env.OAUTH_CLIENT_ID!,
});
const claims = payload as unknown as IdTokenClaims;
// CRITICAL: Check that the email is verified
// Without this, an attacker can create a Google account with your
// user's email, not verify it, and log in as them
if (!claims.email_verified) {
throw new Error("Email not verified by the identity provider");
}
return claims;
}

The email_verified check is one of the most commonly missed security checks in OAuth implementations. Here is the attack: I create a Google account with your email address. Google lets me do this -- the account is just unverified. If your application blindly trusts the email claim from the ID token without checking email_verified, I can log in as you.
This flow is for devices that do not have a browser or a convenient way to input credentials -- smart TVs, CLI tools, game consoles. The user sees a code on the device, goes to a URL on their phone or laptop, enters the code, and the device gets authenticated.
// Step 1: Device requests authorization
async function requestDeviceCode(): Promise<{
deviceCode: string;
userCode: string;
verificationUri: string;
expiresIn: number;
interval: number;
}> {
const response = await fetch(
"https://oauth2.googleapis.com/device/code",
{
method: "POST",
headers: { "Content-Type": "application/x-www-form-urlencoded" },
body: new URLSearchParams({
client_id: process.env.OAUTH_CLIENT_ID!,
scope: "openid email profile",
}),
}
);
return response.json();
}
// Step 2: Device polls for authorization (user enters code on their phone)
async function pollForDeviceToken(
deviceCode: string,
interval: number
): Promise<{ accessToken: string; idToken: string }> {
while (true) {
await new Promise((resolve) => setTimeout(resolve, interval * 1000));
const response = await fetch("https://oauth2.googleapis.com/token", {
method: "POST",
headers: { "Content-Type": "application/x-www-form-urlencoded" },
body: new URLSearchParams({
grant_type: "urn:ietf:params:oauth:grant-type:device_code",
device_code: deviceCode,
client_id: process.env.OAUTH_CLIENT_ID!,
}),
});
const data = await response.json();
if (data.error === "authorization_pending") {
continue; // User hasn't entered the code yet
}
if (data.error === "slow_down") {
interval += 5; // Back off as requested
continue;
}
if (data.error) {
throw new Error(`Device auth failed: ${data.error_description}`);
}
return { accessToken: data.access_token, idToken: data.id_token };
}
}

The security concern with the Device Flow is phishing. An attacker can initiate a device authorization request, get the user code, and trick a user into entering it ("Enter this code to verify your account"). The user thinks they are verifying their own device, but they are actually granting access to the attacker's device. There is no great mitigation for this besides user education and short expiration times on device codes.
Refresh tokens are long-lived credentials that let you get new access tokens without re-authenticating the user. They are also the most valuable target for attackers, because a stolen refresh token provides persistent access.
Refresh token rotation is the defense: every time you use a refresh token, the authorization server issues a new refresh token and invalidates the old one. If an attacker steals a refresh token and uses it, the legitimate user's next refresh attempt will fail (because the token was already used), which triggers a security alert and invalidation of all tokens in the family.
interface TokenFamily {
familyId: string;
userId: string;
currentToken: string;
usedTokens: Set<string>;
createdAt: number;
lastRotatedAt: number;
}
class RefreshTokenManager {
private store: Redis;
async createFamily(userId: string): Promise<string> {
const familyId = crypto.randomUUID();
const refreshToken = this.generateToken();
const family: TokenFamily = {
familyId,
userId,
currentToken: refreshToken,
usedTokens: new Set(),
createdAt: Date.now(),
lastRotatedAt: Date.now(),
};
await this.store.set(
`token-family:${familyId}`,
JSON.stringify(family, (_, v) => (v instanceof Set ? [...v] : v)),
"EX",
60 * 60 * 24 * 30 // 30 days max lifetime
);
// Map token -> family for lookup
await this.store.set(
`refresh-token:${refreshToken}`,
familyId,
"EX",
60 * 60 * 24 * 30
);
return refreshToken;
}
async rotate(
currentRefreshToken: string
): Promise<{ accessToken: string; refreshToken: string } | null> {
const familyId = await this.store.get(
`refresh-token:${currentRefreshToken}`
);
if (!familyId) return null;
const raw = await this.store.get(`token-family:${familyId}`);
if (!raw) return null;
    const family: TokenFamily = JSON.parse(raw, (key, value) =>
      // Revive the serialized usedTokens array back into a Set
      key === "usedTokens" && Array.isArray(value) ? new Set(value) : value
    );
// CRITICAL: Check if this token was already used (replay attack)
if (family.usedTokens.has(currentRefreshToken)) {
// This token was already rotated. Someone stole it.
// Kill the entire token family.
console.error(
`Refresh token replay detected for user ${family.userId}, ` +
`family ${familyId}. Revoking all tokens.`
);
await this.revokeFamily(familyId);
// Alert the user
await this.notifyUserOfSuspiciousActivity(family.userId);
return null;
}
// Check that this is actually the current token
if (family.currentToken !== currentRefreshToken) {
// Token is not current and not in usedTokens -- invalid
return null;
}
// Rotate: mark current as used, generate new token
const newRefreshToken = this.generateToken();
family.usedTokens.add(currentRefreshToken);
family.currentToken = newRefreshToken;
family.lastRotatedAt = Date.now();
// Update storage
await this.store.set(
`token-family:${familyId}`,
JSON.stringify(family, (_, v) => (v instanceof Set ? [...v] : v)),
"EX",
60 * 60 * 24 * 30
);
await this.store.del(`refresh-token:${currentRefreshToken}`);
await this.store.set(
`refresh-token:${newRefreshToken}`,
familyId,
"EX",
60 * 60 * 24 * 30
);
const accessToken = await createAccessToken(
family.userId,
await this.loadUserRoles(family.userId),
familyId
);
return { accessToken, refreshToken: newRefreshToken };
}
private generateToken(): string {
return randomBytes(48).toString("base64url");
}
private async revokeFamily(familyId: string): Promise<void> {
const raw = await this.store.get(`token-family:${familyId}`);
if (!raw) return;
const family: TokenFamily = JSON.parse(raw);
await this.store.del(`refresh-token:${family.currentToken}`);
for (const token of family.usedTokens) {
await this.store.del(`refresh-token:${token}`);
}
await this.store.del(`token-family:${familyId}`);
}
}

The key insight is the "token family" concept. All refresh tokens descended from the same login form a family. When you detect a replay (a used token being presented again), you do not just reject the request -- you kill the entire family. This ensures that if an attacker and a legitimate user are both trying to use refresh tokens from the same family, the attack is detected and all access is revoked.
Cross-Site Request Forgery attacks trick a user's browser into making requests to your site with the user's cookies attached. The classic example: a malicious site includes <img src="https://bank.com/transfer?to=attacker&amount=10000">, and because the browser sends the bank's auth cookie with the request, the transfer goes through.
SameSite=Lax cookies prevent the worst CSRF attacks by not sending cookies on cross-origin subrequests (POST forms, fetch calls, iframes). But Lax still sends cookies on top-level navigations, and there are edge cases where that is exploitable.
For defense in depth, I always implement the Synchronizer Token pattern alongside SameSite cookies:
import { randomBytes, timingSafeEqual } from "crypto";
// Generate a CSRF token and store it in the session
function generateCsrfToken(session: SessionData): string {
const token = randomBytes(32).toString("hex");
session.csrfToken = token;
return token;
}
// Middleware: validate CSRF token on state-changing requests
function csrfProtection(req: Request, session: SessionData): boolean {
// Only check state-changing methods
const safeMethods = new Set(["GET", "HEAD", "OPTIONS"]);
if (safeMethods.has(req.method)) return true;
const tokenFromHeader = req.headers.get("x-csrf-token");
const tokenFromBody = (req as any).body?.csrfToken;
const submittedToken = tokenFromHeader ?? tokenFromBody;
if (!submittedToken || !session.csrfToken) return false;
// Use timing-safe comparison to prevent timing attacks
const a = Buffer.from(submittedToken);
const b = Buffer.from(session.csrfToken);
if (a.length !== b.length) return false;
return timingSafeEqual(a, b);
}

The Double-Submit Cookie pattern is an alternative that does not require server-side state: you set a random value in a cookie AND require the client to send the same value in a header. Since an attacker cannot read cookies from another domain (same-origin policy), they cannot construct the header. But this pattern has subtleties -- if an attacker can set cookies on your domain (via a subdomain they control), they can perform the attack. The __Host- cookie prefix prevents this.
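The double-submit check itself is a single comparison; the cookie and header names below are illustrative:

```typescript
import { timingSafeEqual } from "crypto";

// Sketch: double-submit verification. The value from the CSRF cookie must
// equal the value the client copied into the request header. No server-side
// state is involved; the comparison is the whole check.
function checkDoubleSubmit(
  cookieToken: string | undefined,
  headerToken: string | undefined
): boolean {
  if (!cookieToken || !headerToken) return false;
  const a = Buffer.from(cookieToken);
  const b = Buffer.from(headerToken);
  // Length check first: timingSafeEqual throws on unequal lengths
  return a.length === b.length && timingSafeEqual(a, b);
}
```

Name the CSRF cookie with the __Host- prefix so a hostile subdomain cannot plant its own value and pass the check.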
XSS (Cross-Site Scripting) is the most dangerous vulnerability in the context of authentication, because it gives an attacker the ability to execute JavaScript in the context of your application. An attacker who can run JavaScript on your page can read any token stored in localStorage or sessionStorage, silently make authenticated requests as the user, and exfiltrate whatever data the page can access.

This is why token storage decisions matter so much.
localStorage: Never for auth tokens. Any XSS vulnerability gives full access. The token does not expire when the tab closes. It persists across sessions. It is accessible to any JavaScript on the page, including third-party scripts. I do not care how many blog posts recommend it. Do not store auth tokens in localStorage.
sessionStorage: Slightly better, still bad. The token is scoped to the tab and cleared when the tab closes. But XSS still has full access while the tab is open, and users who open multiple tabs need to re-authenticate for each one.
HttpOnly cookies: The correct answer for web apps. JavaScript cannot read them. They are automatically included in requests to the same origin. They work across tabs. The only attack vector is CSRF, which we just covered.
In-memory variables: The SPA compromise. For SPAs that use access tokens (not cookies), storing the token in a JavaScript variable (not localStorage) limits exposure. The token is lost on page refresh, which means you need a refresh mechanism (silent auth or refresh tokens in HttpOnly cookies). This is the pattern I recommend when you cannot use HttpOnly cookies.
// Token storage for SPAs: in-memory with HttpOnly refresh cookie
class AuthClient {
private accessToken: string | null = null;
private tokenExpiresAt: number = 0;
async getAccessToken(): Promise<string> {
// If we have a valid token, use it
if (this.accessToken && Date.now() < this.tokenExpiresAt - 30_000) {
return this.accessToken;
}
// Otherwise, refresh using the HttpOnly cookie
const response = await fetch("/api/auth/refresh", {
method: "POST",
credentials: "include", // Send the HttpOnly refresh cookie
});
if (!response.ok) {
// Refresh failed -- user needs to log in again
this.accessToken = null;
throw new AuthenticationError("Session expired");
}
const { accessToken, expiresIn } = await response.json();
this.accessToken = accessToken;
this.tokenExpiresAt = Date.now() + expiresIn * 1000;
return accessToken;
}
logout(): void {
this.accessToken = null;
this.tokenExpiresAt = 0;
// Also hit the logout endpoint to clear the refresh cookie
fetch("/api/auth/logout", { method: "POST", credentials: "include" });
}
}

Even with perfect input sanitization (which does not exist), you should deploy a Content Security Policy that limits what scripts can execute on your page:
function getCSPHeader(): string {
const directives = [
"default-src 'self'",
"script-src 'self' 'strict-dynamic'", // No inline scripts
"style-src 'self' 'unsafe-inline'", // CSS is less risky
"img-src 'self' data: https:",
"font-src 'self'",
"connect-src 'self' https://api.yourdomain.com",
"frame-src 'none'", // No iframes
"object-src 'none'", // No plugins
"base-uri 'self'", // Prevent base tag hijacking
"form-action 'self'", // Forms can only submit to same origin
"frame-ancestors 'none'", // Prevent clickjacking
"upgrade-insecure-requests",
];
return directives.join("; ");
}

CSP is not a substitute for proper output encoding, but it is a critical defense layer. Even if an attacker finds an XSS vulnerability, a strict CSP prevents them from loading external scripts or exfiltrating data to their server.
Passwords are the weakest link in most authentication systems. They are reused across sites, they are phishable, and users choose terrible ones despite every effort to enforce complexity. Passwordless authentication eliminates the password entirely.
Magic links are the simplest passwordless approach: the user enters their email, receives a link with a one-time token, and clicking the link logs them in.
class MagicLinkAuth {
private store: Redis;
private readonly TOKEN_TTL = 60 * 10; // 10 minutes
private readonly MAX_ATTEMPTS = 3;
async sendMagicLink(email: string): Promise<void> {
// Rate limit: max 3 magic links per email per hour
const rateLimitKey = `magic-link-rate:${email}`;
const attempts = await this.store.incr(rateLimitKey);
if (attempts === 1) {
await this.store.expire(rateLimitKey, 3600);
}
if (attempts > this.MAX_ATTEMPTS) {
// Return success to prevent email enumeration
// but do not actually send
return;
}
const token = randomBytes(32).toString("base64url");
await this.store.set(
`magic-link:${token}`,
JSON.stringify({
email,
createdAt: Date.now(),
used: false,
}),
"EX",
this.TOKEN_TTL
);
const magicLink =
`${process.env.APP_URL}/api/auth/magic-link/verify?token=${token}`;
await this.sendEmail(email, {
subject: "Sign in to YourApp",
html: `
<p>Click the link below to sign in. This link expires in 10 minutes.</p>
<a href="${magicLink}">Sign in to YourApp</a>
<p>If you didn't request this, you can safely ignore this email.</p>
`,
});
}
async verifyMagicLink(token: string): Promise<string | null> {
const raw = await this.store.get(`magic-link:${token}`);
if (!raw) return null;
const data = JSON.parse(raw);
// One-time use: prevent replay
if (data.used) return null;
// Mark as used BEFORE creating the session
// This prevents race conditions where two requests use the same token
data.used = true;
await this.store.set(
`magic-link:${token}`,
JSON.stringify(data),
"EX",
60 // Keep for 1 minute for debugging, then auto-delete
);
return data.email;
}
}

Magic link security considerations that are often missed: tokens must be single-use, and must be marked used before the session is created to close the race the code above guards against; link requests must be rate limited per email; rate-limited requests should still return success so attackers cannot enumerate registered emails; and the token TTL should be short -- ten minutes is plenty.
WebAuthn is the future of authentication. It uses public key cryptography -- the user's device generates a key pair, the private key never leaves the device, and the server stores only the public key. This is phishing-resistant by design, because the browser verifies the origin before signing the challenge.
import {
generateRegistrationOptions,
verifyRegistrationResponse,
generateAuthenticationOptions,
verifyAuthenticationResponse,
type VerifiedRegistrationResponse,
} from "@simplewebauthn/server";
const rpName = "YourApp";
const rpID = "yourdomain.com";
const origin = `https://${rpID}`;
// Registration: create a new passkey
async function startPasskeyRegistration(
userId: string,
userName: string,
existingCredentials: Array<{ id: string; transports?: string[] }>
) {
const options = await generateRegistrationOptions({
rpName,
rpID,
userID: userId,
userName,
attestationType: "none", // We don't need attestation for most apps
excludeCredentials: existingCredentials.map((cred) => ({
id: cred.id,
type: "public-key",
transports: cred.transports as AuthenticatorTransport[],
})),
authenticatorSelection: {
residentKey: "preferred",
userVerification: "preferred",
},
});
// Store the challenge for verification
await redis.set(
`webauthn-challenge:${userId}`,
options.challenge,
"EX",
300
);
return options;
}
// Authentication: verify a passkey
async function startPasskeyAuthentication(userId?: string) {
const options = await generateAuthenticationOptions({
rpID,
allowCredentials: userId
? await getCredentialsForUser(userId)
: [], // Empty = discoverable credential (passkey)
userVerification: "preferred",
});
const challengeKey = userId
? `webauthn-challenge:${userId}`
: `webauthn-challenge:${options.challenge}`; // For discoverable credentials
await redis.set(challengeKey, options.challenge, "EX", 300);
return options;
}

The beauty of passkeys is that they solve multiple problems simultaneously:

- Phishing resistance: the browser binds every signature to the origin, so a look-alike domain receives a signature it cannot use.
- Nothing valuable on the server: you store only public keys, so a database breach yields nothing an attacker can log in with.
- No shared secrets across sites: each origin gets its own key pair, which eliminates password reuse and credential stuffing for passkey users.
- Strong factors by default: possession of the device, usually combined with a biometric or PIN to unlock the key.
The downside is ecosystem maturity. As of early 2026, passkey support is excellent on Apple devices, good on Android, and improving on Windows. Cross-device flows (scanning a QR code on your phone to authenticate on a desktop) work but the UX is still clunky. My recommendation: offer passkeys as an option alongside traditional auth, and gradually nudge users toward them.
MFA is table stakes in 2026. If your application handles any sensitive data and does not offer MFA, you are behind. But the implementation details matter enormously.
TOTP is the "authenticator app" approach. Google Authenticator, Authy, 1Password -- they all generate 6-digit codes that change every 30 seconds. The algorithm is simple: HMAC-SHA1 of the current time period and a shared secret, truncated to 6 digits.
import { createHmac, timingSafeEqual } from "crypto";
function generateTOTP(secret: Buffer, timeStep: number = 30): string {
const time = Math.floor(Date.now() / 1000 / timeStep);
const timeBuffer = Buffer.alloc(8);
timeBuffer.writeBigInt64BE(BigInt(time));
const hmac = createHmac("sha1", secret).update(timeBuffer).digest();
// Dynamic truncation
const offset = hmac[hmac.length - 1] & 0x0f;
const code =
((hmac[offset] & 0x7f) << 24) |
((hmac[offset + 1] & 0xff) << 16) |
((hmac[offset + 2] & 0xff) << 8) |
(hmac[offset + 3] & 0xff);
return (code % 1_000_000).toString().padStart(6, "0");
}
function verifyTOTP(
secret: Buffer,
submittedCode: string,
window: number = 1 // Allow 1 step before/after for clock drift
): boolean {
for (let i = -window; i <= window; i++) {
const time = Math.floor(Date.now() / 1000 / 30) + i;
const timeBuffer = Buffer.alloc(8);
timeBuffer.writeBigInt64BE(BigInt(time));
const hmac = createHmac("sha1", secret).update(timeBuffer).digest();
const offset = hmac[hmac.length - 1] & 0x0f;
const code =
((hmac[offset] & 0x7f) << 24) |
((hmac[offset + 1] & 0xff) << 16) |
((hmac[offset + 2] & 0xff) << 8) |
(hmac[offset + 3] & 0xff);
const expected = (code % 1_000_000).toString().padStart(6, "0");
// Timing-safe comparison
if (
submittedCode.length === expected.length &&
timingSafeEqual(Buffer.from(submittedCode), Buffer.from(expected))
) {
return true;
}
}
return false;
}

TOTP implementation mistakes I see constantly:

- Comparing codes with === instead of a timing-safe comparison (as above).
- No drift window. Client and server clocks disagree; accept one time step before and after, but no more than that.
- No replay protection. A code stays valid for its entire 30-second step, so track the last accepted time step per user and reject reuse.
- Storing the shared secret in plaintext. TOTP secrets are password-equivalent; encrypt them at rest.
- Unlimited verification attempts. Six digits is only a million possibilities; rate limit MFA verification just like login.

And do not forget recovery codes. Every MFA rollout needs a fallback for the user who loses their phone:
import { createHash, randomBytes } from "crypto";
function generateRecoveryCodes(count: number = 10): string[] {
return Array.from({ length: count }, () => {
// Format: XXXX-XXXX-XXXX (12 alphanumeric characters)
const bytes = randomBytes(9);
const code = bytes.toString("base64url").slice(0, 12).toUpperCase();
return `${code.slice(0, 4)}-${code.slice(4, 8)}-${code.slice(8, 12)}`;
});
}
// Store hashed recovery codes (never store them in plaintext)
async function storeRecoveryCodes(
userId: string,
codes: string[]
): Promise<void> {
const hashedCodes = await Promise.all(
codes.map(async (code) => {
const salt = randomBytes(16).toString("hex");
const hash = createHash("sha256")
.update(salt + code.replace(/-/g, ""))
.digest("hex");
return { salt, hash, used: false };
})
);
await db.user.update({
where: { id: userId },
data: { recoveryCodes: JSON.stringify(hashedCodes) },
});
}

When your application grows beyond a single server, session management becomes a distributed systems problem. Here are the patterns that work.
Redis is the standard session store for good reasons: sub-millisecond latency, built-in TTL, atomic operations. At scale, you need Redis Cluster or Redis Sentinel for high availability.
import { Redis, Cluster } from "ioredis";
function createSessionStore(): Redis | Cluster {
if (process.env.REDIS_CLUSTER_NODES) {
const nodes = process.env.REDIS_CLUSTER_NODES.split(",").map((node) => {
const [host, port] = node.split(":");
return { host, port: parseInt(port, 10) };
});
return new Cluster(nodes, {
redisOptions: {
password: process.env.REDIS_PASSWORD,
tls: process.env.NODE_ENV === "production" ? {} : undefined,
},
// Hash tags keep related keys on one shard so multi-key operations work.
// Caution: a constant {session} tag pins ALL sessions to a single slot;
// at real scale, prefer a per-user tag like {user:123}.
keyPrefix: "{session}:",
});
}
return new Redis({
host: process.env.REDIS_HOST ?? "127.0.0.1",
port: parseInt(process.env.REDIS_PORT ?? "6379", 10),
password: process.env.REDIS_PASSWORD,
// Connection pool settings for high throughput
maxRetriesPerRequest: 3,
retryStrategy: (times) => Math.min(times * 50, 2000),
});
}

The {session}: key prefix with curly braces is a Redis Cluster hash tag. All keys sharing the same tag hash to the same slot, which is what makes multi-key operations possible (like checking a session and updating its TTL atomically). Note the tradeoff, though: a single constant tag pins every session to one shard and defeats the point of clustering. If you need multi-key atomicity, scope the tag per user rather than globally.
Session fixation is an attack where the attacker sets a known session ID in the victim's browser (via XSS, a URL parameter, or a malicious link), waits for the victim to log in, and then uses the same session ID to access the victim's account.
The defense is simple: always regenerate the session ID after authentication events.
async function handleLogin(
req: Request,
credentials: { email: string; password: string }
): Promise<Response> {
const user = await verifyCredentials(credentials);
if (!user) {
return new Response("Invalid credentials", { status: 401 });
}
// CRITICAL: Destroy the old session and create a new one
// This prevents session fixation
const oldSessionId = getSessionIdFromCookie(req);
if (oldSessionId) {
await sessionStore.destroy(oldSessionId, user.id);
}
const newSessionId = await sessionStore.create(user.id, req);
const response = new Response(JSON.stringify({ success: true }), {
status: 200,
});
setAuthCookie(response, "__Host-session", newSessionId);
return response;
}

Regenerate the session ID after: login, privilege escalation (switching to admin mode), MFA verification, and password change. Basically, any event that changes the security context of the session.
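The mechanics are simple enough to sketch in a few lines. This is an illustrative in-memory version (the Map store and helper name are mine, not from the handler above): copy the session data under a fresh random ID, then delete the old entry so a fixated ID stops working.

```typescript
import { randomBytes } from "crypto";

// Illustrative in-memory store; a real app would use Redis as shown earlier.
const sessions = new Map<string, { userId: string }>();

function regenerateSession(oldId: string): string | null {
  const data = sessions.get(oldId);
  if (!data) return null;
  // Delete first: the attacker-known ID must stop working immediately
  sessions.delete(oldId);
  const newId = randomBytes(32).toString("hex");
  sessions.set(newId, data);
  return newId;
}
```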
Most applications should limit the number of concurrent sessions per user. Without limits, a compromised account can have dozens of active sessions, making it harder to fully lock out an attacker.
async function enforceSessionLimit(
userId: string,
maxSessions: number = 5
): Promise<void> {
const sessionIds = await redis.smembers(`user-sessions:${userId}`);
if (sessionIds.length >= maxSessions) {
// Get all sessions with their creation times
const sessions = await Promise.all(
sessionIds.map(async (id) => {
const raw = await redis.get(`session:${id}`);
return raw ? { id, data: JSON.parse(raw) as SessionData } : null;
})
);
const validSessions = sessions
.filter(Boolean)
.sort((a, b) => a!.data.createdAt - b!.data.createdAt);
// Remove oldest sessions until we're under the limit
const toRemove = validSessions.slice(
0,
validSessions.length - maxSessions + 1
);
for (const session of toRemove) {
if (session) {
await redis.del(`session:${session.id}`);
await redis.srem(`user-sessions:${userId}`, session.id);
}
}
}
}

After years of reviewing authentication code, I have a list of vulnerabilities I see so frequently that I check for them first in every audit.
If your application includes auth tokens in URLs, those tokens leak. Query parameters are sent to third-party sites via the Referer header; fragment identifiers are never sent in the Referer, but they end up in browser history and are readable by any script running on the page. This includes password reset links when the reset page has external resources (analytics scripts, CDN-hosted images, social media widgets).
// BAD: Token in URL query parameter
// https://app.com/reset-password?token=abc123
// If this page loads Google Analytics, the Referer header sends the full URL to Google
// GOOD: Use the token to verify, then redirect to a clean URL
async function handleResetLink(req: Request): Promise<Response> {
const url = new URL(req.url);
const token = url.searchParams.get("token");
if (!token || !(await verifyResetToken(token))) {
return new Response("Invalid or expired link", { status: 400 });
}
// Create a short-lived session for the password reset
const resetSessionId = await createResetSession(token);
// Redirect to a clean URL without the token.
// Note: Response.redirect() returns a Response with immutable headers,
// so build the redirect manually to be able to attach the cookie below.
const response = new Response(null, {
  status: 302,
  headers: { Location: `${process.env.APP_URL}/reset-password` },
});
setAuthCookie(response, "__Host-reset-session", resetSessionId);
return response;
}

Also, always set the Referrer-Policy header:
// In your middleware or server config
headers.set("Referrer-Policy", "strict-origin-when-cross-origin");

If you compare tokens using === or ==, an attacker can determine the correct token one character at a time by measuring response times. The comparison short-circuits on the first mismatched character, so a token where the first character matches takes slightly longer to reject than one where no characters match.
// BAD: vulnerable to timing attacks
if (submittedToken === storedToken) { ... }
// GOOD: constant-time comparison
import { timingSafeEqual } from "crypto";
function safeCompare(a: string, b: string): boolean {
if (a.length !== b.length) return false;
return timingSafeEqual(Buffer.from(a), Buffer.from(b));
}

"But timing attacks are theoretical," I hear you say. No. They have been demonstrated reliably over network connections, including the internet, not just local attacks. The attack requires many requests, but it is feasible. Use timing-safe comparison for any security-sensitive string comparison.
The "remember me" checkbox is almost always implemented badly. The common mistake is extending the session duration to 30 days. This means that if the session is compromised, the attacker has access for 30 days.
The correct implementation uses a separate, long-lived "remember me" token that can only be used to create a new session:
async function createRememberMeToken(userId: string): Promise<string> {
const selector = randomBytes(12).toString("hex");
const validator = randomBytes(32).toString("hex");
const validatorHash = createHash("sha256")
.update(validator)
.digest("hex");
await db.rememberMeToken.create({
data: {
selector,
validatorHash,
userId,
expiresAt: new Date(Date.now() + 30 * 24 * 60 * 60 * 1000), // 30 days
},
});
// The cookie contains selector:validator
// Selector is used for lookup, validator is used for verification
// This way, a database leak reveals only hashes, not valid tokens
return `${selector}:${validator}`;
}
async function verifyRememberMe(token: string): Promise<string | null> {
const [selector, validator] = token.split(":");
if (!selector || !validator) return null;
const record = await db.rememberMeToken.findUnique({
where: { selector },
});
if (!record || record.expiresAt < new Date()) return null;
const validatorHash = createHash("sha256")
.update(validator)
.digest("hex");
if (!safeCompare(validatorHash, record.validatorHash)) return null;
// One-time use: delete the token here; the caller then issues a fresh
// selector:validator pair via createRememberMeToken (rotation)
await db.rememberMeToken.delete({ where: { selector } });
return record.userId;
}

The selector/validator split is important. The selector is used to look up the record in the database (it is the index). The validator is hashed before storage. This means that even if an attacker gets read access to your database (SQL injection, backup leak), they cannot forge remember-me tokens because they only have hashes, not the original validators.
One vulnerability I see in almost every codebase is missing binding between related authentication events. For example:
// BAD: MFA verification not bound to login attempt
async function verifyMFA(userId: string, code: string): Promise<boolean> {
const user = await db.user.findUnique({ where: { id: userId } });
return verifyTOTP(user.mfaSecret, code);
}
// GOOD: MFA verification bound to a specific login attempt
async function verifyMFA(
loginAttemptId: string,
code: string
): Promise<boolean> {
const attempt = await redis.get(`login-attempt:${loginAttemptId}`);
if (!attempt) return false;
const { userId, mfaRequired, mfaVerified, createdAt } = JSON.parse(attempt);
// Check that the login attempt is recent (5 minutes)
if (Date.now() - createdAt > 5 * 60 * 1000) return false;
// Check that MFA was actually required for this attempt
if (!mfaRequired || mfaVerified) return false;
const user = await db.user.findUnique({ where: { id: userId } });
if (!user || !user.mfaSecret) return false;
const valid = verifyTOTP(Buffer.from(user.mfaSecret, "hex"), code);
if (valid) {
// Mark this specific attempt as MFA-verified
await redis.set(
`login-attempt:${loginAttemptId}`,
JSON.stringify({ ...JSON.parse(attempt), mfaVerified: true }),
"EX",
300
);
}
return valid;
}

I saved this one for last because it is my personal favorite in the "how is this still happening" category. The user clicks "log out," the frontend clears the token from memory and redirects to the login page. But the session is still valid on the server. If the attacker already has the session ID, they still have access.
A real logout must:

- Destroy the session server-side, not just clear the cookie.
- Revoke any refresh tokens tied to the session.
- Expire the auth cookies (Max-Age=0 with matching attributes).
- Clear residual client-side state.
async function handleLogout(req: Request): Promise<Response> {
const sessionId = getSessionIdFromCookie(req);
if (sessionId) {
const session = await sessionStore.get(sessionId);
if (session) {
// Destroy the session
await sessionStore.destroy(sessionId, session.userId);
// Revoke any refresh tokens associated with this session
await refreshTokenManager.revokeBySession(sessionId);
}
}
const response = new Response(JSON.stringify({ success: true }), {
status: 200,
});
// Clear all auth cookies
response.headers.append(
"Set-Cookie",
"__Host-session=; HttpOnly; Secure; SameSite=Lax; Path=/; Max-Age=0"
);
response.headers.append(
"Set-Cookie",
"__Host-refresh=; HttpOnly; Secure; SameSite=Lax; Path=/; Max-Age=0"
);
// Clear any client-side state
response.headers.set("Clear-Site-Data", '"cookies", "storage"');
return response;
}

The Clear-Site-Data header is the nuclear option -- it tells the browser to clear cookies, localStorage, sessionStorage, and cache for the origin. Use it on logout to ensure no residual auth state remains.
Authentication endpoints are the most targeted endpoints in any application. Brute force attacks, credential stuffing, and enumeration attacks all hit your login and signup endpoints. Rate limiting is essential, but the implementation details matter.
interface RateLimitEntry {
count: number;
resetAt: number;
}
class AuthRateLimiter {
constructor(private store: Redis) {}
// Multi-dimensional rate limiting
async checkLoginAttempt(
email: string,
ipAddress: string
): Promise<{ allowed: boolean; retryAfter?: number }> {
const now = Date.now();
// Dimension 1: Per-IP rate limit (broad protection)
const ipKey = `rate:login:ip:${ipAddress}`;
const ipLimit = await this.checkLimit(ipKey, {
maxAttempts: 20, // 20 attempts per IP per window
windowMs: 15 * 60 * 1000, // 15 minutes
});
if (!ipLimit.allowed) return ipLimit;
// Dimension 2: Per-email rate limit (targeted protection)
const emailKey = `rate:login:email:${email.toLowerCase()}`;
const emailLimit = await this.checkLimit(emailKey, {
maxAttempts: 5, // 5 attempts per email per window
windowMs: 15 * 60 * 1000, // 15 minutes
});
if (!emailLimit.allowed) return emailLimit;
// Dimension 3: Global rate limit (DDoS protection)
const globalKey = `rate:login:global`;
const globalLimit = await this.checkLimit(globalKey, {
maxAttempts: 100, // 100 total login attempts per window
windowMs: 60 * 1000, // 1 minute
});
if (!globalLimit.allowed) return globalLimit;
return { allowed: true };
}
private async checkLimit(
key: string,
config: { maxAttempts: number; windowMs: number }
): Promise<{ allowed: boolean; retryAfter?: number }> {
const current = await this.store.incr(key);
if (current === 1) {
await this.store.pexpire(key, config.windowMs);
}
if (current > config.maxAttempts) {
const ttl = await this.store.pttl(key);
return {
allowed: false,
retryAfter: Math.ceil(ttl / 1000),
};
}
return { allowed: true };
}
// Progressive delays: exponentially increase wait time after failures
async getProgressiveDelay(email: string): Promise<number> {
const failKey = `login-fails:${email.toLowerCase()}`;
const failures = parseInt((await this.store.get(failKey)) ?? "0", 10);
if (failures === 0) return 0;
// 1s, 2s, 4s, 8s, 16s, max 30s
return Math.min(Math.pow(2, failures - 1) * 1000, 30_000);
}
async recordFailure(email: string): Promise<void> {
const failKey = `login-fails:${email.toLowerCase()}`;
await this.store.incr(failKey);
await this.store.expire(failKey, 3600); // Reset after 1 hour
}
async clearFailures(email: string): Promise<void> {
await this.store.del(`login-fails:${email.toLowerCase()}`);
}
}

The mistake everyone makes: rate limiting only by IP address. In 2026, attackers use botnets with thousands of IPs. A per-IP limit of 20 attempts means an attacker with 1000 IPs gets 20,000 attempts before being throttled. You need multi-dimensional rate limiting: per-IP, per-email, and global.
The other mistake: returning different error messages for "user not found" vs "wrong password." This lets attackers enumerate valid email addresses. Always return a generic error like "Invalid email or password."
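A generic message alone is not enough if the "user not found" path returns measurably faster than the "wrong password" path -- that timing difference enumerates emails just as well. A common mitigation, sketched here with illustrative names and scrypt standing in for whatever password hash you use: always perform a hash comparison, against a dummy record when the user does not exist.

```typescript
import { scryptSync, randomBytes, timingSafeEqual } from "crypto";

interface StoredUser {
  salt: Buffer;
  hash: Buffer; // scrypt(password, salt)
}

// Dummy record hashed at startup so the "no such user" path still pays
// the cost of a real hash computation.
const DUMMY_SALT = randomBytes(16);
const DUMMY_HASH = scryptSync("dummy-password", DUMMY_SALT, 32);

function checkPassword(user: StoredUser | null, password: string): boolean {
  const salt = user ? user.salt : DUMMY_SALT;
  const expected = user ? user.hash : DUMMY_HASH;
  const actual = scryptSync(password, salt, 32);
  // Timing-safe comparison, and the same generic failure result
  // regardless of which check failed
  return user !== null && timingSafeEqual(actual, expected);
}
```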
Account lockout (locking the account after N failed attempts) is a denial-of-service vector. An attacker can lock out any user by deliberately failing N login attempts with the victim's email. Progressive delays are a better approach: each failed attempt increases the delay before the next attempt is allowed, but the account is never locked.
The exception is when you detect a credential stuffing attack (thousands of login attempts with different email/password combinations from the same IP range). In that case, you should block the IP range entirely, not just rate limit.
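Detecting that pattern can start very simply: count distinct emails attempted per source. A rough sketch (the threshold and in-memory storage are illustrative; production would track this in Redis with a TTL and aggregate by subnet rather than single IP):

```typescript
// Many DISTINCT emails from one source is the signature of credential
// stuffing; a legitimate user retries one or two accounts at most.
const emailsPerIp = new Map<string, Set<string>>();

function recordAndCheckStuffing(
  ip: string,
  email: string,
  threshold = 50
): boolean {
  let seen = emailsPerIp.get(ip);
  if (!seen) {
    seen = new Set();
    emailsPerIp.set(ip, seen);
  }
  seen.add(email.toLowerCase());
  // true = block this source entirely, don't just rate limit
  return seen.size >= threshold;
}
```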
After everything we have covered, here is the concrete authentication stack I would choose for a new web application in 2026:
Session-based auth with Redis. Sessions in Redis, session ID in a __Host- prefixed HttpOnly cookie. 24-hour absolute expiration, 30-minute idle timeout, sliding window renewal. This gives you instant revocation, active session management, and small cookies.
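The interaction of the two timeouts is worth spelling out. A minimal sketch (constants and helper name are mine): activity slides the idle window forward, nothing slides the absolute one, and the session is valid only while both hold.

```typescript
const ABSOLUTE_TTL_MS = 24 * 60 * 60 * 1000; // 24 hours from login
const IDLE_TTL_MS = 30 * 60 * 1000; // 30 minutes from last activity

function isSessionValid(
  createdAt: number,
  lastSeenAt: number,
  now: number
): boolean {
  // Each request updates lastSeenAt, renewing the idle window;
  // createdAt never changes, so the absolute limit always wins eventually
  return now - createdAt < ABSOLUTE_TTL_MS && now - lastSeenAt < IDLE_TTL_MS;
}
```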
Short-lived JWTs for API access only. If your frontend is a SPA that calls an API, use a 5-minute JWT as the access token, with a refresh token in an HttpOnly cookie. The JWT is stored in memory (not localStorage), and the refresh endpoint uses rotation with replay detection.
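Rotation with replay detection reduces to a small state machine. An in-memory sketch with illustrative names (production state lives in Redis or a database): every refresh token belongs to a family rooted at the original login, exchanging a token marks it spent, and any reuse of a spent token revokes the whole family.

```typescript
import { randomBytes } from "crypto";

interface RefreshRecord {
  familyId: string;
  rotated: boolean;
}

const tokens = new Map<string, RefreshRecord>();
const revokedFamilies = new Set<string>();

function issueRefreshToken(familyId: string): string {
  const token = randomBytes(32).toString("hex");
  tokens.set(token, { familyId, rotated: false });
  return token;
}

function rotateRefreshToken(oldToken: string): string | null {
  const record = tokens.get(oldToken);
  if (!record || revokedFamilies.has(record.familyId)) return null;
  if (record.rotated) {
    // Replay: this token was already exchanged once. Someone (the user or
    // a thief) holds a stale copy -- revoke the entire family.
    revokedFamilies.add(record.familyId);
    return null;
  }
  record.rotated = true;
  return issueRefreshToken(record.familyId);
}
```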
OAuth 2.0 + OIDC with PKCE for social login. Authorization Code flow with PKCE, always. Validate the ID token, check email_verified, and bind the external identity to your internal user model. Support Google and GitHub at minimum -- they cover most developers and most users.
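The PKCE piece itself is one hash. A sketch (the helper name is mine): the client keeps a random code_verifier, sends its SHA-256 digest as the code_challenge on the authorization request, and reveals the verifier only at the token exchange, so an intercepted authorization code is useless on its own.

```typescript
import { randomBytes, createHash } from "crypto";

// code_challenge_method=S256: challenge = BASE64URL(SHA256(verifier))
function generatePkcePair(): { verifier: string; challenge: string } {
  // 32 random bytes -> 43 base64url chars, within the 43-128 range
  // RFC 7636 requires for code_verifier
  const verifier = randomBytes(32).toString("base64url");
  const challenge = createHash("sha256").update(verifier).digest("base64url");
  return { verifier, challenge };
}
```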
Passkeys as the primary passwordless option. Offer passkey registration during onboarding and after login. Store the credential in your database. Fall back to email magic links for users on devices that do not support passkeys yet.
TOTP as the MFA standard. Support authenticator apps. Generate recovery codes during setup. Store secrets encrypted at rest. Track used codes to prevent replay.
Multi-dimensional rate limiting. Per-IP, per-email, and global limits on all auth endpoints. Progressive delays instead of account lockout. Generic error messages to prevent enumeration.
Content Security Policy and Referrer-Policy. Deploy a strict CSP from day one. Set Referrer-Policy: strict-origin-when-cross-origin to prevent token leakage.
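As a baseline, a strict policy looks something like this (the exact directives are a suggested starting point -- loosen them per your asset origins, not by default):

```typescript
const headers = new Headers();
headers.set(
  "Content-Security-Policy",
  // No inline scripts, no plugins, no framing, no <base> hijacking
  "default-src 'self'; script-src 'self'; object-src 'none'; base-uri 'none'; frame-ancestors 'none'"
);
headers.set("Referrer-Policy", "strict-origin-when-cross-origin");
```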
CSRF protection. SameSite=Lax on all auth cookies, plus Synchronizer Token pattern on state-changing endpoints for defense in depth.
If I could give one piece of advice to every developer building authentication: do not try to be clever. Use established patterns, use established libraries, and focus your creativity on the parts of your application that are not responsible for keeping user accounts safe. Authentication is the one area where boring is exactly what you want.