URL versioning, header versioning, date-based versioning, deprecation policies, schema evolution, and the migration playbook. What actually works when you have real consumers depending on your API.
I have broken production APIs more times than I am comfortable admitting. Not because I wanted to ship breaking changes, but because I did not understand what "breaking" meant to the people consuming my API. To me, renaming a field from userName to username was a cleanup. To the mobile team shipping a build through App Store review, it was a week of emergency patches and a very tense Slack thread.
API versioning sounds like a solved problem. Pick a strategy, slap /v1/ on your URLs, and move on. In practice, it is one of the most consequential architectural decisions you will make, and the consequences do not show up until you have real consumers, real data, and real pressure to evolve without breaking things.
This is everything I have learned about API versioning from running APIs in production for eight years. Some of it came from good decisions. Most of it came from bad ones.
The textbook explanation of API versioning is mechanical: you have version 1, you make breaking changes, you create version 2. Simple. What the textbooks do not tell you is that API versioning is fundamentally a social contract, not a technical one.
When you publish an API, you are making a promise. You are saying: "If you build against this, it will keep working." The moment someone writes code that depends on your response shape, your status codes, your error format, even your field ordering in some unfortunate cases, you have a consumer who trusts you.
Breaking that trust is expensive. Not in the "we need to update our SDK" sense, but in the "our customer's production system went down at 2 AM and they are reconsidering whether to use our platform" sense. I have been on both sides of that phone call.
The hard part of API versioning is not the routing or the URL scheme. It is answering these questions: What counts as a breaking change? How do consumers discover and adopt a new version? How long do old versions have to keep working? And how do you retire a version without taking someone's production system down with it?
Every versioning strategy is a different set of trade-offs for answering those questions. There is no universal best answer. There is only the answer that fits your situation.
GET /v1/users/123
GET /v2/users/123
This is the most common approach, and for good reason. It is explicit, visible, cacheable, and easy to understand. When a developer sees /v1/ in a URL, they know exactly which version they are targeting. When you see it in access logs, you know exactly which version is getting traffic.
I have used URL versioning on most of my projects, and here is what I have learned:
It works well when: You have a small number of major versions (2-3 at most), your API surface is relatively stable, and your consumers are mostly external developers who need crystal-clear documentation.
It breaks down when: You need to version individual endpoints independently, you want fine-grained compatibility (not just "v1 or v2" but "v1 with the new pagination format"), or you have so many versions that your router looks like a history lesson.
The routing implementation is straightforward:
// Express/Fastify style
import express, { Router } from 'express';

const app = express();

const v1Router = Router();
const v2Router = Router();
v1Router.get('/users/:id', async (req, res) => {
const user = await getUser(req.params.id);
// v1 returns flat structure
res.json({
id: user.id,
name: user.fullName,
email: user.email,
created: user.createdAt.toISOString(),
});
});
v2Router.get('/users/:id', async (req, res) => {
const user = await getUser(req.params.id);
// v2 returns nested structure with metadata
res.json({
data: {
id: user.id,
name: {
full: user.fullName,
first: user.firstName,
last: user.lastName,
},
email: user.email,
},
meta: {
createdAt: user.createdAt.toISOString(),
updatedAt: user.updatedAt.toISOString(),
},
});
});
app.use('/v1', v1Router);
app.use('/v2', v2Router);

The problem shows up immediately: you are duplicating route handlers. For a small API, this is manageable. For an API with 200 endpoints, you are maintaining two copies of everything, and they inevitably drift. The v1 handler does not get the bug fix. The v2 handler does not get the performance optimization. You find yourself writing shared service layers to avoid duplication, and then you are essentially building a transformation layer anyway.
GET /users/123
Accept: application/vnd.myapi.v2+json
Or the simpler variant:
GET /users/123
X-API-Version: 2
Header versioning keeps the URL clean and lets you version without changing your routing structure. GitHub uses this approach with their Accept header, and it works well for them.
It works well when: You want stable URLs, you are building for sophisticated consumers who understand HTTP headers, and you want to version at the response format level rather than the resource level.
It breaks down when: Your consumers are not sophisticated. I mean this without judgment. If your API is consumed by frontend developers who are debugging with curl and copying URLs from Slack, header versioning adds friction. You cannot just paste a URL into a browser to test it. You cannot share a versioned URL in documentation without also explaining the header.
// Middleware approach for header versioning
function versionMiddleware(req, res, next) {
const acceptHeader = req.headers['accept'] || '';
// Matches application/vnd.myapi.v2+json as well as ";version=2"
const versionMatch = acceptHeader.match(/(?:vnd\.myapi\.v|version=)(\d+)/);
const explicitVersion = req.headers['x-api-version'];
req.apiVersion = parseInt(
explicitVersion || (versionMatch && versionMatch[1]) || '1',
10
);
next();
}
app.get('/users/:id', versionMiddleware, async (req, res) => {
const user = await getUser(req.params.id);
if (req.apiVersion >= 2) {
return res.json({
data: transformUserV2(user),
meta: buildMeta(user),
});
}
res.json(transformUserV1(user));
});

This approach naturally leads to a transformation layer, which is actually a good thing. Instead of duplicating routes, you write transformers that convert your internal domain model into the appropriate version's response shape. I will come back to this pattern later.
GET /users/123?version=2
I will be honest: I do not like this approach, and I have never used it in production. The version is part of the resource identifier in a URL-versioned API. In a query parameter, it is a modifier, which means caching layers need to account for it, and it is easy to forget or strip.
That said, some teams use it effectively, particularly for internal APIs where the version is more of a "feature flag" than a hard boundary. Google's Maps JavaScript API selects versions with a v query parameter. It works if your consumers are disciplined about including it, but I have seen too many bugs from query parameters getting dropped by intermediaries to recommend it as a primary strategy.
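If you do adopt query parameter versioning anyway, at least parse the parameter in one place and reject anything unexpected instead of silently defaulting. A minimal sketch (parseVersionParam is a name I am introducing, not an established API):

```typescript
// The version query parameter is easy to drop or mangle in transit,
// so normalize it once and reject unknown values loudly.
function parseVersionParam(
  raw: unknown,
  supported: number[] = [1, 2]
): number | null {
  if (raw === undefined) return supported[0]; // absent: oldest stable default
  const parsed = parseInt(typeof raw === 'string' ? raw : '', 10);
  return supported.includes(parsed) ? parsed : null; // null means reject with 400
}

// Express-style middleware using the helper:
// app.use((req, res, next) => {
//   const version = parseVersionParam(req.query.version);
//   if (version === null) {
//     return res.status(400).json({ error: 'Unsupported version' });
//   }
//   req.apiVersion = version;
//   next();
// });
```

Rejecting unknown values instead of falling back means a stripped or mangled parameter shows up as a 400 in your logs rather than as a silent wrong-version response.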
Stripe's versioning strategy is, in my opinion, the most sophisticated approach to API versioning that exists in production. Understanding it changed how I think about backwards compatibility.
Instead of numbered versions, Stripe uses dates:
Stripe-Version: 2024-12-18
Every API change is tagged with a date. When you create a Stripe account, your account is pinned to the current API version. All your requests use that version by default, regardless of what changes Stripe makes later. If Stripe changes a response format six months after you integrated, your integration does not break.
You can upgrade by setting the Stripe-Version header to a newer date, but you do it on your own schedule. You can even test a newer version on a single request without changing your account-wide pin.
The magic is in the implementation. Stripe maintains a chain of version transformations:
// Conceptual model of Stripe's version chain
interface VersionChange {
date: string;
description: string;
transform: (response: any) => any;
affectedEndpoints: string[];
}
const versionChanges: VersionChange[] = [
{
date: '2024-06-15',
description: 'Changed invoice.lines from array to paginated list',
affectedEndpoints: ['/v1/invoices/*'],
transform: (response) => {
// Convert new paginated format back to old array format
if (response.lines && response.lines.data) {
response.lines = response.lines.data;
}
return response;
},
},
{
date: '2024-09-01',
description: 'Renamed source to payment_method on charges',
affectedEndpoints: ['/v1/charges/*'],
transform: (response) => {
if (response.payment_method) {
response.source = response.payment_method;
delete response.payment_method;
}
return response;
},
},
];
function applyVersionTransforms(
response: any,
endpoint: string,
requestedVersion: string
): any {
// Apply transforms in reverse chronological order
// for all changes newer than the requested version
const applicableChanges = versionChanges
.filter(change => change.date > requestedVersion)
.filter(change =>
change.affectedEndpoints.some(pattern =>
matchEndpoint(endpoint, pattern) // glob-style matcher, implementation omitted
)
)
.sort((a, b) => b.date.localeCompare(a.date));
let result = response;
for (const change of applicableChanges) {
result = change.transform(result);
}
return result;
}

The beauty of this approach is that internally, Stripe always works with the latest version. The version transformations are a pipeline that converts the latest response back to whatever the consumer expects. Each transformation is small, isolated, and testable.
Why this works at scale: Stripe does not maintain parallel codebases for different versions. They maintain one codebase (the latest) plus a chain of reversible transformations. Adding a new change means adding one more link to the chain. Old consumers get the same response they always got. New consumers get the latest format.
Why most teams should not start here: This approach requires significant upfront investment. You need a robust transformation pipeline, excellent test coverage for each transformation, and a culture of writing every API change as a pair: the change itself plus the backwards-compatibility transformation. For a team of 3-5 engineers, URL versioning with 2-3 major versions is probably enough.
Versioning is only half the problem. The other half is removing old versions. If you cannot deprecate, you accumulate versions forever, and every version you maintain is a version you have to test, document, and keep running.
RFC 8594 defines the Sunset HTTP header, and it is criminally underused:
HTTP/1.1 200 OK
Sunset: Sat, 01 Mar 2025 00:00:00 GMT
Deprecation: true
Link: <https://api.example.com/docs/migration-v2>; rel="successor-version"
I add sunset headers to deprecated endpoints at least six months before removal. This gives automated tooling, monitoring systems, and attentive developers a machine-readable signal that the endpoint is going away. Pair it with a Link header pointing to migration documentation, and you have given consumers everything they need.
function deprecationMiddleware(
sunsetDate: string,
migrationUrl: string
) {
return (req, res, next) => {
res.setHeader('Sunset', new Date(sunsetDate).toUTCString());
res.setHeader('Deprecation', 'true');
res.setHeader(
'Link',
`<${migrationUrl}>; rel="successor-version"`
);
// Log usage for tracking
logDeprecatedUsage({
endpoint: req.path,
apiKey: req.headers['x-api-key'],
version: req.apiVersion,
timestamp: new Date(),
});
next();
};
}
// Apply to deprecated v1 routes
v1Router.use(
'/users',
deprecationMiddleware(
'2025-06-01',
'https://api.example.com/docs/v2-migration'
)
);

Never remove an API version without knowing who is still using it. I learned this the hard way when I deprecated a v1 endpoint that "nobody used" based on aggregate traffic metrics. Turns out, one enterprise customer made exactly 47 requests per day to that endpoint, every day, as part of their compliance reporting pipeline. We did not see it in the traffic graphs because it was noise compared to the thousands of requests on other endpoints. But to that customer, it was critical infrastructure.
// Track per-consumer version usage
interface VersionUsageRecord {
apiKey: string;
version: string;
endpoint: string;
lastSeen: Date;
requestCount: number;
}
async function trackVersionUsage(
apiKey: string,
version: string,
endpoint: string
) {
const key = `version_usage:${apiKey}:${version}:${endpoint}`;
await redis.multi()
.hincrby(key, 'count', 1)
.hset(key, 'lastSeen', Date.now().toString())
.expire(key, 90 * 24 * 60 * 60) // 90 days TTL
.exec();
}
// Before deprecating: check who is still using it
async function getVersionConsumers(version: string) {
// KEYS blocks Redis on large keyspaces; prefer SCAN in production
const keys = await redis.keys(`version_usage:*:${version}:*`);
const consumers = new Map<string, {
endpoints: string[];
totalRequests: number;
lastSeen: Date;
}>();
for (const key of keys) {
const [, apiKey, , endpoint] = key.split(':');
const data = await redis.hgetall(key);
if (!consumers.has(apiKey)) {
consumers.set(apiKey, {
endpoints: [],
totalRequests: 0,
lastSeen: new Date(0),
});
}
const consumer = consumers.get(apiKey)!;
consumer.endpoints.push(endpoint);
consumer.totalRequests += parseInt(data.count || '0', 10);
const lastSeen = new Date(parseInt(data.lastSeen || '0', 10));
if (lastSeen > consumer.lastSeen) {
consumer.lastSeen = lastSeen;
}
}
return consumers;
}

My deprecation checklist now includes: identify every consumer still using the version, reach out to any consumer that has made requests in the last 30 days, give a minimum 90-day sunset window (180 for enterprise), and only flip the switch when usage is genuinely zero.
What counts as a breaking change seems obvious until you start listing edge cases. Here is the taxonomy I use, starting with the changes that are safe:
Adding a new field to a response. If a consumer ignores unknown fields (which they should), this is always safe. Most JSON parsers do this by default.
// Before
{ "id": 1, "name": "Alice" }
// After - safe, new field added
{ "id": 1, "name": "Alice", "avatar_url": "https://..." }

Adding a new optional query parameter. Existing requests without the parameter keep working exactly as before.
Adding a new endpoint. No existing consumer is calling it, so it cannot break anything.
Adding a new enum value to a response field. This one is tricky. It is technically non-breaking, but consumers who have a switch statement over your enum values will hit their default case. I treat this as non-breaking but document it prominently.
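On the consumer side, the defensive pattern is to route unknown enum values to an explicit fallback instead of letting them crash a switch. A sketch with illustrative status values (not from any specific API):

```typescript
// Values this client knew about when it was built
type KnownStatus = 'active' | 'suspended' | 'deleted';

function describeStatus(status: string): string {
  switch (status as KnownStatus) {
    case 'active':
      return 'Account is active';
    case 'suspended':
      return 'Account is suspended';
    case 'deleted':
      return 'Account is deleted';
    default:
      // A value added by a newer API version lands here
      // instead of crashing or being misclassified
      return `Unknown status: ${status}`;
  }
}
```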
And the changes that break, no matter how harmless they feel:

Removing a field from a response. Even if you think nobody uses it. Especially if you think nobody uses it.
Renaming a field. This is removing the old field and adding a new one. It is breaking.
Changing a field's type. "count": 5 becoming "count": "5" will crash a consumer's deserialization.
Adding a required parameter to a request. Existing requests do not include it, so they will start failing.
Changing the meaning of a status code. If you previously returned 200 for a soft-delete and now return 204, consumers checking for 200 will see an unexpected response.
Changing error response format. Consumers parse errors too. Changing from {"error": "message"} to {"errors": [{"message": "..."}]} breaks error handling code.
Changing pagination format. Moving from offset-based to cursor-based pagination is breaking even if the data is the same.
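The two shapes side by side show why. These types and helpers are illustrative, not from any particular API:

```typescript
// Offset-based: consumers compute page math themselves
interface OffsetPage<T> {
  items: T[];
  total: number;
  offset: number;
  limit: number;
}

// Cursor-based: consumers must thread an opaque cursor through
interface CursorPage<T> {
  items: T[];
  nextCursor: string | null;
}

// Even "is there another page?" is computed differently in each model,
// so client pagination loops cannot survive the switch unchanged
function hasMoreOffset<T>(p: OffsetPage<T>): boolean {
  return p.offset + p.items.length < p.total;
}

function hasMoreCursor<T>(p: CursorPage<T>): boolean {
  return p.nextCursor !== null;
}
```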
// A practical "is this change breaking?" checker
type ChangeType =
| 'field_added'
| 'field_removed'
| 'field_renamed'
| 'field_type_changed'
| 'param_added_required'
| 'param_added_optional'
| 'enum_value_added'
| 'enum_value_removed'
| 'status_code_changed'
| 'error_format_changed';
const breakingChanges: Set<ChangeType> = new Set([
'field_removed',
'field_renamed',
'field_type_changed',
'param_added_required',
'enum_value_removed',
'status_code_changed',
'error_format_changed',
]);
function isBreaking(change: ChangeType): boolean {
return breakingChanges.has(change);
}

If you are not generating your API documentation from a schema, you are maintaining two sources of truth, and they will diverge. I have seen it happen on every project that hand-writes API docs.
OpenAPI 3.1 aligns with JSON Schema draft 2020-12, which means you can use full JSON Schema features for describing your API. This matters for versioning because you can define schemas per version and validate against them automatically.
# openapi.yaml
openapi: 3.1.0
info:
title: My API
version: 2.0.0
paths:
/users/{id}:
get:
operationId: getUser
parameters:
- name: id
in: path
required: true
schema:
type: string
format: uuid
responses:
'200':
content:
application/json:
schema:
$ref: '#/components/schemas/UserV2'
components:
schemas:
UserV1:
type: object
required: [id, name, email]
properties:
id:
type: string
format: uuid
name:
type: string
email:
type: string
format: email
created:
type: string
format: date-time
UserV2:
type: object
required: [data, meta]
properties:
data:
type: object
required: [id, name, email]
properties:
id:
type: string
format: uuid
name:
type: object
properties:
full:
type: string
first:
type: string
last:
type: string
email:
type: string
format: email
meta:
type: object
properties:
createdAt:
type: string
format: date-time
updatedAt:
type: string
format: date-time

OpenAPI tells you what your API should look like. Contract testing tells you whether it actually does. Pact is the tool I have used most for this, and it is particularly valuable when multiple teams consume your API.
The idea is simple: consumers define contracts (what they expect from the provider), and the provider verifies those contracts in CI. If a change breaks a consumer's contract, the build fails before anything ships.
// Consumer side: define what you expect
import { PactV4, MatchersV3 } from '@pact-foundation/pact';
const provider = new PactV4({
consumer: 'MobileApp',
provider: 'UserAPI',
});
describe('User API Contract', () => {
it('returns user by ID in v2 format', async () => {
await provider
.addInteraction()
.given('user 123 exists')
.uponReceiving('a request for user 123')
.withRequest('GET', '/v2/users/123')
.willRespondWith(200, (builder) => {
builder.jsonBody({
data: {
id: MatchersV3.uuid('550e8400-e29b-41d4-a716-446655440000'),
name: {
full: MatchersV3.string('Alice Johnson'),
first: MatchersV3.string('Alice'),
last: MatchersV3.string('Johnson'),
},
email: MatchersV3.email('alice@example.com'),
},
meta: {
createdAt: MatchersV3.iso8601DateTime(),
updatedAt: MatchersV3.iso8601DateTime(),
},
});
})
.executeTest(async (mockServer) => {
const response = await fetch(
`${mockServer.url}/v2/users/123`
);
const body = await response.json();
expect(body.data.name.full).toBe('Alice Johnson');
});
});
});The provider then verifies all consumer contracts:
// Provider side: verify all consumer contracts
import { Verifier } from '@pact-foundation/pact';
describe('Provider Verification', () => {
it('satisfies all consumer contracts', async () => {
const verifier = new Verifier({
providerBaseUrl: 'http://localhost:3000',
pactBrokerUrl: 'https://pact-broker.example.com',
provider: 'UserAPI',
providerVersion: process.env.GIT_SHA,
publishVerificationResult: true,
stateHandlers: {
'user 123 exists': async () => {
await seedTestUser({
id: '550e8400-e29b-41d4-a716-446655440000',
name: 'Alice Johnson',
email: 'alice@example.com',
});
},
},
});
await verifier.verifyProvider();
});
});

Contract testing caught more versioning bugs for me than integration tests ever did. The key insight is that it shifts the conversation from "did we break anything?" (which you find out after deploying) to "will we break anything?" (which you find out in CI).
GraphQL proponents often claim that GraphQL does not need versioning because clients request exactly the fields they want. If you add a field, clients that do not request it are unaffected. If you deprecate a field, you mark it with @deprecated and clients see a warning.
This is true, and it is also misleading.
type User {
id: ID!
name: String!
email: String!
# Old field, kept for backwards compatibility
fullName: String @deprecated(reason: "Use name instead")
# New nested type
profile: UserProfile
}
type UserProfile {
avatarUrl: String
bio: String
joinedAt: DateTime
}

GraphQL avoids some versioning problems by design. You never need to version just because you added a field. You can deprecate fields with clear messaging. Schema introspection lets tools warn about deprecated usage.
But GraphQL does not solve these versioning problems:
Changing a field's type. If age was Int and you need it to be String, you cannot just change it. You need a new field name, just like REST.
Removing a field. Even deprecated fields eventually need to go away. When you remove one, clients that still query it will get errors. The deprecation annotation is advisory, not enforced.
Changing resolver behavior. If the users query used to return all users and now requires a filter, clients break even though the schema looks the same.
Schema stitching and federation. When your GraphQL API is composed from multiple services, versioning becomes a distributed coordination problem. Service A can change its schema independently of Service B, and the gateway needs to handle both.
The honest assessment: GraphQL reduces the frequency of breaking changes and provides better tooling for deprecation. It does not eliminate the need to think about backwards compatibility. I have seen teams treat GraphQL as a magic bullet for versioning and then struggle when they need to make a genuinely breaking schema change with no clean migration path.
Your API responses come from your database. When you change your API's response shape, you often need to change your database schema too. But your old API version still needs to read data in the old shape. This is where things get messy.
PostgreSQL views are underrated for API versioning. Instead of querying tables directly, have each API version query its own view:
-- The actual table evolves freely
CREATE TABLE users (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
first_name TEXT NOT NULL,
last_name TEXT NOT NULL,
email TEXT NOT NULL UNIQUE,
avatar_url TEXT,
created_at TIMESTAMPTZ DEFAULT now(),
updated_at TIMESTAMPTZ DEFAULT now()
);
-- v1 view: flat structure, combined name
CREATE OR REPLACE VIEW users_v1 AS
SELECT
id,
first_name || ' ' || last_name AS name,
email,
created_at AS created
FROM users;
-- v2 view: nested-friendly, all fields
CREATE OR REPLACE VIEW users_v2 AS
SELECT
id,
first_name,
last_name,
first_name || ' ' || last_name AS full_name,
email,
avatar_url,
created_at,
updated_at
FROM users;

Each API version handler queries its corresponding view. The table schema can evolve independently. When you add a column, you add it to the relevant views. When you rename a column, old views keep the old name.
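On the application side, the mapping from version to view can be one small lookup. A sketch (userByIdQuery is a name I am choosing; a handler would pass the SQL to a driver like node-postgres):

```typescript
// Map each API version to the view backing it; handlers never
// touch the base table directly
const viewForVersion = {
  v1: 'users_v1',
  v2: 'users_v2',
} as const;

type ApiVersion = keyof typeof viewForVersion;

function userByIdQuery(version: ApiVersion): string {
  // $1 is bound by the driver, e.g. pool.query(userByIdQuery('v1'), [id])
  return `SELECT * FROM ${viewForVersion[version]} WHERE id = $1`;
}
```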
If your version transformation involves expensive joins or aggregations, materialized views give you the same abstraction with cached results:
CREATE MATERIALIZED VIEW user_stats_v2 AS
SELECT
u.id,
u.first_name,
u.last_name,
u.email,
COUNT(o.id) AS order_count,
COALESCE(SUM(o.total), 0) AS total_spent,
MAX(o.created_at) AS last_order_at
FROM users u
LEFT JOIN orders o ON o.user_id = u.id
GROUP BY u.id, u.first_name, u.last_name, u.email;
-- Refresh on a schedule
REFRESH MATERIALIZED VIEW CONCURRENTLY user_stats_v2;For complex version differences, I use a transformation layer in the application code rather than in the database. The database returns the canonical model, and transformers convert it to version-specific shapes:
// Internal domain model - always the latest
interface User {
id: string;
firstName: string;
lastName: string;
email: string;
avatarUrl: string | null;
createdAt: Date;
updatedAt: Date;
}
// Version-specific transformers
const transformers = {
v1: {
user: (user: User) => ({
id: user.id,
name: `${user.firstName} ${user.lastName}`,
email: user.email,
created: user.createdAt.toISOString(),
}),
},
v2: {
user: (user: User) => ({
data: {
id: user.id,
name: {
full: `${user.firstName} ${user.lastName}`,
first: user.firstName,
last: user.lastName,
},
email: user.email,
avatarUrl: user.avatarUrl,
},
meta: {
createdAt: user.createdAt.toISOString(),
updatedAt: user.updatedAt.toISOString(),
},
}),
},
};
// In the route handler
app.get('/users/:id', async (req, res) => {
const user = await userRepository.findById(req.params.id);
const version = req.apiVersion || 'v2';
const transformer = transformers[version]?.user;
if (!transformer) {
return res.status(400).json({
error: `Unsupported API version: ${version}`,
});
}
res.json(transformer(user));
});

This pattern scales better than database views for complex transformations, and it keeps the version logic in the application layer where it is easier to test. I have used this on an API with 150+ endpoints and three active versions. The transformer files are boring and repetitive, which is exactly what you want from infrastructure code.
If you provide SDKs for your API, versioning gets another dimension. The SDK is a contract too, and it needs to match the API version it targets.
Tools like openapi-generator and openapi-typescript can generate typed clients from your OpenAPI spec. This is valuable because the SDK automatically matches the schema:
# Generate TypeScript client from OpenAPI spec
npx openapi-typescript ./openapi-v2.yaml -o ./src/generated/api-v2.ts

// Generated types match your API version exactly
import type { paths } from './generated/api-v2';
type GetUserResponse =
paths['/users/{id}']['get']['responses']['200']['content']['application/json'];
// TypeScript enforces the version's contract
async function getUser(id: string): Promise<GetUserResponse> {
const res = await fetch(`/v2/users/${id}`);
return res.json();
}

I version SDKs to match API versions. The SDK's major version corresponds to the API version:
{
"name": "@myapi/sdk",
"version": "2.3.1"
}

Here, 2.x.x means it targets API v2. Minor and patch versions are for SDK improvements (better error messages, new helper methods) that do not change the underlying API contract.
When you release API v3, you publish SDK 3.0.0. Consumers on v2 keep using 2.x.x until they are ready to migrate. Both SDK versions can coexist in a project during migration.
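A cheap guard during migration is to fail fast when the SDK major and the API base path disagree. An illustrative sketch, not any real SDK's API:

```typescript
// SDK major version must match the /vN segment of the base path
function assertVersionAlignment(sdkVersion: string, basePath: string): void {
  const sdkMajor = sdkVersion.split('.')[0];
  const apiMajor = (basePath.match(/\/v(\d+)\b/) || [])[1];
  if (sdkMajor !== apiMajor) {
    throw new Error(
      `SDK ${sdkVersion} targets API v${sdkMajor}, but base path is ${basePath}`
    );
  }
}

// Called once at client construction; throws on a v2 SDK + v3 URL mix-up
assertVersionAlignment('2.3.1', 'https://api.example.com/v2');
```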
This gets more nuanced with date-based versioning like Stripe. Stripe publishes one SDK that supports all versions, with the version specified at client initialization:
import Stripe from 'stripe';
const stripe = new Stripe('sk_...', {
apiVersion: '2024-12-18', // pinned to a specific version
});

This is cleaner for consumers but harder to implement, because the SDK needs to handle responses from any version the consumer might be pinned to. In practice, Stripe's SDK types reflect the latest version, and older version responses are implicitly compatible because of their transformation pipeline.
When you ship a new API version, you need a migration period where both versions are live. This period is where most teams stumble. Here is the playbook I have refined over the years.
Use a reverse proxy or API gateway to route versioned requests. Do not try to handle version routing in application code if you can avoid it.
# nginx version routing
upstream api_v1 {
server 127.0.0.1:3001;
}
upstream api_v2 {
server 127.0.0.1:3002;
}
server {
listen 443 ssl;
server_name api.example.com;
location /v1/ {
proxy_pass http://api_v1/;
}
location /v2/ {
proxy_pass http://api_v2/;
}
# Default to latest version
location / {
proxy_pass http://api_v2/;
}
}For the transformation layer approach (single codebase, multiple version outputs), routing happens in middleware:
// Version-aware middleware
import type { RequestHandler } from 'express';

function versionRouter(handlers: Record<string, RequestHandler>) {
return (req, res, next) => {
const version = extractVersion(req);
const handler = handlers[version];
if (!handler) {
return res.status(400).json({
error: 'Unsupported API version',
supported: Object.keys(handlers),
});
}
return handler(req, res, next);
};
}
app.get(
'/users/:id',
versionRouter({
v1: getUserV1Handler,
v2: getUserV2Handler,
})
);

Both versions read from and write to the same database. This means write operations need extra care. If v2 accepts a new required field that v1 does not send, you need a default value or a conversion layer.
// Write operations need version-aware validation
async function createUser(data: unknown, version: string) {
if (version === 'v1') {
// v1 sends { name, email }
const validated = v1CreateSchema.parse(data);
const [firstName, ...rest] = validated.name.split(' ');
return userRepository.create({
firstName,
lastName: rest.join(' ') || '',
email: validated.email,
avatarUrl: null,
});
}
if (version === 'v2') {
// v2 sends { firstName, lastName, email, avatarUrl? }
const validated = v2CreateSchema.parse(data);
return userRepository.create({
firstName: validated.firstName,
lastName: validated.lastName,
email: validated.email,
avatarUrl: validated.avatarUrl ?? null,
});
}
throw new Error(`Unsupported API version: ${version}`);
}

Decide upfront whether new features go into both versions or only the latest. My rule: bug fixes go into all active versions. New features go into the latest version only. Security fixes go everywhere immediately.
Document this policy clearly. Consumers need to know that staying on v1 means they will not get new capabilities.
You need dashboards. Not just "how many requests per version" but per-consumer, per-endpoint version tracking. Without this data, you are guessing about when to deprecate.
// Structured logging for version metrics
function logApiRequest(req, res, responseTime: number) {
const logEntry = {
timestamp: new Date().toISOString(),
method: req.method,
path: req.path,
version: req.apiVersion,
apiKey: hashApiKey(req.headers['x-api-key']),
statusCode: res.statusCode,
responseTimeMs: responseTime,
userAgent: req.headers['user-agent'],
deprecated: res.getHeader('Deprecation') === 'true',
};
logger.info('api_request', logEntry);
}
// Aggregation query for version adoption dashboard
// (PostgreSQL example with the logs in a table)
const versionAdoptionQuery = `
SELECT
version,
COUNT(DISTINCT api_key_hash) AS unique_consumers,
COUNT(*) AS total_requests,
MAX(timestamp) AS last_request,
COUNT(*) FILTER (
WHERE timestamp > NOW() - INTERVAL '7 days'
) AS requests_last_7d
FROM api_request_logs
WHERE timestamp > NOW() - INTERVAL '90 days'
GROUP BY version
ORDER BY version DESC;
`;

The metric that matters most for deprecation decisions is not total requests. It is unique consumers in the last 30 days. If zero consumers have called v1 in the last 30 days, you can probably sunset it (after sending one final deprecation notice). If three consumers are still active, you need to reach out to them directly.
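A small helper on top of the earlier getVersionConsumers sketch makes that decision explicit (activeConsumers and canSunset are names I am introducing):

```typescript
// Shape matches the per-consumer map built by getVersionConsumers
interface ConsumerUsage {
  endpoints: string[];
  totalRequests: number;
  lastSeen: Date;
}

// Consumers seen inside the decision window (30 days by default)
function activeConsumers(
  consumers: Map<string, ConsumerUsage>,
  windowDays = 30,
  now: Date = new Date()
): string[] {
  const cutoff = now.getTime() - windowDays * 24 * 60 * 60 * 1000;
  return [...consumers.entries()]
    .filter(([, usage]) => usage.lastSeen.getTime() >= cutoff)
    .map(([apiKey]) => apiKey);
}

// Sunset only when nobody has called the version inside the window;
// a non-empty result is your outreach list, not a green light
function canSunset(consumers: Map<string, ConsumerUsage>): boolean {
  return activeConsumers(consumers).length === 0;
}
```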
GitHub uses accept header versioning for their REST API, with a clever addition: preview features. New API functionality ships behind an accept header preview flag before becoming stable. This gives GitHub a way to iterate on API design with real consumer feedback before committing to a stable contract.
Accept: application/vnd.github.v3+json
Accept: application/vnd.github.nebula-preview+json
The downside: GitHub's v3 API has been the "current" version for over a decade. They effectively have one version with many incremental additions. When they wanted to make fundamental changes, they built an entirely separate GraphQL API rather than creating v4 of the REST API. That tells you something about how hard it is to actually ship a new major version of a widely-consumed API.
As I discussed earlier, Stripe uses date-based versioning with per-account pinning. What makes Stripe's approach exceptional is the discipline around it. Every breaking change is documented in their changelog with the exact date, the exact fields affected, and a migration guide. Their SDK handles version differences transparently. And they maintain backwards compatibility for years, not months.
Stripe can do this because they have invested heavily in the transformation pipeline infrastructure. They reportedly have a custom internal tool that enforces versioning discipline: engineers cannot ship an API change without also writing the backwards-compatibility transform and the changelog entry.
Twilio uses straightforward URL versioning (/2010-04-01/Accounts/...) with extremely long support windows. Their original API version from 2010 still works. This is possible because Twilio's API surface is relatively stable (sending SMS has not fundamentally changed) and because their revenue model depends on developers trusting that their integrations will keep working.
The trade-off is clear: Twilio carries the maintenance burden of very old versions, but they have never burned a customer with a surprise deprecation. For a communications platform where uptime is everything, this is the right call.
All three companies share one principle: they treat API stability as a competitive advantage, not a technical constraint. Their versioning strategies are different, but the outcome is the same. Consumers trust that their integrations will keep working. That trust translates directly into revenue.
After eight years of getting this wrong and occasionally getting it right, here is my practical advice:
Start with URL versioning. It is the simplest to implement, the easiest to understand, and the most widely supported by tooling. Use /v1/ and do not overthink it.
Build a transformation layer early. Even with URL versioning, do not duplicate route handlers. Write one handler that works with your internal domain model, and use transformers to convert responses to version-specific shapes.
Define what "breaking" means for your API and write it down. Share it with your team. Put it in your contributing guide. Make it part of code review.
Use OpenAPI from day one. Generate your docs from it. Generate your client types from it. Use it as the single source of truth for your API contract.
Add contract tests when you have multiple consumers. Pact or similar tools will catch breaking changes before they ship.
Track per-consumer version usage. You need this data for deprecation decisions. Start collecting it now, not when you are ready to deprecate.
Set a deprecation policy and follow it. I use: announce deprecation with a 180-day sunset window, send sunset headers on every response, email consumers with migration guides at 180, 90, 30, and 7 days, and remove the version only when usage hits zero.
Only move to date-based versioning if you have the engineering capacity. Stripe's approach is elegant but expensive to build. If you have fewer than 50 API endpoints and fewer than 10 external consumers, URL versioning with good transformers will serve you well for years.
The most important thing I have learned is that API versioning is about empathy. Your consumers have their own deadlines, their own priorities, their own production systems that depend on your API. Every versioning decision should start with the question: "How does this affect the people who depend on us?" If you can answer that honestly, you will make the right call.