Not another 'GraphQL is better' or 'REST is simpler' post. Real production experience with both, including the parts that GraphQL evangelists won't tell you and the REST limitations that actually matter.
I have built and maintained production APIs in both REST and GraphQL for years now. Not toy projects. Not tutorial apps. Real systems with real users, real performance budgets, real on-call rotations, and real 3 AM incidents that taught me things no blog post ever could.
And after all of that, my honest take is this: the internet's favorite API debate is almost entirely missing the point. The question isn't "which is better." The question is "which set of tradeoffs are you signing up for, and do you actually understand what those tradeoffs mean at 2 AM when something breaks?"
Let me walk you through every dimension of this comparison, with code, with war stories, and with the kind of honesty that conference talks and documentation pages are allergic to.
Every GraphQL pitch starts here, so let me address it first. The claim: REST APIs return too much data (over-fetching) or require multiple requests to assemble what you need (under-fetching). GraphQL solves both by letting the client specify exactly what it wants.
Here's the REST endpoint everyone uses as the punching bag:
// GET /api/users/123
{
"id": 123,
"name": "Alex Kousa",
"email": "alex@example.com",
"bio": "Software engineer who actually reads error messages.",
"avatarUrl": "https://cdn.example.com/avatars/123.jpg",
"createdAt": "2024-01-15T10:00:00Z",
"updatedAt": "2026-01-10T14:30:00Z",
"settings": {
"theme": "dark",
"language": "en",
"notifications": true,
"twoFactorEnabled": true
},
"stats": {
"postsCount": 47,
"followersCount": 1203,
"followingCount": 89
}
}
And here's the GraphQL version, where the client asks only for what it needs:
query {
user(id: 123) {
name
avatarUrl
}
}
Clean. Elegant. Undeniably less data over the wire. The GraphQL evangelists win this slide every time.
But here's what they don't tell you: in practice, you're going to create REST endpoints that return what the client needs anyway. Not because REST forces you to, but because good API design means you think about your consumers.
// GET /api/users/123?fields=name,avatarUrl
// or even better:
// GET /api/users/123/card (a purpose-built endpoint for the user card component)
app.get("/api/users/:id/card", async (req, res) => {
const user = await db.user.findUnique({
where: { id: Number(req.params.id) },
select: { name: true, avatarUrl: true },
});
res.json(user);
});
Is this more work than GraphQL's built-in field selection? Yes. Does it matter? That depends on how many different shapes of the same data your clients actually need. If you have a mobile app, a web app, and a public API all consuming the same user data differently, GraphQL's flexibility is genuinely valuable. If you have one Next.js frontend and you control both sides, you're adding GraphQL's complexity to solve a problem you could solve with a purpose-built endpoint in 10 lines.
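If you go the ?fields= route instead of a purpose-built endpoint, the server-side filter is small. A sketch, where pickFields is a hypothetical helper (not an Express or framework API) that whitelists which fields a client may select:

```typescript
// Whitelist-based sparse fieldsets for a REST endpoint: the client may
// narrow the response with ?fields=name,avatarUrl, but can never select
// a field outside the allowed list.
function pickFields(
  obj: Record<string, unknown>,
  fieldsParam: string | undefined,
  allowed: readonly string[]
): Record<string, unknown> {
  // No ?fields= param means "all whitelisted fields"
  const requested = fieldsParam
    ? fieldsParam.split(",").map((f) => f.trim())
    : allowed;
  const result: Record<string, unknown> = {};
  for (const f of requested) {
    if (allowed.includes(f) && f in obj) result[f] = obj[f];
  }
  return result;
}
```

In the handler you'd call something like pickFields(user, req.query.fields, ["name", "avatarUrl", "bio"]) before res.json.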
The under-fetching problem is more legitimate. Consider loading a user profile page that needs the user, their recent posts, and the comments on those posts:
// REST: Three sequential requests (a waterfall -- comments depend on posts)
const user = await fetch(`/api/users/${id}`).then((r) => r.json());
const posts = await fetch(`/api/users/${id}/posts?limit=5`).then((r) => r.json());
const comments = await Promise.all(
posts.map((p) => fetch(`/api/posts/${p.id}/comments?limit=3`).then((r) => r.json()))
);
# GraphQL: One request, exact shape
query UserProfile($id: ID!) {
user(id: $id) {
name
avatarUrl
bio
posts(limit: 5) {
id
title
excerpt
comments(limit: 3) {
id
body
author {
name
avatarUrl
}
}
}
}
}
This is where GraphQL genuinely shines. That nested data requirement would need either a custom REST endpoint (which defeats the point of REST's resource-oriented design) or multiple round trips. GraphQL handles it in a single request with zero custom endpoint logic.
But -- and this is important -- that single request hides a monster under the bed.
That beautiful nested query above? Here's what's actually happening on the server if your resolvers are naive:
const resolvers = {
Query: {
user: (_, { id }) => db.user.findUnique({ where: { id } }), // 1 query
},
User: {
posts: (user) => db.post.findMany({ where: { authorId: user.id }, take: 5 }), // 1 query
},
Post: {
comments: (post) =>
db.comment.findMany({ where: { postId: post.id }, take: 3 }), // 5 queries (one per post)
},
Comment: {
author: (comment) =>
db.user.findUnique({ where: { id: comment.authorId } }), // 15 queries (one per comment)
},
};
Count the database queries: 1 + 1 + 5 + 15 = 22 for a single GraphQL request. This is the N+1 problem, and it's not a theoretical concern. It's the first thing that will bite you in production.
The standard solution is DataLoader:
import DataLoader from "dataloader";
function createLoaders() {
return {
userLoader: new DataLoader(async (ids: readonly number[]) => {
const users = await db.user.findMany({
where: { id: { in: [...ids] } },
});
const userMap = new Map(users.map((u) => [u.id, u]));
return ids.map((id) => userMap.get(id) ?? null);
}),
postCommentsLoader: new DataLoader(async (postIds: readonly number[]) => {
// A `take` on this batched query would cap it at 3 comments TOTAL,
// not 3 per post -- fetch the whole batch, then trim after grouping
const comments = await db.comment.findMany({
where: { postId: { in: [...postIds] } },
orderBy: { createdAt: "desc" },
});
const grouped = new Map<number, typeof comments>();
for (const c of comments) {
if (!grouped.has(c.postId)) grouped.set(c.postId, []);
grouped.get(c.postId)!.push(c);
}
return postIds.map((id) => (grouped.get(id) ?? []).slice(0, 3));
}),
};
}
Now your 22 queries become 4 batched queries. Much better. But you had to write all of that batching logic yourself. Every relationship needs its own loader. Every loader needs to handle the ordering guarantee (DataLoader requires results in the same order as the input keys). Every loader needs to be created per-request to avoid cache poisoning between users.
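That ordering guarantee is the part teams get wrong most often, and it can be factored into one small helper. alignByKey is a hypothetical name, not part of DataLoader:

```typescript
// Re-align rows fetched in arbitrary order (e.g. from WHERE id IN (...))
// so they match the order of the requested keys, with null for misses --
// exactly what DataLoader's batch function contract requires.
function alignByKey<K, T>(
  keys: readonly K[],
  rows: readonly T[],
  keyOf: (row: T) => K
): (T | null)[] {
  const byKey = new Map(rows.map((r) => [keyOf(r), r] as const));
  return keys.map((k) => byKey.get(k) ?? null);
}
```

Every batch function then ends with a single line like return alignByKey(ids, users, (u) => u.id) instead of repeating the Map dance.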
With REST, you would have written one endpoint with a JOIN or a couple of well-placed findMany calls, and the N+1 problem simply wouldn't exist because you control the data fetching at the endpoint level, not the field level.
I'm not saying DataLoader is hard. I'm saying it's an entire category of complexity that exists because GraphQL's resolver architecture creates a problem that REST's endpoint architecture doesn't have.
This is the area where REST has a structural, architectural advantage that GraphQL fundamentally cannot match. And it's not because GraphQL caching is bad -- it's because HTTP caching was designed for REST.
REST responses map directly to URLs. URLs are the universal cache key:
// REST: HTTP caching just works
app.get("/api/posts/:id", async (req, res) => {
const post = await db.post.findUnique({ where: { id: Number(req.params.id) } });
// Browser, CDN, and proxy caching all work automatically
res.set("Cache-Control", "public, max-age=300, stale-while-revalidate=60");
res.set("ETag", generateETag(post));
res.json(post);
});
// Conditional request -- the browser sends If-None-Match
// Server returns 304 Not Modified -- zero bytes transferred
// All of this is FREE with REST
With GraphQL, every request is a POST to /graphql. The URL is always the same. HTTP caching doesn't work. CDN caching doesn't work. Browser caching doesn't work. You're starting from zero.
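One detail glossed over in the REST snippet above: generateETag was never defined. A minimal sketch, assuming a weak ETag derived from the serialized body with Node's crypto module:

```typescript
import { createHash } from "node:crypto";

// Weak ETag over the JSON-serialized payload: any change to the response
// body changes the hash, so If-None-Match comparisons stay correct.
function generateETag(payload: unknown): string {
  const hash = createHash("sha1").update(JSON.stringify(payload)).digest("hex");
  return `W/"${hash}"`;
}
```

Hashing the body means you still pay for the database read; deriving the tag from an updatedAt column instead can skip even that.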
// GraphQL: You need to build your own caching layer
// Option 1: Persisted queries (requires build-time extraction)
// Option 2: Response caching with cache hints
// Option 3: Normalized client-side cache (Apollo, urql)
// Option 4: CDN caching with GET requests + query hashing
// None of these are free. All of them add complexity.
Apollo Client's normalized cache is genuinely impressive technology. It deduplicates entities across your entire app and updates all components when a mutation changes a shared entity. But here's the thing: you're building a client-side database to compensate for the fact that you can't use HTTP caching. That normalized cache has its own set of problems -- cache eviction policies, garbage collection, cache invalidation (the famously hard problem), and fetchPolicy configurations that every developer on your team needs to understand.
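Option 4 from that list is worth making concrete. The idea: identify the query by a SHA-256 hash and send it as a GET request, so the URL becomes a stable cache key a CDN can use. The helper below mirrors the shape Apollo uses for automatic persisted queries, but it's an illustrative sketch, not the library's implementation:

```typescript
import { createHash } from "node:crypto";

// Build a cacheable GET URL for a GraphQL operation: the query text is
// replaced by its SHA-256 hash, giving CDNs a stable, bounded cache key.
function persistedQueryUrl(
  endpoint: string,
  query: string,
  variables: Record<string, unknown>
): string {
  const sha256Hash = createHash("sha256").update(query).digest("hex");
  const params = new URLSearchParams({
    extensions: JSON.stringify({ persistedQuery: { version: 1, sha256Hash } }),
    variables: JSON.stringify(variables),
  });
  return `${endpoint}?${params.toString()}`;
}
```

The server keeps a hash-to-query map (populated at build time or on first use) and rejects unknown hashes -- which conveniently doubles as a query allowlist.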
// Apollo's cache configuration is a language unto itself
const cache = new InMemoryCache({
typePolicies: {
Query: {
fields: {
posts: {
keyArgs: ["category", "sortBy"],
merge(existing = [], incoming, { args }) {
if (args?.offset === 0) return incoming;
return [...existing, ...incoming];
},
},
},
},
Post: {
fields: {
comments: {
merge: false, // Replace, don't merge
},
},
},
},
});
Compare this to REST with SWR or React Query:
// REST + React Query: simple, predictable caching
const { data: post } = useQuery({
queryKey: ["posts", postId],
queryFn: () => fetch(`/api/posts/${postId}`).then((r) => r.json()),
staleTime: 5 * 60 * 1000,
});
// Invalidation is explicit and easy to reason about
const mutation = useMutation({
mutationFn: (data) =>
fetch(`/api/posts/${postId}`, { method: "PUT", body: JSON.stringify(data) }),
onSuccess: () => {
queryClient.invalidateQueries({ queryKey: ["posts", postId] });
queryClient.invalidateQueries({ queryKey: ["posts", "list"] });
},
});
Both work. Both are production-grade. But the REST version layers on top of HTTP's native caching infrastructure, while the GraphQL version replaces it entirely.
At scale, this matters enormously. When your CDN can cache REST responses at the edge and serve them in 5ms from a POP near the user, but your GraphQL endpoint always hits your origin server because every POST request is a cache miss -- that's a real performance difference that affects real users.
Alright, enough GraphQL criticism. Let me talk about where GraphQL is genuinely transformative: schema-first development.
When you write a GraphQL schema, you're writing a contract. Not a Swagger spec that might be out of date. Not a TypeScript interface that only exists in one codebase. A living, enforced contract that both the frontend and backend must adhere to.
type User {
id: ID!
name: String!
email: String!
posts(limit: Int = 10, offset: Int = 0): PostConnection!
followers: FollowerConnection!
createdAt: DateTime!
}
type Post {
id: ID!
title: String!
body: String!
author: User!
comments(limit: Int = 10): CommentConnection!
tags: [Tag!]!
publishedAt: DateTime
isPublished: Boolean!
}
type PostConnection {
edges: [PostEdge!]!
pageInfo: PageInfo!
totalCount: Int!
}
type Query {
user(id: ID!): User
posts(
filter: PostFilter
sort: PostSort
pagination: PaginationInput
): PostConnection!
searchPosts(query: String!, limit: Int = 20): [Post!]!
}
type Mutation {
createPost(input: CreatePostInput!): Post!
updatePost(id: ID!, input: UpdatePostInput!): Post!
deletePost(id: ID!): DeleteResult!
}
This schema is documentation, validation, and type generation all in one file. Frontend developers can start building components against this schema before the backend even has resolvers. Backend developers know exactly what the frontend expects. The schema is the single source of truth, and deviation from it is a compile-time error.
REST doesn't have anything this good built in. OpenAPI/Swagger comes closest, but it's a separate file that you have to maintain in sync with your implementation. I've never worked on a team where the Swagger spec was 100% accurate for more than a week after initial creation.
# OpenAPI: verbose, easy to forget to update, nobody checks it
paths:
/api/users/{id}:
get:
summary: Get user by ID
parameters:
- name: id
in: path
required: true
schema:
type: integer
responses:
"200":
description: User found
content:
application/json:
schema:
$ref: "#/components/schemas/User"
"404":
description: User not found
That YAML file is 20 lines to describe one GET endpoint. The GraphQL type definition above describes the entire user domain in less space and is actually enforced at runtime. This isn't a minor difference -- it's the difference between documentation that exists and documentation that is always correct.
With GraphQL and code generation, you get end-to-end type safety almost for free:
// After running graphql-codegen, you get this automatically:
import { useGetUserProfileQuery } from "../generated/graphql";
function UserProfile({ userId }: { userId: string }) {
const { data, loading, error } = useGetUserProfileQuery({
variables: { id: userId },
});
if (loading) return <Skeleton />;
if (error) return <ErrorBanner error={error} />;
// data.user is fully typed -- name, avatarUrl, posts, everything
// TypeScript will catch if you access a field that doesn't exist
// or if you forget to handle a nullable field
return (
<div>
<h1>{data.user?.name}</h1>
<img src={data.user?.avatarUrl} alt={data.user?.name} />
</div>
);
}
The type of data.user here isn't some generic any or a manually defined interface. It's generated directly from your GraphQL schema and query, so if someone adds a required field to the User type or renames avatarUrl to avatar, your build breaks immediately. That's real safety.
REST can achieve the same thing, but it requires more setup:
// Option 1: Manually defined types (common, error-prone)
interface User {
id: number;
name: string;
avatarUrl: string;
}
// This type has zero relationship to what the server actually returns
// It's a wish, not a contract
const response = await fetch(`/api/users/${id}`);
const user: User = await response.json(); // Trust me bro
// Option 2: OpenAPI codegen (better, but requires spec maintenance)
import { UsersApi } from "../generated/api-client";
const api = new UsersApi();
const user = await api.getUser({ id }); // Typed, but spec might be stale
// Option 3: Zod runtime validation (my preferred REST approach)
import { z } from "zod";
const UserSchema = z.object({
id: z.number(),
name: z.string(),
avatarUrl: z.string().url(),
});
type User = z.infer<typeof UserSchema>;
const response = await fetch(`/api/users/${id}`);
const user = UserSchema.parse(await response.json());
// Throws at RUNTIME if the shape doesn't match
// Better than nothing, but GraphQL catches this at BUILD time
The Zod approach is what I use for REST APIs now, and it's genuinely good. But it's runtime validation, not compile-time safety. The difference matters in large codebases where you want to catch problems before deployment, not after.
GraphQL's tooling ecosystem is one of its strongest selling points. Let me be specific about what you get:
GraphQL Tooling:
- graphql-codegen -- generates TypeScript types, React hooks, everything
- graphql-eslint
REST Tooling:
- openapi-generator -- client generation (if you maintain the spec)
- curl (which, honestly, is underrated)
Here's a concrete example. In GraphQL, when you write a query, your editor knows every field available, gives you autocomplete, and shows you the types inline:
# Your editor autocompletes every field, validates the query,
# and shows inline documentation from the schema
query GetPosts($filter: PostFilter!) {
posts(filter: $filter) {
edges {
node {
title # String! -- autocomplete told you this exists
publishedAt # DateTime -- autocomplete showed it's nullable
author {
name # String! -- you can explore the graph interactively
}
}
}
pageInfo {
hasNextPage
endCursor
}
}
}
With REST, your editor doesn't know anything about what /api/posts returns unless you've set up OpenAPI tooling. You're relying on documentation, tribal knowledge, or actually hitting the endpoint to see the shape.
That said, REST's simplicity means you don't need specialized tooling. curl works. The browser works. You can inspect responses directly in the network tab without needing a browser extension. There's something to be said for a technology that doesn't require a specialized IDE experience to be usable.
GraphQL Subscriptions look beautiful in demos:
subscription OnNewComment($postId: ID!) {
commentAdded(postId: $postId) {
id
body
author {
name
avatarUrl
}
createdAt
}
}
// Client-side with Apollo
const { data } = useSubscription(ON_NEW_COMMENT, {
variables: { postId },
});
// data.commentAdded automatically appears when someone comments
// The UI updates in real time
This is genuinely nice. The client specifies exactly which fields it wants from the real-time event, the transport is handled for you (WebSocket under the hood), and the DX is excellent.
But let me tell you about production reality. GraphQL subscriptions mean:
WebSocket connection management at scale. Every connected client holds an open WebSocket. At 10,000 concurrent users, that's 10,000 persistent connections your server needs to manage. Load balancers need sticky sessions or a pub/sub layer (Redis, Kafka) to route subscription events to the right server.
Authentication and authorization on persistent connections. A REST request carries its auth token in every request. A WebSocket carries it once at connection time. What happens when the token expires? You need heartbeat logic, reconnection logic, and re-authentication logic.
Debugging subscriptions is painful. When a subscription stops delivering events, is it the WebSocket connection? The pub/sub layer? The resolver? The filter? The client re-render? Good luck with your Chrome DevTools.
REST's approach to real-time is less elegant but more pragmatic:
// Server-Sent Events (SSE) -- simpler than WebSocket for one-way data
app.get("/api/posts/:id/comments/stream", (req, res) => {
res.writeHead(200, {
"Content-Type": "text/event-stream",
"Cache-Control": "no-cache",
Connection: "keep-alive",
});
const listener = (comment: Comment) => {
res.write(`data: ${JSON.stringify(comment)}\n\n`);
};
commentEmitter.on(`post:${req.params.id}`, listener);
req.on("close", () => {
commentEmitter.off(`post:${req.params.id}`, listener);
});
});
// Or webhooks for server-to-server communication
// POST /webhooks/comments with a signed payload
// No persistent connection, no connection management, just HTTP
SSE is simpler than WebSocket (one-way, auto-reconnect built into the browser, works through HTTP/2), and webhooks are the most battle-tested real-time pattern on the internet. Stripe, GitHub, Slack -- they all use webhooks, not GraphQL subscriptions.
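The "signed payload" part of the webhook pattern is usually an HMAC over the raw request body. A sketch with Node's crypto module -- the function names are illustrative, and real providers like Stripe and GitHub each define their own exact header and encoding format:

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

// Sender: sign the raw webhook body with a shared secret.
function signPayload(secret: string, rawBody: string): string {
  return createHmac("sha256", secret).update(rawBody).digest("hex");
}

// Receiver: recompute and compare in constant time, so an attacker
// can't learn the signature byte-by-byte from response timing.
function verifyPayload(secret: string, rawBody: string, signature: string): boolean {
  const expected = signPayload(secret, rawBody);
  if (expected.length !== signature.length) return false;
  return timingSafeEqual(Buffer.from(expected), Buffer.from(signature));
}
```

The important detail is verifying against the raw body bytes, not a re-serialized JSON parse, since serialization differences would break the signature.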
I use GraphQL subscriptions only when I need client-specified field selection on real-time data and the infrastructure to support WebSocket at scale already exists. For everything else, SSE or webhooks.
This is one of those things that rarely comes up in "GraphQL vs REST" comparisons because it makes GraphQL look bad. Let me show you why.
REST file upload:
// Straightforward, everyone knows how this works
app.post("/api/uploads", upload.single("file"), async (req, res) => {
const file = req.file;
const url = await uploadToStorage(file);
res.json({ url, size: file.size, mimeType: file.mimetype });
});
// Client
const formData = new FormData();
formData.append("file", selectedFile);
formData.append("description", "Profile photo");
const response = await fetch("/api/uploads", {
method: "POST",
body: formData,
});
That's it. Multipart form data. Every HTTP client supports it. Every server framework supports it. The browser supports it natively. Progress events work. Chunked uploads work. Resume-on-failure works.
GraphQL file upload:
# You need the graphql-upload spec
scalar Upload
type Mutation {
uploadFile(file: Upload!, description: String): FileResult!
}
// Server-side: special middleware required
import { graphqlUploadExpress } from "graphql-upload";
app.use(graphqlUploadExpress({ maxFileSize: 10_000_000, maxFiles: 10 }));
// Resolver
const resolvers = {
Mutation: {
uploadFile: async (_, { file, description }) => {
const { createReadStream, filename, mimetype } = await file;
const stream = createReadStream();
const url = await uploadStreamToStorage(stream, filename);
return { url, filename, mimeType: mimetype };
},
},
};
// Client-side: special link required for Apollo
import { createUploadLink } from "apollo-upload-client";
const client = new ApolloClient({
link: createUploadLink({ uri: "/graphql" }),
cache: new InMemoryCache(),
});
The graphql-upload package is not part of the GraphQL spec. It's a community convention that not all servers and clients support. Apollo has dropped built-in support for it in newer versions and recommends using a separate REST endpoint for uploads.
Read that again: the official recommendation from Apollo is to not use GraphQL for file uploads.
In production, I always use a REST endpoint for file uploads, even in GraphQL-first applications. It's not worth the complexity.
REST uses HTTP status codes, and despite the memes about everyone using 200 for everything, they work well when used correctly:
// REST: HTTP status codes carry semantic meaning
app.get("/api/posts/:id", async (req, res) => {
try {
const post = await db.post.findUnique({
where: { id: Number(req.params.id) },
});
if (!post) {
return res.status(404).json({
error: "NOT_FOUND",
message: `Post ${req.params.id} not found`,
});
}
if (!canAccess(req.user, post)) {
return res.status(403).json({
error: "FORBIDDEN",
message: "You don't have permission to view this post",
});
}
res.json(post);
} catch (err) {
res.status(500).json({
error: "INTERNAL_ERROR",
message: "Something went wrong",
});
}
});
// Client: status code tells you what happened before you parse the body
const response = await fetch(`/api/posts/${id}`);
if (response.status === 404) {
showNotFound();
} else if (response.status === 403) {
redirectToLogin();
} else if (!response.ok) {
showGenericError();
} else {
const post = await response.json();
renderPost(post);
}
GraphQL takes a fundamentally different approach. Every response is HTTP 200 (mostly). Errors are part of the response body:
{
"data": {
"post": null
},
"errors": [
{
"message": "Post not found",
"locations": [{ "line": 2, "column": 3 }],
"path": ["post"],
"extensions": {
"code": "NOT_FOUND",
"statusCode": 404
}
}
]
}
This has some genuine advantages. A single GraphQL request can return partial data -- some fields succeed while others fail. With REST, a failed request is a failed request; there's no partial success.
# This query might partially succeed
query Dashboard {
user {
name # succeeds
email # succeeds
}
analytics {
pageViews # fails because the analytics service is down
uniqueVisitors # fails
}
notifications {
unreadCount # succeeds
}
}
The response would contain the user data and notification count, with an error for the analytics fields. The frontend can render everything except the analytics widget. With REST, you'd need three separate requests and handle each failure independently.
But there's a flip side. GraphQL's error model means you can't use standard HTTP error handling middleware. Your monitoring tools see every request as a 200. Your CDN can't cache errors differently from successes. Your load balancer can't route based on error rates. All of these are things the HTTP ecosystem gives you for free with REST, and all of them need custom solutions with GraphQL.
I've seen production GraphQL APIs where the error rate in the monitoring dashboard showed 0% because every response was HTTP 200, while the application was actually failing for 30% of users. The errors were in the response body, but nobody had configured the monitoring to parse GraphQL error responses. That's a real incident I dealt with, and it was entirely caused by GraphQL's error model.
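The fix was simple once we knew to make it: teach the metrics layer to look inside the response body. A sketch of such a classifier -- a hypothetical helper you'd wire into whatever middleware emits your metrics:

```typescript
type GraphQLBody = {
  data?: Record<string, unknown> | null;
  errors?: { message: string; path?: (string | number)[] }[];
};

// Classify an HTTP-200 GraphQL response for monitoring: "error" when no
// top-level field returned data, "partial" when some did, "ok" when the
// errors array is absent or empty.
function classifyResponse(body: GraphQLBody): "ok" | "partial" | "error" {
  if (!body.errors || body.errors.length === 0) return "ok";
  const hasData =
    body.data != null && Object.values(body.data).some((v) => v !== null);
  return hasData ? "partial" : "error";
}
```

Emit that label as a metric dimension and your dashboards start telling the truth again, even though every response is still a 200 on the wire.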
REST API versioning is a well-understood (if contentious) practice:
// URL versioning
app.use("/api/v1/users", v1UsersRouter);
app.use("/api/v2/users", v2UsersRouter);
// Header versioning
app.use("/api/users", (req, res, next) => {
const version = req.headers["api-version"] || "1";
if (version === "2") return v2UsersRouter(req, res, next);
return v1UsersRouter(req, res, next);
});
It's ugly. Nobody agrees on the best approach. But it works, and it means old clients don't break when you make breaking changes.
GraphQL's answer to versioning is: don't version. Instead, deprecate fields and add new ones:
type User {
# Old field -- deprecated but still works
name: String! @deprecated(reason: "Use firstName and lastName instead")
# New fields
firstName: String!
lastName: String!
displayName: String!
}
This is elegant in theory. In practice, deprecated fields live forever because:
- You can rarely prove that zero clients still request them
- Old mobile app versions keep querying them for years after release
- There's no version boundary that forces a clean break
I've worked on GraphQL APIs where deprecated fields from three years ago were still in the schema because nobody could confirm that zero clients were using them. The schema becomes a geological record of every API decision ever made, and it only grows.
REST versioning is crude, but it gives you a clean break. /api/v2 can be a completely different shape, and /api/v1 continues to work unchanged until you explicitly sunset it.
Let me share real numbers from production systems I've worked with. These aren't benchmarks -- they're production P95 latencies:
REST API (Express + PostgreSQL, ~2000 req/s):
- Simple resource fetch, P95: ~8ms
GraphQL API (Apollo Server + PostgreSQL, ~800 req/s):
- Equivalent query, P95: ~18ms
The overhead is real. On every request, GraphQL has to:
- Parse the query string into an AST
- Validate the query against the schema
- Resolve the field tree, resolver by resolver
- Serialize the result into exactly the requested shape
REST just routes to a handler and returns JSON. There's less machinery between the request and the database.
That said, the absolute numbers are often small enough that they don't matter. The difference between 8ms and 18ms is invisible to users. Where it starts to matter is at scale with complex queries, where GraphQL's resolver tree can cause the latency to multiply in ways that REST's flat endpoint model doesn't.
The throughput difference (2000 req/s vs 800 req/s on equivalent hardware) is more concerning. GraphQL's parsing and validation overhead means each request uses more CPU. If you're running on fixed infrastructure (not auto-scaling), this directly affects your capacity.
This is the section that makes security engineers nervous about GraphQL, and rightly so.
Consider this schema:
type User {
friends: [User!]!
}Now consider this query:
query Evil {
user(id: 1) {
friends {
friends {
friends {
friends {
friends {
friends {
friends {
friends {
friends {
friends {
name
}
}
}
}
}
}
}
}
}
}
}
}
This is a depth attack. Each level of nesting multiplies the data and database queries. Without protection, this query could bring down your entire backend.
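To see what a depth limiter actually measures, here's a naive sketch that estimates nesting depth by counting selection-set braces. Real validators like graphql-depth-limit walk the parsed AST instead (this version would be fooled by braces inside string literals), but the idea is the same:

```typescript
// Estimate the selection depth of a GraphQL document by tracking brace
// nesting. The outermost braces belong to the operation itself, so the
// reported depth is (max nesting - 1).
function queryDepth(query: string): number {
  let depth = 0;
  let max = 0;
  for (const ch of query) {
    if (ch === "{") {
      depth++;
      if (depth > max) max = depth;
    } else if (ch === "}") {
      depth--;
    }
  }
  return Math.max(0, max - 1);
}
```

A server-side guard rejects any document where this exceeds a fixed bound before executing a single resolver.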
You need defenses:
import depthLimit from "graphql-depth-limit";
import { createComplexityLimitRule } from "graphql-validation-complexity";
const server = new ApolloServer({
typeDefs,
resolvers,
validationRules: [
depthLimit(10),
createComplexityLimitRule(1000, {
scalarCost: 1,
objectCost: 10,
listFactor: 20,
}),
],
});
You also need to worry about:
- Introspection exposing your full schema to attackers in production
- Field suggestions ("Did you mean ...?") leaking field names even when introspection is off
- Aliases and batched operations multiplying the cost of a single request
- Rate limiting that counts requests, when one request can do the work of a thousand
// Disable introspection and suggestions in production
const server = new ApolloServer({
typeDefs,
resolvers,
introspection: process.env.NODE_ENV !== "production",
plugins: [
{
requestDidStart: async () => ({
didResolveOperation: async (ctx) => {
// Block introspection queries
if (ctx.operation.operation === "query") {
const hasIntrospection = ctx.document.definitions.some(
(def) =>
def.kind === "OperationDefinition" &&
def.selectionSet.selections.some(
(sel) =>
sel.kind === "Field" &&
sel.name.value.startsWith("__")
)
);
if (hasIntrospection) {
throw new GraphQLError("Introspection is disabled");
}
}
},
}),
},
],
});
REST doesn't have any of these problems because the server controls what data is fetched for every request. There's no query language for an attacker to abuse. The attack surface is exactly the set of endpoints you've defined, nothing more.
This doesn't mean REST APIs are secure by default -- you still need authentication, authorization, rate limiting, input validation, and everything else. But the attack surface is smaller and more predictable.
Let me build the same small API -- a blog with posts and comments -- in both approaches so you can see the full picture.
REST Implementation:
// routes/posts.ts
import { Router } from "express";
import { z } from "zod";
const router = Router();
const CreatePostSchema = z.object({
title: z.string().min(1).max(200),
body: z.string().min(1),
tags: z.array(z.string()).optional(),
});
const PostQuerySchema = z.object({
page: z.coerce.number().int().positive().default(1),
limit: z.coerce.number().int().min(1).max(100).default(20),
sortBy: z.enum(["createdAt", "title", "likes"]).default("createdAt"),
order: z.enum(["asc", "desc"]).default("desc"),
tag: z.string().optional(),
});
// GET /api/posts
router.get("/", async (req, res) => {
const query = PostQuerySchema.parse(req.query);
const skip = (query.page - 1) * query.limit;
const where = query.tag
? { tags: { some: { name: query.tag } }, isPublished: true }
: { isPublished: true };
const [posts, total] = await Promise.all([
db.post.findMany({
where,
select: {
id: true,
title: true,
excerpt: true,
author: { select: { id: true, name: true, avatarUrl: true } },
tags: { select: { name: true } },
likesCount: true,
commentsCount: true,
publishedAt: true,
},
orderBy: { [query.sortBy]: query.order },
skip,
take: query.limit,
}),
db.post.count({ where }),
]);
res.json({
data: posts,
pagination: {
page: query.page,
limit: query.limit,
total,
totalPages: Math.ceil(total / query.limit),
},
});
});
// GET /api/posts/:id
router.get("/:id", async (req, res) => {
const post = await db.post.findUnique({
where: { id: Number(req.params.id) },
include: {
author: { select: { id: true, name: true, avatarUrl: true, bio: true } },
tags: { select: { name: true } },
comments: {
where: { parentId: null },
include: {
author: { select: { id: true, name: true, avatarUrl: true } },
replies: {
include: {
author: { select: { id: true, name: true, avatarUrl: true } },
},
orderBy: { createdAt: "asc" },
take: 5,
},
},
orderBy: { createdAt: "desc" },
take: 20,
},
},
});
if (!post) return res.status(404).json({ error: "Post not found" });
if (!post.isPublished && post.authorId !== req.user?.id) {
return res.status(403).json({ error: "Access denied" });
}
res.set("Cache-Control", "public, max-age=60, stale-while-revalidate=30");
res.json(post);
});
// POST /api/posts
router.post("/", requireAuth, async (req, res) => {
const input = CreatePostSchema.parse(req.body);
const post = await db.post.create({
data: {
...input,
authorId: req.user.id,
tags: input.tags
? { connectOrCreate: input.tags.map((t) => ({
where: { name: t },
create: { name: t },
})) }
: undefined,
},
include: {
author: { select: { id: true, name: true, avatarUrl: true } },
tags: { select: { name: true } },
},
});
res.status(201).json(post);
});
export default router;
GraphQL Implementation:
# schema.graphql
type Query {
posts(
page: Int = 1
limit: Int = 20
sortBy: PostSortField = CREATED_AT
order: SortOrder = DESC
tag: String
): PostConnection!
post(id: ID!): Post
}
type Mutation {
createPost(input: CreatePostInput!): Post!
}
type Post {
id: ID!
title: String!
body: String!
excerpt: String!
author: User!
tags: [Tag!]!
comments(limit: Int = 20): [Comment!]!
likesCount: Int!
commentsCount: Int!
isPublished: Boolean!
publishedAt: DateTime
createdAt: DateTime!
}
type User {
id: ID!
name: String!
avatarUrl: String
bio: String
}
type Comment {
id: ID!
body: String!
author: User!
replies(limit: Int = 5): [Comment!]!
createdAt: DateTime!
}
type Tag {
name: String!
}
type PostConnection {
data: [Post!]!
pagination: Pagination!
}
type Pagination {
page: Int!
limit: Int!
total: Int!
totalPages: Int!
}
input CreatePostInput {
title: String!
body: String!
tags: [String!]
}
enum PostSortField {
CREATED_AT
TITLE
LIKES
}
enum SortOrder {
ASC
DESC
}
// resolvers.ts
import DataLoader from "dataloader";
// DataLoaders (created per request)
function createLoaders() {
return {
userLoader: new DataLoader(async (ids: readonly number[]) => {
const users = await db.user.findMany({
where: { id: { in: [...ids] } },
select: { id: true, name: true, avatarUrl: true, bio: true },
});
const map = new Map(users.map((u) => [u.id, u]));
return ids.map((id) => map.get(id) ?? null);
}),
commentsByPostLoader: new DataLoader(async (postIds: readonly number[]) => {
const comments = await db.comment.findMany({
where: { postId: { in: [...postIds] }, parentId: null },
include: { author: { select: { id: true, name: true, avatarUrl: true } } },
orderBy: { createdAt: "desc" },
});
const map = new Map<number, typeof comments>();
for (const c of comments) {
if (!map.has(c.postId)) map.set(c.postId, []);
map.get(c.postId)!.push(c);
}
return postIds.map((id) => (map.get(id) ?? []).slice(0, 20));
}),
repliesLoader: new DataLoader(async (parentIds: readonly number[]) => {
const replies = await db.comment.findMany({
where: { parentId: { in: [...parentIds] } },
include: { author: { select: { id: true, name: true, avatarUrl: true } } },
orderBy: { createdAt: "asc" },
});
const map = new Map<number, typeof replies>();
for (const r of replies) {
if (!map.has(r.parentId!)) map.set(r.parentId!, []);
map.get(r.parentId!)!.push(r);
}
return parentIds.map((id) => (map.get(id) ?? []).slice(0, 5));
}),
};
}
const resolvers = {
Query: {
posts: async (_, args) => {
const { page = 1, limit = 20, sortBy = "CREATED_AT", order = "DESC", tag } = args;
const skip = (page - 1) * limit;
const orderField = {
CREATED_AT: "createdAt",
TITLE: "title",
LIKES: "likesCount",
}[sortBy];
const where = tag
? { tags: { some: { name: tag } }, isPublished: true }
: { isPublished: true };
const [data, total] = await Promise.all([
db.post.findMany({
where,
orderBy: { [orderField]: order.toLowerCase() },
skip,
take: limit,
}),
db.post.count({ where }),
]);
return {
data,
pagination: { page, limit, total, totalPages: Math.ceil(total / limit) },
};
},
post: async (_, { id }, ctx) => {
const post = await db.post.findUnique({ where: { id: Number(id) } });
if (!post) return null;
if (!post.isPublished && post.authorId !== ctx.user?.id) {
throw new GraphQLError("Access denied", {
extensions: { code: "FORBIDDEN" },
});
}
return post;
},
},
Post: {
author: (post, _, ctx) => ctx.loaders.userLoader.load(post.authorId),
// Note: tags bypasses the loader pattern, so it fires one query per post in a
// list. A tagsByPostLoader would batch it the same way comments are batched.
tags: (post) => db.tag.findMany({
where: { posts: { some: { id: post.id } } },
}),
// The loader caps results at the schema default (20); slice to the requested limit.
comments: async (post, { limit = 20 }, ctx) =>
(await ctx.loaders.commentsByPostLoader.load(post.id)).slice(0, limit),
},
Comment: {
// Same pattern: the loader caps at the schema default (5), then slice to the limit.
replies: async (comment, { limit = 5 }, ctx) =>
(await ctx.loaders.repliesLoader.load(comment.id)).slice(0, limit),
author: (comment, _, ctx) => ctx.loaders.userLoader.load(comment.authorId),
},
Mutation: {
createPost: async (_, { input }, ctx) => {
if (!ctx.user) throw new GraphQLError("Unauthorized");
return db.post.create({
data: {
title: input.title,
body: input.body,
authorId: ctx.user.id,
tags: input.tags
? { connectOrCreate: input.tags.map((t) => ({
where: { name: t },
create: { name: t },
})) }
: undefined,
},
});
},
},
};
Look at both implementations carefully. The REST version is ~80 lines. The GraphQL version is ~150 lines (schema + resolvers + DataLoaders). The REST version has one file. The GraphQL version has three conceptual units (schema, resolvers, loaders) that need to stay in sync.
Both accomplish the same thing. Both are production-grade. The GraphQL version is more flexible for clients but requires more server-side code and infrastructure.
Let me be specific about when I've seen each approach fail in production.
REST breaks down when:
You have more than ~5 different client applications consuming the same API. A mobile app, a web app, a TV app, a watch app, and a third-party integration all need different shapes of the same data. You end up either with dozens of purpose-built endpoints or a clunky ?fields= system that reinvents GraphQL poorly.
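That `?fields=` pattern is worth seeing on the page. Here's a minimal sketch of a sparse-fieldset picker (the `pickFields` name and shape are mine, not from any real system). It works for one flat level, and then immediately runs out of road: nested selections like `stats.postsCount` force you to invent a grammar, which is exactly the point where you're rebuilding GraphQL without the tooling.

```typescript
type Json = Record<string, unknown>;

// Parse "?fields=name,avatarUrl" into a whitelist and project the response.
// No param means the full representation, matching common REST conventions.
export function pickFields(obj: Json, fieldsParam?: string): Json {
  if (!fieldsParam) return obj;
  const wanted = fieldsParam
    .split(",")
    .map((f) => f.trim())
    .filter(Boolean);
  const out: Json = {};
  for (const field of wanted) {
    // Flat keys only -- "stats.postsCount" silently matches nothing,
    // which is where this approach starts to crumble.
    if (field in obj) out[field] = obj[field];
  }
  return out;
}
```

Usage would look like `res.json(pickFields(user, req.query.fields as string))` inside a route handler.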
Your data is deeply relational and clients frequently need different depths of the graph. If component A needs user.posts and component B needs user.posts.comments.author, you either over-fetch for component A or under-fetch for component B.
Your frontend team and backend team are on different release cycles and don't communicate well. REST requires coordination for every new data requirement. GraphQL lets the frontend team self-serve.
GraphQL breaks down when:
Your data model is simple and flat. If every request maps neatly to one or two database tables, GraphQL's resolver architecture adds complexity without benefit.
You need aggressive caching. If your API's main job is serving mostly-static data to millions of users, REST + CDN will outperform GraphQL by an order of magnitude.
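To make the caching gap concrete, here's a sketch of what a REST GET response can carry for free (the `cacheHeaders` helper is mine). These are standard validators that every CDN and browser understands; a single `POST /graphql` endpoint gets none of this without extra machinery like persisted queries or GET tunneling, because the URL doesn't vary with the query.

```typescript
import { createHash } from "node:crypto";

// Build cache headers for a serialized response body. A strong ETag lets
// shared caches revalidate with If-None-Match instead of refetching.
export function cacheHeaders(body: string, maxAge = 300): Record<string, string> {
  const etag = `"${createHash("sha1").update(body).digest("hex")}"`;
  return {
    "Cache-Control": `public, max-age=${maxAge}, stale-while-revalidate=60`,
    ETag: etag,
  };
}
```

In an Express route this would be roughly `res.set(cacheHeaders(json)).send(json)`; the CDN does the rest.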
You're a small team. GraphQL has a higher operational overhead: DataLoaders, query complexity limits, persisted queries, monitoring that understands GraphQL responses, cache policies. For a team of 1-5 engineers, this overhead may not be justified.
Your API is public-facing. Public GraphQL APIs require extensive security hardening (depth limiting, complexity analysis, rate limiting per query complexity, introspection control). Public REST APIs have a smaller, more predictable attack surface.
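As an illustration of the depth-limiting piece of that hardening, here's a deliberately naive sketch (function names are mine). Production code should use an AST-based validation rule such as the graphql-depth-limit package; this toy version just counts brace nesting and would miscount braces inside string arguments, but it shows the shape of the check.

```typescript
// Estimate selection depth by tracking brace nesting in the raw query text.
export function queryDepth(query: string): number {
  let depth = 0;
  let max = 0;
  for (const ch of query) {
    if (ch === "{") max = Math.max(max, ++depth);
    else if (ch === "}") depth--;
  }
  return max;
}

// Reject queries nested deeper than the limit before executing them,
// which is what stops `user { posts { comments { author { posts ... }}}}` bombs.
export function rejectDeepQueries(query: string, limit = 8): void {
  const depth = queryDepth(query);
  if (depth > limit) {
    throw new Error(`Query depth ${depth} exceeds limit ${limit}`);
  }
}
```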
File operations are a core part of your API. As I discussed, GraphQL's file handling story is weak.
In the last two production systems I architected, I used both. Not as a compromise, but as a deliberate design choice.
/api/v1/* -> REST (public API, webhooks, file uploads, health checks)
/api/graphql -> GraphQL (internal frontend consumption)
/api/v1/stream/* -> SSE (real-time updates)
The public API is REST because external consumers expect REST, the caching story is better, and the security surface is smaller. The internal API is GraphQL because our frontend team moves faster with it, and we control both sides so the security concerns are manageable. Real-time is SSE because it's simpler than GraphQL subscriptions for our use case.
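Part of why SSE wins on simplicity: the wire format is plain text over a long-lived HTTP response. A minimal frame formatter looks like this (the `sseMessage` helper is my own sketch, not from the codebase above) -- compare that to standing up a GraphQL subscriptions transport with its own protocol handshake.

```typescript
// Format one Server-Sent Events frame. The browser's EventSource parses
// "event:" and "data:" fields; a blank line terminates the frame.
export function sseMessage(event: string, data: unknown): string {
  return `event: ${event}\ndata: ${JSON.stringify(data)}\n\n`;
}
```

On the server you'd write `res.write(sseMessage("post.created", post))` on a response with `Content-Type: text/event-stream`.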
This isn't an unusual pattern. GitHub has both a REST API and a GraphQL API. Shopify has both. The GraphQL Foundation itself doesn't claim GraphQL should replace REST everywhere.
// Express setup with hybrid API
const app = express();
// REST endpoints -- public, cacheable, simple
app.use("/api/v1/posts", postsRouter);
app.use("/api/v1/users", usersRouter);
app.use("/api/v1/uploads", uploadsRouter);
app.use("/api/v1/webhooks", webhooksRouter);
// GraphQL endpoint -- internal, flexible, typed
app.use(
"/api/graphql",
requireInternalAuth, // Only our own frontends
graphqlHTTP({
schema,
graphiql: process.env.NODE_ENV === "development",
validationRules: [depthLimit(8), complexityLimit(500)],
})
);
// SSE endpoints -- real-time
app.use("/api/v1/stream", sseRouter);
After years of doing this, here's how I actually decide between REST and GraphQL for a new project. Not "it depends" -- actual criteria.
Use REST when:
Your API is public-facing or consumed by third parties who expect REST.
Your data model is simple and flat, and requests map cleanly to one or two tables.
Caching matters: mostly-static data behind a CDN is REST's home turf.
File uploads, webhooks, or health checks are first-class concerns.
Use GraphQL when:
Many client applications need different shapes of the same deeply relational data.
Your frontend team needs to self-serve new data requirements without backend coordination.
You control both client and server, so the security surface is manageable.
Your team is large enough to absorb the operational overhead: DataLoaders, complexity limits, GraphQL-aware monitoring.
Use both when:
You serve external consumers and your own frontends, and their needs diverge -- REST for the public surface, GraphQL for internal consumption.
Use neither (use tRPC) when:
Your client and server are both TypeScript in one repo and you want end-to-end type safety without maintaining a schema or a query language at all.
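To make the tRPC option concrete without pulling in the library: this isn't real tRPC code, just a sketch of the idea it's built on. When client and server share one TypeScript codebase, an inferred router type is the contract -- no schema file, no codegen, just type inference.

```typescript
// An illustrative router object; in real tRPC you'd build this with
// initTRPC and procedures, but the type-inference payoff is the same.
const router = {
  post: {
    byId: async (id: number) => ({ id, title: `Post ${id}` }),
    create: async (input: { title: string; body: string }) => ({
      id: 1,
      ...input,
    }),
  },
};

// The client-side type is derived, not duplicated. Renaming a field on the
// server breaks client call sites at compile time, not in production.
export type AppRouter = typeof router;

async function demo(): Promise<string> {
  const post = await router.post.byId(42);
  return post.title; // typed as string, with full autocomplete
}
```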
One thing I want to emphasize: the technology choice matters less than the execution. A well-designed REST API will outperform a poorly designed GraphQL API in every dimension. A well-tuned GraphQL API with proper DataLoaders, caching, and security will serve your frontend team better than a REST API with inconsistent response shapes and poor documentation.
The worst outcome is choosing GraphQL because it sounds modern and then not investing in the operational complexity it demands. The second worst is sticking with REST out of familiarity when your application's data access patterns are genuinely screaming for a graph query language.
Be honest about your actual needs, your team's actual capabilities, and your application's actual data model. The right choice follows from that honesty, not from blog posts -- including this one.