We've built APIs for e-commerce platforms, event management systems, internal tools, and mobile applications. If there's one thing we've learned, it's this: the best API style depends on who's consuming it and how.
This isn't an academic comparison. It's a practical guide based on real decisions we've made across dozens of projects — when we chose REST, when we went with GraphQL, when we reached for tRPC, and when we regretted our choice.
## REST: the reliable default

REST is our default for most projects, and for good reason. It's well-understood, works with every HTTP client ever made, and the tooling ecosystem is massive.

But "REST" doesn't mean "throw some JSON endpoints together." Good REST API design follows principles that make your API predictable, discoverable, and maintainable.

### Our REST conventions

**Resource-oriented URLs.** Every URL represents a resource, not an action:
```
# Good
GET    /api/v1/orders
GET    /api/v1/orders/123
POST   /api/v1/orders
PATCH  /api/v1/orders/123
DELETE /api/v1/orders/123

# Bad
GET    /api/v1/getOrders
POST   /api/v1/createOrder
POST   /api/v1/updateOrder/123
POST   /api/v1/deleteOrder/123
```
**Consistent response envelopes.** Every response follows the same structure:
```jsonc
// Success response
{
  "data": {
    "id": "ord_123",
    "status": "confirmed",
    "items": [...],
    "total": 89.99
  },
  "meta": {
    "requestId": "req_abc123"
  }
}

// Error response
{
  "error": {
    "code": "VALIDATION_ERROR",
    "message": "Invalid order data",
    "details": [
      {
        "field": "items",
        "message": "At least one item is required"
      }
    ]
  },
  "meta": {
    "requestId": "req_abc124"
  }
}
```
The `requestId` in every response is crucial for debugging. When a client reports an issue, they give us the request ID and we can trace the entire flow in our logs.
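A minimal sketch of envelope helpers that attach the request ID (the `ok`/`fail` helper names and the generated-ID format are our illustration, not a fixed convention):

```typescript
import { randomUUID } from "node:crypto";

// Generate a short, log-friendly ID when the client didn't supply one.
const newRequestId = () => `req_${randomUUID().slice(0, 8)}`;

// Success envelope: { data, meta: { requestId } }
export function ok<T>(data: T, requestId: string = newRequestId()) {
  return { data, meta: { requestId } };
}

// Error envelope: { error: { code, message, details }, meta: { requestId } }
export function fail(
  code: string,
  message: string,
  details: { field: string; message: string }[] = [],
  requestId: string = newRequestId()
) {
  return { error: { code, message, details }, meta: { requestId } };
}
```

Returning every response through two helpers like these is what keeps the envelope consistent across dozens of endpoints.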
**Pagination that works.** We use cursor-based pagination for any list that might grow:
```
// Request
GET /api/v1/orders?cursor=eyJpZCI6MTIzfQ&limit=20

// Response
{
  "data": [...],
  "pagination": {
    "hasMore": true,
    "nextCursor": "eyJpZCI6MTQzfQ",
    "total": 1284
  }
}
```
Cursor-based pagination is more reliable than offset-based (`?page=3&limit=20`) because it handles concurrent inserts and deletes correctly. With offset pagination, a record inserted while a user is browsing page 2 shifts every later row down, so page 3 repeats an item they already saw (and deletes make items silently disappear between pages).
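The cursors in the example above are base64url-encoded JSON carrying the last-seen primary key. A sketch of how such opaque cursors can be produced and consumed (keeping them opaque lets the server change the encoding later without breaking clients):

```typescript
// Encode the last-seen ID as an opaque, URL-safe cursor string.
export function encodeCursor(lastId: number): string {
  return Buffer.from(JSON.stringify({ id: lastId })).toString("base64url");
}

// Decode a cursor back into the key to resume from.
export function decodeCursor(cursor: string): { id: number } {
  return JSON.parse(Buffer.from(cursor, "base64url").toString("utf8"));
}
```

The server then queries `WHERE id > decoded.id ORDER BY id LIMIT n` (or the equivalent ORM call), which stays correct no matter how many rows were inserted or deleted since the previous page.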
### Filtering, sorting, and field selection
For complex list endpoints, we support query parameters for filtering and sorting:
```
GET /api/v1/orders?status=confirmed&createdAfter=2025-01-01&sort=-createdAt&fields=id,status,total
```
The `fields` parameter is a lightweight alternative to GraphQL's field selection. It doesn't solve the nested data problem, but for flat resources it reduces payload size significantly.
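Applying a `fields` parameter server-side can be as simple as the sketch below (the helper name is ours; the important detail is whitelisting which fields are selectable, so clients can't use it to reach internal columns):

```typescript
// Return only the requested fields from a flat resource object.
// Unknown or unauthorized field names are silently ignored.
export function pickFields<T extends Record<string, unknown>>(
  resource: T,
  fieldsParam: string | null,
  allowed: ReadonlyArray<keyof T & string>
): Partial<T> {
  if (!fieldsParam) return resource; // no ?fields= → full resource
  const requested = fieldsParam.split(",").map((f) => f.trim());
  const result: Partial<T> = {};
  for (const field of requested) {
    if ((allowed as readonly string[]).includes(field)) {
      result[field as keyof T] = resource[field as keyof T];
    }
  }
  return result;
}
```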
## GraphQL: when the client drives the data

GraphQL shines when you have multiple clients with different data needs consuming the same backend. A mobile app that needs a compact order summary, a web dashboard that needs full order details with customer history, and an admin panel that needs everything — GraphQL lets each client request exactly what it needs.

### Where GraphQL has genuinely helped us

**Project: multi-platform event management.** The same backend served a mobile app (React Native), a web dashboard (Next.js), and a kiosk interface (embedded Chromium). Each screen needed different slices of the same data:
```graphql
# Mobile: compact event list
query MobileEvents {
  events(upcoming: true, limit: 10) {
    id
    title
    startDate
    venue {
      name
    }
  }
}

# Dashboard: full event details with analytics
query DashboardEvent($id: ID!) {
  event(id: $id) {
    id
    title
    description
    startDate
    endDate
    venue {
      name
      address
      capacity
    }
    attendees {
      total
      checkedIn
    }
    analytics {
      registrationRate
      peakHour
      satisfactionScore
    }
  }
}
```
Without GraphQL, we would have built separate REST endpoints for each view or used ?fields= parameters that quickly become unwieldy for nested data.
### Setting up a GraphQL server the right way
We use code-first schema generation with libraries like Pothos (TypeScript) rather than writing SDL files by hand. Type safety from database to API response:
```typescript
import SchemaBuilder from "@pothos/core";
import PrismaPlugin from "@pothos/plugin-prisma";
import { prisma } from "./db";

const builder = new SchemaBuilder({
  plugins: [PrismaPlugin],
  prisma: { client: prisma },
});

builder.prismaObject("Event", {
  fields: (t) => ({
    id: t.exposeID("id"),
    title: t.exposeString("title"),
    startDate: t.expose("startDate", { type: "DateTime" }),
    venue: t.relation("venue"),
    attendeeCount: t.int({
      resolve: async (event) =>
        prisma.registration.count({
          where: { eventId: event.id },
        }),
    }),
  }),
});
```
### The N+1 problem is real
The biggest operational issue with GraphQL is the N+1 query problem. When a client requests a list of events with their venues, a naive implementation runs one query for the events and then one query per event to fetch its venue.
**Dataloaders are mandatory.** We use them on every GraphQL project:
```typescript
import DataLoader from "dataloader";
import { prisma } from "./db";

const venueLoader = new DataLoader(async (venueIds: readonly string[]) => {
  const venues = await prisma.venue.findMany({
    where: { id: { in: [...venueIds] } },
  });
  const venueMap = new Map(venues.map((v) => [v.id, v]));
  // DataLoader requires results in the same order as the input keys.
  return venueIds.map((id) => venueMap.get(id) ?? null);
});
```
Without dataloaders, a GraphQL API with relational data will be slower than the equivalent REST endpoints. This is the number one mistake we see in GraphQL implementations.
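To make the batching behavior concrete, here is a hand-rolled sketch of what DataLoader does under the hood: collect every key requested during the current tick of the event loop, then resolve them all with a single batched fetch. This is a simplified illustration (no caching or deduplication), not a substitute for the library:

```typescript
// Minimal batching loader: keys loaded in the same tick are fetched
// with one call to batchFn instead of one call per key (the N+1 fix).
export function createBatchLoader<K, V>(
  batchFn: (keys: K[]) => Promise<Map<K, V>>
) {
  let pending: { key: K; resolve: (v: V | null) => void }[] = [];

  return function load(key: K): Promise<V | null> {
    return new Promise((resolve) => {
      pending.push({ key, resolve });
      // Schedule a single flush when the first key of a batch arrives.
      if (pending.length === 1) {
        queueMicrotask(async () => {
          const batch = pending;
          pending = []; // subsequent loads start a new batch
          const results = await batchFn(batch.map((p) => p.key));
          for (const { key, resolve } of batch) {
            resolve(results.get(key) ?? null);
          }
        });
      }
    });
  };
}
```

In a GraphQL server the equivalent loader is created per request (in the context factory), so one user's batch never leaks cached rows to another.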
## tRPC: the full-stack TypeScript shortcut
For projects where the frontend and backend share a TypeScript codebase — particularly Next.js monorepos — tRPC has become our preferred choice over both REST and GraphQL.
Why? End-to-end type safety with zero code generation. You define a procedure on the server and call it on the client with full autocomplete and type checking:
```typescript
// Server: define the router
import { router, publicProcedure, protectedProcedure } from "./trpc";
import { z } from "zod";
import { orderCreateSchema } from "./schemas"; // Zod schema defined elsewhere

export const orderRouter = router({
  list: protectedProcedure
    .input(
      z.object({
        cursor: z.string().optional(),
        limit: z.number().min(1).max(50).default(20),
        status: z.enum(["pending", "confirmed", "shipped"]).optional(),
      })
    )
    .query(async ({ ctx, input }) => {
      const orders = await ctx.db.order.findMany({
        where: {
          userId: ctx.user.id,
          ...(input.status && { status: input.status }),
        },
        take: input.limit + 1, // fetch one extra row to detect another page
        cursor: input.cursor ? { id: input.cursor } : undefined,
        skip: input.cursor ? 1 : undefined, // Prisma cursors are inclusive; skip the cursor row itself
        orderBy: { createdAt: "desc" },
      });
      const hasMore = orders.length > input.limit;
      return {
        orders: orders.slice(0, input.limit),
        nextCursor: hasMore ? orders[input.limit - 1].id : null,
      };
    }),

  create: protectedProcedure
    .input(orderCreateSchema)
    .mutation(async ({ ctx, input }) => {
      return ctx.db.order.create({
        data: { ...input, userId: ctx.user.id },
      });
    }),
});
```

```typescript
// Client: fully typed, no code generation needed
const { data, isLoading } = trpc.order.list.useQuery({
  status: "confirmed",
  limit: 10,
});
// data is fully typed: { orders: Order[], nextCursor: string | null }
```
**When we don't use tRPC:** when the API needs to be consumed by external clients, mobile apps in Swift/Kotlin, or any non-TypeScript consumer. tRPC is a TypeScript-to-TypeScript protocol.
## The comparison table
Here's how we decide:
| Factor | REST | GraphQL | tRPC |
|---|---|---|---|
| Multiple client types | Works, may need variations | Excellent | TypeScript clients only |
| External/public API | Best choice | Good | Not suitable |
| Type safety | Needs code gen (OpenAPI) | Needs code gen | Built-in |
| Learning curve | Low | Medium | Low (if you know TS) |
| Caching | HTTP caching works great | Requires client-side cache | React Query built-in |
| File uploads | Native support | Awkward (multipart spec) | Supported |
| Real-time | WebSockets / SSE | Subscriptions | Subscriptions |
| Tooling | Massive ecosystem | Growing ecosystem | TypeScript ecosystem |
| Over-fetching | Common problem | Solved by design | Solved by design |
| Team familiarity | Universal | Requires training | Requires TS fluency |
## API versioning: keep it simple
We've tried URL versioning (/api/v1/, /api/v2/), header versioning, and content negotiation. Our recommendation:
**Use URL versioning.** It's the most explicit, most debuggable, and easiest to understand:
```
/api/v1/orders   # Current stable version
/api/v2/orders   # New version with breaking changes
```
We maintain at most two versions simultaneously. When v2 is stable and clients have migrated, v1 gets a sunset date and eventually returns `410 Gone`.
**Version your APIs from day one.** Adding versioning later to an unversioned API is painful for everyone.
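The version lifecycle described above can be sketched as a tiny dispatch layer (framework-free, with hypothetical handler names; real projects would mount versioned routers in Express, Fastify, or similar):

```typescript
// Each version maps paths to handlers; retired versions are marked "gone".
type Routes = Record<string, () => unknown>;

const versions: Record<string, Routes | "gone"> = {
  v1: "gone", // sunset after clients migrated to v2
  v2: { "/orders": () => ({ data: [] }) },
};

// Resolve /api/<version>/<path> to a status and optional body.
export function route(path: string): { status: number; body?: unknown } {
  const match = path.match(/^\/api\/(v\d+)(\/.*)$/);
  if (!match) return { status: 404 };
  const [, version, rest] = match;
  const table = versions[version];
  if (!table) return { status: 404 };
  if (table === "gone") return { status: 410 }; // 410 Gone for retired versions
  const handler = table[rest];
  return handler ? { status: 200, body: handler() } : { status: 404 };
}
```

The useful property: a sunset version answers with a deliberate `410` rather than a confusing `404`, so stragglers get a clear migration signal.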
## Authentication patterns
Every API we build uses one of two authentication strategies:
**For server-to-server:** API keys with scoped permissions, rotated quarterly, transmitted via the `Authorization` header.

**For user-facing:** short-lived JWTs (15 minutes) with refresh tokens (7 days). The JWT contains minimal claims — just the user ID and roles. We fetch full user data from the database on each request rather than stuffing it into the token.
```typescript
// Middleware for JWT validation
import { verifyToken } from "./auth";
import { db } from "./db";

export async function authMiddleware(req: Request) {
  const token = req.headers.get("Authorization")?.replace("Bearer ", "");
  if (!token) {
    return { error: "UNAUTHORIZED", status: 401 };
  }
  try {
    const payload = await verifyToken(token);
    // Fetch fresh user data rather than trusting stale token claims.
    const user = await db.user.findUnique({
      where: { id: payload.sub },
      select: { id: true, email: true, role: true },
    });
    if (!user) {
      return { error: "USER_NOT_FOUND", status: 401 };
    }
    return { user };
  } catch {
    return { error: "INVALID_TOKEN", status: 401 };
  }
}
```
## Rate limiting: protect your API
Every public API needs rate limiting. We implement it at two levels:

- **Infrastructure level:** AWS WAF or API Gateway throttling for broad protection against abuse.
- **Application level:** a token bucket algorithm per API key or user, with different tiers:
```typescript
const rateLimits = {
  free: { requests: 100, window: "1h" },
  pro: { requests: 1000, window: "1h" },
  enterprise: { requests: 10000, window: "1h" },
};
```
Always return `429 Too Many Requests` with a `Retry-After` header. Clients need to know when they can try again.
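A minimal in-memory token bucket matching the tiers above (a single-process sketch; production versions keep the buckets in Redis so limits hold across instances):

```typescript
interface Bucket {
  tokens: number;
  lastRefill: number; // ms timestamp of the last lazy refill
}

// capacity requests per windowMs, refilled continuously and lazily.
export function createRateLimiter(capacity: number, windowMs: number) {
  const buckets = new Map<string, Bucket>();
  const refillPerMs = capacity / windowMs;

  return function check(key: string, now: number = Date.now()) {
    const b = buckets.get(key) ?? { tokens: capacity, lastRefill: now };
    // Top up based on elapsed time, capped at capacity.
    b.tokens = Math.min(capacity, b.tokens + (now - b.lastRefill) * refillPerMs);
    b.lastRefill = now;
    if (b.tokens < 1) {
      buckets.set(key, b);
      // Seconds until one token refills — suitable for a Retry-After header.
      const retryAfter = Math.ceil((1 - b.tokens) / refillPerMs / 1000);
      return { allowed: false as const, retryAfter };
    }
    b.tokens -= 1;
    buckets.set(key, b);
    return { allowed: true as const };
  };
}
```

A denied `check` result carries the `retryAfter` value to put straight into the `Retry-After` header of the `429` response.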
## Lessons from production
After building and maintaining APIs across many projects, these are the patterns we always apply:
- **Log every request with a correlation ID.** When something breaks, you need to trace the full request lifecycle across services.
- **Use input validation at the boundary.** Zod for TypeScript, Pydantic for Python. Validate everything that comes in from the outside world before it touches your business logic.
- **Design for backward compatibility.** Adding fields is fine. Removing or renaming fields is a breaking change. Plan for it.
- **Write contract tests.** Consumer-driven contract tests (using tools like Pact) catch breaking changes before they reach production.
- **Document with OpenAPI.** Even for internal APIs. Future you and future teammates will thank present you.
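To illustrate validation at the boundary without pulling in a library, here's a dependency-free sketch that produces the same field-level `details` shape as the error envelope earlier (the function name and rules are our illustration; in practice we reach for Zod):

```typescript
type Issue = { field: string; message: string };

// Validate untrusted input before it touches business logic,
// returning one issue per invalid field (empty array = valid).
export function validateOrderInput(input: unknown): Issue[] {
  if (typeof input !== "object" || input === null) {
    return [{ field: "", message: "Body must be a JSON object" }];
  }
  const body = input as Record<string, unknown>;
  const issues: Issue[] = [];
  if (!Array.isArray(body.items) || body.items.length === 0) {
    issues.push({ field: "items", message: "At least one item is required" });
  }
  if (typeof body.total !== "number" || body.total < 0) {
    issues.push({ field: "total", message: "Total must be a non-negative number" });
  }
  return issues;
}
```

A non-empty result maps directly onto the `VALIDATION_ERROR` envelope, so clients always get the same error shape regardless of which field failed.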
Building an API for your product or platform? Let's talk about designing an API architecture that serves your users and your team.