CQRS Without the Complexity — A Practical Implementation Guide
The Overengineered Version
Most articles about CQRS (Command Query Responsibility Segregation) make it sound like you need event sourcing, Apache Kafka, separate read/write databases, eventual consistency, and a team of distributed systems engineers.
That's the enterprise version. It's appropriate for banks and stock exchanges. It's wildly inappropriate for your B2B SaaS platform with 500 users.
But the core idea of CQRS is genuinely useful, and you can implement it without any of that complexity.
What CQRS Actually Means
At its simplest, CQRS means: use different models for reading data and writing data. That's it.
Traditional CRUD:
Create/Read/Update/Delete all use the same model
→ Same database tables, same queries, same data shapes
CQRS:
Commands (writes): Optimized for validation and business rules
Queries (reads): Optimized for the data shape the UI needs
→ Can use the same database, just different access patterns
You don't need separate databases. You don't need event sourcing. You don't need a message bus. You just need to stop pretending that reading data and writing data have the same requirements.
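To make the "different models, same store" idea concrete, here is a minimal, self-contained sketch using in-memory maps in place of a database. Every name in it (`WriteOrder`, `OrderSummaryRow`, `placeOrder`, `listSummaries`) is illustrative, not from any library:

```typescript
// Write model: normalized, optimized for integrity and validation.
interface WriteOrder {
  id: string;
  customerId: string;
  itemIds: string[];
}

// Read model: denormalized, shaped exactly as the UI wants it.
interface OrderSummaryRow {
  orderId: string;
  customerName: string;
  itemCount: number;
}

// One "database", two tables with different shapes.
const orders = new Map<string, WriteOrder>();
const summaries = new Map<string, OrderSummaryRow>();

// Command: enforces a business rule, then writes BOTH models together.
function placeOrder(
  id: string,
  customerId: string,
  customerName: string,
  itemIds: string[]
): void {
  if (itemIds.length === 0) throw new Error("Order must contain items");
  orders.set(id, { id, customerId, itemIds });
  summaries.set(id, { orderId: id, customerName, itemCount: itemIds.length });
}

// Query: touches only the denormalized model. No joins, no rules.
function listSummaries(): OrderSummaryRow[] {
  return [...summaries.values()];
}

placeOrder("o1", "c1", "Acme Corp", ["p1", "p2"]);
console.log(listSummaries());
```

The point of the sketch is the asymmetry: the command knows about validation and both shapes; the query knows about one shape and nothing else.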
The Problem CQRS Solves
Here's a real example. A SaaS dashboard needs to show an order summary:
```typescript
// What the UI needs (read model):
interface OrderSummary {
  orderId: string;
  customerName: string;
  totalAmount: number;
  itemCount: number;
  status: string;
  lastUpdated: string;
  shippingAddress: string;
}

// What the database stores (write model):
// orders table → order_items table → customers table →
// addresses table → shipping_records table → status_history table
// (6 joins to build one UI card)
```

Without CQRS, you either:
- Do 6 JOINs on every page load — slow, expensive, doesn't scale
- Denormalize everything — fast reads, but writes become a nightmare of keeping denormalized data in sync
- Cache aggressively — adds complexity, stale data bugs, cache invalidation headaches
With CQRS, you separate the concerns cleanly.
Pragmatic CQRS: Same Database, Different Access Patterns
Step 1: Separate Your Commands
Commands handle writes with full validation and business logic:
```typescript
// commands/createOrder.ts
export async function createOrder(input: CreateOrderInput) {
  // Full validation
  const customer = await db.customers.findOrThrow(input.customerId);
  const items = await validateItems(input.items);
  const total = calculateTotal(items, input.discount);

  // Business rules
  if (customer.creditLimit && total > customer.creditLimit) {
    throw new BusinessError("Order exceeds credit limit");
  }

  // Write to normalized tables (optimized for data integrity)
  const order = await db.transaction(async (tx) => {
    const order = await tx.orders.create({
      customerId: customer.id,
      status: "pending",
      total,
    });

    await tx.orderItems.createMany(
      items.map((item) => ({
        orderId: order.id,
        productId: item.productId,
        quantity: item.quantity,
        price: item.price,
      }))
    );

    // Update the read model (materialized view or denormalized table)
    await tx.orderSummaries.upsert({
      orderId: order.id,
      customerName: customer.name,
      totalAmount: total,
      itemCount: items.length,
      status: "pending",
      lastUpdated: new Date(),
    });

    return order;
  });

  return order;
}
```

Step 2: Separate Your Queries
Queries read from optimized structures — no joins, no business logic:
```typescript
// queries/getOrderSummaries.ts
export async function getOrderSummaries(filters: OrderFilters) {
  // Read from the denormalized read model — single table, no joins
  return db.orderSummaries.findMany({
    where: {
      status: filters.status,
      customerName: filters.search
        ? { contains: filters.search, mode: "insensitive" }
        : undefined,
    },
    orderBy: { lastUpdated: "desc" },
    take: filters.limit || 50,
  });
}

// This query hits ONE table. No joins. Sub-millisecond response.
```

Step 3: Keep Read Models in Sync
The simplest approach — update the read model in the same transaction as the write:
```typescript
// This is "synchronous CQRS" — no event bus needed
async function updateOrderStatus(orderId: string, status: string) {
  await db.transaction(async (tx) => {
    // Update the write model (normalized)
    await tx.orders.update({ where: { id: orderId }, data: { status } });

    // Update the read model (denormalized) in the SAME transaction
    await tx.orderSummaries.update({
      where: { orderId },
      data: { status, lastUpdated: new Date() },
    });
  });
}
```

No eventual consistency. No event sourcing. No message bus. The read model is always in sync because it's updated in the same transaction.
When to Add Complexity
Start simple and add layers only when you hit specific problems:
| Problem | Solution | Complexity |
|---|---|---|
| Read model updates are slow | Use database triggers instead of application code | Low |
| Need multiple read models | Add a lightweight event emitter (in-process) | Medium |
| Read/write load is vastly different | Use read replicas for queries | Medium |
| Need audit trail of all changes | Add event sourcing for specific aggregates | High |
| Need real-time updates across services | Add a message bus (Kafka, RabbitMQ) | High |
Most applications never need to go beyond "Medium." If you're reaching for Kafka before you've tried database triggers, you're overengineering.
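The "Medium" row about multiple read models can be sketched with nothing more than Node's built-in `EventEmitter`: one in-process emitter fans each write out to several projections, with no broker and no network hop. The event and projection names below (`orderPlaced`, `summaryByOrder`, `countByCustomer`) are hypothetical, chosen for illustration:

```typescript
import { EventEmitter } from "node:events";

interface OrderPlaced {
  orderId: string;
  customerId: string;
  total: number;
}

const bus = new EventEmitter();

// Read model 1: summary per order.
const summaryByOrder = new Map<string, OrderPlaced>();
bus.on("orderPlaced", (e: OrderPlaced) => summaryByOrder.set(e.orderId, e));

// Read model 2: order count per customer.
const countByCustomer = new Map<string, number>();
bus.on("orderPlaced", (e: OrderPlaced) =>
  countByCustomer.set(e.customerId, (countByCustomer.get(e.customerId) ?? 0) + 1)
);

// The command writes its own tables, then emits once; each projection
// updates itself independently.
function placeOrder(e: OrderPlaced): void {
  // ... write to normalized tables first (omitted in this sketch) ...
  bus.emit("orderPlaced", e);
}

placeOrder({ orderId: "o1", customerId: "c1", total: 99 });
placeOrder({ orderId: "o2", customerId: "c1", total: 10 });
```

Note that `EventEmitter` invokes listeners synchronously, so reads stay consistent after `placeOrder` returns; moving the handlers onto a queue or a broker is exactly the step that reintroduces eventual consistency, which is why it belongs later in the table.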
The Results
A client's dashboard went from 3-second load times to 50ms after implementing pragmatic CQRS. The write path got simpler because commands could focus on validation without worrying about query optimization. The read path got faster because queries hit pre-computed data.
Total implementation time: 2 weeks. No new infrastructure. Same PostgreSQL database. Just better separation of concerns.
That's the real lesson of CQRS. It's not about technology — it's about recognizing that reads and writes are different problems that deserve different solutions. Start simple. Add complexity only when the data forces you to.