CQRS Without the Complexity — A Practical Implementation Guide

February 27, 2026 · ScaledByDesign
Tags: cqrs, architecture, database, api-design, patterns

The Overengineered Version

Most articles about CQRS (Command Query Responsibility Segregation) make it sound like you need event sourcing, Apache Kafka, separate read/write databases, eventual consistency, and a team of distributed systems engineers.

That's the enterprise version. It's appropriate for banks and stock exchanges. It's wildly inappropriate for your B2B SaaS platform with 500 users.

But the core idea of CQRS is genuinely useful, and you can implement it without any of that complexity.

What CQRS Actually Means

At its simplest, CQRS means: use different models for reading data and writing data. That's it.

Traditional CRUD:
  Create/Read/Update/Delete all use the same model
  → Same database tables, same queries, same data shapes

CQRS:
  Commands (writes): Optimized for validation and business rules
  Queries (reads): Optimized for the data shape the UI needs
  → Can use the same database, just different access patterns

You don't need separate databases. You don't need event sourcing. You don't need a message bus. You just need to stop pretending that reading data and writing data have the same requirements.
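The split can be sketched end to end in a few lines. This is a minimal, self-contained TypeScript example with hypothetical in-memory stores (`orders`, `orderSummaries`) standing in for the real database tables:

```typescript
// Hypothetical in-memory stores: a normalized write model
// and a denormalized read model.
const orders: { id: string; customerId: string; total: number }[] = [];
const orderSummaries = new Map<string, { orderId: string; totalAmount: number }>();

// Command: validates, enforces business rules, writes to both models.
function placeOrder(id: string, customerId: string, total: number): void {
  if (total <= 0) throw new Error("Order total must be positive");
  orders.push({ id, customerId, total });                      // write model
  orderSummaries.set(id, { orderId: id, totalAmount: total }); // read model
}

// Query: returns the pre-shaped summary — no joins, no business logic.
function getOrderSummary(id: string) {
  return orderSummaries.get(id);
}

placeOrder("o1", "c1", 120);
getOrderSummary("o1"); // the read model already has the shape the UI needs
```

The point is the asymmetry: the command carries all the rules, while the query is a dumb lookup against data that was shaped for reading at write time.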

The Problem CQRS Solves

Here's a real example. A SaaS dashboard needs to show an order summary:

// What the UI needs (read model):
interface OrderSummary {
  orderId: string;
  customerName: string;
  totalAmount: number;
  itemCount: number;
  status: string;
  lastUpdated: string;
  shippingAddress: string;
}
 
// What the database stores (write model):
// orders table → order_items table → customers table →
// addresses table → shipping_records table → status_history table
// (6 joins to build one UI card)

Without CQRS, you either:

  1. Do 6 JOINs on every page load — slow, expensive, doesn't scale
  2. Denormalize everything — fast reads, but writes become a nightmare of keeping denormalized data in sync
  3. Cache aggressively — adds complexity, stale data bugs, cache invalidation headaches

With CQRS, you separate the concerns cleanly.

Pragmatic CQRS: Same Database, Different Access Patterns

Step 1: Separate Your Commands

Commands handle writes with full validation and business logic:

// commands/createOrder.ts
export async function createOrder(input: CreateOrderInput) {
  // Full validation
  const customer = await db.customers.findOrThrow(input.customerId);
  const items = await validateItems(input.items);
  const total = calculateTotal(items, input.discount);
 
  // Business rules
  if (customer.creditLimit && total > customer.creditLimit) {
    throw new BusinessError("Order exceeds credit limit");
  }
 
  // Write to normalized tables (optimized for data integrity)
  const order = await db.transaction(async (tx) => {
    const order = await tx.orders.create({
      customerId: customer.id,
      status: "pending",
      total,
    });
 
    await tx.orderItems.createMany(
      items.map((item) => ({
        orderId: order.id,
        productId: item.productId,
        quantity: item.quantity,
        price: item.price,
      }))
    );
 
    // Update the read model (materialized view or denormalized table)
    await tx.orderSummaries.upsert({
      orderId: order.id,
      customerName: customer.name,
      totalAmount: total,
      itemCount: items.length,
      status: "pending",
      lastUpdated: new Date(),
    });
 
    return order;
  });
 
  return order;
}

Step 2: Separate Your Queries

Queries read from optimized structures — no joins, no business logic:

// queries/getOrderSummaries.ts
export async function getOrderSummaries(filters: OrderFilters) {
  // Read from the denormalized read model — single table, no joins
  return db.orderSummaries.findMany({
    where: {
      status: filters.status,
      customerName: filters.search
        ? { contains: filters.search, mode: "insensitive" }
        : undefined,
    },
    orderBy: { lastUpdated: "desc" },
    take: filters.limit || 50,
  });
}
// This query hits ONE table. No joins. Sub-millisecond response.

Step 3: Keep Read Models in Sync

The simplest approach — update the read model in the same transaction as the write:

// This is "synchronous CQRS" — no event bus needed
async function updateOrderStatus(orderId: string, status: string) {
  await db.transaction(async (tx) => {
    // Update the write model (normalized)
    await tx.orders.update({ where: { id: orderId }, data: { status } });
 
    // Update the read model (denormalized) in the SAME transaction
    await tx.orderSummaries.update({
      where: { orderId },
      data: { status, lastUpdated: new Date() },
    });
  });
}

No eventual consistency. No event sourcing. No message bus. The read model is always in sync because it's updated in the same transaction.

When to Add Complexity

Start simple and add layers only when you hit specific problems:

| Problem | Solution | Complexity |
| --- | --- | --- |
| Read model updates are slow | Use database triggers instead of application code | Low |
| Need multiple read models | Add a lightweight event emitter (in-process) | Medium |
| Read/write load is vastly different | Use read replicas for queries | Medium |
| Need an audit trail of all changes | Add event sourcing for specific aggregates | High |
| Need real-time updates across services | Add a message bus (Kafka, RabbitMQ) | High |

Most applications never need to go beyond "Medium." If you're reaching for Kafka before you've tried database triggers, you're overengineering.
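The "Medium" step of multiple read models doesn't require a broker: Node's built-in `EventEmitter` can fan one write out to several projections in-process. A sketch, with two hypothetical in-memory projections standing in for real read tables:

```typescript
import { EventEmitter } from "node:events";

type OrderStatusChanged = { orderId: string; status: string };

const bus = new EventEmitter();

// Two hypothetical read models kept in memory for illustration.
const statusByOrder = new Map<string, string>();
const countsByStatus = new Map<string, number>();

// Each projection subscribes independently — adding a new read model
// means adding a listener, not touching the command.
bus.on("orderStatusChanged", (e: OrderStatusChanged) => {
  statusByOrder.set(e.orderId, e.status);
});
bus.on("orderStatusChanged", (e: OrderStatusChanged) => {
  countsByStatus.set(e.status, (countsByStatus.get(e.status) ?? 0) + 1);
});

// The command emits after its write succeeds. EventEmitter listeners
// run synchronously, so in-process reads see the update immediately.
function markShipped(orderId: string): void {
  bus.emit("orderStatusChanged", { orderId, status: "shipped" });
}

markShipped("o42");
```

Because listeners run synchronously in the same process, this keeps the "no eventual consistency" property of the transactional approach; you only give that up when the projections move to another service.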

The Results

A client's dashboard went from 3-second load times to 50ms after implementing pragmatic CQRS. The write path got simpler because commands could focus on validation without worrying about query optimization. The read path got faster because queries hit pre-computed data.

Total implementation time: 2 weeks. No new infrastructure. Same PostgreSQL database. Just better separation of concerns.

That's the real lesson of CQRS. It's not about technology — it's about recognizing that reads and writes are different problems that deserve different solutions. Start simple. Add complexity only when the data forces you to.
