
Chapter 5: Multi-Environment Setup - Local, Preview, Production

Theoretical Foundations

In the previous chapter, we established the concept of Headless Inference, defining it as the execution of machine learning models in non-graphical environments like Node.js processes or Web Workers. This is the computational engine of our application. However, an engine is useless without a chassis, transmission, and wheels—essentially, an infrastructure that supports the engine's operation across different terrains. This brings us to the core concept of Multi-Environment Setup.

A multi-environment setup is the architectural blueprint for running your application in distinct, isolated contexts that mimic the lifecycle of a software product: from local creation to collaborative preview and finally to public production. It is not merely about having different servers; it is about managing state, configuration, and security in a way that ensures consistency and reliability at every stage.

The Analogy: The Automotive Manufacturing Plant

To understand this concept deeply, let us use an analogy of an automotive manufacturing plant.

  1. Local Environment (The R&D Lab): This is where engineers (developers) build and test individual components. It is a controlled, sandboxed space on a developer's machine (e.g., a Docker container). If an engine explodes here, it only affects the lab, not the customer. In our context, this is where we run docker-compose up to spin up a local PostgreSQL database with vector support and a local authentication server.
  2. Preview Environment (The Wind Tunnel & Crash Test): Before a car is mass-produced, it undergoes rigorous testing in simulated environments. This is analogous to a Preview Deployment on Vercel or a Deploy Preview on Netlify. When a developer pushes code to a feature branch, the CI/CD pipeline automatically builds a temporary, live URL. Stakeholders (designers, product managers) can interact with the "car" (the app) without it being on the main showroom floor. It is a "what-if" scenario—what happens if we change the suspension (database schema) or the engine tuning (model parameters)?
  3. Production Environment (The Showroom & Highway): This is the final, hardened product delivered to the customer. It must be reliable, secure, and scalable. In our analogy, this is the AWS or GCP infrastructure. It uses real payment keys (not test credit cards), real user data, and has redundancy (load balancers, auto-scaling groups). Just as a car on the highway cannot have a loose bolt, production code cannot have exposed secrets or unstable dependencies.

Why This Matters: The Configuration Matrix

The primary challenge in a multi-environment setup is Configuration Management. In a single-environment app, you might hardcode values. In a multi-environment app, hardcoding is catastrophic.

Consider the Authentication module we built earlier. In the Local environment, you might use a dummy JWT secret (e.g., local-secret-123). In Preview, you might use a shared team secret. In Production, you must use a cryptographically secure key stored in a Hardware Security Module (HSM) or a cloud secret manager (like AWS Secrets Manager).

If you accidentally use the local-secret-123 in Production, user sessions become vulnerable to forgery. The Multi-Environment Setup acts as a firewall against such errors by strictly segregating these configurations.
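This segregation can also be enforced programmatically. The sketch below is illustrative, not part of the boilerplate (the function and constant names are hypothetical): a startup guard that refuses to boot in production with a known development placeholder or a weak secret.

```typescript
// Hypothetical startup guard -- names are illustrative, not from the boilerplate.
// Known development placeholders that must never reach production.
const DEV_PLACEHOLDER_SECRETS = new Set([
  "local-secret-123",
  "dev-secret-do-not-use-in-prod",
]);

function assertProductionSecret(nodeEnv: string, jwtSecret: string): void {
  if (nodeEnv !== "production") return; // relaxed rules outside production

  if (DEV_PLACEHOLDER_SECRETS.has(jwtSecret)) {
    throw new Error(
      "Refusing to start: development JWT secret detected in production"
    );
  }
  if (jwtSecret.length < 32) {
    throw new Error(
      "Refusing to start: production JWT secret must be at least 32 characters"
    );
  }
}
```

Called once at startup (before the HTTP server binds), a guard like this turns a silent session-forgery risk into an immediate, visible crash.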

The Three Pillars of Isolation

To achieve this separation, we rely on three pillars:

  1. Infrastructure Isolation: Each environment runs on physically or logically separate resources.

    • Local: Docker containers on a developer's laptop.
    • Preview: Ephemeral containers on Vercel's edge network.
    • Production: Dedicated Virtual Private Clouds (VPCs) on AWS/GCP.
  2. Data Isolation: Data is the lifeblood of an AI SaaS. You cannot mix test data with real user data.

    • Local: A local .sqlite file or a Dockerized PostgreSQL instance.
    • Preview: A separate database instance or schema that resets on every deployment.
    • Production: A managed database (e.g., RDS) with automated backups and point-in-time recovery.
  3. Security Isolation (The "Air Gap"): Secrets must never be committed to Git.

    • Local: .env files (gitignored).
    • Preview: Environment variables injected by the CI/CD runner.
    • Production: Injected at runtime from a secure vault.
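In practice, the "gitignored .env" convention is paired with a committed template so new developers know which variables exist. A minimal sketch (the variable names match this chapter's examples; the values are placeholders):

```bash
# .env.example -- committed to Git with placeholder values only.
# Copy to .env (which is listed in .gitignore) and fill in real values.
NODE_ENV=development
DATABASE_URL=postgresql://user:password@localhost:5432/saas_db
REDIS_URL=redis://localhost:6379
JWT_SECRET=replace-me-locally
```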

The Role of Docker Compose: The Universal Blueprint

In the context of our AI-Ready SaaS, Docker Compose serves as the "Universal Blueprint" for the Local and Preview environments. It allows us to define the entire stack—Database, Auth Service, and the Inference Engine—as code.

While Production on AWS/GCP might use Kubernetes or ECS for scaling, Docker Compose provides a consistent definition of the services that compose the application. This ensures that the vector database (e.g., pgvector) running locally is structurally identical to the one running in production, minimizing the "it works on my machine" syndrome.

Visualizing the Flow

The following diagram illustrates the flow of code and configuration through the three environments, highlighting the gatekeepers (CI/CD and IaC) that ensure security and consistency.

The diagram visually maps the identical local, staging, and production environments, connected by CI/CD and IaC gatekeepers, to demonstrate how consistency prevents the "it works on my machine" syndrome.

Deep Dive: The "Under the Hood" Mechanics

To truly understand the "how," we must look at the mechanics of how an application consumes these environments.

In a Node.js/TypeScript application, we typically use a configuration object that detects the environment at runtime.

// config.ts
// This file defines the schema for our environment variables.
// It does NOT contain the values; it only defines the structure and validation.

import { z } from 'zod';

// We define a schema for each environment.
// Notice how 'required' fields change based on the environment.
const BaseConfigSchema = z.object({
  NODE_ENV: z.enum(['development', 'preview', 'production']),
  DATABASE_URL: z.string().url(),
});

const DevelopmentConfigSchema = BaseConfigSchema.extend({
  // Local dev allows for relaxed security for ease of use
  JWT_SECRET: z.string().default('dev-secret-do-not-use-in-prod'),
  // Local vector DB port
  VECTOR_DB_URL: z.string().default('postgresql://localhost:5432/vectors'),
});

const ProductionConfigSchema = BaseConfigSchema.extend({
  // Production requires strict secrets managed externally
  JWT_SECRET: z.string().min(32), // Enforce strong secrets
  VECTOR_DB_URL: z.string().url(), // Must be a valid managed URL
  PAYMENT_WEBHOOK_SECRET: z.string(), // Critical for revenue
});

// The loader function acts as the "Gatekeeper"
export const loadConfig = () => {
  const env = process.env.NODE_ENV || 'development';

  // 1. Load variables from .env files (handled by dotenv)
  // 2. Validate against the schema for the current environment
  try {
    if (env === 'production') {
      return ProductionConfigSchema.parse(process.env);
    }
    // Fallback to development schema for local and preview
    return DevelopmentConfigSchema.parse(process.env);
  } catch (error) {
    // If validation fails (e.g., missing JWT_SECRET in prod), crash immediately.
    // This is the "Fail Fast" principle.
    console.error('Invalid configuration:', error);
    process.exit(1);
  }
};

The "Why" of this code: This TypeScript code exemplifies Configuration Validation. In a multi-environment setup, errors often occur not because the code is wrong, but because the environment is misconfigured. By using a schema validator like Zod, we ensure that the application refuses to start if the environment variables (secrets, URLs) do not match the requirements of the specific environment (Local vs. Production).

The Infrastructure-as-Code (IaC) Connection

While Docker Compose handles the application services, Infrastructure-as-Code (IaC) handles the platform those services run on in Production.

In the Local environment, the "infrastructure" is just your laptop's resources. In Production, the infrastructure is complex: VPCs, subnets, security groups, and managed database instances.

IaC tools (like Terraform or AWS CDK) allow us to define the Production infrastructure in code, just like we define our application logic. This ensures that if we need to spin up a new "Staging" environment (a clone of Production for testing), we can do so with a single command, guaranteeing that the environment is identical to Production in every aspect except scale.

Theoretical Foundations: A Recap

The Multi-Environment Setup is not just a convenience; it is a discipline. It enforces a separation of concerns that mirrors the software development lifecycle. By treating Local, Preview, and Production as distinct "territories" with their own rules (configuration) and resources (infrastructure), we build a robust safety net. This allows us to experiment fearlessly in the Lab (Local), validate collaboratively in the Wind Tunnel (Preview), and deploy confidently to the Highway (Production).

Basic Code Example

In a SaaS boilerplate, managing configuration across Local, Preview, and Production environments is the cornerstone of stability. Hardcoding secrets or database URLs is a critical security risk and leads to "works on my machine" syndrome. We will demonstrate a robust pattern using Docker Compose for local orchestration, TypeScript for type-safe configuration, and Environment Variables to inject context-specific values.

This example focuses on the "Local" environment setup, which is the foundation for the other stages. We will simulate a SaaS stack consisting of a Next.js API server, a PostgreSQL database (with vector extension support for AI features), and a Redis instance for caching.

The Architecture

We will create a docker-compose.yml file that defines our services. The application code will read from a config.ts module that validates and exposes environment variables. This separation ensures that the application logic remains agnostic of the environment it runs in.

A diagram showing the application logic isolated from the environment by a dedicated TypeScript validation module, ensuring the core code remains agnostic to where it runs.

1. The Docker Compose Configuration

This file orchestrates the local infrastructure. It uses bind mounts (volumes) for hot-reloading the application code and manages network isolation.

# docker-compose.yml
version: '3.8'

services:
  # 1. The Application Service
  app:
    build:
      context: .
      dockerfile: Dockerfile.local # Specific Dockerfile for local dev with hot-reload
    ports:
      - "3000:3000" # Map host port 3000 to container port 3000
    environment:
      - NODE_ENV=development
      - DATABASE_URL=postgresql://user:password@db:5432/saas_db
      - REDIS_URL=redis://redis:6379
    volumes:
      - .:/app # Mount current directory to /app inside container
      - /app/node_modules # Anonymous volume to prevent host node_modules from overwriting container's
    depends_on:
      - db
      - redis
    command: npm run dev # Start Next.js in dev mode

  # 2. The Database Service (PostgreSQL + pgvector)
  db:
    image: pgvector/pgvector:pg15 # Image pre-installed with vector extension
    environment:
      POSTGRES_USER: user
      POSTGRES_PASSWORD: password
      POSTGRES_DB: saas_db
    ports:
      - "5432:5432" # Expose port for external DB clients (e.g., TablePlus, Prisma Studio)
    volumes:
      - db_data:/var/lib/postgresql/data # Persist data across restarts

  # 3. The Cache Service
  redis:
    image: redis:alpine
    ports:
      - "6379:6379"
    volumes:
      - redis_data:/data

volumes:
  db_data:
  redis_data:

2. The TypeScript Configuration Loader

To prevent "undefined" variables and ensure type safety, we use a dedicated config file. This acts as the single source of truth for environment variables.

// src/config.ts
import { z } from 'zod';

/**
 * Schema definition for environment variables using Zod.
 * This validates the presence and format of required variables at runtime.
 */
const envSchema = z.object({
  NODE_ENV: z.enum(['development', 'test', 'production']).default('development'),
  DATABASE_URL: z.string().url(),
  REDIS_URL: z.string().url(),
  // Example of a secret that might be missing in local dev but required in production
  STRIPE_SECRET_KEY: z.string().optional(),
});

/**
 * Parsed and validated environment configuration.
 * Throws an error if required variables are missing.
 */
export const config = envSchema.parse(process.env);

/**
 * Helper to check if we are in development mode.
 */
export const isDev = () => config.NODE_ENV === 'development';

3. The Application Entry Point

This is a simplified Next.js API route (or server logic) that consumes the configuration to connect to the database.

// src/app/api/health/route.ts
import { NextResponse } from 'next/server';
import { config, isDev } from '@/config';

/**
 * API Route: GET /api/health
 * Used to verify the environment configuration and database connectivity.
 */
export async function GET() {
  try {
    // In a real app, you would initialize your Prisma/Drizzle client here
    // using the config.DATABASE_URL.

    const healthStatus = {
      status: 'healthy',
      environment: config.NODE_ENV,
      timestamp: new Date().toISOString(),
      // Only expose DB host in dev for debugging, never in production!
      debugInfo: isDev() 
        ? { databaseHost: new URL(config.DATABASE_URL).host } 
        : undefined,
    };

    return NextResponse.json(healthStatus);
  } catch (error) {
    console.error('Health check failed:', error);
    return NextResponse.json(
      { status: 'error', message: 'Configuration or connection failed' },
      { status: 500 }
    );
  }
}

4. Dockerfile for Local Development

Unlike the production Dockerfile, the local version includes build tools and supports hot-reloading.

# Dockerfile.local
FROM node:20-alpine

# Set working directory
WORKDIR /app

# Copy package files
COPY package*.json ./

# Install dependencies (including devDependencies for local dev)
RUN npm ci

# Copy the rest of the application code
COPY . .

# Expose the port the app runs on
EXPOSE 3000

# The command is overridden in docker-compose.yml, but this is a fallback
CMD ["npm", "run", "dev"]

Line-by-Line Explanation

1. Docker Compose (docker-compose.yml)

  • version: '3.8': Declares the Compose file format version. Note that the modern Docker Compose v2 CLI treats this field as obsolete and ignores it; it is kept here for compatibility with older tooling.
  • services:: Defines the containers that make up your application stack.
  • app:: The definition for our Next.js application.
    • build: context: .: Tells Docker to build the image using the current directory as the context.
    • dockerfile: Dockerfile.local: Explicitly points to our local-optimized Dockerfile.
    • ports: "3000:3000": Maps port 3000 on your host machine to port 3000 in the container. This allows you to access http://localhost:3000 in your browser.
    • environment:: Injects environment variables into the container at runtime.
      • NODE_ENV=development: Crucial for Next.js to enable features like hot module replacement (HMR).
      • DATABASE_URL: Constructed using the service name db. Docker's internal DNS resolves db to the IP address of the database container. Note the port is 5432 (the internal container port), not the host-mapped port.
    • volumes:: Syncs your local code changes to the container without rebuilding the image.
      • . (current dir) -> /app: Changes made locally are immediately reflected inside the container.
      • /app/node_modules: An anonymous volume prevents the host's node_modules (which might be for a different OS/architecture) from overwriting the container's installed packages.
    • depends_on:: Defines startup order. Docker will start db and redis before app, though the short-form syntax shown here only waits for the containers to start, not for the services inside to be "ready". For readiness, pair a healthcheck with the long-form depends_on (condition: service_healthy); see the Common Pitfalls section below.
    • command:: Overrides the default CMD in the Dockerfile to run npm run dev, enabling Next.js's development server with hot reloading.
  • db::
    • image: pgvector/pgvector:pg15: Uses a specialized image that includes the pgvector extension, essential for storing AI embeddings (vector data) in PostgreSQL.
    • volumes: db_data:/var/lib/postgresql/data: Maps a named volume (db_data) to the container's data directory. This ensures data persists even if you stop and remove the container.
  • volumes: (Bottom level): Declares the named volumes used by the services to persist data.

2. TypeScript Config (src/config.ts)

  • import { z } from 'zod';: Imports Zod, a TypeScript-first schema validation library. This is critical for preventing runtime errors caused by missing or malformed environment variables.
  • const envSchema = z.object({ ... }): Defines the shape of the expected environment variables.
    • NODE_ENV: z.enum(['development', 'test', 'production']): Restricts the value to specific strings, preventing typos like "dev" or "prod".
    • DATABASE_URL: z.string().url(): Ensures the variable is a string and validates that it is a valid URL format.
    • STRIPE_SECRET_KEY: z.string().optional(): Marks this key as optional. This is useful for local development where you might not have payment keys set up yet, but it will be required in production.
  • export const config = envSchema.parse(process.env): This is the core logic. It takes process.env (the environment variables provided by Node.js/Docker) and runs them through the schema.
    • If validation fails (e.g., a required variable is missing), Zod throws a descriptive error, crashing the app immediately. This is a "fail-fast" mechanism, which is safer than running with invalid config.
    • If validation passes, it returns a typed object. TypeScript now knows exactly what properties exist on config and their types.
  • export const isDev = () => ...: A simple boolean helper to check the environment, keeping logic clean elsewhere in the app.
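To make the fail-fast mechanics concrete without any dependency, here is a minimal stdlib-only sketch of the same idea (the names are illustrative; the chapter's config.ts uses Zod, which also gives you the typed result for free):

```typescript
// Minimal stdlib-only sketch of fail-fast env validation -- illustrative only.
type Env = {
  NODE_ENV: "development" | "test" | "production";
  DATABASE_URL: string;
};

function loadEnv(source: Record<string, string | undefined>): Env {
  const nodeEnv = source.NODE_ENV ?? "development"; // same default as the Zod schema
  if (!["development", "test", "production"].includes(nodeEnv)) {
    throw new Error(`Invalid NODE_ENV: ${nodeEnv}`); // catches typos like "dev" or "prod"
  }
  const dbUrl = source.DATABASE_URL;
  if (!dbUrl) {
    throw new Error("DATABASE_URL is required"); // crash at startup, not 3 hours in
  }
  new URL(dbUrl); // throws if malformed -- mirrors z.string().url()
  return { NODE_ENV: nodeEnv as Env["NODE_ENV"], DATABASE_URL: dbUrl };
}
```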

3. API Route (src/app/api/health/route.ts)

  • import { config, isDev } from '@/config';: Imports the validated configuration.
  • export async function GET() { ... }: Defines a standard Next.js App Router GET endpoint.
  • const healthStatus = { ... }: Constructs the response object.
    • environment: config.NODE_ENV: Returns the validated environment (e.g., "development").
    • debugInfo: isDev() ? ... : undefined: Demonstrates conditional logic based on the environment. In development, we might expose the database host for debugging. In production, we return undefined to avoid leaking infrastructure details.
  • try { ... } catch (error) { ... }: Wraps the logic in error handling. If the config import had failed (due to Zod validation), the app would have crashed before reaching this point. This catch block handles runtime errors like database connection failures.

4. Dockerfile (Dockerfile.local)

  • FROM node:20-alpine: Uses a lightweight Node.js 20 image based on Alpine Linux to keep the image size small.
  • WORKDIR /app: Sets the working directory inside the container.
  • COPY package*.json ./: Copies package definitions first. This allows Docker to cache the npm ci layer. If you change your code but not dependencies, Docker won't re-install npm packages, speeding up builds.
  • RUN npm ci: Installs dependencies. ci (Clean Install) is preferred over install in CI/CD and Docker as it strictly follows the package-lock.json.
  • COPY . .: Copies the rest of the application source code.
  • EXPOSE 3000: Informs Docker that the container listens on port 3000. It doesn't actually publish the port; that's done in docker-compose.yml.

Common Pitfalls

1. The "Ghost" Database (Async/Await & Startup Timing)

Issue: In docker-compose, depends_on only waits for the container to start, not for the database service to be ready to accept connections. Your Node.js app might crash immediately on startup with ECONNREFUSED because it tries to connect before Postgres has initialized.

Solution: Implement a "wait-for-it" script or use a health check in Docker Compose.

# Example healthcheck in docker-compose.yml (nested under the db service)
db:
  healthcheck:
    test: ["CMD-SHELL", "pg_isready -U user"]
    interval: 10s
    timeout: 5s
    retries: 5

With the check in place, the app service can use the long-form depends_on (db with condition: service_healthy, supported in modern Compose) so it starts only after Postgres passes the check. In your app startup logic, additionally use a retry loop with setTimeout or a library like wait-port before initiating the database connection.
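Such a startup loop can be sketched as follows (the function names here are hypothetical; wait-port offers a ready-made equivalent):

```typescript
import * as net from "node:net";

// Hypothetical retry helper: keep probing until `probe` succeeds
// or the retry budget is exhausted -- the shape of "wait for Postgres".
async function waitFor(
  probe: () => Promise<boolean>,
  retries = 5,
  delayMs = 100,
): Promise<boolean> {
  for (let attempt = 1; attempt <= retries; attempt++) {
    if (await probe()) return true; // dependency is ready
    await new Promise((resolve) => setTimeout(resolve, delayMs)); // back off, retry
  }
  return false; // budget exhausted -- caller decides whether to crash
}

// Example probe: can we open a TCP connection to the database port?
function tcpProbe(host: string, port: number): () => Promise<boolean> {
  return () =>
    new Promise((resolve) => {
      const socket = net.connect({ host, port }, () => {
        socket.end();
        resolve(true);
      });
      socket.on("error", () => resolve(false));
    });
}
```

At boot you might `await waitFor(tcpProbe("db", 5432), 10, 500)` before constructing the Prisma/Drizzle client, crashing with a clear message if it returns false.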

2. Environment Variable Hallucination

Issue: Developers often forget to add .env to .gitignore or accidentally commit secrets. Conversely, they might rely on variables that exist in their local terminal but are missing in the Docker container context.

Solution:

  1. Never commit .env files. Use .env.example with placeholder values.
  2. Explicitly pass variables in docker-compose.yml. Do not rely on Docker picking up your host machine's .env file automatically unless you use the env_file directive.
  3. Use Zod validation. As shown in config.ts, forcing a parse check at startup ensures you know immediately if a variable is missing, rather than discovering it 3 hours into debugging.
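If you do want Compose to load a variables file, make it explicit with the env_file directive rather than relying on implicit behavior. A fragment (the filename is a placeholder and should itself be gitignored):

```yaml
# docker-compose.yml (fragment)
services:
  app:
    env_file:
      - .env.local # placeholder name; keep this file out of Git
```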

3. Volume Mounting Overwrites Node Modules

Issue: Mapping the host directory (.) to /app in the container often overwrites the /app/node_modules directory created during docker build. If your host OS is Windows/Mac and the container is Linux, the binary modules (like bcrypt or sharp) will be incompatible, causing runtime errors.

Solution: Use an anonymous volume to mask the node_modules directory inside the container, as shown in the example:

volumes:
  - .:/app
  - /app/node_modules # This line prevents overwriting

4. Vercel/Netlify Preview Limits

Issue: When moving from Local to Preview environments, developers often hit serverless function timeouts (10s on Vercel Hobby) or payload size limits (4.5MB for Vercel Serverless Functions).

Solution:

  • Local Simulation: Use vercel dev locally to test against the actual serverless environment constraints, not just standard Node.js.
  • Streaming: For AI-heavy apps (relevant to Book 6), use Next.js Streaming (React Suspense) or Edge Runtime to handle long-running vector generation tasks without timing out the HTTP request.
  • Offloading: Never run heavy vector calculations inside a serverless function if possible. Push tasks to a background queue (e.g., AWS SQS or Redis) and poll for results.

5. Type Mismatches between Local and CI

Issue: Your TypeScript code compiles locally but fails in the CI pipeline (e.g., the Vercel build). This usually happens because the node_modules types differ or tsconfig.json is not strict enough.

Solution: Ensure your package.json uses exact versions (no ^ or ~) for critical dependencies in a boilerplate. Use a shared tsconfig.json and run tsc --noEmit in your CI pipeline to catch type errors before deployment.
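A package.json fragment illustrating both points (the version numbers are illustrative, not prescriptive):

```json
{
  "scripts": {
    "typecheck": "tsc --noEmit"
  },
  "dependencies": {
    "next": "14.2.3",
    "zod": "3.23.8"
  }
}
```

In CI, run npm run typecheck as a dedicated step before the build so type errors fail fast.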

The chapter continues with advanced code, exercises, and solutions with analysis; you can find them in the ebook on Leanpub.com or Amazon.





Code License: All code examples are released under the MIT License. Github repo.

Content Copyright: Copyright © 2026 Edgar Milvus | Privacy & Cookie Policy. All rights reserved.

All textual explanations, original diagrams, and illustrations are the intellectual property of the author. To support the maintenance of this site via AdSense, please read this content exclusively online. Copying, redistribution, or reproduction is strictly prohibited.