
Chapter 19: Preparing for AGI - Code that Scales

Theoretical Foundations

The foundational challenge in preparing for Artificial General Intelligence (AGI) is not merely building a single, powerful model, but engineering a software ecosystem that can gracefully absorb exponential increases in model size, data volume, and computational demand. AGI will not be a monolithic entity but a distributed, asynchronous network of specialized models, agents, and data stores. The code we write today must be a scaffold, not a cage—flexible enough to accommodate future breakthroughs without requiring a complete rewrite. This section focuses on the theoretical underpinnings of such scalable architectures, specifically targeting the interplay between memory management, data retrieval, and asynchronous processing in a Node.js environment.

To understand this, we must first internalize the Event Loop, introduced in Chapter 18. The Event Loop is the central nervous system of Node.js. Imagine a busy restaurant kitchen. The head chef (the main thread) doesn't personally cook every dish. Instead, they delegate tasks to specialized stations (asynchronous workers)—the grill, the fryer, the salad station. The chef places an order ticket (a task) on the rail (the Task Queue) and immediately moves on to the next order. The Event Loop is the expediter, constantly checking if any station has completed its dish (the I/O operation is complete) and, if so, plating it and sending it out (executing the callback). This non-blocking, event-driven model is what allows a single Node.js process to handle thousands of concurrent connections. For AGI, where we will be orchestrating hundreds of simultaneous embedding generations, vector database queries, and model inference calls, mastering this pattern is non-negotiable. A blocking operation—like synchronously waiting for a large model to generate an embedding—would be like the head chef staring at the grill, refusing to take new orders until the steak is done, bringing the entire kitchen to a halt.
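To make the kitchen analogy concrete, here is a minimal sketch of that scheduling order in plain Node.js (the strings are purely illustrative):

```typescript
const order: string[] = [];

order.push("take order 1");                                // chef works synchronously
setTimeout(() => order.push("steak plated"), 0);           // delegated to a "station" (timer)
Promise.resolve().then(() => order.push("salad plated"));  // queued as a microtask
order.push("take order 2");                                // chef never waits

// Once the synchronous run finishes, the Event Loop drains microtasks
// before timer callbacks, yielding:
// ["take order 1", "take order 2", "salad plated", "steak plated"]
```

Note that both synchronous lines run to completion before any callback fires: the "chef" is never idle while the stations work.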

The Data Bottleneck: From Raw Text to Vector Space

Before we can even think about inference, we face a data ingestion and retrieval problem of unprecedented scale. An AGI system will need to reason over vast corpora of information. Storing and searching this information in a traditional relational database using keyword matching (like SQL's LIKE '%query%') is computationally infeasible and semantically inadequate. We need a way to represent meaning numerically. This is where Embedding Generation comes into play.

An embedding is a dense vector of numbers (e.g., [0.12, -0.45, 0.88, ...]) that captures the semantic essence of a piece of text. Think of it as a set of coordinates in a high-dimensional "meaning space." Texts with similar meanings will have vectors that are close to each other in this space, regardless of whether they share the same keywords. For example, "The cat sat on the mat" and "The feline rested on the rug" would have very similar vector representations.

The process of generating these embeddings is a perfect example of a task for our asynchronous kitchen. We don't want to block our main application thread while waiting for a potentially slow API call (to a service like OpenAI or a local model) to return a vector. Instead, we fire off the request and let the Event Loop handle other tasks while it's processing.

Here is a conceptual TypeScript representation of what this asynchronous, non-blocking embedding generation might look like. Note that this is purely theoretical and focuses on the pattern, not a specific implementation.

// A theoretical interface for an embedding service.
// This abstracts away the underlying model (e.g., a local Ollama model or a remote API).
interface EmbeddingService {
    // The generate method returns a Promise, signifying an asynchronous operation.
    // This is crucial for non-blocking behavior within the Node.js Event Loop.
    generate(text: string): Promise<number[]>;
}

// A mock implementation for demonstration purposes.
class LocalModelEmbeddingService implements EmbeddingService {
    async generate(text: string): Promise<number[]> {
        // In a real scenario, this would be an API call to a local model (e.g., via Ollama's API)
        // or a more complex computation. The key is that it's an I/O-bound operation.
        // We simulate a delay to represent network latency or model inference time.
        return new Promise(resolve => {
            setTimeout(() => {
                // A dummy 384-dimensional vector for demonstration.
                const vector: number[] = new Array(384).fill(0).map(() => Math.random());
                console.log(`Generated embedding for text of length ${text.length}`);
                resolve(vector);
            }, 100); // Simulates 100ms latency
        });
    }
}

// --- Usage in a larger application ---
const embeddingService = new LocalModelEmbeddingService();
const textChunk = "AGI architectures must be designed for scalability and fault tolerance.";

// We start the embedding generation without awaiting it, so the main thread continues.
// The Promise settles later, and its callbacks are scheduled by the Event Loop.
const embeddingPromise: Promise<number[]> = embeddingService.generate(textChunk);

// We can attach .then() to handle the result when it's ready, without blocking.
embeddingPromise.then(vector => {
    console.log("Embedding is ready! Vector dimension:", vector.length);
    // This callback will be placed on the Task Queue and executed by the Event Loop
    // once the embedding generation is complete.
});

// The main thread is free to continue executing other code immediately.
console.log("Main thread is not blocked. Continuing with other tasks...");
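This pattern scales naturally to batches. Below is a hedged sketch (MockEmbeddingService and embedAll are illustrative names, not a real client library): by dispatching every request before awaiting any of them, Promise.all lets the Event Loop overlap the latencies, so total wall time approaches a single round-trip rather than the sum of all of them.

```typescript
interface EmbeddingService {
    generate(text: string): Promise<number[]>;
}

// A stand-in service that simulates ~50ms of I/O latency per call.
class MockEmbeddingService implements EmbeddingService {
    async generate(_text: string): Promise<number[]> {
        await new Promise((resolve) => setTimeout(resolve, 50));
        return new Array(384).fill(0).map(() => Math.random());
    }
}

// All requests are dispatched before any resolves, so the Event Loop
// overlaps their latencies instead of paying them one after another.
async function embedAll(
    service: EmbeddingService,
    chunks: string[]
): Promise<number[][]> {
    return Promise.all(chunks.map((chunk) => service.generate(chunk)));
}
```

With four chunks and ~50ms per call, the sequential cost would be ~200ms; the concurrent version finishes in roughly one round-trip.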

Efficient Retrieval: The Role of Vector Databases

Once we have these numerical representations, we need a specialized system to store and query them efficiently. This is the role of a Vector Database. A traditional database indexes data for fast lookup by exact matches (e.g., an ID or a timestamp). A vector database, however, is optimized for Approximate Nearest Neighbor (ANN) search. It's built to answer the question: "Given a query vector, which of the millions of stored vectors are closest to it in meaning space?"

Analogy: The Library vs. The Concierge

  • Traditional Keyword Search (The Library): You go to a library and use a card catalog (an index) to find books with specific keywords in the title. If you're looking for "stories about courage," you might miss a book titled "The Brave Little Tailor" because the keyword "courage" isn't present. Exact matching is fast to look up, but semantically brittle: it only finds what shares your exact words.
  • Vector Search (The Concierge): You describe the feeling or concept you're looking for to a knowledgeable concierge. You say, "I want a story that feels inspiring and makes me feel brave." The concierge, having read every book and internalized its "vibe" (its vector representation), immediately points you to "The Brave Little Tailor" and other thematically similar books, even if the word "courage" never appears. The vector database is this superhuman concierge, using mathematical similarity to find conceptually relevant information with incredible speed.

This process is critical for AGI because it allows the system to retrieve relevant context from a massive knowledge base before attempting to generate a response. This technique, known as Retrieval-Augmented Generation (RAG), prevents the model from "hallucinating" and grounds its responses in factual data.
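To make "closest in meaning space" concrete, here is a dependency-free sketch of cosine similarity, the metric most vector stores rank by. The brute-force scan below stands in for a real ANN index (HNSW, IVF), which avoids comparing against every stored vector:

```typescript
// Cosine similarity: 1 means identical direction, 0 means orthogonal (unrelated).
function cosineSimilarity(a: number[], b: number[]): number {
    let dot = 0, normA = 0, normB = 0;
    for (let i = 0; i < a.length; i++) {
        dot += a[i] * b[i];
        normA += a[i] * a[i];
        normB += b[i] * b[i];
    }
    return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Brute-force nearest neighbor: fine for thousands of vectors, not millions.
function nearest(
    query: number[],
    corpus: { id: string; vector: number[] }[]
): { id: string; score: number } {
    return corpus
        .map((entry) => ({ id: entry.id, score: cosineSimilarity(query, entry.vector) }))
        .sort((a, b) => b.score - a.score)[0];
}
```

Real vector databases trade a small amount of accuracy (the "approximate" in ANN) for sub-linear search time over millions of vectors.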

Type Narrowing for Robust Data Pipelines

In a complex, distributed system like the one we're designing, data flows through many stages: ingestion, chunking, embedding, storage, retrieval, and inference. The data structures at each stage can be complex and sometimes ambiguous. This is where Type Narrowing in TypeScript becomes an essential tool for building robust, fault-tolerant systems.

Type Narrowing is the process of telling the TypeScript compiler more specific information about a variable's type within a certain scope. It's the compiler's way of understanding that after a runtime check, the set of possible types for a variable has been reduced.

Analogy: The Airport Security Line

Imagine you are an airport security scanner (the TypeScript compiler). You see a bag (a variable) that could contain anything: liquids, electronics, metal objects (string | number | boolean). You can't make a decision about it yet. Then, the bag goes through the X-ray machine (a runtime check, like typeof or an instanceof check). The X-ray reveals it's a laptop. Now, you have narrowed the possibilities. You know it's an electronic device and must be handled as such. You no longer treat it as a potential liquid. This is Type Narrowing. You've used a runtime check to provide type information to the compiler, allowing it to enforce more specific rules and prevent errors.

In our AGI pipeline, we might receive a data chunk that could be either a raw text string or an object containing metadata and text. We need to narrow the type before we can safely process it.

// Define possible types for a data chunk in our pipeline.
type TextChunk = string;
interface RichTextChunk {
    content: string;
    metadata: {
        source: string;
        chunkId: number;
    };
}

// A variable that could be one of several types.
type IncomingData = TextChunk | RichTextChunk;

function processChunk(data: IncomingData) {
    // At this point, TypeScript only knows `data` is `IncomingData`.
    // We cannot safely access `data.metadata` because it might not exist on a string.

    // --- TYPE NARROWING IN ACTION ---
    // We perform a runtime check using `typeof` and `in`.
    if (typeof data === 'string') {
        // Inside this block, TypeScript has NARROWED the type of `data` to `string`.
        // It knows `data.metadata` is not accessible here.
        const upperCaseContent = data.toUpperCase();
        console.log("Processing simple text chunk:", upperCaseContent);
    } else if ('metadata' in data) {
        // Inside this block, TypeScript has NARROWED the type to `RichTextChunk`.
        // It now allows safe access to `data.metadata` and `data.content`.
        console.log(`Processing rich chunk ${data.metadata.chunkId} from ${data.metadata.source}`);
        const upperCaseContent = data.content.toUpperCase();
        console.log("Rich chunk content:", upperCaseContent);
    } else {
        // This branch handles any other unexpected shapes, making the system robust.
        console.error("Unknown data format received:", data);
    }
}

// Example usage
processChunk("This is a simple text chunk.");
processChunk({
    content: "This is a rich text chunk.",
    metadata: { source: "document.pdf", chunkId: 42 }
});

By using Type Narrowing, we ensure that our processing functions are robust. They can handle various input types gracefully without crashing, which is paramount in a long-running, fault-tolerant AGI system where data sources may be heterogeneous and unpredictable.
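The inline checks above can also be factored into a reusable user-defined type guard. The `data is RichTextChunk` return type tells the compiler that a true result narrows the union wherever the guard is called (types are repeated here so the snippet stands alone):

```typescript
interface RichTextChunk {
    content: string;
    metadata: { source: string; chunkId: number };
}
type IncomingData = string | RichTextChunk;

// A user-defined type guard: the return type is a type predicate.
function isRichTextChunk(data: IncomingData): data is RichTextChunk {
    return typeof data === "object" && data !== null && "metadata" in data;
}

function contentOf(data: IncomingData): string {
    // The guard narrows `data` to RichTextChunk in the true branch
    // and to string in the false branch.
    return isRichTextChunk(data) ? data.content : data;
}
```

Centralizing the check in one predicate means every stage of the pipeline narrows the union the same way, instead of repeating ad-hoc `typeof` and `in` checks.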

Visualizing the Asynchronous Data Flow

The entire process—from ingesting a document to retrieving relevant chunks for inference—can be visualized as a pipeline of asynchronous tasks, all managed by the Node.js Event Loop.

A diagram illustrating the asynchronous data flow would show a pipeline of sequential and parallel asynchronous tasks—from document ingestion to chunk retrieval—managed by the Node.js Event Loop, highlighting how non-blocking operations allow the system to efficiently process data without waiting for each step to complete.

This diagram illustrates the core principle: the main application thread (the left side) is never blocked. It orchestrates tasks and delegates the heavy lifting (embedding, vector search, LLM inference) to asynchronous workers. The Event Loop is the glue that connects these disparate parts, ensuring that the system remains responsive and can scale to handle the immense workload required by AGI. By mastering these theoretical foundations—non-blocking I/O, semantic data representation, and robust type handling—we lay the groundwork for code that truly scales.
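The flow described above can be sketched end to end with mocked stages (embed, vectorSearch, and infer are illustrative stubs, not real APIs). Each await yields control back to the Event Loop while the simulated I/O runs:

```typescript
const delay = (ms: number) => new Promise((resolve) => setTimeout(resolve, ms));

// Stage 1: turn the query into a vector (stands in for an embedding call).
async function embed(text: string): Promise<number[]> {
    await delay(10); // simulated I/O latency
    return [text.length, 0.5];
}

// Stage 2: retrieve the top-k similar chunks (stands in for a vector DB query).
async function vectorSearch(vector: number[], topK: number): Promise<string[]> {
    await delay(10);
    return ["relevant chunk A", "relevant chunk B"].slice(0, topK);
}

// Stage 3: generate a grounded answer (stands in for LLM inference).
async function infer(query: string, context: string[]): Promise<string> {
    await delay(10);
    return `Answer to "${query}" grounded in ${context.length} chunks`;
}

// The orchestrator: sequential awaits, but the main thread is never blocked.
async function answerQuery(query: string): Promise<string> {
    const queryVector = await embed(query);
    const context = await vectorSearch(queryVector, 2);
    return infer(query, context);
}
```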

Basic Code Example

To illustrate the core principles of preparing for AGI—specifically, modularity, state management, and fault tolerance—we will build a simple "Hello World" style SaaS workflow using LangGraph. This example demonstrates a basic agent that processes user input, maintains state across steps, and handles potential errors gracefully.

The context is a Next.js application where a Client Component (CC) sends a request to a server-side route handler. The route handler uses LangGraph to orchestrate a simple workflow: validating input, processing it through a mock "AI" step, and formatting the output. This separation of concerns (SRP) ensures that the API layer, the graph logic, and the UI are decoupled.

The Architecture

The system consists of three main parts:

  1. The State: A shared data structure passed between graph nodes.
  2. The Graph: A collection of nodes (functions) and edges (transitions) defining the workflow.
  3. The API: A Next.js route handler that executes the graph and returns the result.

Implementation

We will implement the server-side logic first, as it contains the core LangGraph concepts. The client component will be a simple wrapper to demonstrate usage.

File Structure:

  • app/api/hello-world/route.ts: The API endpoint.
  • lib/graph.ts: The LangGraph definition (separated for SRP).

1. The Graph Definition (lib/graph.ts)

This file defines the GraphState interface and the workflow nodes. We strictly separate the logic for validation, processing, and formatting.

// lib/graph.ts
import { StateGraph, Annotation } from "@langchain/langgraph";

/**
 * @description Graph State Interface
 * Defines the canonical data structure passed between nodes.
 * This adheres to the "Graph State" definition: singular, canonical, and persistent.
 */
export interface AgentState {
  input: string;
  processedOutput?: string;
  finalOutput?: string;
  error?: string;
}

/**
 * @description State Annotation
 * Defines the reducer for the state. In a complex app, this handles merging updates.
 * For this "Hello World", we simply overwrite keys.
 */
const StateAnnotation = Annotation.Root({
  input: Annotation<string>,
  processedOutput: Annotation<string | undefined>,
  finalOutput: Annotation<string | undefined>,
  error: Annotation<string | undefined>,
});

/**
 * @description Node 1: Input Validator
 * Adheres to SRP: Only concerns itself with validating the input string.
 * @param state - The current graph state
 * @returns - The updated state (or throws an error to trigger graph failure)
 */
const validateInput = async (state: typeof StateAnnotation.State) => {
  console.log("Node: Validating input...");
  if (!state.input || state.input.trim().length === 0) {
    // In LangGraph, throwing an error typically interrupts the graph execution
    // or routes it to an error node if configured.
    throw new Error("Input cannot be empty.");
  }
  // Return state to pass to next node
  return { input: state.input };
};

/**
 * @description Node 2: Mock AI Processor
 * Adheres to SRP: Only concerns itself with the "AI" transformation logic.
 * Simulates an async LLM call.
 * @param state - The current graph state
 * @returns - The updated state with processed output
 */
const processWithAI = async (state: typeof StateAnnotation.State) => {
  console.log("Node: Processing with AI...");
  // Simulate network latency
  await new Promise((resolve) => setTimeout(resolve, 500));

  // Mock transformation
  const processed = `AI Processed: ${state.input.toUpperCase()}`;

  return { processedOutput: processed };
};

/**
 * @description Node 3: Formatter
 * Adheres to SRP: Only concerns itself with formatting the final response.
 * @param state - The current graph state
 * @returns - The updated state with final output
 */
const formatResponse = async (state: typeof StateAnnotation.State) => {
  console.log("Node: Formatting response...");
  if (!state.processedOutput) {
    throw new Error("Missing processed output.");
  }

  const formatted = `Result: ${state.processedOutput} | Timestamp: ${new Date().toISOString()}`;
  return { finalOutput: formatted };
};

/**
 * @description Graph Construction
 * We build the graph using the StateGraph class.
 * This separates the workflow definition from the execution logic.
 */
export const createHelloWorldGraph = () => {
  // Initialize the graph with our State Annotation
  const workflow = new StateGraph(StateAnnotation);

  // Define nodes.
  // Each node is an async function that receives the current state and
  // returns a partial state update; the first argument is its unique string ID.
  workflow.addNode("validate_node", validateInput);
  workflow.addNode("process_node", processWithAI);
  workflow.addNode("format_node", formatResponse);

  // Define edges (Workflow Logic)
  // Start -> Validate
  workflow.addEdge("__start__", "validate_node");

  // Validate -> Process (if valid)
  workflow.addEdge("validate_node", "process_node");

  // Process -> Format
  workflow.addEdge("process_node", "format_node");

  // Return the compiled graph
  return workflow.compile();
};

2. The API Route (app/api/hello-world/route.ts)

This Next.js App Router endpoint acts as the entry point. It handles the HTTP request, invokes the graph, and manages the response. It demonstrates how to integrate LangGraph into a scalable web architecture.

// app/api/hello-world/route.ts
import { NextRequest, NextResponse } from "next/server";
import { createHelloWorldGraph } from "@/lib/graph";

/**
 * @description POST Handler
 * Handles incoming requests to execute the agent workflow.
 * 
 * @param request - The incoming HTTP request object
 * @returns A JSON response containing the final output or error details
 */
export async function POST(request: NextRequest) {
  try {
    // 1. Parse the incoming JSON body
    const body = await request.json();
    const { input } = body;

    // 2. Initialize the Graph
    // We create a new instance for this request to ensure state isolation.
    const graph = createHelloWorldGraph();

    // 3. Execute the Graph
    // We pass the initial state (input) to the graph's stream method.
    // LangGraph handles the traversal of nodes (validate -> process -> format).
    const stream = await graph.stream({
      input: input,
    });

    // 4. Iterate through the stream to get the final state
    // LangGraph streams an update for every node; we accumulate them as we go.
    let finalState: Record<string, any> | null = null;
    for await (const step of stream) {
      // The key is the node name, the value is the state update
      const [nodeName, stateUpdate] = Object.entries(step)[0];
      console.log(`Step completed: ${nodeName}`);

      // Update our reference to the latest state
      // Note: In a real scenario, we might merge state more carefully.
      // For this example, we just track the latest object.
      finalState = { ...finalState, ...stateUpdate };
    }

    // 5. Return the result
    if (finalState?.finalOutput) {
      return NextResponse.json(
        { 
          success: true, 
          output: finalState.finalOutput 
        },
        { status: 200 }
      );
    } else {
      // Handle case where graph finished but didn't produce expected output
      return NextResponse.json(
        { 
          success: false, 
          error: "Workflow completed but no output generated." 
        },
        { status: 500 }
      );
    }

  } catch (error) {
    // 6. Error Handling
    // Captures errors from validation or processing nodes.
    console.error("Graph execution failed:", error);

    const errorMessage = error instanceof Error ? error.message : "Unknown error";

    return NextResponse.json(
      { 
        success: false, 
        error: errorMessage 
      },
      { status: 400 } // 400 suits validation errors; a production app would map internal failures to 500
    );
  }
}

3. The Client Component (app/page.tsx)

This Client Component (CC) uses React hooks to interact with the API. It demonstrates how the scalable backend code integrates with the frontend.

// app/page.tsx
"use client";

import { useState } from "react";

export default function HelloWorldPage() {
  // State management for the UI
  const [input, setInput] = useState("");
  const [result, setResult] = useState<string | null>(null);
  const [loading, setLoading] = useState(false);
  const [error, setError] = useState<string | null>(null);

  /**
   * @description Handles form submission
   * Sends input to the API route and updates UI state based on response.
   */
  const handleSubmit = async (e: React.FormEvent) => {
    e.preventDefault();
    setLoading(true);
    setError(null);
    setResult(null);

    try {
      const response = await fetch("/api/hello-world", {
        method: "POST",
        headers: {
          "Content-Type": "application/json",
        },
        body: JSON.stringify({ input }),
      });

      const data = await response.json();

      if (!response.ok) {
        throw new Error(data.error || "API request failed");
      }

      setResult(data.output);
    } catch (err) {
      setError(err instanceof Error ? err.message : "An unknown error occurred");
    } finally {
      setLoading(false);
    }
  };

  return (
    <div style={{ padding: "20px", fontFamily: "sans-serif" }}>
      <h1>AGI Scalability Demo</h1>
      <p>Enter text to process through the LangGraph workflow.</p>

      <form onSubmit={handleSubmit} style={{ marginBottom: "20px" }}>
        <input
          type="text"
          value={input}
          onChange={(e) => setInput(e.target.value)}
          placeholder="Hello World..."
          disabled={loading}
          style={{ padding: "8px", marginRight: "10px", width: "300px" }}
        />
        <button type="submit" disabled={loading} style={{ padding: "8px 16px" }}>
          {loading ? "Processing..." : "Run Agent"}
        </button>
      </form>

      {error && (
        <div style={{ color: "red", padding: "10px", background: "#ffebee" }}>
          Error: {error}
        </div>
      )}

      {result && (
        <div style={{ padding: "10px", background: "#e8f5e9", color: "green" }}>
          <strong>Output:</strong> {result}
        </div>
      )}
    </div>
  );
}

Line-by-Line Explanation

1. The Graph Definition (lib/graph.ts)

  • Imports: We import StateGraph and Annotation from @langchain/langgraph. These are the core building blocks for creating stateful workflows.
  • AgentState Interface: This TypeScript interface defines the shape of our data. It is the "Single Source of Truth" for the workflow. By defining this explicitly, we ensure type safety across all nodes.
  • StateAnnotation: This defines how the state is managed. In LangGraph, annotations allow us to specify reducers (functions that merge state updates). Here, we rely on default merging behavior, but in production, you might define custom reducers to handle arrays or complex objects.
  • validateInput Function:
    • Takes the current state as input.
    • Checks if the input string is empty.
    • Error Handling: If invalid, it throws an error. In LangGraph, this propagates up and can be caught by the graph's execution wrapper (as seen in the API route).
    • Return Value: It returns an object containing the updated state properties. If the input is valid, it essentially passes the state through unchanged.
  • processWithAI Function:
    • Async Simulation: Uses setTimeout wrapped in a Promise to simulate the latency of an LLM API call.
    • Transformation: Converts the input string to uppercase and prepends a label.
    • Return Value: Returns a new object merging the processedOutput into the state.
  • formatResponse Function:
    • Dependency Check: Verifies that processedOutput exists. This enforces workflow integrity (node B only runs if node A succeeded).
    • Formatting: Adds a timestamp to the processed string to demonstrate finalization logic.
  • createHelloWorldGraph Function:
    • Initialization: new StateGraph(StateAnnotation) creates the graph instance bound to our specific state shape.
    • Nodes: workflow.addNode registers the functions we defined earlier. The first argument is a unique string ID for the node.
    • Edges: workflow.addEdge defines the control flow.
      • __start__ is a special reserved key representing the entry point.
      • We chain validate_node -> process_node -> format_node linearly.
    • Compilation: workflow.compile() turns the definition into an executable object.
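The reducer idea behind StateAnnotation can be sketched without the library. This is a simplified model of what the framework does on each state update, not LangGraph's actual implementation: each channel declares how an incoming node update merges with the current value.

```typescript
type Reducer<T> = (current: T, update: T) => T;

interface DemoState {
    input: string;
    log: string[];
}

// One reducer per state channel: `input` overwrites (the default behavior),
// while `log` appends, like a custom reducer for accumulating messages.
const reducers: { [K in keyof DemoState]: Reducer<DemoState[K]> } = {
    input: (_current, update) => update,
    log: (current, update) => current.concat(update),
};

// Applies a node's partial update immutably, channel by channel.
function applyUpdate(state: DemoState, update: Partial<DemoState>): DemoState {
    const next = { ...state };
    if (update.input !== undefined) next.input = reducers.input(state.input, update.input);
    if (update.log !== undefined) next.log = reducers.log(state.log, update.log);
    return next;
}
```

Because every merge goes through a reducer, nodes never need to know how their siblings update the state; they just return partial objects.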

2. The API Route (app/api/hello-world/route.ts)

  • POST Function: Standard Next.js App Router handler. It receives a NextRequest object.
  • Body Parsing: await request.json() extracts the JSON payload. We destructure to get the input field.
  • Graph Instantiation: createHelloWorldGraph() is called to get a fresh, clean graph instance. This is crucial for state isolation between concurrent requests.
  • Execution (graph.stream):
    • We pass the initial state { input }.
    • LangGraph returns a stream (AsyncIterator). This allows us to monitor progress or handle long-running tasks without blocking the response until the very end.
  • Stream Iteration:
    • for await (const step of stream) loops through the execution steps.
    • LangGraph yields an object where the key is the node name and the value is the state update at that step.
    • We accumulate these updates into finalState.
  • Response Construction:
    • We check if finalState.finalOutput exists. If so, we return a 200 OK with the JSON payload.
    • If the graph finishes but the final field is missing (logic error), we return a 500 error.
  • Error Handling (try/catch):
    • Wraps the entire execution.
    • If graph.stream or any node throws an error (e.g., validation failure), it is caught here.
    • We return a 400 status code, which is semantically correct for validation failures caused by bad user input; a production handler would distinguish these from internal errors (500).

3. The Client Component (app/page.tsx)

  • "use client": Directive telling Next.js this component must be rendered in the browser (Client Component) to use React hooks like useState.
  • State Hooks:
    • input: Tracks the text field value.
    • loading: Tracks the async request status to disable the button and show feedback.
    • result / error: Stores the API response.
  • handleSubmit:
    • Prevents default form submission (page reload).
    • Uses the browser's fetch API to hit our /api/hello-world endpoint.
    • Async/Await: Waits for the response, parses the JSON, and updates the UI state accordingly.
    • Error Boundaries: Catches network errors or API errors (non-200 responses) and updates the error state.
  • JSX: Standard HTML elements with inline styles for simplicity. It conditionally renders the result or error messages based on the state.

Common Pitfalls

When implementing scalable AI workflows in TypeScript/Next.js, specific issues often arise that break the "preparing for AGI" principles of resilience and modularity.

  1. Vercel/AWS Lambda Timeouts (The "Long-Running Task" Trap)

    • Issue: Serverless functions (like Vercel Edge or standard Lambdas) have strict timeouts (e.g., 10s on Hobby plans, 60s on Pro). LLM inference is slow.
    • Why it happens: The code above awaits the entire graph.stream() before responding, so the HTTP request stays open until the graph finishes. If the AI step takes 15 seconds, the serverless function times out, returning a 504 error.
    • Solution: For production AGI apps, you must decouple execution from the request/response cycle.
      • Step 1: API receives request -> pushes job to a queue (e.g., Redis, AWS SQS) -> immediately returns 202 Accepted with a job ID.
      • Step 2: A separate worker process (e.g., a long-running container or serverless function with a longer timeout) picks up the job and runs the graph.
      • Step 3: Client polls an endpoint or uses WebSockets to check job status.
  2. Hallucinated JSON / Schema Drift

    • Issue: When LLMs are used inside graph nodes (not just mocked like in our example), they often return unstructured text instead of valid JSON, or JSON that doesn't match the AgentState interface.
    • Why it happens: LLMs are probabilistic. If a node is supposed to return { processedOutput: "..." }, the LLM might return just "Processed: ...".
    • Solution:
      • Strict Zod Schemas: Use libraries like zod to validate the output of every node before passing it to the next node.
      • Output Parsers: Use LangChain's OutputFixingParser or StructuredOutputParser to force the LLM to adhere to a schema.
      • Defensive Coding: In the processWithAI function, never assume the data exists. Always check if (!state.processedOutput) throw new Error(...).
  3. Async/Await Loops and Memory Leaks

    • Issue: In complex graphs, circular dependencies or unhandled promises in loops can cause memory leaks or unbounded concurrency.
    • Why it happens: If you use Promise.all() inside a node to process a list of items, and that list is infinite or very large, you will exhaust server memory.
    • Solution:
      • Batching: Process large arrays in chunks.
      • Stream Processing: Use LangGraph's streaming capabilities or Node.js streams to process data incrementally rather than loading everything into memory at once.
      • Explicit Cleanup: Ensure any external connections (database clients, WebSocket connections) are closed in finally blocks or graph end hooks.
  4. State Mutation Side Effects

    • Issue: Modifying the state object directly (e.g., state.input = "new value") instead of returning a new object.
    • Why it happens: JavaScript objects are passed by reference. If you mutate the state in one node, it can unpredictably affect other nodes running in parallel or subsequent steps, leading to race conditions.
    • Solution: Always treat the state as immutable. Return a new object or a spread of the existing state with updated properties: return { ...state, processedOutput: "new value" }. LangGraph handles the merging, but adhering to immutability prevents bugs.
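The batching fix from pitfall 3 can be sketched as a small helper (processInBatches is a hypothetical name): it caps how many promises exist at once, so memory stays bounded no matter how long the input list is.

```typescript
// Processes `items` in chunks of `batchSize`, awaiting each chunk before
// starting the next, so at most `batchSize` promises are in flight at once.
async function processInBatches<T, R>(
    items: T[],
    batchSize: number,
    worker: (item: T) => Promise<R>
): Promise<R[]> {
    const results: R[] = [];
    for (let i = 0; i < items.length; i += batchSize) {
        const batch = items.slice(i, i + batchSize);
        results.push(...(await Promise.all(batch.map(worker))));
    }
    return results;
}
```

Inside a graph node, this replaces a bare Promise.all over an unbounded array while preserving result order.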

The chapter continues with advanced code, exercises, and solutions with analysis; you can find them in the ebook on Leanpub.com or Amazon.





Code License: All code examples are released under the MIT License. Github repo.

Content Copyright: Copyright © 2026 Edgar Milvus | Privacy & Cookie Policy. All rights reserved.

All textual explanations, original diagrams, and illustrations are the intellectual property of the author. To support the maintenance of this site via AdSense, please read this content exclusively online. Copying, redistribution, or reproduction is strictly prohibited.