
Chapter 19: Analyzing Bundle Size

Theoretical Foundations

In the previous chapter, we explored the Consensus Mechanism, a pattern where multiple specialized worker agents tackle a problem, and a supervisor synthesizes their outputs. This pattern, while powerful, introduces complexity and latency. Similarly, in modern web development, frameworks like tRPC, GraphQL, and even REST over HTTP/2 provide powerful abstractions that simplify data fetching and state management. However, every layer of abstraction carries a hidden cost: the bundle size.

Bundle size is the total amount of JavaScript code that must be downloaded, parsed, and executed by the client's browser before the application becomes interactive. When we analyze bundle size, we are essentially performing a forensic audit of our application's dependency graph, identifying heavy libraries, and understanding how our architectural choices—specifically the choice between tRPC, REST, and GraphQL—impact the final payload delivered to the user.

The Web Development Analogy: The Traveling Salesperson's Luggage

Imagine a salesperson (the Client Browser) traveling to a city to conduct business (render the UI and handle user interactions). The salesperson can choose different modes of transportation (API protocols) and packing strategies (bundling).

  1. REST (The Fixed Itinerary): The salesperson packs a specific suitcase for each city they visit. If they need to visit New York, they pack a "New York" suitcase. If they visit London, they pack a "London" suitcase. This is predictable but inefficient. If the itinerary changes last minute, they might have to repack or carry unnecessary items. The suitcase (bundle) contains everything needed for that specific endpoint.

  2. GraphQL (The Custom Tailor): The salesperson visits a tailor (the GraphQL Schema) who creates a custom outfit (the Query) for the specific business meeting. The tailor only adds the exact fabric and buttons needed. This is efficient in terms of weight (payload size) but requires a visit to the tailor (network request) and knowledge of the exact requirements (schema definition).

  3. tRPC (The Teleporter with a Personal Assistant): The salesperson has a personal assistant (the tRPC Client) who knows exactly what tools are in the home office (the Backend). When the salesperson needs a tool, they don't pack it beforehand. Instead, they press a button, and the assistant instantly teleports the exact tool to them (End-to-End Type Safety). The "luggage" is minimal—just the teleportation device (the tRPC client wrapper). However, if the assistant needs to run a complex calculation (Server-Side Rendering) before teleporting the result, or if the teleporter is cold and needs to "warm up" (Edge Function Cold Start), there is a delay.

The Problem: While tRPC minimizes the static luggage (client-side code), it shifts the burden. The "teleporter" (the server) must be ready. In the context of Server-Side Rendering (SSR) and Edge Functions, the "teleporter" itself is code that must be bundled, deployed, and initialized.

The Mechanics of Bundle Analysis

To understand the impact of these choices, we visualize the Dependency Graph. This graph maps every module in our application and its imports. Heavy nodes (large libraries) and dense clusters (deeply nested dependencies) increase bundle size.
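The core idea, summing the weight of everything reachable from an entry point, can be sketched in a few lines of TypeScript. The module names and byte sizes below are invented for illustration; real numbers come from your bundler's build metadata.

```typescript
// A toy dependency graph: each module has its own byte size plus the
// modules it imports. Names and sizes here are made up for illustration.
type Module = { size: number; imports: string[] };

const graph: Record<string, Module> = {
  app: { size: 5_000, imports: ["@trpc/client", "react"] },
  "@trpc/client": { size: 15_000, imports: ["react"] },
  react: { size: 130_000, imports: [] },
};

// Bundle cost = sum of every module reachable from the entry point,
// counting each module only once even if several parents import it.
function bundleSize(entry: string, seen = new Set<string>()): number {
  if (seen.has(entry) || !(entry in graph)) return 0;
  seen.add(entry);
  const mod = graph[entry];
  return mod.imports.reduce((sum, dep) => sum + bundleSize(dep, seen), mod.size);
}

// bundleSize("app") → 150000: react is counted once, not twice.
```

Note the `seen` set: bundlers deduplicate shared dependencies the same way, which is why a heavy library hurts once, not once per importer.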

Visualizing the Dependency Graph

We can visualize the difference in dependency density between a traditional REST/GraphQL client and a tRPC client.

A tRPC client's dependency graph appears lightweight and sparse, while a traditional REST/GraphQL client forms a heavy, densely clustered node structure due to its reliance on large external libraries.

Analysis of the Graph: In the REST/GraphQL section (red), we see a proliferation of dependencies: a generic HTTP client (Axios), a GraphQL-specific client (Apollo), and a code generator. Each adds to the bundle size, even if tree-shaking is applied. The types are often generated separately, requiring build-time steps.

In the tRPC section (green), the client bundle is leaner. It relies on the existing TanStack Query (formerly React Query) infrastructure, wrapping it with @trpc/client. The dependency graph is narrower. However, the "Server Bundle" becomes critical. Since tRPC relies on end-to-end type safety, the server code (Procedures) must be accessible during the build process. In an SSR context, this server code is bundled into the Node.js server. In an Edge context, it is bundled into the Edge Function.
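To make the lean client concrete, here is a minimal sketch of the wiring described above. It assumes a conventional tRPC + TanStack Query setup; the `AppRouter` import path and the `/api/trpc` URL are placeholders, not part of this chapter's example.

```typescript
// Sketch: roughly the entire client-side surface of a typical tRPC setup.
// Beyond React and TanStack Query, only the thin @trpc packages ship to the browser.
import { createTRPCReact } from "@trpc/react-query";
import { httpBatchLink } from "@trpc/client";
import type { AppRouter } from "./server/router"; // type-only: erased at build time

export const api = createTRPCReact<AppRouter>();

export const trpcClient = api.createClient({
  links: [
    // One small link handles batching and HTTP: no Axios, no Apollo, no codegen step.
    httpBatchLink({ url: "/api/trpc" }),
  ],
});
```

The `import type` line is the crux: the server router contributes types to the client build, but none of its runtime code.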

Server-Side Rendering (SSR) and the "Hydration" Tax

In Next.js, Server Components (SC) render exclusively on the server, sending zero JavaScript to the client. This is the ultimate optimization—no bundle size impact. However, interactive elements must be Client Components.

When using tRPC with SSR, the data fetching happens on the server. The component renders HTML. But to make the page interactive, the browser must download the JavaScript for the Client Components and "hydrate" the HTML.

The Analogy: Imagine building a house (the UI).

  1. Server Component: The concrete foundation and brick walls. They are heavy and static. You don't carry them in your backpack; they stay on the land (the server).
  2. Client Component: The electrical wiring and light switches. You must carry the wiring (JavaScript) to the site to make the lights work.
  3. tRPC with SSR: You calculate the exact layout of the wiring on the server (using server data) and pour the concrete with channels for the wires. You only ship the wires (Client JS) to the site.

The "Why" of Bundle Size in SSR: If we fetch data using a standard REST fetch inside a Server Component, the data is embedded in the HTML. No JS is needed for that data. However, if we use tRPC inside a Server Component, we are still using the tRPC client utilities. While the data fetching logic runs on the server, the types and wrapper functions are part of the build. If we are not careful, we might accidentally import client-side hooks (like useQuery) into a Server Component, causing build errors or unnecessary code inclusion.
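The contrast can be sketched directly in a Server Component. The endpoint URL below is a hypothetical placeholder; the point is which imports each approach drags into the client build.

```tsx
// app/stats/page.tsx — a Server Component: no "use client", async is allowed.

// Option A: plain fetch. The data is rendered into HTML on the server;
// no fetching library ever reaches the client bundle.
export default async function StatsPage() {
  const res = await fetch("https://example.com/api/stats"); // placeholder URL
  const stats = await res.json();
  return <p>Visitors today: {stats.visitors}</p>;
}

// Option B (what NOT to do): importing client hooks here breaks the build.
// import { api } from "~/utils/api";   // pulls in @trpc/react-query
// const q = api.stats.get.useQuery();  // hooks cannot run in a Server Component
```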

Edge Functions and Cold Starts

Edge Functions are lightweight, serverless functions executed close to the user. They are ideal for dynamic APIs. However, they have a constraint: bundle size directly impacts cold start time.

The Analogy: A food truck (Edge Function) vs. a restaurant (Traditional Server).

  • Restaurant (Traditional SSR): Always open, large kitchen (high memory), slow to start up if shut down, but ready to cook immediately.
  • Food Truck (Edge): Small kitchen (low memory), must drive to the location (network latency), and the engine must start (cold start).

If the food truck is loaded with heavy equipment (large libraries), the engine struggles to start, and the chef takes longer to prepare the first meal.

tRPC on the Edge: tRPC is lightweight, making it excellent for Edge Functions. However, the "Router Procedures" (the logic) are bundled into the function. If a procedure imports a heavy library (e.g., a PDF generation library or a heavy ML model), that library is included in the Edge bundle, increasing the cold start time.

REST/GraphQL on the Edge: A generic REST handler is often smaller than a full tRPC router because it doesn't carry the type-inference overhead or the complex routing logic. However, the lack of type safety increases the risk of runtime errors, which are harder to debug on the Edge.
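On Vercel and similar platforms, a tRPC edge endpoint is typically just the fetch adapter wrapped around the router. A minimal sketch, assuming the standard `fetchRequestHandler` adapter; the `appRouter` import path and empty context are hypothetical placeholders:

```typescript
// app/api/trpc/[trpc]/route.ts — runs in the Edge runtime.
import { fetchRequestHandler } from "@trpc/server/adapters/fetch";
import { appRouter } from "~/server/api/root"; // hypothetical path to your router

export const runtime = "edge"; // opt this route into the Edge runtime

// Every transitive import of appRouter's procedures is bundled into this function.
// One heavy procedure dependency (a PDF library, an ML model) slows EVERY cold start.
const handler = (req: Request) =>
  fetchRequestHandler({
    endpoint: "/api/trpc",
    req,
    router: appRouter,
    createContext: () => ({}),
  });

export { handler as GET, handler as POST };
```

This is why the chapter stresses auditing what procedures import: the handler itself is tiny, but it carries the whole router's dependency graph with it.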

Code-Splitting Strategies and LLM-Driven Tree-Shaking

To mitigate bundle size, we employ Code Splitting. This is the process of dividing the bundle into smaller chunks that can be loaded on demand.

Analogy: A cookbook. Instead of carrying the entire encyclopedia of recipes (monolithic bundle), you carry a single recipe card (chunk) for the dish you are cooking right now. If you need to cook a dessert later, you fetch the dessert recipe card (lazy loading).

In TypeScript/Next.js, this is often done dynamically:

// Client Component (Lazy Loading a heavy component)
"use client";

import dynamic from 'next/dynamic';

// The HeavyComponent might contain a large library (e.g., a charting library)
const HeavyChart = dynamic(() => import('../components/HeavyChart'), {
  loading: () => <p>Loading chart...</p>,
  ssr: false, // Disable SSR for this chunk; in the App Router this option only works inside a Client Component
});

export default function Dashboard() {
  return (
    <div>
      <h1>Analytics</h1>
      {/* The HeavyChart JS is only downloaded when this component renders */}
      <HeavyChart data={largeDataSet} />
    </div>
  );
}

LLM-Driven Tree-Shaking

Traditional tree-shaking (using tools like Webpack or Rollup) removes unused exports from libraries. However, it struggles with dynamic imports or side-effect-heavy libraries.
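The classic manual fix illustrates what tree-shaking can and cannot do. The sizes in the comments are rough, version-dependent figures, not measurements from this chapter's example.

```typescript
// Three ways to obtain debounce, with very different bundle outcomes.

// 1. Namespace import of the CommonJS build: bundlers usually cannot
//    tree-shake it, so all of lodash ships (~70 kB minified, rough figure).
import _ from "lodash";

// 2. Per-method path import: only debounce and its internal helpers ship.
import debounce from "lodash/debounce";

// 3. ESM build (lodash-es): named imports tree-shake cleanly in modern bundlers.
import { debounce as esDebounce } from "lodash-es";

const log = (q: string) => console.log(q);
_.debounce(log, 300);
debounce(log, 300);
esDebounce(log, 300);
```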

LLM-Driven Tree-Shaking is an advanced concept where an AI model analyzes the dependency graph and usage patterns to suggest or automatically refactor code for minimal footprint.

The Process:

  1. Analysis: The LLM ingests the import statements and the actual usage of symbols within the codebase.
  2. Pattern Recognition: It identifies patterns where a large library is imported for a small utility. (e.g., importing all of lodash for debounce).
  3. Transformation: It suggests replacing the heavy import with a lightweight alternative or extracting the utility function.

Example of LLM Suggestion:

Original Code (Heavy):

import { useState } from 'react';
import _ from 'lodash';

function useDebouncedSearch() {
  const [value, setValue] = useState('');
  // Importing all of lodash adds roughly 70 kB minified (~25 kB gzipped) to the
  // bundle, and this debounced function is also recreated on every render.
  const debouncedCallback = _.debounce((val: string) => console.log(val), 300);
  // ...
}

LLM-Refactored Code (Lightweight):

import { useMemo, useRef, useState } from 'react';

// Custom debounce hook (~20 lines of code) that replaces the lodash import entirely.
function useDebounce<T extends (...args: any[]) => void>(callback: T, delay: number) {
  const argsRef = useRef<Parameters<T>>();
  const timeoutRef = useRef<ReturnType<typeof setTimeout>>();

  return useMemo(() => {
    const fn = (...args: Parameters<T>) => {
      argsRef.current = args;
      if (timeoutRef.current) {
        clearTimeout(timeoutRef.current);
      }
      timeoutRef.current = setTimeout(() => {
        if (argsRef.current) {
          callback(...argsRef.current);
        }
      }, delay);
    };
    return fn;
  }, [callback, delay]);
}

function useDebouncedSearch() {
  const [value, setValue] = useState('');
  const debouncedCallback = useDebounce((val: string) => console.log(val), 300);
  // ...
}

Why this matters for tRPC: tRPC relies on @tanstack/react-query. While powerful, it adds to the bundle. By analyzing the bundle, we might find that we are using heavy features of TanStack Query that are unnecessary for simple fetches. An LLM could suggest using lighter alternatives or custom hooks for specific use cases, reducing the overall footprint of the tRPC integration.
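For the simplest reads, a "lighter alternative" can be very small indeed. The sketch below is a hypothetical stand-in for a caching layer, not TanStack Query's API: it only deduplicates requests for the same key, which is often the one feature a trivial fetch actually needs.

```typescript
// A deliberately tiny request cache: callers asking for the same key
// share one in-flight (or resolved) promise instead of refetching.
const cache = new Map<string, Promise<unknown>>();

function cachedFetch<T>(key: string, fetcher: () => Promise<T>): Promise<T> {
  const existing = cache.get(key);
  if (existing) return existing as Promise<T>;
  const promise = fetcher();
  cache.set(key, promise);
  return promise;
}
```

It has no retries, invalidation, or garbage collection, which is exactly the trade-off: fewer features, near-zero bundle cost.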

Key Takeaways

The analysis of bundle size is not merely about counting kilobytes; it is about understanding the trade-offs between developer experience (DX) and user experience (UX).

  • tRPC optimizes for DX and type safety, resulting in a lean client bundle but shifting complexity to the server build.
  • REST/GraphQL offer flexibility but often result in larger client bundles due to generic clients and code generation.
  • SSR separates static content (Server Components) from interactive content (Client Components), minimizing the JS sent to the browser.
  • Edge Functions require minimal bundles to ensure fast cold starts.
  • Code Splitting and LLM-Driven Analysis are essential tools to surgically remove dead code and optimize dependencies.

By mastering these concepts, we ensure that our "Intelligent APIs" are not only smart in logic but also efficient in delivery, providing the fastest possible experience to the end-user.

Basic Code Example

This example demonstrates a fundamental approach to analyzing the bundle size impact of a tRPC client within a Next.js application. We will create a minimal setup with a tRPC client, a data-fetching component, and a script to analyze the client-side bundle. The goal is to visualize how tRPC dependencies are included in the browser bundle, which is critical for optimizing performance in Server-Side Rendering (SSR) and Edge Function environments.

The Code

// File: src/app/_components/HelloWorld.tsx
// Purpose: A client component that consumes a tRPC query.
// This component will be hydrated on the client, bringing tRPC dependencies into the bundle.

"use client";

import { api } from "~/utils/api";

/**
 * @component HelloWorld
 * @description A simple component that fetches and displays a message from tRPC.
 * This demonstrates the client-side footprint of tRPC.
 */
export default function HelloWorld() {
  // 1. Trigger a tRPC query. This hook is responsible for client-side data fetching.
  // It contains logic for caching, retries, and state management.
  const helloQuery = api.hello.sayHello.useQuery({ name: "World" });

  // 2. Handle loading state
  if (helloQuery.isLoading) {
    return <div>Loading...</div>;
  }

  // 3. Handle error state
  if (helloQuery.isError) {
    return <div>Error: {helloQuery.error.message}</div>;
  }

  // 4. Render the data
  return (
    <main>
      <h1>tRPC Bundle Analysis Example</h1>
      <p>Message from server: {helloQuery.data?.greeting}</p>
    </main>
  );
}
// File: src/utils/api.ts
// Purpose: The tRPC client configuration.
// This file defines the router types and the API client instance.

import { createTRPCReact } from "@trpc/react-query";
import type { inferRouterInputs, inferRouterOutputs } from "@trpc/server";

// 1. Define the AppRouter type.
// In a real app, this type is imported from the server router definition,
// which is what gives tRPC its end-to-end type safety. The plain object
// type below only mocks the shape for this standalone example; a real
// router type (produced by initTRPC/router()) is required for
// createTRPCReact and the infer helpers to type-check strictly.
type AppRouter = {
  hello: {
    sayHello: {
      input: { name: string };
      output: { greeting: string };
    };
  };
};

// 2. Initialize the tRPC client for React.
// This creates a set of hooks (e.g., useQuery, useMutation) specific to our router.
export const api = createTRPCReact<AppRouter>();

// 3. Export types for usage in components (optional but good practice).
export type RouterInput = inferRouterInputs<AppRouter>;
export type RouterOutput = inferRouterOutputs<AppRouter>;
// File: src/app/page.tsx
// Purpose: The main page component (Server Component) that renders the client component.
// In Next.js App Router, Server Components do not send JavaScript to the client by default.

import HelloWorld from "./_components/HelloWorld";

/**
 * @page
 * @description The home page of the application.
 * This is a Server Component, so it executes only on the server.
 * It imports a Client Component (`HelloWorld`) which will be hydrated in the browser.
 */
export default function HomePage() {
  return (
    <div>
      <HelloWorld />
    </div>
  );
}
// File: analyze-bundle.ts
// Purpose: A Node.js script to generate a bundle analysis report.
// This script uses `esbuild` to bundle the client component and `esbuild-visualizer`
// to turn the resulting build metafile into an HTML treemap.

import * as esbuild from "esbuild";
import { visualizer } from "esbuild-visualizer";
import { writeFile } from "node:fs/promises";

/**
 * @function analyzeBundle
 * @description Builds the HelloWorld component and generates an HTML visualization of the bundle.
 * This mimics the build process of a Next.js app to isolate the client bundle size.
 */
async function analyzeBundle() {
  console.log("🚀 Starting bundle analysis...");

  // 1. Define the entry point (the client component).
  const entryPoint = "./src/app/_components/HelloWorld.tsx";

  // 2. Run a one-off production build.
  // We target 'esnext' to see the full dependency tree without polyfills,
  // and enable `metafile` so esbuild records the size of every input module.
  const result = await esbuild.build({
    entryPoints: [entryPoint],
    bundle: true,
    outdir: "./dist-analysis",
    format: "esm",
    target: "esnext",
    metafile: true,
    // In a real Next.js build, the framework defines this for us.
    define: {
      "process.env.NODE_ENV": '"production"',
    },
    // 3. Mark external dependencies to simulate a browser environment.
    // In a real build, React is served via framework-optimized chunks.
    // Remove "@tanstack/react-query" from this list to measure its full weight.
    external: ["react", "react-dom", "@tanstack/react-query"],
  });

  // 4. Turn the metafile into an HTML treemap report.
  // `visualizer` consumes the metafile directly; it is not an esbuild plugin.
  const html = await visualizer(result.metafile, {
    title: "tRPC Client Bundle Analysis",
    template: "treemap",
  });
  await writeFile("./dist-analysis/bundle-report.html", html);

  console.log("✅ Analysis complete. Open 'dist-analysis/bundle-report.html' in your browser.");
}

analyzeBundle().catch((err) => {
  console.error("❌ Bundle analysis failed:", err);
  process.exit(1);
});

Detailed Explanation

1. The Client Component (HelloWorld.tsx)

This component is the core of our bundle analysis. It is marked as a Client Component ("use client"), meaning it will be executed in the browser. Inside, it uses the api.hello.sayHello.useQuery hook provided by tRPC. This hook is not a simple function; it pulls in the entire @tanstack/react-query library (for state management) and the tRPC client runtime (for serialization, request handling, and error parsing). When this component is rendered, all these dependencies become part of the client-side JavaScript bundle.

2. The tRPC Client Configuration (api.ts)

This file sets up the tRPC client for React. The createTRPCReact function generates a set of hooks typed to our AppRouter. While this file itself is lightweight, it acts as the entry point for the tRPC runtime. The type definitions (AppRouter) are purely compile-time and do not affect the bundle size, but the runtime code imported from @trpc/react-query does.

3. The Server Component (page.tsx)

In Next.js App Router, Server Components are the default. They render exclusively on the server and send zero JavaScript to the client. This HomePage imports the HelloWorld component, which is a Client Component. The Server Component itself has no bundle size impact, but it acts as the container that triggers the hydration of the Client Component in the browser.

4. The Bundle Analysis Script (analyze-bundle.ts)

This script simulates the build process to analyze the bundle size. We use esbuild for its speed and the esbuild-visualizer package to turn esbuild's build metadata into a visual report.

  • Entry Point: We point to HelloWorld.tsx, the component that consumes tRPC.
  • Bundle Configuration: We set bundle: true to resolve all imports and create a single file. We mark react and react-dom as external because in a Next.js app, these are typically served via a CDN or framework-optimized chunks.
  • Visualization: esbuild-visualizer converts the build metafile into an HTML treemap showing the size of each module. This allows us to see that @trpc/react-query and @tanstack/react-query are significant contributors to the bundle size.

Common Pitfalls

  1. Accidental Client-Side Imports: In Next.js, importing a tRPC utility (like api) into a Server Component can inadvertently pull client-side dependencies into the server bundle or cause hydration errors. Always ensure tRPC hooks (useQuery) are strictly within Client Components.
  2. Edge Function Size Limits: When analyzing bundles for Edge Functions, be aware that large dependencies bundled into the function increase cold start times. Platforms also enforce hard size caps (on Vercel, roughly 1-4 MB after compression, depending on the plan); if the bundle exceeds the limit, deployment will fail.
  3. Async/Await Loops in Data Fetching: While not directly a bundle size issue, improper use of async/await in tRPC procedures can lead to blocking server operations. This is critical in SSR, where slow data fetching delays the Time to First Byte (TTFB).
  4. Hallucinated Schemas in LLM Outputs: When using LLMs to generate tRPC routers or client code, they may produce invalid schemas or incorrect type definitions. Always validate generated input schemas at runtime with a library like Zod, and let the TypeScript compiler check the generated types before integrating.
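Pitfall 3 is easy to see in isolation. The sketch below uses `fakeDb`, a made-up stand-in for a slow database or API call, rather than real tRPC procedures; the shape of the fix is the same inside a procedure body.

```typescript
// A stand-in for a slow database or API call.
function fakeDb(label: string, ms: number): Promise<string> {
  return new Promise((resolve) => setTimeout(() => resolve(label), ms));
}

// Anti-pattern: each await blocks the next, so total latency is the SUM of both calls.
async function sequential(): Promise<string[]> {
  const user = await fakeDb("user", 50);
  const posts = await fakeDb("posts", 50); // only starts after `user` resolves
  return [user, posts]; // ~100 ms total
}

// Fix: start both requests immediately; total latency is the MAX of the two.
async function parallel(): Promise<string[]> {
  return Promise.all([fakeDb("user", 50), fakeDb("posts", 50)]); // ~50 ms total
}
```

In an SSR context, that difference lands directly on Time to First Byte, since the HTML cannot be sent until the procedure resolves.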

Logical Breakdown

  1. Define the Client Entry Point: Identify the component that initiates the tRPC request (HelloWorld.tsx).
  2. Configure tRPC Client: Set up the createTRPCReact instance and type definitions (api.ts).
  3. Wrap in Server Component: Use a Server Component (page.tsx) to render the client component, adhering to Next.js App Router patterns.
  4. Simulate Build Process: Use esbuild to bundle the client entry point, excluding external dependencies like React.
  5. Visualize Dependencies: Apply the esbuild-visualizer plugin to generate a treemap report, identifying heavy libraries.
  6. Interpret Results: Analyze the report to see the exact size contribution of tRPC and React Query, informing optimization strategies like code-splitting or edge deployment adjustments.

Visualization: Dependency Graph

The following diagram illustrates the flow of dependencies from the browser's perspective. The HelloWorld component is the entry point that pulls in the tRPC client and React Query.

The diagram visually maps the dependency chain, showing the HelloWorld component as the central entry point that imports and relies on both the tRPC client and React Query to function.

The chapter continues with advanced code, exercises, and solutions with analysis; you can find them in the ebook on Leanpub.com or Amazon.





Code License: All code examples are released under the MIT License. Github repo.

Content Copyright: Copyright © 2026 Edgar Milvus | Privacy & Cookie Policy. All rights reserved.

All textual explanations, original diagrams, and illustrations are the intellectual property of the author. To support the maintenance of this site via AdSense, please read this content exclusively online. Copying, redistribution, or reproduction is strictly prohibited.