Chapter 8: Function Calling (Tools) with TypeScript
Theoretical Foundations
In the previous chapter, we explored how to structure the LLM's raw text output into a predictable format using JSON Schema and Zod. This was our first step in taming the non-deterministic nature of LLMs, transforming a stream of text into a structured object we could programmatically rely on. However, that approach was fundamentally passive: we asked the model to describe something, and we parsed the description.
Function Calling (or Tools) represents the next evolutionary leap: moving from passive description to active execution. Instead of merely describing a data structure, the LLM is now empowered to request the execution of a specific function that performs an action in the real world. This transforms the LLM from a conversational text generator into a reasoning engine that can interact with external systems, databases, APIs, or internal logic.
The Web Development Analogy: The API Gateway and Microservices
To understand this shift, let's draw a parallel to a modern microservices architecture. Imagine you are building a large e-commerce platform.
- The Old Way (Pure Text Generation): You have a frontend client (the user). When the user asks, "What's the status of my order?" the frontend sends a request to a monolithic backend. The backend processes the request, queries the database, and returns a block of HTML text describing the order status. The frontend has no structured data; it just has a string of text to display.

- The New Way (Function Calling / Tools): Now, imagine the backend exposes a set of strictly typed API endpoints (e.g., GET /orders/{id}, POST /cart/add). The frontend doesn't just ask for text; it asks the backend to call a specific endpoint with specific parameters.

In our LLM context:
* The LLM is the "Backend Orchestrator." It has access to a "registry" of available API endpoints (our Tools).
* The User is the "Frontend Client" making a request.
* The Tools are the microservices (or API endpoints) that perform specific tasks: getUserProfile, calculateShippingCost, executeDatabaseQuery.
* The Function Call is the LLM deciding which API endpoint to hit and what parameters to send.
When the user asks, "What's the total cost to ship a 5kg package from New York to London?", the LLM doesn't just generate a guess. It recognizes that it needs to:
1. Identify the relevant tool: calculateShippingCost.
2. Extract the required parameters: weight: 5, origin: "New York", destination: "London".
3. Format this as a structured request (a JSON object adhering to a schema).
4. Hand this request off to the system, which executes the actual logic.
This is the essence of Function Calling: Structured Request Generation for External Logic Execution.
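The four steps above can be sketched in TypeScript. The request shape and the stubbed calculateShippingCost implementation are illustrative, not any specific provider's API:

```typescript
// Hypothetical shape of the structured request the LLM emits.
interface FunctionCallRequest {
  tool: string;
  arguments: Record<string, unknown>;
}

// Steps 1–3: tool identified, parameters extracted and structured as JSON.
const shippingCall: FunctionCallRequest = {
  tool: "calculateShippingCost",
  arguments: { weight: 5, origin: "New York", destination: "London" },
};

// Step 4: the host system (not the LLM) runs the actual logic.
// Stubbed pricing: a flat illustrative rate per kg; the route is ignored here.
function calculateShippingCost(weight: number, origin: string, destination: string): number {
  return weight * 4.5;
}

const { weight, origin, destination } = shippingCall.arguments as {
  weight: number;
  origin: string;
  destination: string;
};
console.log(calculateShippingCost(weight, origin, destination)); // 22.5
```

The key point is the division of labor: the LLM only produces the structured request object; the deterministic function body belongs entirely to your system.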
The Role of Zod: The Contract Enforcer
In the previous chapter, we used Zod to parse the LLM's output. Here, we use Zod to define the contract that the LLM must adhere to before it even generates the output. This is a subtle but critical distinction.
Zod schemas serve as the "Interface Definition Language" (IDL) for our tools. Just as TypeScript interfaces define the shape of objects for compile-time safety, Zod schemas define the shape of the function arguments for runtime safety and LLM comprehension.
Why is this necessary? LLMs do not natively understand TypeScript types. They understand natural language and, to some extent, JSON Schema. Zod acts as a translator. We define our tool's parameters using Zod, and behind the scenes, we convert that Zod schema into a JSON Schema that the LLM can ingest.
Analogy: The Restaurant Menu

Think of a restaurant menu. The menu doesn't show the kitchen's internal code; it shows a structured list of dishes (functions) with a description of what they contain (parameters).
* Zod Schema: The recipe card in the kitchen. It strictly defines the ingredients (parameters) needed, their types (e.g., "2 eggs, not 3"), and their format.
* JSON Schema (for LLM): The printed menu given to the customer. It's a translated, user-friendly version of the recipe card that tells the customer exactly what they can order and what information they need to provide.
* LLM: The waiter. The waiter reads the menu (JSON Schema) and takes the customer's order (User Query), ensuring all required fields are filled out correctly before sending it to the kitchen (the Tool Execution Engine).
Without Zod, we would be relying on loose text descriptions, leading to the "Hallucination of Parameters"—where the LLM might invent a parameter that doesn't exist or provide a string where a number is required.
The Mechanics: Type Inference and Generics as the Glue
This is where TypeScript's advanced type system becomes the backbone of a robust agent architecture. We are not just passing data around; we are maintaining a chain of type safety from the tool definition all the way to the execution handler.
1. Type Inference: The Silent Assistant
When we define a tool using a Zod schema, we want TypeScript to automatically know what the input arguments look like without us manually duplicating interfaces.
Example Scenario:
We define a fetchWeather tool. The input is { city: string }.
If we were to write the execution handler manually, we would have to define the argument type twice: once for the Zod schema and once for the function signature.
// The "Bad" Way: Manual Duplication
const fetchWeatherSchema = z.object({
  city: z.string(),
});

// We have to manually type this again!
function fetchWeather(args: { city: string }) {
  // ...
}
Type Inference allows us to do this:
// The "Good" Way: Inferred Types
const fetchWeatherSchema = z.object({
  city: z.string(),
});

// TypeScript infers the argument type from the schema automatically
function fetchWeather(args: z.infer<typeof fetchWeatherSchema>) {
  // ...
}
Under the Hood:
When z.infer<typeof fetchWeatherSchema> is evaluated, the TypeScript compiler looks at the structure of the Zod object and generates a mapped type on the fly: { city: string }. This ensures that if you change the schema (e.g., add country: z.string()), the function signature updates automatically. This prevents the "drift" between the schema definition and the implementation, a common source of bugs in dynamic systems.
2. Generics: The Polymorphic Tool Runner
In a complex agent, we have multiple tools. We don't want to write a separate execution logic for every single tool. We want a generic "Tool Runner" that can handle any tool, provided it adheres to a specific contract.
This is where Generics (<T>) shine. We define a generic interface for a Tool, where T represents the specific Zod schema for that tool.
Analogy: The Universal Remote
Imagine a universal remote control. It doesn't care if it's controlling a TV, a soundbar, or a smart light. It has a generic "Send Command" button. The specific command (e.g., "Volume Up") is passed as a parameter.
* Generic Type <T>: The type of device being controlled.
* The Function: The "Send Command" button.
In TypeScript, we can define a generic Tool interface:
import { z } from 'zod';

// T is a generic type parameter representing the Zod schema
interface Tool<T extends z.ZodTypeAny> {
  name: string;
  description: string;
  schema: T;
  // The execute function takes an object matching the schema's input
  execute: (input: z.infer<T>) => Promise<any>;
}
Now, we can create specific tools:
// Specific Tool 1
const weatherTool: Tool<typeof fetchWeatherSchema> = {
  name: 'fetch_weather',
  description: 'Fetches current weather for a city',
  schema: fetchWeatherSchema,
  execute: async (input) => {
    // input is automatically typed as { city: string }
    return `Weather in ${input.city}: Sunny`;
  },
};

// Specific Tool 2
const dbToolSchema = z.object({ query: z.string() });

const dbTool: Tool<typeof dbToolSchema> = {
  name: 'query_database',
  description: 'Queries the database',
  schema: dbToolSchema,
  execute: async (input) => {
    // input is automatically typed as { query: string }
    return `Result for ${input.query}`;
  },
};
The Power of Generics in Multi-Turn Conversations:
When an LLM requests a tool call, it outputs a structured object (e.g., { "tool": "fetch_weather", "input": { "city": "Paris" } }). Our system needs to route this to the correct handler.
Without generics, we would have to use messy type assertions (as any). With generics, we can create a type-safe registry:
type ToolRegistry = {
  fetch_weather: Tool<typeof fetchWeatherSchema>;
  query_database: Tool<typeof dbToolSchema>;
};

// The concrete registry instance the executor looks tools up in
const registry: ToolRegistry = {
  fetch_weather: weatherTool,
  query_database: dbTool,
};

// A function that accepts a tool name and executes the correct tool
async function executeTool<K extends keyof ToolRegistry>(
  toolName: K,
  input: Parameters<ToolRegistry[K]['execute']>[0]
) {
  const tool = registry[toolName];
  // TypeScript knows exactly what 'input' should be based on the toolName
  return tool.execute(input);
}
Visualizing the Flow: From Query to Execution
The following diagram illustrates the lifecycle of a function call within the LangChain.js ecosystem (or a custom implementation). It highlights the transition from natural language to structured data, and finally to execution.
The "Why": Reliability and Composability
The shift to function calling is not just a technical gimmick; it solves fundamental problems in building intelligent applications:
- Reliability (The "Ground Truth"): Pure text generation is prone to hallucination. If you ask an LLM for the current stock price of Apple, it might guess based on its training data (which is outdated). By giving it a getStockPrice tool, you force it to access a real-time API. The tool provides the "ground truth," eliminating hallucinations for factual queries.

- Composability (The "Lego Block" Analogy): A single tool is useful, but the real power lies in chaining them. An agent can use a searchWeb tool to find information, then pass that information to a summarizeText tool, and finally pass the summary to a sendEmail tool. This is analogous to Unix pipes (|) or functional composition. Each tool is a pure function (input -> output), and the agent orchestrates the pipeline.

- Safety and Sandboxing: By forcing the LLM to call tools rather than generating arbitrary code, we create a sandboxed environment. The LLM cannot execute rm -rf / or access unauthorized system resources unless we explicitly create a tool for it. We control the execution surface area.
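The composability point can be sketched with stubbed tools. Here searchWeb, summarizeText, and sendEmail are hypothetical stand-ins so the composition itself is runnable:

```typescript
// Stubbed stand-ins for real tools, so the composition itself is runnable.
const searchWeb = async (query: string) => `results for "${query}"`;
const summarizeText = async (text: string) => `summary of ${text}`;
const sendEmail = async (to: string, body: string) => `sent "${body}" to ${to}`;

// The agent's job is orchestration: each tool's output feeds the next,
// exactly like a Unix pipe.
async function pipeline(query: string, recipient: string): Promise<string> {
  const found = await searchWeb(query);
  const summary = await summarizeText(found);
  return sendEmail(recipient, summary);
}

pipeline("LLM tool use", "team@example.com").then(console.log);
```

In a real agent the LLM decides the ordering at runtime; the hard-coded sequence here just shows that each step is a pure input-to-output function.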
Summary
In summary, Function Calling with TypeScript and Zod represents a paradigm shift:
* From unstructured text generation to structured, schema-validated request generation.
* From static parsing of outputs to dynamic execution of external logic.
* From manual type duplication to inferred and generic type safety.
It bridges the gap between the probabilistic world of LLMs and the deterministic world of software engineering, allowing us to build agents that are not just smart, but reliable and scalable.
Basic Code Example
This example demonstrates a minimal, self-contained TypeScript application that simulates a SaaS backend where an LLM (Large Language Model) can invoke a tool to fetch weather data. We will use the zod library for runtime validation of the tool's arguments and define a simple function handler to execute the tool. This mimics a real-world scenario where an AI assistant needs to query external data sources to answer user questions.
The flow is straightforward:
1. Define the Tool: We create a strongly-typed schema for the get_weather tool using Zod.
2. Simulate the LLM: We mock the LLM's response, which includes a structured tool call request.
3. Validate & Execute: We validate the incoming arguments against our schema and execute the logic.
4. Return Result: The result is formatted and sent back.
The Code
// main.ts

// ------------------------------------------------------------
// 1. IMPORTS
// ------------------------------------------------------------
import { z } from 'zod';

// ------------------------------------------------------------
// 2. TOOL DEFINITION (SCHEMA)
// ------------------------------------------------------------

/**
 * Defines the structure of the arguments required for the weather tool.
 * We use Zod for runtime validation, ensuring the data is safe before processing.
 *
 * Schema: { city: string }
 */
const WeatherToolSchema = z.object({
  city: z.string().min(1, "City name cannot be empty"),
});

// Infer the TypeScript type from the Zod schema for type safety
type WeatherToolInput = z.infer<typeof WeatherToolSchema>;

// ------------------------------------------------------------
// 3. TOOL EXECUTION HANDLER
// ------------------------------------------------------------

/**
 * Simulates fetching weather data from an external API.
 * In a real app, this would call a service like OpenWeatherMap.
 *
 * @param input - The validated arguments from the LLM
 * @returns - A string containing the weather report
 */
async function getWeather(input: WeatherToolInput): Promise<string> {
  // Simulate an async network call
  await new Promise(resolve => setTimeout(resolve, 100));

  // Mock logic: Random weather for demonstration
  const conditions = ['Sunny', 'Cloudy', 'Rainy', 'Stormy'];
  const temp = Math.floor(Math.random() * 30) + 5; // 5 to 35 degrees
  const condition = conditions[Math.floor(Math.random() * conditions.length)];

  return `Current weather in ${input.city}: ${condition}, ${temp}°C.`;
}

// ------------------------------------------------------------
// 4. MAIN ORCHESTRATION LOGIC
// ------------------------------------------------------------

/**
 * Simulates the main SaaS backend loop.
 * 1. Receives a user query.
 * 2. Simulates LLM deciding to call a tool.
 * 3. Validates and executes the tool.
 * 4. Returns the final response.
 */
async function main() {
  console.log('--- SaaS Weather Agent Starting ---\n');

  // Step A: Simulate an incoming request from a user
  const userQuery = "What's the weather in Tokyo?";
  console.log(`User Query: "${userQuery}"`);

  // Step B: Simulate the LLM response (Tool Call)
  // In a real scenario, this comes from the OpenAI API response object
  // We are mocking the structure: { tool_calls: [{ name, arguments }] }
  const llmResponse = {
    tool_calls: [
      {
        name: "get_weather",
        // The LLM returns arguments as a JSON string
        arguments: JSON.stringify({ city: "Tokyo" })
      }
    ]
  };

  console.log('\nLLM decided to call a tool...\n');

  // Step C: Process Tool Calls
  if (llmResponse.tool_calls && llmResponse.tool_calls.length > 0) {
    for (const toolCall of llmResponse.tool_calls) {
      const { name, arguments: argsString } = toolCall;
      console.log(`-> Executing Tool: "${name}"`);
      console.log(`-> Raw Arguments: ${argsString}`);

      try {
        // 1. Parse JSON (Runtime check)
        const rawArgs = JSON.parse(argsString);

        // 2. Validate against Zod Schema (Runtime Validation)
        // If validation fails, it throws a ZodError
        const validatedArgs = WeatherToolSchema.parse(rawArgs);

        // 3. Execute the specific tool handler
        let result: string;
        if (name === "get_weather") {
          result = await getWeather(validatedArgs);
        } else {
          throw new Error(`Unknown tool: ${name}`);
        }

        // Step D: Return Tool Output
        console.log(`-> Tool Output: ${result}`);
        console.log('\n--- Agent Finished ---');
      } catch (error) {
        if (error instanceof z.ZodError) {
          console.error(`Validation Error for tool "${name}":`, error.errors);
        } else if (error instanceof SyntaxError) {
          console.error('JSON Parsing Error: LLM returned invalid JSON.');
        } else {
          console.error('Unexpected Error:', error);
        }
      }
    }
  }
}

// Execute the main function
main();
Detailed Line-by-Line Explanation
1. Imports and Schema Definition
- import { z } from 'zod';: Imports the Zod library. Zod is essential for Runtime Validation. While TypeScript checks types at compile time, those checks disappear in the JavaScript that runs on the server. Zod ensures that data coming from the LLM (which is untrusted external input) conforms to our expected structure during execution.
- const WeatherToolSchema = z.object({...}): Defines the schema for the arguments our tool accepts.
- city: z.string().min(1): Ensures the city property exists, is a string, and is not an empty string. If the LLM hallucinates an empty string or a number, Zod will catch this immediately.
- type WeatherToolInput = z.infer<typeof WeatherToolSchema>: This is a powerful TypeScript feature. We don't need to manually write the interface for the arguments; Zod infers it automatically. This keeps the runtime validation and compile-time types perfectly in sync.
2. The Tool Handler
- async function getWeather(...): This is the actual business logic. It accepts the validated arguments (typed as WeatherToolInput).
- await new Promise(...): Simulates network latency. Real-world tools usually involve I/O operations (database queries, API calls), which are asynchronous.
- return ...: The tool returns a simple string. In a more complex agent, this might return a JSON object or a complex data structure to be used by the LLM in the next step.
3. The Main Orchestration Loop
- const llmResponse = {...}: In a real application, this object comes from the OpenAI Chat Completions API. We are mocking the tool_calls array to demonstrate the flow.
  - Critical Detail: The arguments field in an LLM response is always a JSON string, not a JavaScript object. This is a common source of bugs.
- JSON.parse(argsString): We must parse the string into a JavaScript object before passing it to Zod. This is where Runtime Validation starts.
- WeatherToolSchema.parse(rawArgs): This is the core safety check.
  - If rawArgs matches the schema, it returns the typed object.
  - If it fails (e.g., missing city), it throws a ZodError. We catch this specifically to provide helpful error messages to the user or developer.
- if (name === "get_weather"): Since an LLM can call multiple different tools, we need a router to decide which function to execute based on the tool name.
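An if/else router works for a single tool but grows unwieldy as tools accumulate. A sketch of the lookup-table alternative (the handler body here is a stub, not the real getWeather):

```typescript
type ToolHandler = (args: unknown) => Promise<string>;

// Registry pattern: adding a tool becomes a one-line entry, not a new branch.
const handlers: Record<string, ToolHandler> = {
  get_weather: async (args) => {
    const { city } = args as { city: string }; // assumed validated upstream by Zod
    return `Weather report for ${city}`;
  },
};

async function route(name: string, args: unknown): Promise<string> {
  const handler = handlers[name];
  if (!handler) throw new Error(`Unknown tool: ${name}`);
  return handler(args);
}

route("get_weather", { city: "Tokyo" }).then(console.log);
```

This is the untyped cousin of the generic ToolRegistry shown earlier in the chapter; the trade-off is a simpler dispatch at the cost of the per-tool input typing.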
Common Pitfalls
When implementing function calling in TypeScript SaaS applications, watch out for these specific issues:
- Hallucinated JSON Formatting
  - The Issue: LLMs often return JSON strings with syntax errors (trailing commas, unquoted keys) or mismatched types (passing a number instead of a string for a city ID).
  - The Fix: Never trust the raw output. Always wrap JSON.parse in a try/catch block and immediately validate the result with Zod. Zod's .transform() can also be used to coerce types (e.g., z.string().transform(Number)) if the LLM is inconsistent.
- Vercel/AWS Lambda Timeouts
  - The Issue: Serverless functions (like Vercel Edge or AWS Lambda) have strict execution time limits (e.g., 10 seconds). If your tool handler performs a heavy computation or a slow database query, the function will time out before the LLM receives the result.
  - The Fix:
    - Move heavy tool execution to background jobs (e.g., using Inngest or AWS Step Functions).
    - If the tool must run in the request context, ensure you handle the timeout gracefully and return a "Tool timed out" message to the LLM so it can adjust its response.
- Async/Await Loops in Streaming
  - The Issue: When using streaming responses (Server-Sent Events or WebSocket), awaiting a tool call inside a loop can block the stream, causing the UI to freeze until the tool finishes.
  - The Fix: If you are streaming tokens to the client, use the "Function Call Streaming" pattern supported by OpenAI. Send the partial JSON delta as it arrives, but do not execute the tool until the entire JSON object is received and validated.
- Type Safety Drift
  - The Issue: Manually updating the TypeScript interface for a tool's arguments but forgetting to update the Zod schema (or vice versa). This leads to runtime errors where the type says one thing, but the validation expects another.
  - The Fix: Always use z.infer (as shown in the code) to derive your TypeScript types directly from the Zod schema. This ensures a single source of truth.
Visualization of the Data Flow
The following diagram illustrates the lifecycle of a function call within the TypeScript runtime.
The chapter continues with advanced code, exercises, and solutions with analysis; you can find them in the ebook on Leanpub.com or Amazon.
Code License: All code examples are released under the MIT License. Github repo.
Content Copyright: Copyright © 2026 Edgar Milvus | Privacy & Cookie Policy. All rights reserved.