
Chapter 3: Prompt Templates and Semantic Functions

Theoretical Foundations

At the heart of AI engineering lies the fundamental challenge of translating human intent into machine-executable logic. While raw Large Language Models (LLMs) are powerful, they are stateless and probabilistic. To build reliable, deterministic systems, we must wrap these models in structured code. In Microsoft Semantic Kernel, this structure is built upon two pillars: Prompt Templates and Semantic Functions.

These concepts are not merely syntactic sugar; they represent a paradigm shift in how we view software development. We are moving from writing explicit algorithms to defining intent-based contracts. To understand this, we must first look at the "What" (the mechanics) and the "Why" (the architectural necessity).

The Anatomy of Intent: Prompt Templates

A Prompt Template is the blueprint of communication with an LLM. It is a string of text containing static instructions, dynamic variables, and contextual data. In isolation, a prompt is just a string. However, when treated as a template, it becomes a parameterized query, allowing the same logic to adapt to different inputs.

The "Why": Determinism in a Probabilistic World

LLMs are inherently non-deterministic. If you ask a model "Summarize this text" without providing the text, the result is undefined. If you provide the text but no constraints, the summary length and style will vary.

Prompt templates solve this by enforcing a strict schema. They act as a contract between your C# code and the AI model. By defining variables (e.g., {{$input}} or {{$style}}), you ensure that every execution of the function has the necessary data to produce a consistent result.
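To make the contract concrete, here is a deliberately simplified sketch of variable substitution: a few lines of plain C# standing in for the real templating engine. The `MiniTemplate` class and its `Render` helper are invented for illustration; Semantic Kernel's actual engine does far more.

```csharp
using System;
using System.Collections.Generic;
using System.Text.RegularExpressions;

public static class MiniTemplate
{
    // Replaces {{$name}} placeholders with values from a dictionary.
    // Semantic Kernel's real engine does far more (helpers, iteration),
    // but the contract is the same: every variable must be supplied.
    public static string Render(string template, IDictionary<string, string> variables)
    {
        return Regex.Replace(template, @"\{\{\$(\w+)\}\}", match =>
        {
            string name = match.Groups[1].Value;
            // A missing variable is a broken contract; fail fast instead
            // of silently sending an incomplete prompt to the model.
            if (!variables.TryGetValue(name, out var value))
                throw new KeyNotFoundException($"Missing template variable: {name}");
            return value;
        });
    }
}
```

Calling `MiniTemplate.Render("Summarize in a {{$style}} tone: {{$input}}", ...)` with both keys present produces the final prompt string; omitting either one throws, which is exactly the "contract" behavior the template is meant to guarantee.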

The Analogy: The Mad Libs of Code

Consider the childhood game Mad Libs. You have a story with blank spaces (variables) labeled "Noun," "Verb," or "Adjective." The story structure (the template) is fixed, but the content changes based on user input.

* Static Text: The narrative structure and instructions (e.g., "You are a helpful assistant.").
* Variables: The blanks to be filled (e.g., {{$user_query}}).
* The Result: A fully formed prompt ready for the LLM.

Without templates, you would be writing a new story from scratch every time. With templates, you reuse the structure and only inject the data.

The Templating Engine: Handlebars

While Semantic Kernel supports multiple templating engines, Handlebars is the standard for complex logic. Handlebars is a logic-less templating engine, meaning it separates presentation (the prompt structure) from logic (variable resolution and formatting).

Why "Logic-Less" Matters in AI

In traditional software, embedding complex logic inside a string (like a SQL query) is a security risk (SQL injection) and a maintenance nightmare. In AI, the "logic" is the reasoning capability of the LLM itself. Handlebars allows us to preprocess data before it reaches the LLM, keeping the prompt clean and focused on the task.

1. Variable Substitution: Using {{variable}} to inject values.
2. Helpers: Built-in functions for string manipulation, date formatting, and conditional logic.
3. Iteration: Looping over collections to format lists or examples.
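The three capabilities can appear together in a single prompt template. The fragment below is illustrative: the `task`, `items`, and `verbose` variables are invented for the example, while `{{#each}}` and `{{#if}}` follow standard Handlebars block syntax.

```handlebars
You are a formatting assistant.

{{! Variable substitution }}
Task: {{task}}

{{! Iteration over a collection }}
Items to process:
{{#each items}}
- {{this}}
{{/each}}

{{! Conditional logic via a built-in helper }}
{{#if verbose}}
Explain each step in detail.
{{/if}}
```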

The Analogy: The Restaurant Kitchen Ticket

* The Order Ticket (The Prompt): It contains the raw request.
* The Chef (The LLM): Needs a standardized ticket to cook efficiently.
* The Expediter (The Handlebars Engine): Before the ticket reaches the chef, the expediter takes the raw order, formats it, calculates cooking times (logic), and ensures all ingredients (variables) are present.

If the expediter (Handlebars) fails to format the ticket, the chef (LLM) gets confused, leading to errors (hallucinations) or wasted time (high token usage).

Semantic Functions: Encapsulation of Intent

A Semantic Function is the container that binds a Prompt Template to an LLM configuration. It is a first-class citizen in the Kernel, treated exactly the same way as a native C# function.

The "What": A Declarative Interface

In traditional programming, a function is defined by its signature:

public string Summarize(string text, int length);
In Semantic Kernel, a Semantic Function is defined by its configuration (a config.json file) and its template (an skprompt.txt file). The Kernel dynamically loads these files and exposes them as invokable functions.
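On disk, this pair of files lives in a per-function subdirectory under a plugin folder. The layout below matches the code example later in this chapter; only the skprompt.txt and config.json file names are fixed by convention.

```text
Plugins/                  (plugin directory, imported under a plugin name)
└── Summarize/            (one subdirectory per function)
    ├── skprompt.txt      (the prompt template)
    └── config.json       (model settings: temperature, max_tokens, ...)
```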

The "Why": Decoupling Logic from Implementation

This decoupling is crucial for Interoperability. As discussed in Book 2: Kernel Architecture, the Kernel acts as a dependency injection container. By defining logic as Semantic Functions, you decouple the intent (summarize text) from the implementation (which model to use, what temperature, what stop sequences).

Architectural Implication: If you hardcode prompts into C# strings, switching from OpenAI to a local Llama model requires rewriting code. If you use Semantic Functions, you simply change the Kernel configuration, and the same function routes to the new model automatically.
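The architectural point can be modeled without any Semantic Kernel dependency: hide the provider behind an interface, and the calling code never changes when the backing model does. The interface and class names below are invented for illustration; they mimic the role the Kernel's configuration plays, not its actual API.

```csharp
using System;

// The "intent": callers only know they can get a completion for a prompt.
public interface ICompletionService
{
    string Complete(string prompt);
}

// Two interchangeable stand-ins for different model providers.
public class FakeOpenAIService : ICompletionService
{
    public string Complete(string prompt) => $"[openai] {prompt}";
}

public class FakeLlamaService : ICompletionService
{
    public string Complete(string prompt) => $"[llama] {prompt}";
}

// The function logic is written once, against the interface.
public static class Summarizer
{
    public static string Summarize(ICompletionService service, string text)
        => service.Complete($"Summarize: {text}");
}
```

Swapping `new FakeOpenAIService()` for `new FakeLlamaService()` changes the model, not the `Summarize` logic — the same decoupling the Kernel gives you when you change its service registration.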

The Execution Flow: Function Calling

The true power of Semantic Kernel is revealed in the Function Calling Flow. This is the mechanism by which the Kernel orchestrates the execution of Semantic Functions.

1. Registration: A developer registers a Semantic Function with the Kernel.
2. Invocation: The application calls kernel.InvokeAsync(function, arguments).
3. Planning (Implicit or Explicit): The Kernel analyzes the request. If the function is purely semantic, it prepares the prompt.
4. Rendering: The Handlebars engine processes the Prompt Template, injecting variables and executing helpers to produce the final prompt string.
5. Execution: The Kernel sends this rendered prompt to the configured AI Service (e.g., Azure OpenAI).
6. Post-Processing: The LLM returns a completion. The Kernel captures this output, updates the Context Variables, and returns control to the application.

The Analogy: The Universal Translator

1. The Request: The diplomat receives a request from the code: "Translate 'Hello' to French."
2. The Lookup: The diplomat checks their playbook (Semantic Function Registry) for a "Translation" protocol.
3. The Preparation: The diplomat fills out a standardized form (Handlebars Template) with the specific details ("Hello", "French").
4. The Transmission: The diplomat hands the form to the LLM (The Translator).
5. The Response: The LLM returns the result ("Bonjour").
6. The Delivery: The diplomat hands the result back to the code.

Without the Kernel (the diplomat), the code would have to know how to format requests, handle network connections, and parse raw text responses for every single AI interaction.

Visualizing the Architecture

The following diagram illustrates the flow from code definition to LLM execution. Note how the Semantic Function acts as a bridge between the static template and the dynamic context.

This diagram illustrates how a Semantic Function acts as a bridge, transforming static C# code definitions into dynamic prompts that guide the Large Language Model's execution.

Deep Dive: Few-Shot Examples and System Messages

To achieve high reliability, we rarely rely on zero-shot prompts (instructions without examples). We utilize Few-Shot Learning—providing the model with input-output pairs to demonstrate the desired behavior.

Structuring Prompts with Few-Shot Examples

In a Semantic Function, few-shot examples are embedded directly into the template. This is where the Handlebars engine shines, allowing us to iterate over a list of examples stored in context variables.

The Analogy: The Apprentice and the Master

A zero-shot prompt is like telling an apprentice to "paint a landscape." They might do it, but the style is unpredictable. A few-shot prompt is like showing the apprentice three paintings you did and saying, "Paint like this." The Semantic Function is the gallery where these reference paintings are displayed.

System Messages vs. User Messages

* System Message: Usually the first part of the template, defining the persona (e.g., "You are a helpful coding assistant").
* User Message: The input variables (e.g., {{$input}}).
* Assistant Message: The expected output (used in few-shot examples).

By encapsulating this in a Semantic Function, we ensure that every call to the function includes the correct system message, preventing "persona drift" where the AI might forget its role during a long conversation.
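Putting the pieces together, a few-shot template for a small classification function might look like the fragment below. This is a sketch in Handlebars syntax: the `examples` collection and its `text`/`label` fields are assumed to arrive via context variables, and the exact message-role layout depends on the template format configured in the kernel.

```handlebars
You are a sentiment classifier. Reply with exactly one word: positive or negative.

{{#each examples}}
Input: {{this.text}}
Output: {{this.label}}
{{/each}}

Input: {{input}}
Output:
```

The static system instruction plays the persona role, the loop renders the few-shot input-output pairs, and the final `Input:`/`Output:` pair is the live user message the model must complete.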

1. Token Limits: Handlebars helpers must be used carefully. If a helper generates a string that exceeds the model's context window (e.g., 4096 tokens), the execution will fail. The Kernel does not automatically truncate; this responsibility lies in the template logic or native middleware.
2. Variable Shadowing: Context variables are a flat dictionary. If a variable named input is passed, it overrides the default {{$input}}. This is powerful but can lead to bugs if not managed.
3. Non-Deterministic Helpers: While Handlebars is logic-less, custom C# functions injected into the Kernel as native functions can introduce side effects. A Semantic Function should remain pure; side effects belong in native functions orchestrated by the Kernel.
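Since the Kernel does not truncate for you, a guard in application code is a common defense. The sketch below uses a rough four-characters-per-token heuristic — an assumption for illustration, not a real tokenizer, which is what a production application should use.

```csharp
using System;

public static class TokenGuard
{
    // Rough heuristic: ~4 characters per token for English text.
    // A real application would use the model's actual tokenizer.
    private const int CharsPerToken = 4;

    // Ceiling division: 5 characters still cost a second token.
    public static int EstimateTokens(string text) =>
        (text.Length + CharsPerToken - 1) / CharsPerToken;

    // Truncate the input so the rendered prompt stays under the budget,
    // leaving the caller to reserve headroom for static instructions
    // and the model's completion.
    public static string ClampToBudget(string text, int maxTokens)
    {
        int maxChars = maxTokens * CharsPerToken;
        return text.Length <= maxChars ? text : text[..maxChars];
    }
}
```

Running the input through `ClampToBudget` before it is placed into `{{$input}}` keeps the rendered prompt within bounds even when upstream data is unexpectedly large.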

Summary

* Prompt Templates provide the structure and variable substitution (via Handlebars) to ensure consistency.
* Semantic Functions encapsulate this structure, allowing it to be treated as a standard software function.
* Function Calling is the flow that binds these elements together, routing intent through the Kernel to the LLM and back.

By mastering these concepts, we stop writing brittle strings of text and start engineering robust, intent-based systems.

Basic Code Example

// ==========================================================
// File: BasicSemanticFunctionExample.cs
// Objective: Demonstrate creating and invoking a basic 
//            Semantic Function using Microsoft Semantic Kernel.
// ==========================================================

// 1. Import the core Semantic Kernel orchestration library.
//    This namespace contains the 'Kernel' class and function-related primitives.
using Microsoft.SemanticKernel;
// 2. Import the necessary runtime services.
//    We need this to enable dependency injection for the Kernel configuration.
using Microsoft.Extensions.DependencyInjection;
// 3. Import the logging abstractions.
//    Essential for debugging the internal workings of the Kernel.
using Microsoft.Extensions.Logging;
// 4. Import the console logger implementation.
using Microsoft.Extensions.Logging.Console;
// Also bring in core .NET namespaces (explicit in case implicit usings are disabled).
using System;
using System.IO;
using System.Threading.Tasks;

// 5. Define the program entry point.
//    We use 'async Task' to support asynchronous API calls to the LLM.
public class Program
{
    public static async Task Main(string[] args)
    {
        // 6. Define the path where our prompt definition file exists.
        //    In a real app, this might be a secure configuration or environment variable.
        //    For this standalone example, we will write the file to disk first to ensure it runs.
        string pluginDirectory = Path.Combine(Directory.GetCurrentDirectory(), "Plugins");
        string promptPath = Path.Combine(pluginDirectory, "Summarize", "skprompt.txt");

        // 7. Ensure the directory structure exists.
        Directory.CreateDirectory(Path.GetDirectoryName(promptPath)!);

        // 8. Create the prompt file content.
        //    This is a "Zero-Shot" prompt. We are giving the AI a role and a task without examples.
        string promptContent = """
            You are a helpful assistant that summarizes text into a single, concise sentence.

            Input: {{$input}}
            Summary:
            """;

        // 9. Write the prompt to the file system so the Kernel can load it.
        await File.WriteAllTextAsync(promptPath, promptContent);

        // 10. Configure the Kernel Builder.
        //     This is the modern 'Expert Mode' approach using dependency injection.
        IKernelBuilder kernelBuilder = Kernel.CreateBuilder();

        // 11. Add Logging (Console).
        //     Critical for seeing internal Kernel events, token usage, and errors.
        kernelBuilder.Services.AddLogging(builder => builder
            .AddConsole()
            .SetMinimumLevel(LogLevel.Debug));

        // 12. Add the AI Service (Azure OpenAI used here as the example).
        //     NOTE: You must replace these placeholders with valid credentials or use OpenAI directly.
        kernelBuilder.AddAzureOpenAIChatCompletion(
            deploymentName: "gpt-4o-mini", // Your model deployment name
            endpoint: "https://your-resource.openai.azure.com/", // Your endpoint
            apiKey: "your-api-key" // Your API Key
        );

        // 13. Build the Kernel instance.
        //     The Kernel is the central nervous system of your AI application.
        Kernel kernel = kernelBuilder.Build();

        // 14. Define the Plugin Name.
        //     Plugins are logical groupings of functions (like a class in OOP).
        string pluginName = "MyTextPlugin";

        // 15. Import the Semantic Function from the file system.
        //     The Kernel automatically parses the 'skprompt.txt' and configures the execution settings.
        //     'promptDirectory' tells the kernel where to look for 'config.json' and 'skprompt.txt'.
        var summarizeFunction = kernel.ImportPluginFromPromptDirectory(pluginDirectory, pluginName)["Summarize"];

        // 16. Define the input data.
        //     In Semantic Kernel, inputs are passed as a dictionary of key-value pairs.
        //     The key 'input' (without the '$') matches the {{$input}} variable in the template.
        string longText = "Microsoft Semantic Kernel is an open-source SDK that lets you easily build AI applications " +
                          "that can call your existing code. It provides a modern, composable architecture for " +
                          "orchestrating AI models and plugins to create powerful, context-aware applications.";

        // 17. Create the Kernel Arguments object.
        //     This is the modern replacement for the older 'ContextVariables' class.
        KernelArguments arguments = new()
        {
            ["input"] = longText
        };

        // 18. Invoke the function.
        //     This triggers the 'Function Calling Flow':
        //     1. Kernel prepares the prompt using the arguments.
        //     2. Kernel sends the request to the configured AI service.
        //     3. Kernel receives the response.
        FunctionResult result = await kernel.InvokeAsync(summarizeFunction, arguments);

        // 19. Extract and display the result.
        //     The result object contains metadata, usage statistics, and the actual content.
        Console.WriteLine("--------------------------------------------------");
        Console.WriteLine($"Original Length: {longText.Length} chars");
        Console.WriteLine($"Summary: {result}");
        Console.WriteLine("--------------------------------------------------");
    }
}

1. using Microsoft.SemanticKernel;: This imports the fundamental types required to interact with the SDK, specifically the Kernel class and function definitions.
2. using Microsoft.Extensions.DependencyInjection;: Semantic Kernel v1.0+ is built on the standard .NET dependency injection (DI) container. We need this to configure services like the AI connector and logging.
3. using Microsoft.Extensions.Logging;: Importing the logging abstraction allows us to inject a logger into the Kernel to observe its internal state.
4. public class Program { ... }: The standard C# entry point wrapper.
5. public static async Task Main(string[] args): The entry point. It is async because we will be making a network call to an AI service.
6. string pluginDirectory = ...: We define where our "Plugin" (a collection of prompts) will live. In Semantic Kernel, a Plugin is often a directory containing subdirectories for each function.
7. string promptPath = ...: Constructing the specific file path for the prompt definition file skprompt.txt.
8. Directory.CreateDirectory(...): Ensures the folder structure exists before we try to write to it.
9. string promptContent = """ ... """: We define the prompt text using a C# 11+ Raw String Literal ("""). This prompt defines the AI's persona ("helpful assistant") and the input variable {{$input}}. This variable will be replaced at runtime.
10. await File.WriteAllTextAsync(...): We persist the prompt to disk. This simulates the development workflow where developers craft prompts in files separate from code.
11. IKernelBuilder kernelBuilder = Kernel.CreateBuilder();: This initializes the modern builder pattern. It provides a fluent API to configure the kernel before it is built.
12. kernelBuilder.Services.AddLogging(...): We configure the .NET logging pipeline. We set the minimum level to Debug to see everything the Kernel does internally (like token counting or HTTP requests).
13. kernelBuilder.AddAzureOpenAIChatCompletion(...): CRITICAL STEP. We register the AI model connector. Semantic Kernel is provider-agnostic, but here we specifically configure it for Azure OpenAI. Note: In a real scenario, you would load these secrets from appsettings.json or Azure Key Vault.
14. Kernel kernel = kernelBuilder.Build();: This finalizes the configuration. The Kernel instance is now a fully configured dependency injection container ready to execute logic.
15. string pluginName = "MyTextPlugin";: We name our logical grouping of functions.
16. var summarizeFunction = kernel.ImportPluginFromPromptDirectory(...);: This is the "magic" moment. The Kernel scans the directory, finds skprompt.txt, and automatically creates a function object. We access it via ["Summarize"] (the directory name).
17. string longText = ...: The data we want the AI to process.
18. KernelArguments arguments = new() { ["input"] = longText };: We create the arguments payload. The key "input" matches the variable {{$input}} defined in our prompt file.
19. FunctionResult result = await kernel.InvokeAsync(...): The Kernel performs the "Function Calling Flow": it renders the prompt (You are a helpful assistant... Input: Microsoft Semantic Kernel... Summary:), sends this string to the Azure OpenAI endpoint, waits for the response, and wraps it in a FunctionResult object.
20. Console.WriteLine(...): We output the result to demonstrate the successful transformation of the long text into a summary.

Common Pitfalls

1. Missing config.json or Incorrect skprompt.txt Syntax
Fix: Always create a config.json alongside your skprompt.txt to explicitly define model behavior:

{
  "schema": 1,
  "description": "Summarizes text",
  "execution_settings": {
    "default": {
      "max_tokens": 100,
      "temperature": 0.1
    }
  }
}

2. Using the Wrong Variable Syntax
Fix: Ensure the keys in KernelArguments match the variable names in the prompt file exactly (case-sensitive).

3. Forgetting to Register the AI Service
Fix: Verify builder.Add... is called before Build().

Visualizing the Flow

The diagram illustrates the sequential flow where Initialize() is called first to set up the environment, followed immediately by Build() to construct the final output.

The chapter continues with advanced code, exercises, and solutions with analysis; you can find them in the ebook on Leanpub.com or Amazon.





Code License: All code examples are released under the MIT License. Github repo.

Content Copyright: Copyright © 2026 Edgar Milvus | Privacy & Cookie Policy. All rights reserved.

All textual explanations, original diagrams, and illustrations are the intellectual property of the author. To support the maintenance of this site via AdSense, please read this content exclusively online. Copying, redistribution, or reproduction is strictly prohibited.