
Chapter 5: Connector Handling and Dependency Injection

Theoretical Foundations

The architectural integrity of any sophisticated software system, particularly one powered by AI, rests upon two pillars: how it connects to the world of services (connectors) and how it manages the lifecycle of those connections (dependency injection). In the context of Microsoft Semantic Kernel (SK), these are not merely implementation details; they are the foundational mechanisms that allow an AI application to be modular, testable, and scalable. Without a robust connector pattern, the Kernel becomes a monolith, brittle and resistant to change. Without a dependency injection container, the system devolves into a tangled web of tightly coupled components, impossible to manage or test in isolation.

The Connector Pattern: Abstraction as the Universal Adapter

At its core, the Connector Pattern is an architectural design that decouples the core logic of an application from the specific implementations of external services. In the realm of AI engineering, this is paramount. The landscape of Large Language Models (LLMs) and AI services is in constant flux. A model that is state-of-the-art today may be obsolete tomorrow. A service hosted on Azure OpenAI might need to be swapped for a local model running on Ollama for data privacy reasons, or for a different provider like Anthropic for cost optimization.

The Analogy: The Universal Power Strip

Imagine you are building a complex home entertainment system. You have a television, a gaming console, a soundbar, and a streaming device. Each device has a different power plug: a two-prong US plug, a three-prong UK plug, and a USB-C connector. If you were to wire each device directly into the wall, you would have a chaotic, inflexible, and dangerous mess. To change one device, you might have to rewire the entire room.

The Connector Pattern is the universal power strip with interchangeable adapters. The power strip itself represents the standard interface (e.g., ITextGenerationService). It defines a common set of "sockets" (methods like GetTextContentsAsync). The individual devices (the AI models) are the adapters (e.g., AzureOpenAITextGenerationService, HuggingFaceTextGenerationService). You can plug in any compatible adapter, and the entire system works without knowing or caring about the underlying wall socket's specific wiring. This abstraction allows you to unplug the Azure service and plug in the local Llama service with zero changes to your television (your core application logic).

Why This is Critical for AI Applications

1. Latency: Routing simple queries to a faster, cheaper model while reserving powerful models for complex reasoning.
2. Cost: Using a fine-tuned, smaller model for specific tasks to reduce token costs.
3. Compliance: Ensuring data never leaves a specific geographic boundary by using a local or sovereign cloud model.
4. Resilience: Having a fallback model in case the primary provider experiences an outage.

Without the connector pattern, every if/else statement checking which model to use would be embedded deep within your business logic, creating an unmaintainable "spaghetti code" monster. The connector pattern externalizes this decision, typically to the configuration layer, making the system's behavior a matter of deployment configuration rather than code modification.
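That externalized decision can be sketched in a few lines of plain C#. Everything here (ITextGenerator, CloudGenerator, LocalGenerator, the TEXT_PROVIDER variable) is a hypothetical stand-in invented for illustration; it mirrors the shape of the pattern, not the actual Semantic Kernel API:

```csharp
using System;

// Stand-in abstraction; in Semantic Kernel this role is played by
// interfaces such as ITextGenerationService.
public interface ITextGenerator
{
    string Generate(string prompt);
}

// Two hypothetical providers behind the same contract.
public sealed class CloudGenerator : ITextGenerator
{
    public string Generate(string prompt) => $"[cloud] {prompt}";
}

public sealed class LocalGenerator : ITextGenerator
{
    public string Generate(string prompt) => $"[local] {prompt}";
}

public static class GeneratorFactory
{
    // The provider choice lives in configuration, not in business logic.
    public static ITextGenerator Create(string providerKey) => providerKey switch
    {
        "cloud" => new CloudGenerator(),
        "local" => new LocalGenerator(),
        _ => throw new ArgumentException($"Unknown provider: {providerKey}")
    };
}

public static class Program
{
    public static void Main()
    {
        // Swap providers by changing deployment configuration, not code.
        string provider = Environment.GetEnvironmentVariable("TEXT_PROVIDER") ?? "local";
        ITextGenerator generator = GeneratorFactory.Create(provider);
        Console.WriteLine(generator.Generate("hello"));
    }
}
```

The business logic never sees the switch; only the composition root does, which is exactly the separation the connector pattern prescribes.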

The Role of Interfaces in the Connector Pattern

The mechanism that enables this "plug-and-play" capability in C# is the interface. An interface is a contract. It defines a set of public members (methods, properties, events) that an implementing class must provide, but it specifies no implementation. In Semantic Kernel, the core AI functionalities are exposed through well-defined interfaces.

* ITextGenerationService: For completing text prompts.
* IChatCompletionService: For conversational, multi-turn dialogue.
* IEmbeddingGenerationService<TValue, TEmbedding>: For converting text into vector representations.

When you build an AI application, your core business logic should only depend on these interfaces, never on a concrete class like AzureOpenAITextGenerationService.

// Core application logic depends on the ABSTRACTION (the interface).
// This is the essence of the Connector Pattern.
public class StoryGenerator
{
    private readonly ITextGenerationService _textGenerator;

    // The constructor accepts the interface, not a concrete class.
    // This is where Dependency Injection will provide the implementation.
    public StoryGenerator(ITextGenerationService textGenerator)
    {
        _textGenerator = textGenerator;
    }

    public async Task<string> GenerateFantasyStoryAsync(string prompt)
    {
        // The logic is agnostic to the underlying model.
        // It could be GPT-4, a local Llama 2, or a mock for testing.
        var result = await _textGenerator.GetTextContentAsync(prompt);
        return result.Text ?? string.Empty;
    }
}

This design is crucial because it directly enables the scenarios described earlier. If you need to switch from Azure OpenAI to a local model, you do not touch the StoryGenerator class. You simply change the implementation that is "plugged in" at the composition root (the entry point of your application).
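The swap can be demonstrated end to end with a deliberately simplified stand-in. Note that the interface and mock below are hypothetical simplifications for illustration; the real SK ITextGenerationService has a richer signature:

```csharp
using System;
using System.Threading.Tasks;

// Simplified stand-in for the SK interface, for illustration only.
public interface ITextGenerationService
{
    Task<string> GetTextAsync(string prompt);
}

// The class from the chapter, unchanged: it depends only on the abstraction.
public class StoryGenerator
{
    private readonly ITextGenerationService _textGenerator;

    public StoryGenerator(ITextGenerationService textGenerator)
        => _textGenerator = textGenerator;

    public Task<string> GenerateFantasyStoryAsync(string prompt)
        => _textGenerator.GetTextAsync(prompt);
}

// A test double: no network, no API key, deterministic output.
public class MockTextGenerationService : ITextGenerationService
{
    public Task<string> GetTextAsync(string prompt)
        => Task.FromResult($"Once upon a time... ({prompt})");
}

public static class Demo
{
    public static async Task Main()
    {
        // Only the composition root changes when we swap implementations.
        var generator = new StoryGenerator(new MockTextGenerationService());
        Console.WriteLine(await generator.GenerateFantasyStoryAsync("a dragon"));
    }
}
```

Replacing MockTextGenerationService with an Azure- or Ollama-backed implementation requires touching only the line that constructs StoryGenerator.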

Dependency Injection (DI): The Orchestrator

While the Connector Pattern provides the abstractions, Dependency Injection (DI) is the mechanism that wires these abstractions to their concrete implementations. DI is a design pattern where the creation and lifetime management of an object's dependencies are delegated to an external container.

The Analogy: The Restaurant Kitchen

Think of a professional kitchen. The head chef (your main application logic) needs various tools to prepare a dish: a specific knife, a blender, a sauté pan. In a poorly designed kitchen (a system without DI), the chef would have to forge their own knife, build their own blender, and find raw materials for the pan every time they started cooking. This is inefficient, error-prone, and makes the chef's primary job—cooking—impossible.

In a well-designed kitchen (a system with DI), the tools are prepared and maintained by specialized staff (the DI container) and delivered to the chef's station exactly when needed. The chef doesn't care who made the knife or how the blender was assembled; they only care that it's a sharp knife and a powerful blender. The DI container is the kitchen manager who knows which tools are needed, ensures they are clean and available (managing their lifecycle), and places them in the chef's hands.

In Semantic Kernel, the Kernel itself acts as a sophisticated DI container. When you configure the kernel, you are essentially telling the "kitchen manager" which specific AI services to use for each required interface.

The Kernel as a DI Container

The Semantic Kernel's Kernel class is not just a simple executor of prompts; it is a composite object that aggregates plugins, functions, and, critically, AI services. It is built upon the standard .NET IServiceCollection and IServiceProvider abstractions, meaning it integrates seamlessly with the existing .NET DI ecosystem.

A Kernel is configured through a KernelBuilder, obtained via Kernel.CreateBuilder(). This builder is responsible for populating the DI container that lives inside the kernel.

// This is a conceptual example of configuring the kernel.
// It demonstrates how the DI container is populated.
var builder = Kernel.CreateBuilder();

// Here, we are registering a concrete implementation (AzureOpenAIChatCompletionService)
// against an interface (IChatCompletionService).
// We are telling the DI container: "When someone asks for an IChatCompletionService,
// give them an instance of AzureOpenAIChatCompletionService."
builder.Services.AddAzureOpenAIChatCompletion(
    deploymentName: "gpt-4",
    endpoint: "https://your-endpoint.openai.azure.com/",
    apiKey: "your-api-key");

// The kernel is built, and the DI container is now sealed for this instance.
Kernel kernel = builder.Build();

When you later ask the kernel to execute a function, it internally resolves the required services from its container. If a plugin function requires an IChatCompletionService, the kernel automatically provides the configured AzureOpenAIChatCompletionService instance.
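You can also resolve a configured service from the kernel by hand. The fragment below continues the builder example above and assumes the SK 1.x API surface (Kernel.GetRequiredService and the GetChatMessageContentAsync extension); it is a sketch, not a complete program:

```csharp
// Resolve the registered chat service from the kernel's internal container.
// This throws if no IChatCompletionService was registered at build time.
IChatCompletionService chat = kernel.GetRequiredService<IChatCompletionService>();

var history = new ChatHistory();
history.AddUserMessage("Summarize the connector pattern in one sentence.");

// The concrete implementation (Azure OpenAI, a local model, a custom class)
// is invisible at this call site.
var reply = await chat.GetChatMessageContentAsync(history);
Console.WriteLine(reply.Content);
```

This is exactly what the kernel does internally on your behalf when a plugin function declares a dependency on IChatCompletionService.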

Service Lifetimes: The Lifecycle of Dependencies

1. Transient: A new instance of the service is created every time it is requested from the container. This is suitable for lightweight, stateless services that are cheap to create.
2. Scoped: A single instance is created once per client request or scope. In a web application, this typically means one instance per HTTP request. This is useful for services that need to maintain state during a single operation but can be discarded afterward.
3. Singleton: A single instance is created for the entire lifetime of the application. This instance is shared by all clients and all scopes. This is ideal for expensive-to-create services or services that hold shared state, like configuration objects or, in our case, the underlying HTTP clients used by AI connectors.
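The three lifetimes can be observed directly with Microsoft.Extensions.DependencyInjection, the same container infrastructure SK builds on. A minimal sketch (the Stamp classes are invented for the demonstration):

```csharp
using System;
using Microsoft.Extensions.DependencyInjection;

// Each instance gets a unique id so we can see when the container reuses one.
public class TransientStamp { public Guid Id { get; } = Guid.NewGuid(); }
public class SingletonStamp { public Guid Id { get; } = Guid.NewGuid(); }
public class ScopedStamp    { public Guid Id { get; } = Guid.NewGuid(); }

public static class LifetimeDemo
{
    public static void Main()
    {
        var services = new ServiceCollection();
        services.AddTransient<TransientStamp>();
        services.AddSingleton<SingletonStamp>();
        services.AddScoped<ScopedStamp>();
        using var provider = services.BuildServiceProvider();

        // Transient: a fresh instance on every resolution.
        Console.WriteLine(provider.GetRequiredService<TransientStamp>().Id !=
                          provider.GetRequiredService<TransientStamp>().Id); // True

        // Singleton: the same instance everywhere.
        Console.WriteLine(provider.GetRequiredService<SingletonStamp>().Id ==
                          provider.GetRequiredService<SingletonStamp>().Id); // True

        // Scoped: one instance per scope, different across scopes.
        using var scope1 = provider.CreateScope();
        using var scope2 = provider.CreateScope();
        Console.WriteLine(scope1.ServiceProvider.GetRequiredService<ScopedStamp>().Id !=
                          scope2.ServiceProvider.GetRequiredService<ScopedStamp>().Id); // True
    }
}
```

Requires the Microsoft.Extensions.DependencyInjection NuGet package; all three comparisons print True.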

Why Lifetimes Matter for AI Connectors

AI connector services hold state that makes their lifetime choice significant:

* HTTP Clients: They manage HttpClient instances, which are designed to be reused for performance and to prevent socket exhaustion.
* Authentication Tokens: They may handle authentication tokens that need to be cached and refreshed.
* Connection Pooling: They may maintain connections to the AI service endpoint.

Creating a new instance of an AI connector for every single API call (Transient lifetime) would be disastrous for performance. It would lead to constant socket creation and teardown, authentication handshakes on every request, and high memory allocation overhead.

Conversely, using a Singleton lifetime is often the correct choice. A single, shared instance of the AzureOpenAITextGenerationService can efficiently manage its underlying HttpClient, reuse TCP connections, and cache authentication tokens, leading to significant performance gains and stability.

Semantic Kernel's default service registrations typically use the Singleton lifetime for these connector services for this exact reason. The DI container ensures that the same, efficiently managed instance is provided throughout the application's lifecycle.

The Power of Composition: A Unified View

The true power of this architecture emerges when the Connector Pattern and DI are composed within the Semantic Kernel. This composition creates a system that is both flexible and robust.

1. Registration: During application startup, you use the KernelBuilder to register a concrete AI connector (e.g., AzureOpenAIChatCompletionService) as a singleton service against the IChatCompletionService interface.
2. Resolution: At runtime, your application logic (e.g., a plugin or a custom function) requests an IChatCompletionService from the kernel's DI container.
3. Provisioning: The kernel's container, understanding the singleton lifetime, provides the single, pre-configured instance of AzureOpenAIChatCompletionService.
4. Execution: The logic uses the service to interact with the AI model, completely unaware of the underlying HTTP calls, authentication, or model specifics.

This entire process is visualized below, showing how the application's core logic is insulated from the concrete implementations by the layers of abstraction and the DI container.

The diagram illustrates how the application's core logic remains insulated from concrete implementations by layers of abstraction and a DI container, allowing it to interact with the AI model without awareness of underlying HTTP calls, authentication, or model specifics.

Extensibility and Custom Connectors

The true test of an architectural pattern is its extensibility. The Connector Pattern in SK is not a closed system. What if you need to integrate with a new, cutting-edge AI service that doesn't have a pre-built connector? The pattern provides a clear path forward.

Because the system is built on interfaces, you can create your own class that implements, for example, IChatCompletionService. This custom class would contain all the logic for communicating with your new service's API—handling authentication, constructing the HTTP request, and parsing the response.

Once you have your custom implementation, you simply register it with the kernel's DI container, just as you would with a built-in connector.

// A custom connector for a hypothetical "SuperAI" service.
public class SuperAIChatService : IChatCompletionService
{
    // IAIService metadata (model id, endpoint, etc.).
    public IReadOnlyDictionary<string, object?> Attributes { get; } =
        new Dictionary<string, object?>();

    // Implementation details for the SuperAI API...
    public Task<IReadOnlyList<ChatMessageContent>> GetChatMessageContentsAsync(
        ChatHistory chatHistory,
        PromptExecutionSettings? executionSettings = null,
        Kernel? kernel = null,
        CancellationToken cancellationToken = default)
    {
        // Logic to call the SuperAI API and map its response to ChatMessageContent.
        throw new NotImplementedException();
    }

    public IAsyncEnumerable<StreamingChatMessageContent> GetStreamingChatMessageContentsAsync(
        ChatHistory chatHistory,
        PromptExecutionSettings? executionSettings = null,
        Kernel? kernel = null,
        CancellationToken cancellationToken = default)
    {
        // Streaming variant of the same call.
        throw new NotImplementedException();
    }
}

// In your application's composition root:
var builder = Kernel.CreateBuilder();

// Register your custom service as a singleton.
builder.Services.AddSingleton<IChatCompletionService, SuperAIChatService>();

Kernel kernel = builder.Build();

No other part of your application needs to change. The Plugin that uses IChatCompletionService will now, by virtue of DI, receive an instance of your SuperAIChatService. This demonstrates the profound decoupling achieved by combining the Connector Pattern with Dependency Injection. It allows your AI application to evolve with the rapidly changing AI landscape without requiring fundamental architectural rewrites.

Connecting to Previous Concepts

This architecture is the practical realization of the principles of abstraction and loose coupling that were introduced in earlier discussions on AI function calling. In Book 7, we explored how AI models can call external tools (functions). The design of those tools benefited immensely from being defined as simple C# methods, which the kernel could then wrap and expose to the model.

Here, we see the same principle applied at a higher level of abstraction. Instead of abstracting individual functions, we are abstracting entire services. The kernel acts as the central orchestrator, managing both the tools (plugins) and the services (connectors) through a unified DI container. This creates a consistent, predictable, and testable environment for building complex, agentic AI applications. The ability to mock the IChatCompletionService interface, for instance, allows for unit testing your AI plugins without ever making a real API call, a critical capability for professional software engineering.
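That testing capability can be sketched as follows, assuming SK 1.x types. FakeChatService is a hypothetical test double (a class implementing IChatCompletionService, along the lines of the SuperAIChatService shown above, but returning a canned reply); the fragment is illustrative, not a complete test:

```csharp
// Register the hypothetical FakeChatService in place of a real connector.
var builder = Kernel.CreateBuilder();
builder.Services.AddSingleton<IChatCompletionService>(
    new FakeChatService(cannedReply: "canned reply"));
Kernel kernel = builder.Build();

// Any plugin or prompt invoked through this kernel now receives the
// deterministic fake: no API key, no network, no flaky tests.
var result = await kernel.InvokePromptAsync("Hello");
// Assert on result.ToString() with your test framework of choice.
```

Because StoryGenerator, FinancialReportGenerator, and every plugin depend only on the kernel's abstractions, none of them need to change to run under the fake.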

Basic Code Example

Here is a self-contained C# console application demonstrating the core concepts of Dependency Injection and Connector Handling within the Semantic Kernel.

The Real-World Context

Imagine you are building a modular financial analysis tool. You want the core logic (the "Brain") to remain agnostic of the specific AI provider. You might start with Azure OpenAI, but later need to switch to a local model or another cloud provider. Furthermore, you want to ensure that your application is testable and that heavy resources (like HTTP clients) are managed efficiently.

This example solves the problem of decoupling the AI service from the application logic, allowing you to inject the specific connector implementation at runtime.

The Code Example

using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Logging;
using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.ChatCompletion;
using Microsoft.SemanticKernel.Connectors.OpenAI;
using System;
using System.ComponentModel;
using System.Threading.Tasks;

namespace SemanticKernelDiDemo
{
    // 1. THE CORE LOGIC (AGNOSTIC OF AI PROVIDER)
    // This class represents a business service that needs AI capabilities.
    // It does not know if the AI comes from Azure, OpenAI, or a local file.
    public class FinancialReportGenerator
    {
        private readonly Kernel _kernel;

        // DEPENDENCY INJECTION: We inject the Kernel (the orchestrator) via the constructor.
        // This adheres to the "Inversion of Control" principle.
        public FinancialReportGenerator(Kernel kernel)
        {
            _kernel = kernel;
        }

        // A simple plugin method that the AI will call.
        // [KernelFunction] marks the method for import when this class is added
        // as a plugin; [Description] helps the AI understand what it does.
        [KernelFunction]
        [Description("Calculates the total revenue by summing up individual sales figures.")]
        public double CalculateTotalRevenue(double[] salesFigures)
        {
            double total = 0;
            foreach (var figure in salesFigures)
            {
                total += figure;
            }
            return total;
        }

        public async Task<string> GenerateAnalysisAsync(string prompt)
        {
            // We add the plugin dynamically to the kernel instance.
            // In a larger app, plugins are usually registered in the DI container.
            _kernel.Plugins.AddFromObject(this, "FinancialTools");

            // Define execution settings specifying the function calling behavior.
            var executionSettings = new OpenAIPromptExecutionSettings
            {
                FunctionChoiceBehavior = FunctionChoiceBehavior.Auto()
            };

            // Invoke the kernel with the prompt; the execution settings are
            // passed via KernelArguments.
            var result = await _kernel.InvokePromptAsync(prompt, new KernelArguments(executionSettings));
            return result.ToString();
        }
    }

    class Program
    {
        static async Task Main(string[] args)
        {
            // 2. CONFIGURING THE SERVICE CONTAINER (DI)
            var services = new ServiceCollection();

            // Configure Logging (Essential for debugging AI interactions)
            services.AddLogging(builder => builder
                .AddConsole()
                .SetMinimumLevel(LogLevel.Warning)); // Suppress verbose Semantic Kernel logs for clarity

            // 3. CONNECTOR HANDLING
            // Here we register the specific AI connector.
            // In a real scenario, API_KEY would come from Azure Key Vault or User Secrets.
            // NOTE: This code will throw an exception if the key is invalid, 
            // but the architecture remains valid.
            string apiKey = Environment.GetEnvironmentVariable("AZURE_OPENAI_API_KEY") ?? "YOUR_API_KEY_HERE";
            string deploymentName = "gpt-4o-mini"; // Or your specific deployment name
            string endpoint = Environment.GetEnvironmentVariable("AZURE_OPENAI_ENDPOINT") ?? "https://your-endpoint.openai.azure.com/";

            // Register the Kernel with a Singleton lifetime.
            // The Kernel is expensive to create, so we reuse it.
            services.AddSingleton<Kernel>(sp =>
            {
                // Create a builder to configure the Kernel.
                var builder = Kernel.CreateBuilder();

                // Add the Azure OpenAI Connector (Chat Completion Service).
                // This is where the "Connector Pattern" is applied.
                builder.AddAzureOpenAIChatCompletion(
                    deploymentName: deploymentName,
                    endpoint: endpoint,
                    apiKey: apiKey);

                // Build the kernel.
                return builder.Build();
            });

            // Register our business service (FinancialReportGenerator) as a Transient service.
            // A new instance is created every time it is requested (good for stateless logic).
            services.AddTransient<FinancialReportGenerator>();

            // 4. BUILDING THE SERVICE PROVIDER
            // This creates the container that will manage our dependencies.
            var serviceProvider = services.BuildServiceProvider();

            // 5. RESOLVING DEPENDENCIES
            // We request the FinancialReportGenerator from the container.
            // The container automatically resolves its dependency (Kernel) and injects it.
            var generator = serviceProvider.GetRequiredService<FinancialReportGenerator>();

            Console.WriteLine("--- Starting Financial Analysis ---");

            try
            {
                // Prepare a prompt that requires both reasoning and tool usage.
                string prompt = "Analyze the sales data [100, 250, 300]. Calculate the total revenue and provide a brief summary.";

                // Execute the logic.
                string analysis = await generator.GenerateAnalysisAsync(prompt);

                Console.WriteLine($"\nAI Analysis Result:\n{analysis}");
            }
            catch (Exception ex)
            {
                Console.WriteLine($"\nError: {ex.Message}");
                Console.WriteLine("(Note: If you see an authentication error, ensure your API Key is set correctly.)");
            }
            finally
            {
                // In a web app, the container is disposed automatically.
                // In a console app, we ensure disposal to release resources.
                if (serviceProvider is IDisposable disposable)
                {
                    disposable.Dispose();
                }
            }
        }
    }
}

Detailed Line-by-Line Explanation

* public class FinancialReportGenerator: This class encapsulates the application's domain logic. It represents a "Service" in a standard Dependency Injection architecture.
* private readonly Kernel _kernel;: The class holds a reference to the Semantic Kernel. It does not hold a reference to OpenAIClient or AzureOpenAISettings. This is the essence of the Connector Pattern: the business logic depends on the abstraction (Kernel), not the concrete implementation.
* public FinancialReportGenerator(Kernel kernel): This is Constructor Injection. The DI container automatically passes the configured Kernel instance when creating this class, ensuring it is always in a valid state with its dependencies available.
* [Description("...")]: This attribute is metadata. When we register this class as a plugin, Semantic Kernel reads the description to decide when to use the function based on the user's prompt.
* CalculateTotalRevenue: A pure C# method. It doesn't know about AI; it just performs math.
* _kernel.Plugins.AddFromObject(this, "FinancialTools"): We expose our C# method to the AI by adding it as a plugin to the Kernel's plugin collection.
* OpenAIPromptExecutionSettings: This configures how the AI model behaves. FunctionChoiceBehavior.Auto() tells the AI it is allowed to call our CalculateTotalRevenue function if it determines the math is necessary.
* _kernel.InvokePromptAsync: This is the trigger. The Kernel orchestrates the flow: send prompt -> AI decides to call function -> Kernel executes C# method -> AI generates final response.

* var services = new ServiceCollection();: Initializes the DI container builder. This is standard Microsoft.Extensions.DependencyInjection.
* services.AddLogging(...): Configures logging. Semantic Kernel is verbose; setting LogLevel.Warning reduces noise while still showing errors.
* services.AddSingleton<Kernel>(sp => ...): This is the critical registration.
  * Kernel.CreateBuilder(): Prepares the fluent API for building the Kernel.
  * builder.AddAzureOpenAIChatCompletion(...): This is where the Connector is plugged in. We are telling the Kernel to use the Azure OpenAI implementation for chat completion services.
  * return builder.Build(): Finalizes the Kernel configuration and returns the instance.
  * Lifetime: Singleton ensures the Kernel (and its underlying HTTP clients) is created once and reused, preventing socket exhaustion.
* services.AddTransient<FinancialReportGenerator>(): Registers our business logic. Transient means a new instance is created every time it is requested. Since it holds the Kernel (Singleton), this is safe and keeps the business logic stateless.

* var serviceProvider = services.BuildServiceProvider();: This "bakes" the configuration into a container ready for use.
* var generator = serviceProvider.GetRequiredService<FinancialReportGenerator>();: We ask the container for a FinancialReportGenerator.
  * The container sees the constructor requires a Kernel.
  * It looks up the Kernel registration, sees it is a Singleton, creates (or retrieves) it, and passes it into the FinancialReportGenerator constructor.
  * The result is a fully wired-up object graph.
* generator.GenerateAnalysisAsync(...): We trigger the logic. The DI container's job is done; the application logic takes over.

Visualization of Dependency Flow

The following diagram illustrates how the dependencies flow from the DI Container to the execution context.


Common Mistakes and How to Fix Them

  1. Incorrect Service Lifetime Management:

    • Mistake: Registering the Kernel as Scoped or Transient in a high-throughput application (like a web API).
    • Consequence: The Semantic Kernel internally manages an HttpClient. If you dispose and recreate the Kernel for every request, you will rapidly exhaust the available sockets on the machine, leading to SocketException errors and severe performance degradation.
    • Fix: Always register the Kernel as a Singleton.
  2. Missing using Directives for Connectors:

    • Mistake: Forgetting to include Microsoft.SemanticKernel.Connectors.OpenAI (or the specific provider package).
    • Consequence: The extension methods like .AddAzureOpenAIChatCompletion() will not be available, causing compilation errors.
    • Fix: Ensure the specific NuGet package for your chosen AI provider is installed (e.g., Microsoft.SemanticKernel.Connectors.AzureOpenAI).
  3. Hardcoding Secrets:

    • Mistake: Placing API keys directly in the AddAzureOpenAIChatCompletion call as string literals.
    • Consequence: Security risks and difficulty managing different environments (Dev vs. Prod).
    • Fix: Use IConfiguration (from Microsoft.Extensions.Configuration) injected into the setup delegate, or environment variables (as shown in the example for demonstration purposes).
  4. Not Registering Plugins in DI:

    • Mistake: Creating plugins inside the method where they are used rather than registering them.
    • Consequence: Redundant code and inability to mock plugins for unit testing.
    • Fix: If a plugin is complex, register it as a service and inject it into the class that adds it to the Kernel. For simple plugins (like the FinancialReportGenerator itself), adding it via AddFromObject in the execution path is acceptable, but ensure the class itself is managed by DI.
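The secrets fix can look like the sketch below, using Microsoft.Extensions.Configuration. The configuration keys (AzureOpenAI:Deployment, etc.) are illustrative names, and AddUserSecrets requires the Microsoft.Extensions.Configuration.UserSecrets package:

```csharp
using Microsoft.Extensions.Configuration;
using Microsoft.SemanticKernel;

// Layered configuration: environment variables override user secrets,
// which override appsettings.json. No keys appear in source code.
IConfiguration config = new ConfigurationBuilder()
    .AddJsonFile("appsettings.json", optional: true)
    .AddUserSecrets<Program>()     // dev-time local secrets store
    .AddEnvironmentVariables()     // prod: injected by the host/CI
    .Build();

var builder = Kernel.CreateBuilder();
builder.AddAzureOpenAIChatCompletion(
    deploymentName: config["AzureOpenAI:Deployment"]!,
    endpoint: config["AzureOpenAI:Endpoint"]!,
    apiKey: config["AzureOpenAI:ApiKey"]!);
Kernel kernel = builder.Build();
```

For production, prefer a managed secret store (e.g., Azure Key Vault) surfaced through the same IConfiguration abstraction, so the kernel setup code never changes.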

The chapter continues with advanced code examples, exercises, and solutions with analysis; you can find them in the ebook on Leanpub.com or Amazon.





Code License: All code examples are released under the MIT License. Github repo.

Content Copyright: Copyright © 2026 Edgar Milvus | Privacy & Cookie Policy. All rights reserved.

All textual explanations, original diagrams, and illustrations are the intellectual property of the author. To support the maintenance of this site via AdSense, please read this content exclusively online. Copying, redistribution, or reproduction is strictly prohibited.