Chapter 13: The Agent's Toolbox: Integrating External Services with MCP and .NET
Theoretical Foundations
In the landscape of cloud-native AI, the transition from monolithic scripts to distributed microservices is not merely a deployment preference; it is a fundamental architectural necessity driven by the physics of modern hardware and the economics of inference scaling. The theoretical foundation of this transition rests on the principle of decoupling the agent's cognitive loop from its execution environment, allowing for independent scaling, fault isolation, and technological heterogeneity. This section dissects the architectural patterns that enable this decoupling, focusing on the containerization of agent runtimes, the orchestration of state, and the mechanics of high-throughput inference.
The Agent as a Microservice: Containerization and Runtime Isolation
Traditionally, AI agents were implemented as monolithic Python scripts running on a single GPU. This approach suffers from the "noisy neighbor" problem, where a single memory leak or an infinite loop in one agent can destabilize the entire system. In a cloud-native paradigm, we treat an agent not as a script but as a stateful microservice.
The containerization of the agent runtime is the first step toward isolation. A container encapsulates the agent's code, dependencies, and the execution environment (e.g., the .NET runtime, Python sidecars, or ONNX runtimes). This ensures that the agent behaves identically whether deployed on a developer's laptop or a distributed Kubernetes cluster.
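Concretely, a containerized agent runtime is usually described by a multi-stage Dockerfile. The sketch below assumes a .NET 8 project named AgentHost; the base images are the standard Microsoft ones, but the project name and layout are illustrative.

```dockerfile
# Build stage: compile the agent against the full SDK image.
FROM mcr.microsoft.com/dotnet/sdk:8.0 AS build
WORKDIR /src
COPY . .
RUN dotnet publish AgentHost.csproj -c Release -o /app

# Runtime stage: a slim image containing only the .NET runtime,
# so the deployed container carries no compiler toolchain.
FROM mcr.microsoft.com/dotnet/aspnet:8.0
WORKDIR /app
COPY --from=build /app .
ENTRYPOINT ["dotnet", "AgentHost.dll"]
```

The two-stage split keeps the runtime image small and identical across the laptop and the cluster, which is precisely the portability guarantee described above.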
Why C# is pivotal here: C# and the .NET runtime offer superior performance characteristics for agent orchestration compared to interpreted languages like Python. While Python remains dominant for model training and inference, the orchestration layer—the "brain" that manages the agent's lifecycle, tool usage, and inter-agent communication—benefits from C#'s strong typing, async/await primitives, and memory efficiency.
Consider the IAgent interface. By defining a strict contract, we decouple the agent's logic from its hosting environment. This allows us to swap implementations—for example, switching from a local agent running on a CPU to a cloud-hosted agent running on a GPU cluster—without altering the orchestration logic.
using System;
using System.Threading.Tasks;
using Microsoft.Extensions.AI; // AI primitives such as ChatMessage.
// Note: AgentResponse and IModelClient are illustrative abstractions used throughout this chapter.

// The contract defining an agent's behavior.
// This abstraction is crucial for swapping between different model providers
// (OpenAI, local Llama, Azure AI).
public interface IAgent
{
    string Id { get; }
    Task<AgentResponse> RespondAsync(ChatMessage[] context);
}

// A concrete implementation representing a containerized agent.
public class ContainerizedAgent : IAgent
{
    private readonly IModelClient _modelClient;

    public ContainerizedAgent(IModelClient client)
    {
        _modelClient = client;
    }

    // Generated once per instance; an expression body calling Guid.NewGuid()
    // directly would mint a new Id on every access.
    public string Id { get; } = Guid.NewGuid().ToString();

    public async Task<AgentResponse> RespondAsync(ChatMessage[] context)
    {
        // The agent delegates the heavy lifting to the model client,
        // which might be communicating with a GPU-optimized sidecar container.
        return await _modelClient.CompleteAsync(context);
    }
}
Analogy: The Restaurant Kitchen
Imagine a restaurant kitchen. In a monolithic architecture (a food truck), one chef does everything: prep, cooking, plating, and payment. If the chef gets sick, the truck closes. In a microservices architecture (a large restaurant), you have specialized stations: the grill (GPU inference), the salad bar (preprocessing), and the expediter (orchestration). The container is the physical station where a specific task occurs. If the salad station has an issue, the grill continues to work. The IAgent interface is the standardized recipe card that ensures any chef can step into the station and know exactly how to perform the task.
State Management and Memory in Distributed Systems
One of the most challenging aspects of distributed AI agents is managing state. In a monolithic script, state (conversation history, tool call results) is stored in local variables. In a microservices environment, agents are ephemeral; they can be killed, restarted, or scaled horizontally. Therefore, the agent must be stateless regarding its long-term memory.
The theoretical solution is the Command Query Responsibility Segregation (CQRS) pattern applied to agent memory. The agent's "working memory" (short-term context) is kept in memory for the duration of a session, while "long-term memory" (facts, preferences, past interactions) is persisted in an external store (e.g., Redis, PostgreSQL, or a Vector Database).
Reference to Previous Concepts: In Book 6, we discussed the "Vector Embedding Pipeline." We now apply that knowledge here. When an agent needs to recall a fact, it does not scan its local RAM; it queries the vector store. This separation lets the agent tier scale horizontally: adding more agent instances does not increase the complexity of memory synchronization, because memory is centralized and shared.
C# excels in this domain through its robust support for Dependency Injection (DI) and Generic Host patterns. By injecting the memory store as a service, the agent remains agnostic to the storage medium.
using System.Collections.Generic;
using System.Threading.Tasks;

// The abstraction for memory, decoupling the agent from the storage implementation.
public interface IMemoryStore
{
    Task<string> RetrieveAsync(string query);
    Task StoreAsync(string key, string value);
}

// The agent depends on the abstraction, not the concrete class.
public class StatefulAgent : IAgent
{
    private readonly IMemoryStore _memory;
    private readonly IModelClient _modelClient;

    public StatefulAgent(IModelClient modelClient, IMemoryStore memory)
    {
        _modelClient = modelClient;
        _memory = memory;
    }

    public string Id => "Stateful-Agent-v1";

    public async Task<AgentResponse> RespondAsync(ChatMessage[] context)
    {
        // 1. Retrieve relevant context from long-term memory.
        var historicalContext = await _memory.RetrieveAsync(context[0].Text);

        // 2. Augment the prompt with this context.
        var augmentedPrompt = $"{historicalContext}\nUser: {context[0].Text}";

        // 3. Generate the response.
        var response = await _modelClient.CompleteAsync(
            new[] { new ChatMessage { Role = "User", Text = augmentedPrompt } });

        // 4. Store new information (if applicable).
        await _memory.StoreAsync(context[0].Text, response.Content);
        return response;
    }
}
Analogy: The Librarian vs. The Student
An agent with local-only memory is like a student trying to memorize an entire library before an exam. It is inefficient and fragile. A distributed agent is a student who visits a librarian (the Vector Store). The student asks a question, the librarian retrieves the relevant book (context), and the student answers based on that information. The student (Agent) is lightweight and can be replaced without losing the library's knowledge.
Tool Integration and Function Calling
Modern AI agents are not just text generators; they are executors of logic. They interact with external systems—APIs, databases, or other microservices. This is known as Tool Integration or Function Calling.
The theoretical architecture for tool integration relies on OpenAPI/Swagger specifications or gRPC contracts. When an agent decides to use a tool, it does not execute the code directly. Instead, it generates a structured request (JSON) that is validated and routed to the appropriate microservice.
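The validate-then-route idea can be sketched with System.Text.Json alone. The ToolCall envelope below is an illustrative shape, not a standard; real systems follow the model provider's function-calling schema or the MCP tools/call message instead.

```csharp
using System;
using System.Text.Json;

// An illustrative envelope for a tool invocation emitted by the agent.
public record ToolCall(string Tool, JsonElement Arguments);

public static class ToolRouter
{
    // The agent never executes tool code directly: the structured request is
    // deserialized, validated, and only then routed to the owning service.
    public static string Route(string rawRequest)
    {
        var call = JsonSerializer.Deserialize<ToolCall>(rawRequest,
                new JsonSerializerOptions { PropertyNameCaseInsensitive = true })
            ?? throw new ArgumentException("Malformed tool call.");
        if (string.IsNullOrWhiteSpace(call.Tool))
            throw new ArgumentException("Tool name is required.");
        // In production this would dispatch to the microservice registered
        // under call.Tool; here we simply report the routing decision.
        return $"route:{call.Tool}";
    }
}
```

Because the agent only ever emits this JSON, a malformed or malicious request is rejected at the boundary rather than inside a tool.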
In C#, this is powerfully modeled using System.Text.Json for serialization and IHttpClientFactory for resilient HTTP communication. The agent's "tool belt" is a collection of typed clients.
using System;
using System.Net.Http;
using System.Text.Json;
using System.Threading.Tasks;

// The parameter contract for the weather tool.
public record WeatherRequest(string City);

// A tool definition representing an external API call.
public interface ITool
{
    string Name { get; }
    Task<string> ExecuteAsync(string parameters);
}

// Example: a tool to fetch weather data.
public class WeatherTool : ITool
{
    private readonly HttpClient _httpClient;

    public WeatherTool(HttpClient httpClient)
    {
        _httpClient = httpClient;
    }

    public string Name => "GetWeather";

    public async Task<string> ExecuteAsync(string parameters)
    {
        // Deserialize parameters (e.g., {"City": "Seattle"}).
        var request = JsonSerializer.Deserialize<WeatherRequest>(parameters)
                      ?? throw new ArgumentException("Invalid tool parameters.");

        // Call the external microservice.
        var response = await _httpClient.GetAsync($"https://api.weather.com/v1/{request.City}");
        response.EnsureSuccessStatusCode();
        return await response.Content.ReadAsStringAsync();
    }
}
Analogy: The Swiss Army Knife vs. The Specialist Toolbox
A monolithic agent tries to be a Swiss Army Knife—everything is built-in. If you need a new tool, you have to recompile the knife. In a microservices architecture, the agent is a handle (the orchestrator) that holds specialized tools (microservices). To add a new capability (e.g., booking a flight), you simply snap in a new tool (a new microservice) without modifying the handle.
Inference Scaling: The Physics of Throughput
The final pillar of this theoretical foundation is inference scaling. AI models, particularly Large Language Models (LLMs), are computationally expensive. They are memory-bandwidth bound and GPU-intensive. Running them on a single instance creates a bottleneck.
We must distinguish between two types of scaling:
- Vertical Scaling (Scaling Up): Adding more GPUs to a single machine.
- Horizontal Scaling (Scaling Out): Adding more instances of the agent service.
Load Balancing and GPU Allocation: Standard round-robin load balancing is insufficient for AI workloads. Because inference requests vary wildly in latency (a simple query vs. long chain-of-thought reasoning), we need Intelligent Routing. This involves:
- Model Sharding: Splitting a massive model across multiple GPUs (Tensor Parallelism).
- Request Batching: Grouping multiple user requests into a single batch to maximize GPU utilization.
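The batching idea can be sketched with the same Channel primitives used later in this chapter: drain requests into a batch until either a size cap is reached or a time window closes, then hand the whole batch to the GPU worker as one unit. This is a deliberately simplified illustration of what inference servers do; the names and parameters are assumptions.

```csharp
using System;
using System.Collections.Generic;
using System.Threading;
using System.Threading.Channels;
using System.Threading.Tasks;

public static class RequestBatcher
{
    // Collects up to maxBatchSize requests from the channel, or whatever has
    // arrived when maxWait expires, whichever comes first.
    public static async Task<List<string>> ReadBatchAsync(
        ChannelReader<string> reader, int maxBatchSize, TimeSpan maxWait)
    {
        var batch = new List<string>();
        using var cts = new CancellationTokenSource(maxWait);
        try
        {
            while (batch.Count < maxBatchSize)
            {
                // Wait for the next request, but stop when the window closes.
                batch.Add(await reader.ReadAsync(cts.Token));
            }
        }
        catch (OperationCanceledException)
        {
            // Time window elapsed; ship whatever we collected.
        }
        return batch;
    }
}
```

Tuning maxBatchSize and maxWait is the classic throughput-versus-latency trade-off: larger batches keep the GPU busier, shorter windows keep individual users waiting less.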
In a cloud-native environment, we use Kubernetes Custom Resource Definitions (CRDs) like the NVIDIA GPU Operator to expose GPUs to pods. The orchestration layer (C#) must be aware of these resources.
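As a minimal illustration, once the GPU Operator (or the NVIDIA device plugin) is installed, a pod requests GPU capacity through the nvidia.com/gpu resource. The pod and image names below are placeholders.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: inference-worker        # illustrative name
spec:
  containers:
    - name: model-server
      image: myregistry/inference-worker:1.0   # illustrative image
      resources:
        limits:
          nvidia.com/gpu: 1     # schedules the pod onto a node with a free GPU
```

The scheduler treats the GPU like any other countable resource, which is what lets the C# orchestration layer remain unaware of which physical node ultimately serves a request.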
Asynchronous Processing: To handle high-throughput workloads, we must decouple the request reception from the inference execution. When a user sends a message, the API should immediately acknowledge receipt (HTTP 202 Accepted) and push the request into a message queue (e.g., Azure Service Bus or RabbitMQ). Worker services pick up these messages, perform the inference, and notify the user via WebSockets or polling.
C#'s async/await and Task<T> are the bedrock of this non-blocking architecture. They allow the thread pool to handle thousands of concurrent requests without blocking threads waiting for GPU responses.
using System.Threading.Channels;
using System.Threading.Tasks;

// The unit of work flowing through the queue.
public record InferenceRequest(string Prompt);

// The entry point for high-throughput ingestion.
public class InferenceOrchestrator
{
    private readonly Channel<InferenceRequest> _queue;

    public InferenceOrchestrator()
    {
        // A bounded channel prevents memory overflow under backpressure.
        _queue = Channel.CreateBounded<InferenceRequest>(1000);
    }

    public async Task SubmitRequestAsync(InferenceRequest request)
    {
        // Non-blocking write to the queue (waits only when the queue is full).
        await _queue.Writer.WriteAsync(request);
    }

    public async Task ProcessQueueAsync()
    {
        // Worker loop consuming the queue.
        await foreach (var request in _queue.Reader.ReadAllAsync())
        {
            // Delegate to the GPU-bound worker.
            await ProcessInferenceAsync(request);
        }
    }

    private async Task ProcessInferenceAsync(InferenceRequest request)
    {
        // Simulate GPU inference latency.
        await Task.Delay(1000);
        // The result is pushed to a notification service.
    }
}
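The accept-then-process handoff can also be sketched without a web framework: the submission path enqueues the request and returns a correlation ticket immediately (the moral equivalent of HTTP 202 Accepted), while a separate worker drains the queue. The TicketedRequest type and ticket format below are illustrative assumptions.

```csharp
using System;
using System.Threading.Channels;
using System.Threading.Tasks;

public record TicketedRequest(string TicketId, string Prompt);

public class InferenceGateway
{
    private readonly Channel<TicketedRequest> _queue =
        Channel.CreateBounded<TicketedRequest>(1000);

    // Returns a ticket at once; inference happens elsewhere, and the caller
    // later polls (or is notified over WebSockets) using that ticket.
    public async Task<string> SubmitAsync(string prompt)
    {
        var ticket = Guid.NewGuid().ToString("N");
        await _queue.Writer.WriteAsync(new TicketedRequest(ticket, prompt));
        return ticket;
    }

    // The worker side consumes from here.
    public ChannelReader<TicketedRequest> Reader => _queue.Reader;
}
```

The key property is that SubmitAsync completes in microseconds regardless of how long inference takes, so ingestion latency stays flat while workers absorb the variable GPU cost.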
Analogy: The Highway Toll Booths
Imagine a single toll booth (a monolithic GPU server). Cars (inference requests) line up. If a car breaks down or takes a long time to pay, the whole line stops. Now, imagine a modern highway system (cloud-native scaling). There are multiple toll booths (GPU instances), and cars are routed to the shortest line (intelligent load balancing). Furthermore, cars are grouped into "platoons" (batching) to pass through the gate more efficiently. If one booth closes (pod failure), traffic is automatically rerouted to others.
Architectural Implications and Failure Modes
The shift to this architecture introduces specific failure modes that must be theoretically understood:
- Cascading Failures: If the Vector Store (Memory) becomes slow, the agents will hold connections open, exhausting the thread pool. This is mitigated by Circuit Breakers (implemented in .NET by the Polly library). If the failure rate exceeds a threshold, the circuit opens and the agent fails fast (e.g., "I cannot recall that right now") rather than hanging.
- Data Consistency: In a distributed system, we often trade strict consistency for availability (CAP Theorem). For agent memory, Eventual Consistency is usually acceptable. An agent might not see a fact stored by another agent immediately, but it will eventually synchronize.
- Cold Starts: Containerized agents that scale to zero to save costs suffer from cold starts (loading the model into GPU memory takes time). Strategies like Pre-warming or Sticky Sessions (routing a user to a warm instance) are essential.
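To make the circuit-breaker failure mode concrete, here is a deliberately minimal hand-rolled sketch of the mechanism. In production you would use a library such as Polly rather than rolling your own; the threshold and cooldown values are illustrative.

```csharp
using System;

// After `threshold` consecutive failures the circuit opens and calls fail
// fast (via the fallback) until `cooldown` elapses, instead of letting every
// caller hang on a sick dependency.
public class SimpleCircuitBreaker
{
    private readonly int _threshold;
    private readonly TimeSpan _cooldown;
    private int _failures;
    private DateTime _openedAt;

    public SimpleCircuitBreaker(int threshold, TimeSpan cooldown)
    {
        _threshold = threshold;
        _cooldown = cooldown;
    }

    public bool IsOpen =>
        _failures >= _threshold && DateTime.UtcNow - _openedAt < _cooldown;

    public T Execute<T>(Func<T> action, Func<T> fallback)
    {
        if (IsOpen) return fallback();            // fail fast while open
        try
        {
            var result = action();
            _failures = 0;                        // success closes the circuit
            return result;
        }
        catch
        {
            if (++_failures >= _threshold) _openedAt = DateTime.UtcNow;
            return fallback();
        }
    }
}
```

The fallback here is the "I cannot recall that right now" degraded answer described above: a fast, honest failure instead of a hung request.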
Visualization of the Architecture
The following diagram illustrates the flow of data and control in a containerized agent system. Note the separation of the Orchestration Layer (C#) from the Inference Layer (GPU/Model).
Summary
The theoretical foundation of cloud-native AI agents is built on the rigorous application of microservices principles to the cognitive domain. By containerizing agents, we achieve isolation and portability. By externalizing state, we achieve scalability and resilience. By leveraging C#'s asynchronous capabilities and strong typing, we build a robust orchestration layer that can manage complex, distributed workflows. This architecture transforms AI agents from brittle scripts into resilient, enterprise-grade services capable of handling the demands of modern, high-throughput applications.
Basic Code Example
Consider a real-world scenario: you are building a customer support chatbot. A user asks, "What is the status of my order #12345?". A simple chatbot might retrieve a static FAQ. A sophisticated, cloud-native agent needs to perform a sequence of actions: understand the intent, call an external API to fetch order details, and then formulate a response based on that live data. This requires an agent that can not only process language but also execute tools.
This example demonstrates how to containerize a simple AI agent that decides between two actions: retrieving from a "knowledge base" or calling a "mock order API". We will use modern .NET 8, the Microsoft.Extensions.AI library for standardized AI interactions, and the McpDotNet client library to integrate with an MCP (Model Context Protocol) server that exposes our tools. This pattern decouples the agent's logic from the specific implementation of its tools, a core tenet of microservices architecture.
using System;
using System.Collections.Generic;
using System.Text.Json;
using System.Threading;
using System.Threading.Channels;
using System.Threading.Tasks;
using McpDotNet.Client;
using McpDotNet.Configuration;
using McpDotNet.Protocol.Transport;
using McpDotNet.Protocol.Messages;
using Microsoft.Extensions.AI;
using Microsoft.Extensions.Logging;
using Microsoft.Extensions.DependencyInjection;

// 1. Define the Data Transfer Objects (DTOs) for our tools.
// In a microservices environment, these contracts are shared via NuGet packages
// or interface definitions to ensure type safety across service boundaries.
public record OrderStatusRequest(string OrderId);
public record OrderStatusResponse(string OrderId, string Status, string EstimatedDelivery);
public record KnowledgeBaseRequest(string Query);
public record KnowledgeBaseResponse(string Answer);

public class Program
{
    public static async Task Main(string[] args)
    {
        // 2. Set up Dependency Injection and Logging (standard .NET Host pattern).
        var services = new ServiceCollection();
        services.AddLogging(builder => builder.AddConsole().SetMinimumLevel(LogLevel.Warning));

        // 3. Configure the LLM (Large Language Model) client.
        // We use the IChatClient abstraction from Microsoft.Extensions.AI.
        // This allows swapping providers (OpenAI, Azure, local) without changing agent logic.
        // For this demo, we use a simple echo client to simulate an LLM without needing API keys.
        services.AddSingleton<IChatClient, DemoEchoChatClient>();

        // 4. Configure the MCP (Model Context Protocol) client.
        // MCP is the USB-C port for AI applications. It allows agents to connect to
        // external tools (servers) dynamically.
        services.AddMcpClient(options =>
        {
            options.Id = "demo-agent-client";
            // In a real scenario, this would point to a deployed microservice URL.
            // We will launch a local mock server in the next step.
            options.ServerEndpoint = new Uri("http://localhost:5000");
            options.TransportType = TransportType.ServerSentEvents;
        });

        var serviceProvider = services.BuildServiceProvider();

        // 5. Initialize the MCP client and connect to the tool server.
        var mcpClient = serviceProvider.GetRequiredService<IMcpClient>();
        var logger = serviceProvider.GetRequiredService<ILogger<Program>>();
        try
        {
            // Establishes the connection and discovers available tools.
            await mcpClient.ConnectAsync();
            logger.LogWarning("Connected to MCP Tool Server.");
        }
        catch (Exception ex)
        {
            logger.LogError($"Failed to connect to MCP Server: {ex.Message}");
            logger.LogWarning("Ensure the mock server is running on port 5000.");
            return;
        }

        // 6. Bridge MCP tools to the LLM interface.
        // The IChatClient expects tools in a specific format. We adapt the
        // MCP-discovered tools so the LLM can see and call them.
        var chatClient = serviceProvider.GetRequiredService<IChatClient>();
        var tools = new List<AIFunction>();
        foreach (var tool in mcpClient.Tools)
        {
            // Capture the tool reference for the closure.
            var currentTool = tool;
            tools.Add(AIFunctionFactory.Create(
                async (object? args) =>
                {
                    // Dynamically invoke the tool on the remote MCP server.
                    var result = await mcpClient.CallToolAsync(currentTool.Name, args);
                    return result.Content; // Return the raw content back to the LLM.
                },
                currentTool.Name,
                currentTool.Description));
        }

        // 7. Define the user query.
        // The agent needs to decide: is this a general question (KB) or a specific order lookup (API)?
        var userPrompt = "What is the status of order #67890?";

        // 8. Execute the agent loop.
        Console.WriteLine($"[User]: {userPrompt}");
        var chatOptions = new ChatOptions
        {
            Tools = tools, // Inject the discovered tools into the LLM context.
            Temperature = 0.1f
        };

        // The LLM analyzes the prompt, sees the tools, and decides to call the 'get_order_status' tool.
        // It returns a request to call the tool, not the final answer yet.
        var response = await chatClient.GetResponseAsync(userPrompt, chatOptions);

        // 9. Handle tool calls (the "agentic" part).
        // In a complex flow, the LLM might return a request to call a tool.
        // Microsoft.Extensions.AI handles the automatic execution if we configure it,
        // but for explicit control in microservices, we often inspect the response.
        // For this specific implementation, we check if the response contains a tool call request.
        // Note: The DemoEchoChatClient simulates the LLM requesting the 'get_order_status' tool.
        // If this were a real LLM (like GPT-4), it would internally decide to call the tool
        // and return the result, or return a ToolCallRequest object.
        // Let's simulate the LLM actually executing the tool and getting the result:
        if (response.Text.Contains("get_order_status"))
        {
            // In a real flow, the library handles this. Here we demonstrate the logic manually
            // to show what happens under the hood.
            Console.WriteLine("\n[Agent]: I need to check the order database...");

            // Simulate calling the tool via the MCP client.
            var toolResult = await mcpClient.CallToolAsync("get_order_status", new { OrderId = "67890" });

            // Feed the tool result back to the LLM to generate the natural-language response.
            var finalPrompt = $"User asked: '{userPrompt}'. Tool result: {JsonSerializer.Serialize(toolResult.Content)}. Formulate a helpful response.";
            var finalResponse = await chatClient.GetResponseAsync(finalPrompt, chatOptions);
            Console.WriteLine($"\n[Agent]: {finalResponse.Text}");
        }
        else
        {
            Console.WriteLine($"\n[Agent]: {response.Text}");
        }

        await mcpClient.DisposeAsync();
    }
}
// --- MOCK INFRASTRUCTURE (to make this example runnable without external dependencies) ---

/// <summary>
/// A mock MCP server. In production, this would be a separate microservice (e.g., Python or Node.js)
/// running on Kubernetes, exposing tools via the SSE or stdio transport.
/// </summary>
public class MockMcpServer
{
    private readonly Channel<string> _outputChannel;

    public MockMcpServer(Channel<string> outputChannel) => _outputChannel = outputChannel;

    public async Task StartAsync()
    {
        // Simulate the MCP protocol handshake and tool definition.
        var initMessage = @"{""jsonrpc"":""2.0"",""id"":1,""result"":{""protocolVersion"":""2024-11-05"",""serverInfo"":{""name"":""OrderService"",""version"":""1.0""},""capabilities"":{}}}";
        await _outputChannel.Writer.WriteAsync(initMessage);

        var toolsList = @"{""jsonrpc"":""2.0"",""id"":2,""result"":{""tools"":[{""name"":""get_order_status"",""description"":""Retrieves the status of a specific order by ID."",""inputSchema"":{""type"":""object"",""properties"":{""OrderId"":{""type"":""string""}}}}]}}";
        await _outputChannel.Writer.WriteAsync(toolsList);

        // Listen for tool calls (simulated via a simple read loop in a real app).
        // This part is simplified for the single-file example context.
    }
}

/// <summary>
/// A mock IChatClient that simulates an LLM's behavior:
/// 1. Recognizes the intent to check an order.
/// 2. Requests to call the 'get_order_status' tool.
/// </summary>
public class DemoEchoChatClient : IChatClient
{
    public Task<ChatResponse> GetResponseAsync(string prompt, ChatOptions? options = null, CancellationToken cancellationToken = default)
    {
        // Logic: if the prompt asks about an order, simulate the LLM deciding to use the tool.
        // In a real LLM, this decision is made by the model weights.
        if (prompt.Contains("order") && !prompt.Contains("Tool result:"))
        {
            // The LLM technically returns a request to call the tool.
            // The Microsoft.Extensions.AI library usually handles the execution,
            // but we are demonstrating the flow here.
            return Task.FromResult(new ChatResponse(
                new ChatMessage(ChatRole.Assistant, "I need to use the get_order_status tool to help you.")));
        }

        // If we feed the tool result back, the LLM summarizes it.
        if (prompt.Contains("Tool result:"))
        {
            return Task.FromResult(new ChatResponse(
                new ChatMessage(ChatRole.Assistant, "Your order #67890 is currently 'Shipped'. It will arrive by tomorrow.")));
        }

        return Task.FromResult(new ChatResponse(new ChatMessage(ChatRole.Assistant, "I don't understand.")));
    }

    public async Task<ChatResponse<T>> GetResponseAsync<T>(string prompt, ChatOptions? options = null, CancellationToken cancellationToken = default)
    {
        var response = await GetResponseAsync(prompt, options, cancellationToken);
        return new ChatResponse<T>(response.Messages, response.FinishReason);
    }

    public IAsyncEnumerable<StreamingChatResponseUpdate> GetStreamingResponseAsync(string prompt, ChatOptions? options = null, CancellationToken cancellationToken = default)
    {
        throw new NotImplementedException();
    }

    public object? GetService(Type serviceType, object? serviceKey = null) => null;

    public void Dispose() { }
}
Detailed Line-by-Line Explanation
1. Imports and Definitions
- using ...: We import standard libraries for JSON handling and threading, plus Microsoft.Extensions.AI (the standard interface for AI in .NET) and McpDotNet (the client library for MCP).
- public record ...: We define C# records. These are immutable data types used as Data Transfer Objects (DTOs). In a microservices architecture, these define the "contract" for our tools. If the Order Service expects an OrderId, it strictly expects this shape of data.
2. The Main Entry Point
- var services = new ServiceCollection();: Initializes the Dependency Injection (DI) container. DI is crucial for cloud-native apps to manage dependencies (like database connections or API clients) and enable testing.
- services.AddLogging(...): Configures logging. In a container, logs are written to standard output (stdout) and collected by the orchestrator (e.g., the Kubernetes logging driver).
- services.AddSingleton<IChatClient, DemoEchoChatClient>();: Registers the AI client. In production, this would be services.AddOpenAIChatClient(...) or services.AddAzureOpenAIChatClient(...). We use a mock here to make the code runnable without API keys.
3. MCP Client Configuration
- services.AddMcpClient(...): This is the core setup for the "Toolbox". It registers the MCP client, which will look for a server exposing tools.
- options.ServerEndpoint: Defines where the tools live. In a real deployment, this URL would be a Kubernetes Service DNS name (e.g., http://order-service:5000).
- TransportType.ServerSentEvents: MCP supports multiple transport layers. SSE (Server-Sent Events) is standard for web-based microservices communication.
4. Connection and Tool Discovery
- await mcpClient.ConnectAsync();: This performs the handshake. It sends an initialize request to the server and waits for a response confirming protocol compatibility.
- foreach (var tool in mcpClient.Tools): Once connected, the client retrieves a list of available tools (e.g., get_order_status). This is dynamic discovery—the agent doesn't need to be recompiled to know about new tools added to the server; it just sees them at runtime.
5. Bridging Tools to the LLM
- AIFunctionFactory.Create(...): This is a powerful modern .NET feature. It takes a standard C# method (or lambda) and wraps it so the IChatClient (the LLM) can understand and call it.
- await mcpClient.CallToolAsync(...): Inside the wrapper, we forward the call to the actual MCP server. This is the "glue" that connects the LLM's abstract desire to "get order status" to the concrete network call to the microservice.
6. The Agent Loop
- var response = await chatClient.GetResponseAsync(...): We send the user's prompt. The LLM analyzes the prompt and the available tools.
- The Decision: The LLM looks at the tool description ("Retrieves the status of a specific order") and the user prompt ("status of order #67890"). It determines that calling this tool is the best action.
- The Execution: In the DemoEchoChatClient, we simulate this decision. In a real LLM (like GPT-4), the library handles the tool call execution automatically if configured to do so. If not, the LLM returns a specific object indicating a tool call is requested.
7. Handling the Result
- The agent receives the raw data from the tool (e.g., { "OrderId": "67890", "Status": "Shipped" }).
- It feeds this data back into the LLM context.
- The LLM generates the final, human-readable response: "Your order #67890 is currently 'Shipped'."
Common Pitfalls
1. Forgetting to Register Tools in the ChatOptions
A frequent mistake is connecting the MCP client but failing to pass the discovered tools to the ChatOptions when making the request.
// WRONG
await chatClient.GetResponseAsync(prompt);
// CORRECT
var chatOptions = new ChatOptions { Tools = tools };
await chatClient.GetResponseAsync(prompt, chatOptions);
2. Synchronous Blocking in Async Pipelines
Microservices rely heavily on non-blocking I/O. If you block a thread while waiting for a tool response (e.g., using .Result or .Wait()), you can cause thread pool starvation, especially under high load. Always use async/await all the way down, as demonstrated in the example.
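To make the pitfall concrete, the sketch below contrasts the two styles against a stand-in for a remote tool call (the CallToolAsync helper here is illustrative, not part of any library):

```csharp
using System;
using System.Threading.Tasks;

public static class BlockingDemo
{
    // Stand-in for a remote tool call; the name and delay are illustrative.
    private static async Task<string> CallToolAsync()
    {
        await Task.Delay(50);
        return "ok";
    }

    // WRONG: .Result blocks the calling thread for the entire round-trip.
    // Under load, every blocked thread is one fewer available for new
    // requests, and in environments with a synchronization context this
    // pattern can deadlock outright.
    public static string Blocking() => CallToolAsync().Result;

    // CORRECT: await releases the thread back to the pool while the call
    // is in flight.
    public static async Task<string> NonBlockingAsync() => await CallToolAsync();
}
```

Both methods return the same value; the difference only shows up under concurrency, which is exactly why this bug tends to survive local testing and surface in production.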
3. Treating MCP Tools as Local Functions
Developers often try to call the tool logic directly inside the agent code.
- Why this is bad: This tightly couples the agent to the tool implementation. If the tool logic changes (e.g., moved to a different database), you have to redeploy the agent.
- The correct way: The agent should only know the interface (the tool name and description). The actual execution should happen remotely via the MCP protocol, allowing the Order Service team to update their service without affecting the Agent team.
The chapter continues with advanced code, exercises, and solutions with analysis; you can find them in the ebook on Leanpub.com or Amazon.
Code License: All code examples are released under the MIT License.
Content Copyright: Copyright © 2026 Edgar Milvus. All rights reserved.