
Chapter 9: The 'fixed' Statement - Pinning Managed Memory for Native Interop

Theoretical Foundations

The memory model of the .NET runtime is a marvel of engineering, designed to balance developer productivity with raw execution speed. At the heart of this model lies the Garbage Collector (GC), a runtime component that automatically reclaims memory that is no longer in use. To achieve this, the GC is not a passive observer; it is an active manager. It compacts the heap, shuffling live objects to eliminate fragmentation and improve cache locality. This compaction is transparent to the developer but introduces a critical constraint: objects can move. While the runtime ensures that references remain valid across a collection cycle, this mobility is fundamentally incompatible with scenarios requiring a fixed memory address. This is where the fixed statement becomes essential.

The Illusion of Stability: The Volatile Heap

Imagine a high-density urban apartment complex where the management (the GC) periodically rearranges furniture between apartments to optimize space and reduce clutter. While you are inside your apartment, you can rely on your chair being where you left it. However, if you need to lend your chair to a neighbor in a different building for a specific task—say, to use as a ladder—you must ensure it doesn't get moved while the neighbor is using it. If the management moves the chair to the basement while the neighbor is standing on it, a catastrophic failure occurs.

In .NET, the managed heap is this apartment complex. Most of the time, you don't need to know the physical address of your objects. But when you interact with native code (C/C++), use low-level SIMD intrinsics, or perform direct memory manipulation, you need a guarantee that the memory address of your object will not change. The fixed statement is the mechanism that tells the GC: "Do not move this object; I am lending its address to an external entity."

This concept is not entirely new to the reader. In Book 9: High-Performance Memory Management, we explored the generational heap and the mechanics of the GC. We learned that objects on the heap are subject to relocation. We also discussed Span<T> and Memory<T> as abstractions over contiguous memory regions. While Span<T> provides a safe view over memory, it does not inherently guarantee that the underlying memory is pinned. If that memory resides on the managed heap, it is still subject to GC compaction unless explicitly pinned. The fixed statement is the tool that bridges the gap between the flexible, movable managed heap and the rigid, fixed requirements of native interop.
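Since C# 7.3, the fixed statement accepts any type that exposes a suitable GetPinnableReference() method, and Span<T> does exactly that. The sketch below, illustrative in the same spirit as the other examples in this chapter, pins the memory behind a heap-backed span:

```csharp
// Illustrative sketch: pinning the memory behind a Span<T>.
Span<byte> span = new byte[256];   // heap-backed, therefore movable
unsafe
{
    // The compiler calls span.GetPinnableReference() and pins the
    // underlying array for the duration of the block.
    fixed (byte* ptr = span)
    {
        ptr[0] = 0xFF;             // stable address, safe for native use
    }
}
```

A span over stackalloc memory needs no pinning at all, because the stack is never compacted; fixed on such a span compiles, but there is nothing for the GC to hold in place.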

The Mechanics of Pinning: Preventing the Tides

The fixed statement does not copy data or modify the object itself; instead, the compiler reports the pinned local to the GC, which then treats the referenced object as non-movable for the duration of the block. During a compaction phase, the GC will skip over pinned objects, leaving gaps in the heap. This is a crucial trade-off: pinning ensures stability but can lead to heap fragmentation if overused or held for extended periods.

Consider a scenario common in AI applications: processing large tensors. A tensor is a multi-dimensional array of numerical data (floats, doubles, etc.). In C#, this might be represented as a float[] or a float[,]. When performing matrix multiplication or applying activation functions, we often rely on highly optimized native libraries (like Intel MKL or NVIDIA cuBLAS) or hardware intrinsics (SIMD) that operate on contiguous blocks of memory.

If we pass a reference to a managed array to a native function, the runtime must ensure that the memory address remains valid throughout the execution of that function. Without pinning, the GC could trigger a collection during the native call, moving the array and causing the native code to access invalid memory, resulting in data corruption or an AccessViolationException.

The fixed Statement Syntax

The fixed statement pins one or more pointers to managed variables. Its scope is critical; once the execution leaves the fixed block, the pointers are no longer guaranteed to be valid, and the GC is free to move the objects again.

// Conceptual example of the fixed statement structure
// (No actual code execution, purely illustrative)
unsafe
{
    byte[] dataBuffer = new byte[1024];

    // The 'fixed' statement pins the 'dataBuffer' array
    // and assigns a pointer to its first element.
    fixed (byte* ptr = dataBuffer)
    {
        // Within this block, 'ptr' points to a fixed location.
        // The GC will not relocate 'dataBuffer'.

        // We can now safely pass 'ptr' to native APIs or 
        // use it for low-level memory manipulation.
    }

    // Outside the block, 'ptr' is invalid.
    // The GC is now free to move 'dataBuffer' during the next collection.
}
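As noted above, a single fixed statement can pin several variables of the same pointer type, and strings can be pinned as well to obtain a char* to their first character. A brief illustrative sketch:

```csharp
byte[] input = new byte[64];
byte[] output = new byte[64];
string text = "token";

unsafe
{
    // Two arrays pinned in one statement; a second statement pins the string.
    fixed (byte* src = input, dst = output)
    fixed (char* chars = text)
    {
        dst[0] = src[0];

        // 'chars' points at the string's first UTF-16 code unit.
        // Strings are immutable: treat this memory as strictly read-only.
        char first = chars[0];
    }
}
```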

The fixed Buffer: Inline Optimization

Beyond pinning existing arrays, C# allows us to declare fixed-size buffers within struct types. This is a powerful feature for creating value types that contain inline, unmanaged data buffers. Unlike a standard array field, which is a reference to an object on the heap, a fixed buffer is embedded directly within the struct's memory layout.

This is analogous to a Swiss Army Knife. A standard array is like carrying a separate toolbox (a reference to a heap object). A fixed buffer is like having the tools folded directly into the handle of the knife. There is no indirection, no separate allocation, and no additional GC pressure.

In the context of AI, consider a Token struct representing a piece of text tokenized by a model. A token often requires metadata (ID, log probability, etc.) alongside the raw data (the token string or embedding vector). Using a fixed buffer allows us to store small embedding vectors directly within the struct, avoiding heap allocations and improving data locality.

// Conceptual structure of a Token with an inline fixed buffer
// Note: This requires the 'unsafe' context.
public unsafe struct Token
{
    public int Id;
    public float Probability;

    // A fixed buffer for an embedding vector of size 128 floats.
    // This memory is inline within the struct instance.
    public fixed float Embedding[128];
}

When we declare an array of Token structs (e.g., Token[] tokens), the Embedding data for each token is stored contiguously in memory, right next to the Id and Probability fields. This is ideal for SIMD operations, which thrive on contiguous data. However, to access the data within the fixed buffer, we must use the fixed statement again, because the struct itself might be on the managed heap (if it's part of an array) or on the stack.

// Accessing the fixed buffer requires pinning the struct
Token[] tokens = new Token[100];
unsafe
{
    // Pin the array of structs
    fixed (Token* tokenPtr = tokens)
    {
        // Access the fixed buffer of the first token
        // tokenPtr->Embedding is a pointer to the 128 floats
        // We can now pass this pointer to a SIMD intrinsic or native library.
    }
}

The GCHandle API: Broader Pinning Scenarios

The fixed statement is lexically scoped: it is ideal for short-lived pinning within a single method. However, there are scenarios where pinning needs to persist beyond a single block or outlive the scope in which the pointer was obtained (for example, a buffer handed to a native callback that fires later). This is where GCHandle comes into play.

GCHandle provides a mechanism to "handle" a managed object from unmanaged code. It allows us to obtain a stable pointer to an object that remains valid as long as the handle exists. This is similar to renting a parking spot (the handle) for a specific car (the managed object). As long as you hold the rental ticket (the GCHandle), the car won't be towed (moved).

In AI applications, particularly those involving streaming data or long-running inference pipelines, we might need to pin a large buffer for the duration of a processing step that spans multiple method calls or even asynchronous operations. Using GCHandle allows us to pin the buffer at the start of the operation and unpin it only when the operation completes, ensuring stability throughout.

// Conceptual use of GCHandle for extended pinning
using System.Runtime.InteropServices;

// Imagine a large tensor buffer used in a multi-stage AI pipeline
float[] largeTensor = new float[1_000_000];

// Allocate a handle to pin the object
GCHandle handle = GCHandle.Alloc(largeTensor, GCHandleType.Pinned);

try
{
    // Get the stable address
    IntPtr tensorAddress = handle.AddrOfPinnedObject();

    // Pass this address to a native library that performs a long-running computation
    // NativeLibrary.Compute(tensorAddress, largeTensor.Length);
}
finally
{
    // Crucial: Always release the handle to allow GC compaction again
    handle.Free();
}
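A middle ground between fixed and raw GCHandle is Memory<T>.Pin(), which returns a MemoryHandle: the pin lives until the handle is disposed, so it can cross method boundaries and await points. A sketch along the lines of the example above (NativeLibrary.Compute remains a hypothetical native entry point, as before):

```csharp
using System;
using System.Buffers;

float[] largeTensor = new float[1_000_000];
Memory<float> memory = largeTensor;

// Pin() pins the underlying array; the pin is released when the
// MemoryHandle is disposed, not when a lexical block ends.
using (MemoryHandle pin = memory.Pin())
{
    unsafe
    {
        float* tensorPtr = (float*)pin.Pointer;
        // NativeLibrary.Compute(tensorPtr, memory.Length); // hypothetical
    }
}
// Handle disposed: the GC may move 'largeTensor' again.
```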

Under the Hood: Pinning and Heap Fragmentation

To fully understand fixed, we must visualize the memory layout. In .NET, objects are aligned on boundaries (typically 8 bytes on 64-bit platforms). When we pin an object, the runtime records that fact (for fixed, via the pinned local reported to the GC; for GCHandle, via an entry in the handle table), and the GC consults this information during compaction.

Let's visualize the heap before and after a compaction attempt with a pinned object.

This diagram illustrates how a pinned object (marked with a pushpin) remains fixed in memory while the Garbage Collector compacts the surrounding objects, moving them together to close gaps in the heap.

As shown in the diagram, the GC attempts to compact the heap by sliding objects towards the lower addresses. However, because Object B is pinned, it acts as an immovable anchor. The GC can move Object A, but it cannot move Object B. Consequently, Object C and Object D cannot slide past Object B to fill the gap immediately preceding it. This results in fragmentation.

In high-performance AI scenarios, where we process massive arrays of data, uncontrolled pinning can severely degrade performance by fragmenting the heap. This forces the GC to work harder in subsequent collections and can lead to OutOfMemoryException even when there is technically enough free memory, because the free memory is not contiguous.
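One hedge against this fragmentation, available on .NET 5 and later, is the Pinned Object Heap (POH): arrays allocated with GC.AllocateArray(..., pinned: true) live in a dedicated region that the GC never compacts, so pinning them is effectively free and cannot fragment the generational heap. A sketch:

```csharp
// Sketch (.NET 5+): allocate tensor storage on the Pinned Object Heap.
float[] weights = GC.AllocateArray<float>(4096, pinned: true);

unsafe
{
    // 'fixed' is still required by the language to obtain the pointer,
    // but for a POH array the address was already stable for its lifetime.
    fixed (float* ptr = weights)
    {
        ptr[0] = 1.0f;
    }
}
```

This is the preferred approach for long-lived buffers (model weights, reusable I/O buffers) that would otherwise hold pins across many collections.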

AI Application: Zero-Copy Token Processing

In the context of AI, specifically Large Language Models (LLMs), token processing is a bottleneck. A tokenizer converts text into a sequence of integers (tokens). These tokens are then passed through embedding layers, attention mechanisms, and feed-forward networks.

Imagine a scenario where we receive a stream of text tokens from a network source. We want to compute the cosine similarity between these incoming tokens and a set of known "hot" tokens (e.g., keywords for filtering or routing).

  1. Data Representation: We have a ReadOnlyMemory<byte> buffer containing the raw token data. This buffer might be backed by a managed byte array or memory mapped from a file.
  2. Native Interop: We use a highly optimized native library (written in C++ or Rust) to compute the similarity scores. This library expects a pointer to a contiguous block of memory.
  3. The Pinning Challenge: If we simply pass the pointer to the native library, we must ensure the memory doesn't move. If the buffer is on the managed heap, pinning is required.

Using the fixed statement, we can achieve zero-copy interoperability. We avoid copying the data into a separate unmanaged buffer (which would involve allocation and memory transfer overhead). Instead, we pin the existing managed buffer and pass its address directly.

// Conceptual pattern for zero-copy token processing
using System.Runtime.InteropServices; // provides MemoryMarshal

public unsafe class TokenProcessor
{
    // Native method signature (conceptual)
    // [DllImport("native_simd_lib.dll")]
    // public static extern void ComputeSimilarity(byte* data, int length, float* results);

    public float[] ProcessTokens(ReadOnlyMemory<byte> tokenData)
    {
        // Assume tokenData is backed by a managed array
        if (!MemoryMarshal.TryGetArray(tokenData, out ArraySegment<byte> segment))
            throw new InvalidOperationException("Cannot access underlying array.");

        byte[] array = segment.Array;
        int offset = segment.Offset;
        int count = segment.Count;

        float[] results = new float[count / sizeof(float)]; // Assuming float results

        // Pin the managed array for the duration of the native call
        fixed (byte* ptr = &array[offset])
        fixed (float* resPtr = results)
        {
            // Pass pointers directly to native code
            // ComputeSimilarity(ptr, count, resPtr);
        }

        return results;
    }
}

This pattern is critical for AI services handling high-throughput requests. By pinning memory, we eliminate the overhead of marshaling data copies, allowing the CPU and GPU to process data at maximum bandwidth.

SIMD and the Requirement of Fixed Memory

Single Instruction, Multiple Data (SIMD) instructions allow a CPU to perform the same operation on multiple data points simultaneously. In .NET, this is exposed via System.Numerics.Vector<T> and hardware intrinsics (e.g., Avx2, Sse2).

While Vector<T> can operate on managed arrays, the most efficient low-level access often requires pointers. When using hardware intrinsics, we load data directly from memory into SIMD registers. For this to be efficient and safe, the memory address must be aligned and stable.
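Before committing to an intrinsics-based code path, it is worth checking at runtime whether SIMD acceleration is actually available and how wide the vectors are. A small sketch:

```csharp
using System;
using System.Numerics;

// Vector<T> falls back to scalar emulation on hardware without SIMD
// support, so guard hot paths with this check.
if (Vector.IsHardwareAccelerated)
{
    // e.g. 8 floats per 256-bit AVX register, 4 per 128-bit SSE register
    Console.WriteLine($"SIMD width: {Vector<float>.Count} floats");
}
else
{
    Console.WriteLine("No SIMD acceleration; use the scalar path.");
}
```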

Consider a loop that adds two vectors (embeddings) together. In a naive managed loop, the JIT compiler generates bounds checks and standard instructions. Using SIMD intrinsics with fixed pointers, we can bypass bounds checks and use vectorized instructions.

// Conceptual SIMD addition using fixed pointers
public unsafe void AddVectors(float[] a, float[] b, float[] result)
{
    if (a.Length != b.Length || a.Length != result.Length)
        throw new ArgumentException("All arrays must have the same length.");

    // Pin all arrays to ensure stability during the operation
    fixed (float* aPtr = a)
    fixed (float* bPtr = b)
    fixed (float* rPtr = result)
    {
        int i = 0;
        int length = a.Length;

        // Process in blocks of Vector<float>.Count (e.g., 8 floats for AVX)
        int vectorLength = Vector<float>.Count;
        int lastBlockIndex = length - (length % vectorLength);

        for (; i < lastBlockIndex; i += vectorLength)
        {
            // Load data directly from memory into SIMD registers
            Vector<float> va = Vector.LoadUnsafe(ref aPtr[i]);
            Vector<float> vb = Vector.LoadUnsafe(ref bPtr[i]);

            // Perform vector addition
            Vector<float> vres = va + vb;

            // Store result back to memory
            vres.StoreUnsafe(ref rPtr[i]);
        }

        // Handle remaining elements (tail processing)
        for (; i < length; i++)
        {
            rPtr[i] = aPtr[i] + bPtr[i];
        }
    }
}

Without fixed, the compiler would not even allow us to obtain these pointers from managed arrays; and if the arrays were not pinned, a GC collection during the loop could relocate them, invalidating aPtr, bPtr, and rPtr mid-iteration. The fixed statement guarantees that the memory layout remains static, allowing the CPU to prefetch data and execute vector instructions without interruption.

Advanced Pinning: fixed Buffers in Structs Revisited

Let's revisit the Token struct with the fixed buffer. This structure is a cornerstone of high-performance AI tokenization because it minimizes memory indirection.

When we have an array of Token structs: Token[] tokenStream = new Token[1024];

The memory layout is contiguous: [Token 0][Token 1][Token 2]...

Inside Token 0, the layout is: [Id (4 bytes)][Prob (4 bytes)][Embedding[0] (4 bytes)]...[Embedding[127] (4 bytes)]

This is vastly superior to a class-based approach where each Token object would have a header, and the Embedding would be a separate float[] array on the heap, requiring a pointer dereference to access.
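Assuming the Token struct declared earlier (an int, a float, and a fixed buffer of 128 floats), the inline layout can be verified with sizeof in an unsafe context:

```csharp
// Sketch: verifying the inline layout of the Token struct defined above.
// Expected: 4 (Id) + 4 (Probability) + 128 * 4 (Embedding) = 520 bytes,
// though the runtime may add alignment padding on some layouts.
unsafe
{
    Console.WriteLine(sizeof(Token));
}
```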

However, accessing the fixed buffer requires pinning. If we want to iterate over the tokenStream and compute a dot product on the embeddings, we pin the array:

// Conceptual iteration over pinned tokens
unsafe
{
    fixed (Token* basePtr = tokenStream)
    {
        for (int i = 0; i < tokenStream.Length; i++)
        {
            Token* currentToken = &basePtr[i];

            // Access the fixed buffer directly
            // The 'Embedding' field is treated as a pointer to float[128]
            float* embeddingPtr = currentToken->Embedding;

            // Perform SIMD operations on the embeddingPtr
            // ...
        }
    }
}

This pattern is essential for building custom tokenizers or embedding generators in C# where managed code performance is critical, but we need to drop down to low-level memory manipulation for the final computation.

Pitfalls and Best Practices

While fixed is powerful, it is a sharp tool.

  1. Pinning Time: Never pin for longer than necessary. Pinning inside a long-lived object (like a static field) or holding a GCHandle for too long can fragment the heap and cause GC pauses.
  2. Exception Safety: If an exception occurs inside a fixed block, the GC will still unpin the memory before the exception propagates up the stack. However, if you are using GCHandle, you must use try-finally to ensure Free() is called.
  3. Pointer Validity: A pointer obtained via fixed is only valid within that block (or until the handle is freed). Passing it outside is undefined behavior.
  4. Blittable Types: The fixed statement works best with blittable types (types that have the same binary layout in managed and unmanaged memory, such as byte, int, and float). Non-blittable types (like string or bool) require marshaling, which fixed alone does not handle; GCHandle.Alloc with GCHandleType.Pinned will even throw an ArgumentException for objects containing non-blittable fields.
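A cheap pre-flight check before pinning is RuntimeHelpers.IsReferenceOrContainsReferences<T>(), which reports whether a type contains managed references. Note this is a necessary but not sufficient condition for blittability (bool, for instance, contains no references yet is not blittable). A sketch of such a guard (PinGuard and RequirePinnable are illustrative names, not library APIs):

```csharp
using System;
using System.Runtime.CompilerServices;

static class PinGuard
{
    // Rejects types that contain managed references and therefore can
    // never be handed to native code as raw memory.
    public static void RequirePinnable<T>()
    {
        if (RuntimeHelpers.IsReferenceOrContainsReferences<T>())
            throw new ArgumentException(
                $"{typeof(T)} contains managed references; it cannot be pinned for interop.");
    }
}

// PinGuard.RequirePinnable<float>();   // passes
// PinGuard.RequirePinnable<string>();  // throws ArgumentException
```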

Architectural Implications for AI Systems

In modern AI systems, the line between managed and unmanaged code is blurring. We use C# for high-level orchestration, model serving, and business logic, but we rely on native libraries (TensorFlow, PyTorch bindings, ONNX Runtime) for heavy lifting.

The fixed statement is the glue that holds these worlds together. It allows C# to act as a high-performance orchestrator without the overhead of data copying.

Consider a real-time AI inference service:

  1. Request Ingestion: An HTTP request arrives with a JSON payload.
  2. Parsing: The payload is parsed into a managed Request object.
  3. Tokenization: The text is converted to a byte[].
  4. Inference: The byte[] is pinned and passed to a native ONNX Runtime session.
  5. Post-Processing: The results (floats) are unpinned and processed in managed code.

Without fixed, step 4 would require copying the byte[] to an unmanaged buffer, invoking the native function, and copying the results back. This doubles the memory bandwidth requirement and introduces latency. With fixed, we achieve near-native performance while retaining the safety and productivity of C#.

Conclusion

The fixed statement is not merely a syntax for pointer manipulation; it is a fundamental mechanism for controlling the memory lifecycle in a managed environment. It acknowledges that while automatic memory management is convenient, there are times when the developer must take manual control to achieve specific performance goals or compatibility requirements.

In the realm of high-performance C# for AI, understanding fixed is mandatory. It enables zero-copy data transfer to native libraries, efficient SIMD vectorization, and the creation of compact, cache-friendly data structures like fixed buffers. By mastering pinning, developers can build AI systems that are not only robust and scalable but also capable of rivaling the performance of pure native implementations.

As we move forward into the next subsection, we will explore practical patterns for implementing these concepts, including safe wrappers around native libraries and strategies for managing pinning in asynchronous contexts. The theoretical foundation laid here—the understanding of the GC's volatility and the stability provided by fixed—is the bedrock upon which these high-performance patterns are built.

Basic Code Example

using System;
using System.Runtime.InteropServices;

public class PinnedMemoryDemo
{
    public static void Main()
    {
        // REAL-WORLD CONTEXT:
        // Imagine you are building a high-performance AI inference engine in C#.
        // You need to pass a tensor of floating-point values (e.g., 1024 floats)
        // to a native C++ library (like ONNX Runtime or a custom CUDA kernel) for matrix multiplication.
        // Copying this data from the managed heap to an unmanaged buffer is expensive
        // and introduces latency. We want to pass a pointer directly to the memory
        // where the data lives, but we must ensure the Garbage Collector (GC)
        // doesn't move the memory while the native code is accessing it.

        Console.WriteLine("--- Basic 'fixed' Statement Example ---");
        RunFixedStatementExample();

        Console.WriteLine("\n--- Pinned Object Handle (GCHandle) Example ---");
        RunGCHandleExample();
    }

    static void RunFixedStatementExample()
    {
        // 1. Allocate a managed array on the heap.
        // The GC is free to move this array in memory during a collection cycle.
        float[] tensorData = new float[8] { 1.0f, 2.0f, 3.0f, 4.0f, 5.0f, 6.0f, 7.0f, 8.0f };

        // 2. Pin the array.
        // The classic approach is the 'fixed' statement, whose pin ends at the
        // closing brace of its block. Here we use a small GCHandle-based
        // wrapper ('PinnedArray<float>', defined below) so the pin follows the
        // 'using' pattern and is released deterministically via Dispose(),
        // even if an exception occurs.
        using (var pinHandle = new PinnedArray<float>(tensorData))
        {
            // 3. Access the pointer.
            // We can now safely pass 'pinHandle.Pointer' to native APIs or perform
            // low-level pointer arithmetic.
            unsafe
            {
                float* ptr = pinHandle.Pointer;

                Console.WriteLine($"Array pinned at address: {(IntPtr)ptr:X}");

                // Simulate native processing: Read values via pointer
                // (e.g., passing 'ptr' to a C++ function).
                for (int i = 0; i < tensorData.Length; i++)
                {
                    // Dereferencing the pointer to read the value.
                    Console.Write(*(ptr + i) + " ");
                }
                Console.WriteLine();
            }
        }
        // 4. The 'using' block ends here.
        // The pin is released. The GC is now free to move 'tensorData' again if needed.
    }

    static void RunGCHandleExample()
    {
        // 1. Create a complex object (struct) that contains a fixed-size buffer.
        // Note: 'fixed' buffers can only be used in 'unsafe' contexts and 'struct' types.
        MyStruct dataStruct = new MyStruct();
        dataStruct.Initialize();

        // 2. Pin the object using GCHandle.
        // Note: passing a struct to GCHandle.Alloc boxes it, so what actually
        // gets pinned is a boxed copy on the heap (the initialized values are
        // copied into the box). Writes through the resulting pointer affect
        // the box, not the local 'dataStruct'. GCHandle is useful when a pin
        // must outlive a single lexical scope.
        GCHandle handle = GCHandle.Alloc(dataStruct, GCHandleType.Pinned);

        try
        {
            // 3. Get the address of the struct.
            IntPtr address = handle.AddrOfPinnedObject();

            // 4. Cast to a pointer to access the fixed buffer inside the struct.
            unsafe
            {
                // We know the layout of MyStruct. We access the buffer directly.
                // Note: In a real scenario, we might use Marshal.PtrToStructure or
                // pointer casting if the struct is blittable.
                MyStruct* ptr = (MyStruct*)address;

                Console.WriteLine($"Struct pinned at address: {address:X}");

                // Access the fixed buffer inside the struct
                // The buffer is named 'internalBuffer' in the struct definition.
                for (int i = 0; i < 4; i++)
                {
                    Console.Write(ptr->internalBuffer[i] + " ");
                }
                Console.WriteLine();
            }
        }
        finally
        {
            // 5. CRITICAL: Always free the GCHandle.
            // If you forget this, the object remains pinned permanently (until app exit),
            // causing memory fragmentation and preventing the GC from optimizing memory.
            handle.Free();
        }
    }
}

// Helper wrapper to mimic the 'using' pattern for 'fixed' blocks
// This is a common pattern to make pinning safer and more readable.
public sealed class PinnedArray<T> : IDisposable where T : unmanaged
{
    private GCHandle _handle;
    public unsafe T* Pointer { get; }

    public PinnedArray(T[] array)
    {
        _handle = GCHandle.Alloc(array, GCHandleType.Pinned);
        unsafe
        {
            Pointer = (T*)_handle.AddrOfPinnedObject();
        }
    }

    public void Dispose()
    {
        if (_handle.IsAllocated)
        {
            _handle.Free();
        }
    }
}

// Example struct containing a fixed buffer
// 'unsafe' is required to declare a struct with a fixed buffer field.
public unsafe struct MyStruct
{
    // 'fixed' creates a buffer of a specific size inline within the struct.
    // This is useful for small, fixed-size data blocks required by native APIs.
    public fixed float internalBuffer[4];

    public void Initialize()
    {
        // We cannot use standard array initialization syntax inside a fixed buffer.
        // We must assign values individually or use pointer arithmetic.
        // This method is a helper to populate the buffer for the example.
        fixed (float* ptr = internalBuffer)
        {
            ptr[0] = 10.0f;
            ptr[1] = 20.0f;
            ptr[2] = 30.0f;
            ptr[3] = 40.0f;
        }
    }
}

Detailed Line-by-Line Explanation

1. The Real-World Context

The code is framed around a High-Performance AI scenario. When processing tokens or tensors in AI applications, data is often represented as large arrays of floats or doubles. These calculations are frequently offloaded to native libraries (C++, CUDA, etc.) because they are computationally intensive. To call these libraries efficiently, we need to pass a memory pointer. However, C# runs on a managed memory heap where the Garbage Collector (GC) compacts memory to save space. If the GC moves an array while a native function is reading it, the application will crash or read corrupted data. The fixed statement solves this by "pinning" the object in place.

2. RunFixedStatementExample Breakdown

  1. float[] tensorData = new float[8] { ... };

    • What: Allocates a managed array on the heap.
    • Why: This is standard C# memory allocation. The GC tracks this object and can move it during a garbage collection cycle to defragment the heap.
  2. using (var pinHandle = new PinnedArray<float>(tensorData))

    • What: Instantiates a wrapper class PinnedArray and enters a using block.
    • Why: While C# has a native fixed statement syntax, using a wrapper class (or GCHandle) is often preferred in complex systems to manage the lifetime of the pin explicitly and safely. The using statement guarantees that Dispose() is called at the end of the block, releasing the pin.
  3. Inside PinnedArray<T> Constructor:

    • _handle = GCHandle.Alloc(array, GCHandleType.Pinned);
      • What: Asks the CLR to pin this specific array in memory.
      • Why: GCHandleType.Pinned tells the GC: "Do not move this object in memory until I say otherwise." This is the core mechanism preventing memory relocation.
    • Pointer = (T*)_handle.AddrOfPinnedObject();
      • What: Retrieves the raw memory address of the array's first element and casts it to a typed pointer (float*).
      • Why: This pointer can be passed to native functions expecting a float* or void*.
  4. float* ptr = pinHandle.Pointer;

    • What: Accesses the pointer within an unsafe context.
    • Why: Pointer arithmetic and dereferencing are only permitted in unsafe code.
  5. for (int i = 0; i < tensorData.Length; i++) Loop

    • *(ptr + i)
      • What: Dereferences the pointer at the offset i.
      • Why: This demonstrates how to read data directly from the pinned memory location, simulating how a native library would access the data.
  6. End of using Block

    • What: The Dispose() method of PinnedArray is called.
    • Why: _handle.Free() is invoked. This unpins the object, and the GC is again free to move the tensorData array. Failing to do this results in a leak where the object remains pinned indefinitely, causing heap fragmentation.

3. RunGCHandleExample Breakdown

  1. MyStruct dataStruct = new MyStruct();

    • What: Creates an instance of a value type (struct).
    • Why: Structs are often used for interop because they are blittable (their memory layout is predictable). This specific struct contains a fixed buffer.
  2. GCHandle handle = GCHandle.Alloc(dataStruct, GCHandleType.Pinned);

    • What: Pins the struct using the raw GCHandle API.
    • Why: This is the lower-level API that the fixed statement uses internally. It is useful when you need to pin an object that isn't a simple array or when you need the pin to live outside a specific lexical scope.
  3. try { ... } finally { handle.Free(); }

    • What: Standard resource management pattern.
    • Why: A GCHandle is an unmanaged resource. If an exception occurs between Alloc and Free, the pin would persist, leading to a memory leak. The finally block ensures the pin is always released.
  4. MyStruct* ptr = (MyStruct*)address;

    • What: Casts the generic IntPtr to a specific struct pointer.
    • Why: This allows access to the fields of the struct, specifically the internalBuffer.
  5. fixed (float* ptr = internalBuffer) inside Initialize

    • What: Uses the fixed statement specifically on the buffer field within the struct.
    • Why: You cannot assign values to a fixed buffer directly using array syntax (e.g., internalBuffer = {1,2,3} is invalid). You must use a pointer or copy into it. This block demonstrates how to safely populate the fixed buffer.

Common Pitfalls

  1. Forgetting to Unpin (handle.Free() or exiting fixed scope):

    • The Mistake: Pinning an object but failing to release the pin.
    • The Consequence: The object remains pinned forever. The Garbage Collector cannot compact the heap around this object, leading to memory fragmentation. Over time, this can cause OutOfMemoryException even if there is plenty of free memory, because the free memory is scattered in small gaps between pinned objects.
  2. Pinning Large Objects:

    • The Mistake: Pinning very large arrays (e.g., > 85KB) for long periods.
    • The Consequence: Large objects are allocated on the Large Object Heap (LOH). The LOH is not compacted by default. Pinning objects on the LOH is less damaging than on the small heap, but if you pin many large objects, you still waste significant memory and can exhaust the address space.
  3. Pointer Scope Violation:

    • The Mistake: Returning a pointer obtained from a fixed block to a method outside that block.
    • The Consequence: Once the fixed block ends, the pin is released. The GC is free to move the object. Using the pointer outside the block results in undefined behavior (accessing invalid memory).
  4. Pinning Non-Blittable Types:

    • The Mistake: Trying to pin an array of non-blittable types (like string or bool[] in some contexts).
    • The Consequence: Non-blittable types require marshaling (conversion) when passed to unmanaged code. You cannot simply pin them and pass the pointer; the data layout in memory differs between managed and unmanaged contexts.

Visualizing Memory Layout

A diagram illustrating how a non-blittable type (like a string) requires the runtime to copy and convert data between a Managed Heap (with its specific layout) and an Unmanaged Stack (with a different layout), whereas a blittable type (like an int) can be shared directly.

Explanation of the earlier pinning diagram:

  1. Managed Heap: Contains objects A, B, and C. There is free space between them.
  2. GC Compaction: Normally, the GC would move Object A to close the gap next to Object C.
  3. Pinning: Because Object B is Pinned, the GC cannot move it. It also cannot move Object A past Object B. The free space remains fragmented.
  4. Native Interop: We pass the address of Object B to the Native Library. Because it is pinned, the address remains valid while the native code reads it.

The chapter continues with advanced code, exercises, and solutions with analysis; you can find them in the ebook on Leanpub.com or Amazon.





Code License: All code examples are released under the MIT License. Github repo.

Content Copyright: Copyright © 2026 Edgar Milvus | Privacy & Cookie Policy. All rights reserved.

All textual explanations, original diagrams, and illustrations are the intellectual property of the author. To support the maintenance of this site via AdSense, please read this content exclusively online. Copying, redistribution, or reproduction is strictly prohibited.