
Chapter 19: Handling Missing Data in Datasets

Theoretical Foundations

In the high-stakes domain of AI model training and inference, particularly when dealing with massive datasets or tensor buffers, memory allocation is the silent performance killer. When we talk about handling missing data in datasets, we are often dealing with millions of rows. Traditional C# collections like List<T> or arrays allocated on the heap trigger the Garbage Collector (GC), causing unpredictable pauses. For AI applications requiring real-time inference or high-throughput training, these pauses are unacceptable. This is where Span<T> and Memory<T> enter the picture, offering a way to perform zero-allocation slicing and memory manipulation that is crucial for preparing high-dimensional vectors for embeddings.

The Heap vs. The Stack: A Performance Perspective

To understand Span<T>, we must first deeply understand where data lives.

The Heap: When you allocate an object using new, it lives on the managed heap.

  • Characteristics: Flexible size, managed by the Garbage Collector.
  • Cost: Allocation is fast, but deallocation is expensive. The GC must pause execution to clean up unreachable objects. In an AI training loop processing millions of data points, frequent heap allocations for temporary buffers (e.g., a slice of a dataset) will trigger GC cycles, stalling the training process.

The Stack: Local variables inside methods live on the stack.

  • Characteristics: Fixed size, extremely fast allocation/deallocation (just moving a pointer), and inherently thread-local (each thread owns its stack).
  • Limitation: You cannot allocate large objects (like a 1GB tensor) on the stack; it causes a stack overflow. However, you can allocate small buffers (e.g., 1KB) using stackalloc.
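To make the stack path concrete, here is a minimal sketch (the method name and the 256-element size are illustrative) of using stackalloc for a small scratch buffer that costs the GC nothing:

```csharp
using System;

public static class StackBufferDemo
{
    // Computes a sum of squares using a stack-allocated scratch buffer.
    // No heap allocation occurs; the buffer vanishes when the method returns.
    public static float SumOfSquares(ReadOnlySpan<float> features)
    {
        // 256 floats = 1 KB: comfortably within typical stack limits.
        Span<float> squared = stackalloc float[256];
        int n = Math.Min(features.Length, squared.Length);

        float sum = 0f;
        for (int i = 0; i < n; i++)
        {
            squared[i] = features[i] * features[i]; // scratch work lives on the stack
            sum += squared[i];
        }
        return sum;
    }
}
```

Calling `StackBufferDemo.SumOfSquares(new float[] { 3f, 4f })` yields 25 with zero GC pressure.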

The Analogy: Imagine the Heap as a massive, disorganized warehouse. To store a box (an object), you find an empty spot. To retrieve it, you have to search. When the warehouse gets full, a cleaning crew (GC) arrives, halting all operations to reorganize. The Stack is a single, organized pile of plates at a buffet. You place a plate on top (allocation) and remove the top plate (deallocation). It is instant. But you can only hold a limited number of plates.

Span<T> is the magic tool that lets you hold a "view" of plates from the warehouse without actually moving them to the buffet.

Span<T>: The Zero-Allocation Window

Span<T> is a ref struct. This is a critical architectural constraint: ref struct types can only live on the stack or in registers. They cannot be boxed, they cannot be fields in a class, and they cannot be used in async state machines. This guarantees that a Span<T> itself never lands on the heap: creating or slicing one is allocation-free.
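These constraints are enforced by the compiler. The following sketch shows a legal, stack-bound use of Span<T>, with the kinds of declarations the compiler rejects shown as comments (error codes are indicative):

```csharp
using System;

public static class RefStructRules
{
    // Legal: a Span<T> lives as a local, on the stack, for the method's duration.
    public static int SumLocal()
    {
        Span<int> local = stackalloc int[4];
        local.Fill(2);
        int sum = 0;
        foreach (int v in local) sum += v;
        return sum;
    }

    // Illegal (will not compile) -- shown only as comments:
    // private Span<int> _field;                   // CS8345: ref struct cannot be a class field
    // public async Task UseAsync(Span<int> s) {}  // CS4012: ref struct in an async method
    // object boxed = (object)someSpan;            // a ref struct can never be boxed
}
```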

In the context of AI embeddings, imagine you have a massive 10GB tensor stored in a contiguous memory buffer (an array). To process a specific batch of data (e.g., rows 1000 to 2000), you don't want to create a new array and copy that data. You simply want a "view" or a "window" into that existing memory. Span<T> provides exactly that.
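As a minimal sketch of that windowing idea (the buffer size and method name are illustrative), note that writes through the slice land in the original array:

```csharp
using System;

public static class SliceDemo
{
    // Mutates the backing buffer through a Span window and returns the
    // corresponding element of the original array to show the write is shared.
    public static float MutateThroughWindow()
    {
        float[] tensor = new float[10_000];         // stand-in for a tensor's backing store

        // Zero-copy "window" over rows 1000..1999: just a pointer + length, no copying.
        Span<float> batch = tensor.AsSpan(1000, 1000);

        batch[0] = -1f;                             // write through the window...
        return tensor[1000];                        // ...is visible in the underlying array
    }
}
```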

Reference to Previous Concepts: In Book 2, Chapter 12, we discussed IEnumerable<T> and deferred execution. While IEnumerable is excellent for querying databases, it is a terrible choice for high-performance vector math: it involves virtual calls, enumerator allocations, and boxing of struct enumerators. Span<T> bypasses the abstraction layer, giving you direct memory access similar to C++ pointers, but with C# safety guarantees (bounds checking is always enforced at runtime, not just in debug builds, though the JIT elides it where it can prove safety).

Memory<T> and ArrayPool<T>: Managing Large Buffers

While Span<T> is the view, Memory<T> is the ownership. Memory<T> is not a ref struct and can be stored on the heap (e.g., as a field in a class). It represents a sliceable buffer backed by an array, a string, or native memory (via MemoryManager<T>); unlike Span<T>, it cannot wrap stack-allocated memory, which is exactly why it is safe to store in fields and to pass across await boundaries.
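A brief sketch of the ownership side (the FeatureBuffer class is hypothetical): Memory<T> sits in a field, and a Span<T> is materialized from it only at the point of use:

```csharp
using System;

public class FeatureBuffer
{
    // Memory<T> is an ordinary struct, so it CAN live as a class field --
    // the compiler would reject a Span<T> field here.
    private readonly Memory<float> _features;

    public FeatureBuffer(float[] backing) => _features = backing;

    // At the point of use, materialize a Span for fast, allocation-free access.
    public float First() => _features.Span[0];

    public void Scale(float factor)
    {
        Span<float> s = _features.Span;
        for (int i = 0; i < s.Length; i++) s[i] *= factor; // in-place mutation
    }
}
```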

ArrayPool: Allocating large arrays (e.g., float[1_000_000]) repeatedly is expensive. ArrayPool<T> is a shared pool of arrays. Instead of new float[], you rent an array from the pool, use it, and return it. This prevents memory fragmentation and reduces GC pressure.

AI Context: When preparing data for an embedding model (like BERT or ResNet), you often need to normalize a vector or handle missing values (impute). Using ArrayPool allows you to rent a buffer to hold the normalized values without constantly allocating new arrays, keeping the memory profile flat during long training epochs.
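The rent/use/return pattern can be sketched as follows (the NormalizeSum method is illustrative; note that Rent may hand back a larger array than requested, so slice to the size you need):

```csharp
using System;
using System.Buffers;

public static class PoolDemo
{
    // Sums the input using a pooled scratch copy, then gives the buffer back.
    public static float NormalizeSum(ReadOnlySpan<float> input)
    {
        // Rent may return a larger array than requested; slice it down.
        float[] scratch = ArrayPool<float>.Shared.Rent(input.Length);
        try
        {
            Span<float> work = scratch.AsSpan(0, input.Length);
            input.CopyTo(work);

            float sum = 0f;
            foreach (float v in work) sum += v;
            return sum;
        }
        finally
        {
            // Always return, or the pool drains and falls back to fresh allocations.
            ArrayPool<float>.Shared.Return(scratch);
        }
    }
}
```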

SIMD and System.Numerics.Vector<T>: Hardware Acceleration

Modern CPUs have SIMD (Single Instruction, Multiple Data) instructions (AVX2, AVX-512). These allow the CPU to perform mathematical operations on multiple data points simultaneously (e.g., adding 8 floats at once).

System.Numerics.Vector<T> is a hardware-accelerated type. When you use Vector<float>, the JIT compiler translates this into SIMD instructions if the hardware supports it.

The Goal: Zero-Allocation Slicing + Hardware Accelerated Math.

We combine Span<T> (zero-copy access) with Vector<T> (SIMD math) to process data at maximum speed.
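A minimal sketch of that combination (class and method names are illustrative): element-wise addition over spans, vectorized with Vector<float> plus a scalar tail for the leftover elements:

```csharp
using System;
using System.Numerics;

public static class SimdAdd
{
    // Adds two equal-length float buffers element-wise, processing
    // Vector<float>.Count lanes per iteration, with a scalar tail.
    public static void Add(ReadOnlySpan<float> a, ReadOnlySpan<float> b, Span<float> dest)
    {
        int width = Vector<float>.Count; // e.g., 8 on AVX2 hardware
        int i = 0;

        for (; i <= a.Length - width; i += width)
        {
            var va = new Vector<float>(a.Slice(i, width));
            var vb = new Vector<float>(b.Slice(i, width));
            (va + vb).CopyTo(dest.Slice(i, width)); // one SIMD add per chunk
        }

        for (; i < a.Length; i++)                   // scalar tail
            dest[i] = a[i] + b[i];
    }
}
```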

Handling Missing Data with High Performance

In standard C#, handling missing data often involves LINQ: data.Where(x => x.HasValue).Select(...). On a Span<T>, LINQ is off the table:

  1. Span<T> is a ref struct: it cannot implement IEnumerable<T>, and it cannot be used in iterators (yield return).
  2. Even if it could, LINQ's delegates and enumerators cause allocations and indirect calls, destroying performance.

Instead, we use loops and vectorization. For missing data imputation (e.g., replacing nulls with the mean), we can iterate over the Span<float>, identify invalid values (NaN), and replace them.

Visualizing Memory Layout

The following diagram illustrates how Span<T> acts as a window into a larger memory block (Heap or Stack), allowing SIMD operations without copying data.

A conceptual diagram showing a Span<float> acting as a lightweight window over a contiguous block of memory (either on the Stack or Heap), highlighting how it enables efficient SIMD vector processing and in-place data cleaning (such as replacing NaNs) without duplicating the underlying data.

Practical Implementation: Zero-Allocation Imputation

Below is a performance-critical implementation of missing data handling. We assume missing data is represented as NaN (Not a Number) in a float buffer. We will replace NaN values with the mean of the vector using Span<T> and ArrayPool.

using System;
using System.Buffers;
using System.Numerics;
using System.Runtime.CompilerServices;

public class HighPerformanceImputation
{
    // Memory<T> is stored here to hold ownership of the rented array
    private Memory<float> _dataBuffer;

    public void ProcessData(int size)
    {
        // 1. Allocation Strategy: Rent from ArrayPool to avoid Heap Gen 0 GC
        float[] rentedArray = ArrayPool<float>.Shared.Rent(size);
        _dataBuffer = rentedArray.AsMemory(0, size);

        try
        {
            // Simulate loading data with missing values (NaN)
            InitializeDataWithMissingValues(_dataBuffer.Span);

            // 2. Calculate Mean (using Span for zero-copy access)
            float mean = CalculateMean(_dataBuffer.Span);

            // 3. Impute Missing Values (SIMD Accelerated)
            ImputeMissingValues(_dataBuffer.Span, mean);

            // 4. Use the data for AI Embedding (e.g., passing to a tensor)
            // Since we used Span, we didn't allocate new arrays during processing.
            ConsumeForInference(_dataBuffer.Span);
        }
        finally
        {
            // 5. Return the array to the pool. Crucial for long-running apps.
            ArrayPool<float>.Shared.Return(rentedArray);

            // Drop the stale reference: after Return, the pool may hand this
            // array to another caller, so _dataBuffer must not be used again.
            _dataBuffer = default;
        }
    }

    [MethodImpl(MethodImplOptions.AggressiveInlining)]
    private float CalculateMean(Span<float> data)
    {
        // We cannot use LINQ on Span. We use a simple loop.
        // For very large spans, we could use Vector<T> to sum chunks.
        double sum = 0;
        int count = 0;

        int i = 0;
        int length = data.Length;

        // Process in fixed-size blocks: the constant inner-loop bound lets the
        // JIT hoist bounds checks and unroll the inner loop.
        const int blockSize = 64; 
        for (; i <= length - blockSize; i += blockSize)
        {
            var block = data.Slice(i, blockSize);
            for (int j = 0; j < blockSize; j++)
            {
                float val = block[j];
                if (!float.IsNaN(val))
                {
                    sum += val;
                    count++;
                }
            }
        }

        // Handle remaining elements
        for (; i < length; i++)
        {
            if (!float.IsNaN(data[i]))
            {
                sum += data[i];
                count++;
            }
        }

        return count == 0 ? 0 : (float)(sum / count);
    }

    [MethodImpl(MethodImplOptions.AggressiveInlining)]
    private void ImputeMissingValues(Span<float> data, float replacementValue)
    {
        int i = 0;
        int length = data.Length;

        // SIMD Vectorization Setup
        // Vector<T>.Count depends on hardware (e.g., 8 for float on AVX2)
        int vectorSize = Vector<float>.Count;
        Vector<float> replacementVector = new Vector<float>(replacementValue);

        // Vectorized loop: processes multiple floats per iteration.
        // Vector<T> has no IsNaN, but NaN is the only value not equal to itself,
        // so Vector.Equals(v, v) yields an all-zero lane exactly where v is NaN.
        for (; i <= length - vectorSize; i += vectorSize)
        {
            var slice = data.Slice(i, vectorSize);
            Vector<float> vector = new Vector<float>(slice);

            // Mask: all bits set where the lane is a real number, zero where NaN.
            Vector<int> isNumber = Vector.Equals(vector, vector);

            // Keep the original value where the mask is set; otherwise substitute the mean.
            Vector<float> result = Vector.ConditionalSelect(isNumber, vector, replacementVector);
            result.CopyTo(slice);
        }

        // Handle remaining elements
        for (; i < length; i++)
        {
            if (float.IsNaN(data[i]))
            {
                data[i] = replacementValue;
            }
        }
    }

    private void InitializeDataWithMissingValues(Span<float> data)
    {
        // Fill with random data and some NaNs
        Random rnd = new Random(42);
        for (int i = 0; i < data.Length; i++)
        {
            data[i] = (rnd.NextDouble() > 0.1) ? (float)rnd.NextDouble() * 100 : float.NaN;
        }
    }

    private void ConsumeForInference(Span<float> data)
    {
        // In an AI context, this Span would be passed to a Tensor constructor
        // or a native binding (like ONNX Runtime) without copying.
        // Example: Tensor.Create(data, dimensions);
        Console.WriteLine($"Processed {data.Length} elements with zero heap allocations.");
    }
}

Architectural Implications for AI

  1. Tensor Buffers: Modern AI frameworks (TensorFlow.NET, TorchSharp) often use Span<T> or Memory<T> to expose tensor data. This allows C# developers to write custom preprocessing logic (like the imputation above) directly on the tensor memory without the overhead of converting to a managed collection.
  2. Batch Processing: When training models, data is processed in batches. Span<T> allows you to slice a large buffer into batch-sized windows instantly. If you used List<T>.GetRange(), it would allocate a new list and copy elements for every batch, which is disastrous for performance.
  3. Interoperability: Span<T> is compatible with pointers and can be used with stackalloc. This allows creating temporary buffers for feature engineering (e.g., calculating a moving average) entirely on the stack, ensuring that the memory is reclaimed the moment the function returns, leaving no trace for the GC to clean up.
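The batch-processing point can be sketched as follows (BatchProcessor and its delegate are hypothetical; a custom delegate is needed because a ref struct cannot be a generic type argument such as Action<Span<T>>). Each batch is a zero-copy window, not a copied sub-list:

```csharp
using System;

public static class BatchProcessor
{
    // Custom delegate: Span<T> cannot be used as a generic type argument.
    public delegate void BatchHandler(ReadOnlySpan<float> batch);

    // Walks a flat buffer in batch-sized, zero-copy windows.
    // Each Slice is just pointer + length; nothing is allocated per batch.
    public static int ForEachBatch(float[] data, int batchSize, BatchHandler handle)
    {
        int batches = 0;
        for (int offset = 0; offset < data.Length; offset += batchSize)
        {
            int len = Math.Min(batchSize, data.Length - offset);
            handle(data.AsSpan(offset, len));
            batches++;
        }
        return batches;
    }
}
```

Compare List<T>.GetRange, which allocates and copies a new list for every batch.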

By mastering Span<T>, Memory<T>, and ArrayPool<T>, you move from standard C# application development to systems-level programming, enabling the high-throughput, low-latency data manipulation pipelines required for modern AI systems.

Basic Code Example

Here is a basic code example demonstrating high-performance handling of missing data using Span<T> and stack allocation, tailored for AI data preprocessing.

using System;
using System.Numerics; // Required for Vector<T> (SIMD)

public class HighPerformanceDataPreprocessor
{
    public static void ProcessSensorData()
    {
        // Real-world context: Processing a stream of sensor readings (e.g., temperature)
        // where some values are missing (represented as -999.0f).
        // We need to replace these with the global average without allocating new arrays.

        // 1. ALLOCATION: Heap allocation for the raw data buffer.
        // In a real AI pipeline, this might be a massive tensor buffer loaded from disk.
        float[] rawData = new float[] { 22.5f, -999.0f, 23.1f, 22.8f, -999.0f, 24.0f };

        // 2. ZERO-ALLOCATION SLICING: Create a Span over the existing array.
        // Span<T> provides a type-safe view into memory without copying data.
        // This allows processing "slices" of large tensors efficiently.
        Span<float> dataSlice = rawData.AsSpan();

        // Calculate the average of valid data (ignoring missing values) for imputation.
        // We use a simple loop here to avoid LINQ allocations.
        float sum = 0;
        int validCount = 0;
        foreach (float val in dataSlice)
        {
            if (val != -999.0f) // Skip the missing-data sentinel
            {
                sum += val;
                validCount++;
            }
        }
        float globalAverage = validCount > 0 ? sum / validCount : 0.0f;

        // 3. HARDWARE ACCELERATION (SIMD): Using Vector<T> for batch processing.
        // This processes multiple data points simultaneously (e.g., 4 floats at once on AVX2).
        // Note: For this simple "Hello World", we simulate the logic. 
        // In a real scenario, we would loop with Vector.IsHardwareAccelerated checks.

        // 4. IN-PLACE MUTATION: Modifying the Span directly.
        // No new memory is allocated for the result. This is critical for high-throughput AI.
        for (int i = 0; i < dataSlice.Length; i++)
        {
            if (dataSlice[i] == -999.0f) // Detect the missing-value sentinel (exactly representable in float)
            {
                dataSlice[i] = globalAverage; // Impute directly into memory
            }
        }

        // Output results to verify
        Console.WriteLine($"Imputed Average: {globalAverage}");
        Console.WriteLine("Processed Data (Span): " + string.Join(", ", dataSlice.ToArray()));
    }
}

Explanation of the Code

  1. Contextual Problem: In AI and machine learning, datasets often contain missing values (NaNs or specific markers like -999). Before feeding data into a neural network, these must be handled. Standard approaches often create new arrays (allocating memory), which is slow and causes garbage collection (GC) pressure. This example solves the problem using zero-allocation techniques.

  2. Heap Allocation (float[]): The rawData array is allocated on the Heap. This is standard managed memory. While we want to avoid unnecessary allocations, we must start with data somewhere.

    • Why it matters: In AI, tensors (multidimensional arrays) are often gigabytes in size. Allocating them on the Heap is standard, but we must avoid creating copies during processing.
  3. Zero-Allocation Slicing (Span<T>): Span<float> dataSlice = rawData.AsSpan(); creates a lightweight view into the existing memory.

    • Why it matters: Span allows us to pass a "slice" of a massive tensor to a function without copying the data. It enforces memory safety at compile time (cannot outlive the data it points to). This is essential for processing huge datasets efficiently.
  4. Imputation Logic: We calculate the average of valid numbers. This requires iterating the data. We avoid LINQ (.Average()) because LINQ allocates an enumerator and delegates, which is forbidden on hot paths in high-performance code.

  5. In-Place Mutation: The for loop iterates through the Span. When a missing value is detected, we assign globalAverage directly to dataSlice[i].

    • Why it matters: We are modifying the original rawData array implicitly. This saves memory bandwidth and eliminates the need to allocate a second array to hold the results.
  6. SIMD Context (System.Numerics.Vector<T>): While the specific loop above is scalar (one by one), the using System.Numerics directive unlocks Vector<T>.

    • How it works: In a real-world scenario, you would check Vector.IsHardwareAccelerated. If true, you load 4, 8, or 16 floats into a CPU register simultaneously. You perform the comparison and math operations on all of them at once (Single Instruction, Multiple Data).
    • AI Connection: This is exactly how deep learning libraries (like TensorFlow or PyTorch) perform matrix multiplications on the CPU. They treat the data as Span<float> and use SIMD to crunch numbers at the hardware level.

Common Pitfalls

Using LINQ on Span<T>

A frequent mistake is attempting to use LINQ extension methods (like .Where(), .Select(), or .ToArray()) directly on a Span<T>.

  • The Error: Span<T> is a ref struct, so it cannot implement IEnumerable<T>; LINQ extension methods simply do not apply to it.
  • The Consequence: If you convert Span to an array or list to use LINQ (e.g., dataSlice.ToArray().Where(...)), you trigger a massive heap allocation. For a 1GB tensor, this doubles memory usage instantly and stresses the Garbage Collector, destroying performance.
  • The Solution: Use standard for loops or foreach (which works on Span<T> in modern C#) for iteration. For complex logic, write manual loops or use System.Numerics.Vector<T> for SIMD acceleration.

Memory Layout Visualization

The following diagram illustrates how Span<T> provides a view into heap memory without copying data.

A Span<T> provides a view into a contiguous region of heap memory, allowing direct access to data without copying or allocating new memory.
  • Heap: Stores the actual data. The float[] lives here.
  • Stack: Stores the Span<T> struct. It contains a pointer to the heap and the length. It is tiny (typically 16 bytes) and is cleaned up instantly when the function exits.
  • No Copy: The arrow indicates that the Span points to the Heap data. When we impute values in the Span, we are writing directly to the Heap memory.

The chapter continues with advanced code, exercises, and solutions with analysis; you can find them in the ebook on Leanpub.com or Amazon.



Code License: All code examples are released under the MIT License. Github repo.

Content Copyright: Copyright © 2026 Edgar Milvus | Privacy & Cookie Policy. All rights reserved.

All textual explanations, original diagrams, and illustrations are the intellectual property of the author. To support the maintenance of this site via AdSense, please read this content exclusively online. Copying, redistribution, or reproduction is strictly prohibited.