
Garbage Collection in C# - How .NET Cleans Up Your Mess

2026-03-23 29 min read Computer Science


You have just finished writing a beautiful piece of C# code. Objects are flying everywhere, lists are being populated, HTTP responses are being deserialized into models, and everything works. You never once called free(). You never once worried about dangling pointers. You just… created stuff and moved on with your life.

Must be nice, right? C++ developers are reading this paragraph with visible rage.

Here is the thing: those objects do not just vanish into thin air when you are done with them. Somewhere deep inside the .NET runtime, a sophisticated, battle-tested system is watching every allocation you make, tracking every reference you hold, and deciding which objects get to live and which ones get swept into oblivion. That system is the Garbage Collector (GC), and it is arguably the single most important piece of infrastructure sitting between your code and a catastrophic out-of-memory crash.

Understanding how the GC works is not just academic trivia. It is the difference between an application that hums along smoothly under load and one that freezes for 500 milliseconds every few seconds while the GC desperately tries to reclaim memory from the mountain of objects you carelessly left behind. If you have ever seen your application stutter, lag, or mysteriously consume 4 GB of RAM for no apparent reason, the GC was probably involved. And it was probably your fault.


What Is Garbage Collection (And Why Should You Care)?

In languages like C and C++, memory management is manual. You allocate memory with malloc or new, and you free it with free or delete. Forget to free it? Memory leak. Free it twice? Undefined behavior. Free it and then use it? Welcome to the world of use-after-free bugs, where your program does whatever it feels like, and “whatever it feels like” is usually “crash” or “get exploited by hackers.”

Garbage collection is the runtime’s automatic memory management system. It periodically identifies objects that are no longer reachable by your application and reclaims the memory they occupy. You allocate, you use, you stop referencing, and the GC cleans up. No manual intervention required.

Key Insight: Garbage collection does not happen the instant you stop using an object. It happens when the GC decides it is time, which is usually when memory pressure triggers a collection. Your object might sit in memory for milliseconds or minutes after you are done with it.

The Trade-Off

Nothing is free. The GC gives you safety and convenience, but it takes something in return: control and predictability. The GC decides when to run, how long it takes, and which objects to collect. During a collection, your application threads may be paused (a “stop-the-world” event). For most applications, this is a perfectly acceptable trade-off. For low-latency trading systems or real-time game engines, it is a constant negotiation.

// You write this:
var customer = new Customer("Alice", "alice@example.com");
ProcessOrder(customer);
// customer goes out of scope here... but it is NOT immediately freed

// The GC will eventually notice that nothing references 'customer' anymore
// and reclaim the memory. Eventually. On its own schedule.

Stack vs Heap: A Quick Refresher

Before we dive into the GC, we need to talk about where your data actually lives. The GC only manages heap memory, so understanding the distinction is essential.

The Stack

The stack is a small, fast, automatically managed region of memory. It stores:

  • Value types declared as local variables (int, double, bool, struct)
  • Method parameters
  • Return addresses (where to go after a method finishes)

The stack operates on a Last-In-First-Out basis. When a method is called, a stack frame is pushed. When it returns, the frame is popped. No GC involvement whatsoever. It is lightning fast and completely deterministic.

void CalculateTotal()
{
    int quantity = 5;           // Lives on the stack
    double price = 19.99;      // Lives on the stack
    double total = quantity * price; // Lives on the stack

    // When this method returns, all three variables are gone instantly.
    // No GC needed. The stack pointer just moves back.
}

The Managed Heap

The managed heap is where things get interesting (and where the GC earns its paycheck). It stores:

  • Reference type instances (classes, arrays, strings, delegates)
  • Boxed value types
  • Large objects (more on this later)

When you use the new keyword to create a reference type, memory is allocated on the managed heap. The variable on the stack holds a reference (essentially a pointer) to the heap-allocated object.

void CreateCustomer()
{
    // 'customer' (the reference) lives on the stack
    // The actual Customer object lives on the managed heap
    var customer = new Customer("Bob", "bob@example.com");

    // 'orders' (the reference) lives on the stack
    // The List<Order> object AND its internal array live on the heap
    var orders = new List<Order>();

    // When this method returns:
    // - The stack references are gone instantly
    // - The heap objects remain until the GC collects them
}

Key Insight: The GC only manages the managed heap. Stack memory is automatically reclaimed when a method returns. This is why value types allocated on the stack are “free” from the GC’s perspective, and why performance-sensitive code sometimes uses structs instead of classes.
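To make that concrete, here is a small sketch (the PointStruct and PointClass types are invented for this illustration) contrasting how many GC-tracked heap objects each choice creates:

```csharp
using System;

// Hypothetical types, invented for this illustration
struct PointStruct { public int X, Y; }   // value type
class PointClass  { public int X, Y; }    // reference type

class Demo
{
    static void Main()
    {
        // ONE heap allocation: the array itself. All 1000 points are
        // stored inline inside the array; the GC never sees them as
        // individual objects.
        var structs = new PointStruct[1000];

        // 1001 heap allocations: the array of references plus one heap
        // object per point, each of which the GC must track and trace.
        var classes = new PointClass[1000];
        for (int i = 0; i < classes.Length; i++)
            classes[i] = new PointClass();

        Console.WriteLine(structs.Length + classes.Length); // 2000
    }
}
```

An array of structs is a single object from the GC's perspective; an array of class instances is one object per element plus the array itself, all of which the mark phase must trace.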


GC Generations: The Generational Hypothesis

Here is the single most important idea behind .NET’s garbage collector: most objects die young.

Think about it. In a typical web request, you create a bunch of DTOs, strings, temporary lists, and response objects. The request completes in 50 milliseconds, and all of those objects are immediately garbage. Meanwhile, your singleton services, cached data, and configuration objects live for the entire lifetime of the application.

This observation is called the generational hypothesis, and it is the foundation of .NET’s generational garbage collection strategy. Instead of scanning every object on the heap every time, the GC divides the heap into three generations and focuses its effort where the most garbage is: the youngest objects.

The Apartment Building Analogy

Imagine a three-story apartment building where tenants are your objects.

Ground Floor (Generation 0): This is the lobby. Every new tenant starts here. It is small, crowded, and the turnover is insane. Most tenants check in, stay for five minutes, and leave. The building manager (GC) checks this floor constantly because there is always garbage to clean up, and it is quick because the floor is small.

Second Floor (Generation 1): If a tenant survives the lobby cleanup, they get promoted to the second floor. These tenants have been around for a little while, so they are probably going to stick around longer. The building manager checks this floor less often.

Third Floor (Generation 2): The long-term residents. These tenants have survived multiple cleanups and clearly plan to stay. The building manager rarely bothers checking this floor, and when they do, it is a big, time-consuming inspection.

Generation 0 (Gen 0)

Gen 0 is where all new objects are born (with one exception we will cover later). It is small, typically around 256 KB to a few MB. Collections here are extremely fast, usually under a millisecond.

// Every 'new' allocation starts in Gen 0
var name = new string('x', 10);               // Gen 0
var list = new List<int>();                   // Gen 0
var dto = new CustomerDto { Id = 1 };         // Gen 0
var buffer = new byte[100];                   // Gen 0 (if < 85,000 bytes)

When Gen 0 fills up, the GC triggers a Gen 0 collection. It checks which objects in Gen 0 are still reachable. Unreachable objects are swept away, and the survivors are promoted to Gen 1.

Rule of Thumb: Gen 0 collections are the most frequent and the cheapest. If your application generates a lot of short-lived objects (and most do), Gen 0 collections happen constantly, and that is perfectly fine.

Generation 1 (Gen 1)

Gen 1 acts as a buffer between short-lived and long-lived objects. Objects that survive a Gen 0 collection land here. A Gen 1 collection scans both Gen 1 and Gen 0 (since collecting Gen 1 always implies collecting Gen 0 as well).

Gen 1 is typically a bit larger than Gen 0. Objects here have proven they are not completely ephemeral, but they have not yet earned “long-term resident” status.

Generation 2 (Gen 2)

Gen 2 is the big one. This is where long-lived objects reside: singletons, caches, static data, anything that survives multiple collections. A Gen 2 collection is also called a full collection because it scans the entire heap (Gen 0, Gen 1, and Gen 2).

Full collections are expensive. They take the longest, and they are the ones most likely to cause noticeable pauses in your application. The GC tries to avoid them, but when Gen 2 fills up or the system is under memory pressure, a full collection is inevitable.

// Demonstrating generation promotion
void ShowGenerations()
{
    var obj = new object();

    Console.WriteLine($"After creation: Gen {GC.GetGeneration(obj)}");
    // Output: After creation: Gen 0

    GC.Collect(); // Force a collection (don't do this in production!)
    Console.WriteLine($"After 1st GC: Gen {GC.GetGeneration(obj)}");
    // Output: After 1st GC: Gen 1

    GC.Collect();
    Console.WriteLine($"After 2nd GC: Gen {GC.GetGeneration(obj)}");
    // Output: After 2nd GC: Gen 2

    GC.Collect();
    Console.WriteLine($"After 3rd GC: Gen {GC.GetGeneration(obj)}");
    // Output: After 3rd GC: Gen 2 (can't go higher, already at max)
}

Generations at a Glance

| Property | Gen 0 | Gen 1 | Gen 2 |
| --- | --- | --- | --- |
| Contains | Newly allocated objects | Survived 1 collection | Survived 2+ collections |
| Typical Size | Small (256 KB to a few MB) | Medium | Large (can grow significantly) |
| Collection Frequency | Very frequent | Moderate | Infrequent |
| Collection Cost | Very cheap (< 1 ms) | Moderate | Expensive (can be 10-100+ ms) |
| Trigger | Gen 0 budget exhausted | Gen 1 budget exhausted | Gen 2 budget exhausted or memory pressure |
| Also Collects | Only Gen 0 | Gen 0 + Gen 1 | Gen 0 + Gen 1 + Gen 2 (full GC) |

How Collection Actually Works: Mark, Sweep, Compact

Alright, so the GC decides it is time to collect. What actually happens? The process has three phases, and they are surprisingly intuitive.

Phase 1: Mark (Who Is Still Alive?)

The GC starts from a set of GC roots, which are references that are guaranteed to be “alive.” These include:

  • Local variables on the stack (in active methods)
  • Static fields
  • CPU registers that hold object references
  • GC handles (pinned objects, strong references)
  • Finalization queue references

Starting from these roots, the GC walks the object graph. It follows every reference from every root, marking each reachable object as “alive.” If an object cannot be reached from any root through any chain of references, it is considered unreachable and is garbage.

void MarkPhaseExample()
{
    var alice = new Customer("Alice");   // 'alice' is a root -> Customer is reachable
    var bob = new Customer("Bob");       // 'bob' is a root -> Customer is reachable
    alice.Friend = bob;                  // Alice references Bob (redundant, Bob is already reachable)

    bob = null;                          // 'bob' no longer points to Bob's Customer object
    // BUT Alice still references Bob through alice.Friend
    // So Bob's object is STILL reachable (through alice -> Friend -> Bob's object)

    alice = null;                        // Now neither Alice nor Bob is reachable
    // Both objects are garbage. The GC will reclaim them during the next collection.
}

Think of it like a “follow the string” game. The GC holds one end of the string (the roots) and follows it to every connected object. If an object has no string attached, it is floating in the void. Goodbye.

Phase 2: Sweep (Remove the Dead)

Once marking is complete, the GC knows exactly which objects are alive and which are garbage. The sweep phase logically reclaims the memory occupied by unreachable objects. In practice, the .NET GC does not always do a separate “sweep” pass; it combines sweeping with compaction for efficiency.

Phase 3: Compact (Defragment the Heap)

After dead objects are removed, the heap looks like Swiss cheese, with live objects scattered among gaps of freed memory. This fragmentation is bad for performance because:

  1. Allocating new objects requires finding a gap large enough, which is slow.
  2. Cache locality suffers when related objects are spread across memory.

The compaction phase slides all surviving objects together, eliminating the gaps. This turns the heap back into a contiguous block of memory, with all the free space at the end. The GC also updates all references to point to the objects’ new locations.

Before compaction:
[Alice][    ][Bob][        ][Carol][    ]
         ^          ^                ^
       (dead)     (dead)          (dead)

After compaction:
[Alice][Bob][Carol][                    ]
                    ^
              Free space (next allocation goes here)

Key Insight: Compaction is one of the reasons allocation in .NET is so fast. New objects are always allocated at the end of the used portion of the heap (a simple pointer bump), rather than searching for a free slot. This makes new in C# nearly as fast as stack allocation.

The “Stop-the-World” Reality

During a garbage collection (especially the mark phase), application threads must be paused. The GC needs a consistent view of the object graph, and if your threads are modifying references while the GC is tracing them, chaos ensues.

For Gen 0 and Gen 1 collections, these pauses are typically sub-millisecond and unnoticeable. For full Gen 2 collections on a large heap, the pause can be tens or even hundreds of milliseconds. The GC has strategies to minimize this (background GC, concurrent marking), but it is an inherent cost.

// You can observe GC pauses with a simple stopwatch
var sw = Stopwatch.StartNew();
GC.Collect(2, GCCollectionMode.Forced, blocking: true);
sw.Stop();
Console.WriteLine($"Full GC took: {sw.ElapsedMilliseconds} ms");
// On a large heap, this could be 50-200+ ms

The Large Object Heap (LOH)

Remember when I said all new objects start in Gen 0? I lied. Well, partially.

Any object 85,000 bytes or larger is allocated directly on the Large Object Heap (LOH). This is a separate region of the managed heap with its own rules:

  1. LOH objects are logically considered Gen 2. They are only collected during full (Gen 2) collections.
  2. The LOH is NOT compacted by default. Moving large objects around in memory is expensive, so the GC maintains a free list instead of compacting.
  3. Fragmentation is a real problem. Since the LOH is not compacted, allocating and deallocating large objects of varying sizes can leave unusable gaps.

// This array is 85,000+ bytes, so it goes straight to the LOH
var largeArray = new byte[85_000]; // 85,000 bytes -> LOH
Console.WriteLine($"Generation: {GC.GetGeneration(largeArray)}");
// Output: Generation: 2

// This one stays in Gen 0 (comfortably under the threshold; note that
// the limit applies to the object's total size, header included)
var smallArray = new byte[80_000]; // well below 85,000 bytes -> Gen 0
Console.WriteLine($"Generation: {GC.GetGeneration(smallArray)}");
// Output: Generation: 0

LOH Compaction (When You Really Need It)

Starting with .NET 4.5.1, you can request LOH compaction. It does not happen automatically, and for good reason (it is expensive), but if you have serious LOH fragmentation, it is an escape hatch.

// Request LOH compaction on the next full GC
GCSettings.LargeObjectHeapCompactionMode = GCLargeObjectHeapCompactionMode.CompactOnce;
GC.Collect(); // The next full GC will compact the LOH (one time only)

// After the collection, it resets to Default (no compaction)
// You must set it again if you want another compaction

Rule of Thumb: If you are frequently allocating and deallocating large arrays of varying sizes, consider using ArrayPool<T> to reuse buffers instead of relying on the GC to manage LOH allocations. Pool your large objects whenever possible.

using System.Buffers;

// Instead of this (allocates on LOH, creates GC pressure):
byte[] buffer = new byte[100_000];
ProcessData(buffer);
// buffer becomes garbage, sits on LOH until next full GC

// Do this (rents from a pool, returns it when done):
byte[] pooledBuffer = ArrayPool<byte>.Shared.Rent(100_000);
try
{
    ProcessData(pooledBuffer);
}
finally
{
    ArrayPool<byte>.Shared.Return(pooledBuffer); // Back to the pool, no GC needed
}

Finalization and IDisposable: Cleaning Up Properly

Not all resources are managed by the GC. File handles, database connections, network sockets, and unmanaged memory are all examples of unmanaged resources that the GC knows nothing about. If you allocate a file handle and then abandon the object that holds it, the GC will eventually collect the object, but the file handle will leak.

This is where finalizers and IDisposable come in.

Finalizers (The Safety Net)

A finalizer (declared with ~ClassName()) is a special method that the GC calls before reclaiming an object’s memory. It gives you a last chance to release unmanaged resources.

public class FileWrapper
{
    private IntPtr _fileHandle;

    public FileWrapper(string path)
    {
        _fileHandle = NativeMethods.OpenFile(path);
    }

    // Finalizer: called by the GC before the object is collected
    ~FileWrapper()
    {
        if (_fileHandle != IntPtr.Zero)
        {
            NativeMethods.CloseFile(_fileHandle);
            _fileHandle = IntPtr.Zero;
        }
    }
}

But here is the catch (and it is a big one):

  1. Finalizable objects survive an extra collection. When the GC encounters a finalizable object during marking, it does not collect it. Instead, it moves the object to the finalization queue. A dedicated finalizer thread runs the finalizer, and the object is only eligible for collection in the next GC cycle. This means finalizable objects are always promoted at least one extra generation.

  2. Finalization is non-deterministic. You have no control over when the finalizer runs. It could be milliseconds or minutes after the object becomes unreachable.

  3. Finalization is expensive. The extra collection, the dedicated thread, the unpredictable timing, it all adds up.

Finalizers are the “break glass in case of emergency” mechanism. They are not for routine cleanup.
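You can watch the non-determinism yourself with a minimal sketch (Tracker is a throwaway class invented for this demo). The forced collection calls exist purely to make the demo finish; they are exactly what you should not do in real code:

```csharp
using System;

// Throwaway class for this demo only
class Tracker
{
    ~Tracker() => Console.WriteLine("Finalizer ran (on the finalizer thread)");
}

class Program
{
    // After this method returns, the Tracker instance is unreachable
    static void CreateGarbage() => _ = new Tracker();

    static void Main()
    {
        CreateGarbage();
        Console.WriteLine("Object is unreachable, but the finalizer has not run yet.");

        // Only forcing a collection AND waiting for the finalizer thread
        // makes this deterministic. In production code, the finalizer would
        // run whenever the GC gets around to it: maybe soon, maybe never
        // (if the process exits first).
        GC.Collect();
        GC.WaitForPendingFinalizers();
        Console.WriteLine("Now the finalizer has definitely run.");
    }
}
```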

IDisposable (The Right Way)

IDisposable gives you deterministic cleanup. You explicitly call Dispose() (or use a using block) to release resources immediately, without waiting for the GC.

public class DatabaseConnection : IDisposable
{
    private SqlConnection _connection;
    private bool _disposed = false;

    public DatabaseConnection(string connectionString)
    {
        _connection = new SqlConnection(connectionString);
        _connection.Open();
    }

    public void ExecuteQuery(string sql)
    {
        ObjectDisposedException.ThrowIf(_disposed, this); // .NET 7+
        // Execute query...
    }

    public void Dispose()
    {
        Dispose(disposing: true);
        GC.SuppressFinalize(this); // Tell the GC: "No need to finalize, I already cleaned up"
    }

    protected virtual void Dispose(bool disposing)
    {
        if (!_disposed)
        {
            if (disposing)
            {
                // Dispose managed resources
                _connection?.Dispose();
            }

            // Release unmanaged resources (if any)

            _disposed = true;
        }
    }

    ~DatabaseConnection()
    {
        Dispose(disposing: false); // Safety net: release unmanaged resources only
    }
}

The Using Statement (Your Best Friend)

The using statement ensures Dispose() is called even if an exception is thrown. It is syntactic sugar for a try/finally block.

// Classic using block
using (var connection = new DatabaseConnection("Server=localhost;Database=MyDb"))
{
    connection.ExecuteQuery("SELECT * FROM Customers");
} // Dispose() is called here, guaranteed

// Modern C# using declaration (scoped to the enclosing block)
using var stream = new FileStream("data.txt", FileMode.Open);
using var reader = new StreamReader(stream);
string content = reader.ReadToEnd();
// Both are disposed when the method exits
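For the curious, the classic using block above lowers to roughly the following try/finally (sketched with this article's DatabaseConnection class; the null check mirrors what the compiler emits for reference types):

```csharp
// Roughly what the compiler generates for the classic using block:
var connection = new DatabaseConnection("Server=localhost;Database=MyDb");
try
{
    connection.ExecuteQuery("SELECT * FROM Customers");
}
finally
{
    // Runs even if ExecuteQuery throws
    if (connection != null)
    {
        ((IDisposable)connection).Dispose();
    }
}
```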

Key Insight: Always use using for IDisposable objects. If an object implements IDisposable, it is telling you “I hold resources that the GC cannot clean up on its own. Please call Dispose() when you are done.” Ignoring this is how you get connection pool exhaustion, file handle leaks, and 3 AM pages.

GC.SuppressFinalize: Why It Matters

Notice the GC.SuppressFinalize(this) call in the Dispose() method above. This tells the GC: “I have already cleaned up everything. Do not bother running my finalizer.” This is critical because:

  1. It prevents the object from being placed on the finalization queue.
  2. It allows the object to be collected in the current GC cycle rather than surviving an extra one.
  3. It avoids the overhead of the finalizer thread.

// Without SuppressFinalize:
// GC finds object -> moves to finalization queue -> finalizer thread runs ->
// object is collected on NEXT GC cycle (promoted one extra generation)

// With SuppressFinalize:
// GC finds object -> collects immediately (no finalization overhead)

GC Modes: Workstation vs Server

The .NET GC is not a one-size-fits-all system. It has different modes optimized for different workloads.

Workstation GC

Workstation GC is the default for client applications (console apps, WPF, WinForms). It is optimized for responsiveness and low latency on a single machine.

  • Collections happen on the thread that triggered the allocation.
  • Uses a single GC heap and a single GC thread.
  • Designed for scenarios where short pauses matter more than raw throughput.
  • Background GC is enabled by default for Gen 2 collections (the application can continue running while the GC does most of its work concurrently).

Server GC

Server GC is designed for server applications (ASP.NET, web APIs, microservices) running on multi-core machines. It is optimized for throughput.

  • Creates one heap and one dedicated GC thread per logical processor.
  • Collections happen in parallel across all GC threads simultaneously.
  • Higher memory usage (multiple heaps) but much higher throughput.
  • Better for applications handling many requests concurrently.

// Check which GC mode is active
Console.WriteLine($"Server GC: {GCSettings.IsServerGC}");
Console.WriteLine($"Latency Mode: {GCSettings.LatencyMode}");

Configuring GC Mode

You configure the GC mode in your project file or runtimeconfig.json:

<!-- In your .csproj file -->
<PropertyGroup>
    <ServerGarbageCollection>true</ServerGarbageCollection>
    <ConcurrentGarbageCollection>true</ConcurrentGarbageCollection>
</PropertyGroup>
// In runtimeconfig.json
{
    "runtimeOptions": {
        "configProperties": {
            "System.GC.Server": true,
            "System.GC.Concurrent": true
        }
    }
}

Latency Modes

.NET also offers latency modes that let you fine-tune the GC’s aggressiveness:

| Mode | Description | Use Case |
| --- | --- | --- |
| Batch | Full blocking collections, maximum throughput | Batch processing, no UI |
| Interactive | Default. Background Gen 2, moderate pauses | General-purpose applications |
| LowLatency | Avoids full GC when possible | Time-sensitive operations |
| SustainedLowLatency | Minimizes pauses aggressively | Real-time applications, trading |
| NoGCRegion | Temporarily disables GC entirely (entered via GC.TryStartNoGCRegion, not set directly) | Ultra-critical sections |

// Temporarily enter a low-latency mode for a critical section
GCSettings.LatencyMode = GCLatencyMode.SustainedLowLatency;
try
{
    // Perform latency-sensitive work here
    // Full GC collections are suppressed (but Gen 0/1 still occur)
    ProcessTimeSensitiveData();
}
finally
{
    GCSettings.LatencyMode = GCLatencyMode.Interactive; // Restore default
}

// Or, for ultra-critical sections, disable GC entirely
if (GC.TryStartNoGCRegion(50_000_000)) // Reserve 50 MB
{
    try
    {
        // No GC will happen here (as long as you stay within the budget)
        ExecuteCriticalPath();
    }
    finally
    {
        GC.EndNoGCRegion();
    }
}

Rule of Thumb: Use Workstation GC for client apps, Server GC for server apps. If you need to tune latency modes, you are probably in a specialized domain (gaming, finance, real-time systems) and should profile extensively before changing defaults.


Common Pitfalls and Performance Tips

Now that you understand how the GC works, let’s talk about how to stop fighting it and start working with it.

Pitfall 1: Calling GC.Collect() Manually

This is the number one GC sin. Calling GC.Collect() forces a full collection, which is expensive and disrupts the GC’s carefully tuned internal heuristics.

// DON'T DO THIS
void ProcessBatch()
{
    for (int i = 0; i < 1000; i++)
    {
        ProcessItem(i);
        GC.Collect(); // NO. Stop it. The GC is smarter than you.
    }
}

// The GC's internal heuristics track allocation patterns, heap sizes,
// and survival rates to determine the optimal time to collect.
// Forcing a collection throws all of that out the window.

There are legitimate (rare) cases for GC.Collect(), such as benchmarking or after releasing a large cache. But if you find yourself calling it in a loop, something has gone very wrong.
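For completeness, the one widely accepted pattern is the "settle the heap" sequence used in benchmarking, similar to what tools like BenchmarkDotNet perform internally. MeasureAllocations is a hypothetical helper, and the result is approximate:

```csharp
using System;

static class GcBenchmarkHelper
{
    // Hypothetical helper: stabilize the heap, then estimate how many
    // bytes an action allocates. Acceptable ONLY in benchmarking code.
    public static long MeasureAllocations(Action action)
    {
        GC.Collect();                    // start from a clean heap
        GC.WaitForPendingFinalizers();   // let pending finalizers finish
        GC.Collect();                    // collect what the finalizers freed

        long before = GC.GetTotalMemory(forceFullCollection: true);
        action();
        long after = GC.GetTotalMemory(forceFullCollection: false);

        // Approximate: a GC during 'action' would skew the delta
        return after - before;
    }
}
```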

Pitfall 2: Unintentional Object Retention

Objects that you think are garbage might still be reachable through references you forgot about.

public class EventPublisher
{
    public event EventHandler DataChanged;

    public void Publish()
    {
        DataChanged?.Invoke(this, EventArgs.Empty);
    }
}

public class Subscriber
{
    private readonly byte[] _largeBuffer = new byte[10_000_000]; // 10 MB

    public Subscriber(EventPublisher publisher)
    {
        publisher.DataChanged += OnDataChanged; // This creates a reference!
    }

    private void OnDataChanged(object sender, EventArgs e)
    {
        // Handle the event
    }

    // If you never unsubscribe, the EventPublisher holds a reference
    // to this Subscriber FOREVER. The 10 MB buffer cannot be collected.
    // This is a classic .NET memory leak.
}

// The fix: unsubscribe when done
public class BetterSubscriber : IDisposable
{
    private readonly EventPublisher _publisher;
    private readonly byte[] _largeBuffer = new byte[10_000_000];

    public BetterSubscriber(EventPublisher publisher)
    {
        _publisher = publisher;
        _publisher.DataChanged += OnDataChanged;
    }

    private void OnDataChanged(object sender, EventArgs e) { }

    public void Dispose()
    {
        _publisher.DataChanged -= OnDataChanged; // Unsubscribe!
    }
}

Pitfall 3: Mid-Life Crisis (The Gen 1 Problem)

Objects that live just long enough to be promoted to Gen 1 (or worse, Gen 2) but then die quickly are the GC’s worst nightmare. They are too old for the cheap Gen 0 collection but too short-lived to justify their promotion.

// BAD: Object lives just long enough to get promoted, then dies
async Task ProcessRequestBad()
{
    var cache = new Dictionary<string, object>(); // Created in Gen 0

    await SomeSlowOperation(); // GC might run during this await, promoting 'cache' to Gen 1

    cache.Add("key", "value");
    // cache goes out of scope... but it is in Gen 1 now
    // It will only be collected during a Gen 1 collection, which is less frequent
}

// BETTER: Keep short-lived objects short-lived
async Task ProcessRequestBetter()
{
    await SomeSlowOperation();

    // Create the dictionary AFTER the await, so it stays in Gen 0
    var cache = new Dictionary<string, object>();
    cache.Add("key", "value");
    // cache is still in Gen 0 and will be collected cheaply
}

Pitfall 4: LOH Fragmentation from Varying-Size Arrays

// BAD: Creates LOH fragmentation over time
void ProcessVariousFiles(string[] filePaths)
{
    foreach (var path in filePaths)
    {
        byte[] buffer = File.ReadAllBytes(path); // Different sizes each time
        ProcessBuffer(buffer);
        // Each buffer is a different size on the LOH, creating Swiss cheese
    }
}

// BETTER: Use ArrayPool for large buffers
void ProcessVariousFilesBetter(string[] filePaths)
{
    foreach (var path in filePaths)
    {
        var fileInfo = new FileInfo(path);
        // Note: Rent may return an array LARGER than requested
        byte[] buffer = ArrayPool<byte>.Shared.Rent((int)fileInfo.Length);
        try
        {
            using var stream = File.OpenRead(path);
            // ReadExactly (.NET 7+) avoids the partial-read bug of Read()
            stream.ReadExactly(buffer, 0, (int)fileInfo.Length);
            ProcessBuffer(buffer.AsSpan(0, (int)fileInfo.Length));
        }
        }
        finally
        {
            ArrayPool<byte>.Shared.Return(buffer);
        }
    }
}

Performance Tips Summary

  1. Allocate less. The cheapest GC is the one that never happens. Use Span<T>, stackalloc, ValueTask, and structs where appropriate.
  2. Allocate short-lived or long-lived, nothing in between. Objects should either die in Gen 0 or live forever. The “mid-life crisis” pattern is the most expensive.
  3. Pool large objects. Use ArrayPool<T> for large byte arrays. Use ObjectPool<T> from Microsoft.Extensions.ObjectPool for expensive objects.
  4. Dispose deterministically. Always use using for IDisposable objects. Do not rely on finalizers for routine cleanup.
  5. Avoid unnecessary boxing. Boxing a value type creates a new object on the heap. Use generics to avoid it.
  6. Be careful with closures and lambdas. They can capture variables and extend their lifetimes beyond what you expect.
  7. Profile before optimizing. Use tools like dotnet-counters, dotnet-trace, Visual Studio Diagnostic Tools, or JetBrains dotMemory to identify actual GC pressure before guessing.

// Tip 1: Use Span<T> and stackalloc to avoid heap allocation entirely
void ParseNumbersEfficiently(ReadOnlySpan<char> input)
{
    // stackalloc allocates on the stack, not the heap. Zero GC pressure.
    Span<int> results = stackalloc int[64];
    int count = 0;

    // MemoryExtensions.Split on a span (yielding Range values) requires .NET 9+
    foreach (var range in input.Split(','))
    {
        if (int.TryParse(input[range], out int value))
        {
            results[count++] = value;
        }
    }

    ProcessResults(results[..count]);
}

// Tip 5: Avoid boxing
void BoxingExample()
{
    int value = 42;

    // BAD: boxing (creates object on heap)
    object boxed = value;

    // BAD: boxing through non-generic interface
    IComparable comparable = value;

    // GOOD: use generics to avoid boxing
    Compare<int>(value, 43); // No boxing
}

void Compare<T>(T a, T b) where T : IComparable<T>
{
    Console.WriteLine(a.CompareTo(b)); // No boxing because of generic constraint
}
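Tip 6 deserves its own sketch. A closure captures variables, not values, and all captured locals from one scope share a single compiler-generated closure object, so a small long-lived delegate can pin a large buffer (MakeHandler and its locals are invented for this illustration):

```csharp
using System;

static class ClosureDemo
{
    static Func<int> MakeHandler()
    {
        var largeBuffer = new byte[10_000_000]; // 10 MB
        int counter = 0;

        // Used once, then never again:
        Action logSize = () => Console.WriteLine(largeBuffer.Length);
        logSize();

        // Both lambdas are declared in the same scope, so the compiler
        // stores 'largeBuffer' AND 'counter' in ONE shared closure object.
        // As long as the returned delegate is reachable, so is the 10 MB
        // buffer, even though this lambda never touches it.
        return () => ++counter;
    }

    static void Main()
    {
        var counter = MakeHandler(); // the 10 MB buffer is still alive here
        Console.WriteLine(counter());
    }
}
```

The fix is to scope large locals so they are not captured in the same closure as long-lived lambdas, for example by moving the logging into a separate method.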

Inspecting the GC at Runtime

You do not have to guess what the GC is doing. .NET provides APIs and tools to observe it.

// Get GC statistics
void PrintGCStats()
{
    var info = GC.GetGCMemoryInfo();

    Console.WriteLine($"Heap Size:        {info.HeapSizeBytes / 1024.0 / 1024.0:F2} MB");
    Console.WriteLine($"Fragmentation:    {info.FragmentedBytes / 1024.0 / 1024.0:F2} MB");
    Console.WriteLine($"Gen 0 Collections: {GC.CollectionCount(0)}");
    Console.WriteLine($"Gen 1 Collections: {GC.CollectionCount(1)}");
    Console.WriteLine($"Gen 2 Collections: {GC.CollectionCount(2)}");
    Console.WriteLine($"Total Memory:     {GC.GetTotalMemory(forceFullCollection: false) / 1024.0 / 1024.0:F2} MB");
    Console.WriteLine($"GC Latency Mode:  {GCSettings.LatencyMode}");
    Console.WriteLine($"Server GC:        {GCSettings.IsServerGC}");
}
# Monitor GC activity in real time from the command line
dotnet-counters monitor --process-id <PID> --counters System.Runtime

# Collect a GC trace for detailed analysis
dotnet-trace collect --process-id <PID> --providers Microsoft-Windows-DotNETRuntime:0x1:5

Wrapping Up

The .NET garbage collector is one of the most sophisticated pieces of runtime infrastructure in any managed platform. It uses generational collection based on the observation that most objects die young, dividing the heap into Gen 0, Gen 1, and Gen 2 to optimize for the common case. It marks reachable objects, sweeps the dead, compacts survivors, and does all of this with remarkably low overhead for most applications.

But “automatic” does not mean “invisible.” The GC is your partner, not your servant. Understanding how it works lets you write code that plays nice with its heuristics: allocate short-lived objects freely, pool your large buffers, dispose your unmanaged resources deterministically, and resist the urge to call GC.Collect(). Do these things, and the GC will quietly handle millions of allocations without you ever noticing. Ignore them, and you will spend your weekends staring at memory profiler output wondering why your API’s P99 latency just doubled.


Cheat Sheet

Key Questions & Answers

What is garbage collection in .NET?

Garbage collection is the automatic memory management system in the .NET runtime. It periodically identifies objects on the managed heap that are no longer reachable by the application and reclaims their memory. This eliminates manual memory management (no free() or delete) and prevents entire classes of bugs like use-after-free, double-free, and most memory leaks.

What are GC generations and why do they exist?

The GC divides the managed heap into three generations (Gen 0, Gen 1, Gen 2) based on the generational hypothesis: most objects die young. Gen 0 holds new objects and is collected frequently and cheaply. Gen 1 is a buffer zone. Gen 2 holds long-lived objects and is collected rarely but expensively. This allows the GC to focus its effort on the most productive area (Gen 0) rather than scanning the entire heap every time.

What is the Large Object Heap (LOH)?

The LOH is a separate heap region for objects 85,000 bytes or larger. LOH objects are logically Gen 2 and only collected during full collections. The LOH is not compacted by default (because moving large objects is expensive), which can lead to fragmentation. Use ArrayPool\<T> to mitigate LOH pressure.

When should I use IDisposable vs a finalizer?

Use IDisposable for deterministic cleanup of unmanaged resources (files, connections, handles). Always call Dispose() via a using statement. Finalizers are a safety net only, for cases where Dispose() was not called. Always call GC.SuppressFinalize(this) in your Dispose() method to avoid the extra GC overhead of finalization.
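A minimal sketch of that pattern; `ResourceHolder` and its fake `IntPtr` handle are illustrative stand-ins for a type wrapping a real OS resource:

```csharp
using System;

// Usage: the using statement guarantees Dispose runs at scope exit
using (var holder = new ResourceHolder())
{
    // work with the resource
}

public sealed class ResourceHolder : IDisposable
{
    private IntPtr _handle = new IntPtr(1); // pretend unmanaged handle
    private bool _disposed;

    public void Dispose()
    {
        if (_disposed) return;
        _disposed = true;
        _handle = IntPtr.Zero;         // deterministic cleanup, right now
        GC.SuppressFinalize(this);     // skip the safety net: work is already done
    }

    ~ResourceHolder()                  // safety net only, runs if Dispose was never called
    {
        _handle = IntPtr.Zero;
    }
}
```

Without `GC.SuppressFinalize`, even a properly disposed object still lands on the finalization queue and survives an extra GC cycle for nothing.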

What is the difference between Workstation GC and Server GC?

Workstation GC uses a single heap and GC thread, optimized for responsiveness and low memory usage on client applications. Server GC creates one heap per logical processor with dedicated GC threads, optimized for throughput on multi-core server machines. Configure via <ServerGarbageCollection> in your .csproj or System.GC.Server in runtimeconfig.json.

Should I ever call GC.Collect() manually?

Almost never. The GC’s internal heuristics are carefully tuned based on allocation patterns, survival rates, and memory pressure. Manual collection disrupts these heuristics and often hurts performance. Legitimate exceptions include benchmarking scenarios, after releasing a very large cache, or during level loading in games. If you are calling it in a loop, you are doing it wrong.

How do I reduce GC pressure in performance-critical code?

Allocate less (use Span\<T>, stackalloc, structs). Pool large objects with ArrayPool\<T>. Avoid mid-life objects that get promoted and die in Gen 1/2. Dispose resources deterministically. Avoid boxing. Profile with dotnet-counters and dotnet-trace before optimizing.
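For the pooling point, a short sketch using `ArrayPool<T>.Shared`; the 1 MB buffer size is an arbitrary example chosen to land well above the 85,000-byte LOH threshold:

```csharp
using System;
using System.Buffers;

void ProcessChunk()
{
    // Renting reuses a pooled buffer instead of allocating a fresh 1 MB
    // array on the LOH for every call
    byte[] buffer = ArrayPool<byte>.Shared.Rent(1024 * 1024);
    try
    {
        // the rented array may be LARGER than requested; slice to what you asked for
        Span<byte> span = buffer.AsSpan(0, 1024 * 1024);
        span.Fill(0xFF);
    }
    finally
    {
        // always return the buffer, or the pool allocates a new one next time
        ArrayPool<byte>.Shared.Return(buffer);
    }
}

ProcessChunk();
```

The try/finally matters: a buffer that is rented but never returned is not leaked, but the pool has to allocate a replacement, which defeats the purpose.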

Key Concepts at a Glance

| Question | Answer |
| --- | --- |
| What triggers a GC? | Generation budget exhausted, memory pressure, or manual GC.Collect() call |
| Where do new objects go? | Gen 0 (unless >= 85,000 bytes, then LOH) |
| What happens to survivors? | Promoted to the next generation (Gen 0 -> Gen 1 -> Gen 2) |
| What are GC roots? | Stack variables, static fields, CPU registers, GC handles, finalization queue |
| Mark phase does what? | Traces from roots to find all reachable objects |
| Compact phase does what? | Slides surviving objects together, eliminates fragmentation |
| Why is allocation fast in .NET? | Compaction means allocation is a simple pointer bump |
| LOH threshold? | 85,000 bytes (approximately 83 KB) |
| LOH compacted by default? | No, uses a free list. Manual compaction available since .NET 4.5.1 |
| Finalizer impact? | Object survives an extra GC cycle, processed by dedicated finalizer thread |
| GC.SuppressFinalize purpose? | Tells GC to skip finalization because Dispose() already cleaned up |
| Workstation GC optimized for? | Responsiveness and low memory usage (client apps) |
| Server GC optimized for? | Throughput on multi-core machines (server apps) |
| NoGCRegion purpose? | Temporarily disables GC for ultra-critical code sections |
| Best tool for GC profiling? | dotnet-counters, dotnet-trace, Visual Studio Diagnostic Tools, JetBrains dotMemory |
