Zero-Copy Streaming in .NET

The majority of developers working with file uploads and downloads in ASP.NET Core fall into a common trap: relying on default model binding or IFormFile handling that buffers content into memory or onto disk. That is fine for small files, but it breaks down when your system has to move hundreds of megabytes or even multi-gigabyte objects. The moment you copy, buffer, or reallocate unnecessarily, you introduce latency, exhaust memory, and increase the likelihood of bottlenecks.
This is where zero-copy streaming becomes crucial. By designing upload and download pipelines that avoid buffering and instead work with the raw Stream or PipeReader, we can build applications that scale smoothly under heavy workloads.
The Problem with Buffering
A traditional file upload endpoint in ASP.NET Core might look like this:
[HttpPost("upload")]
public async Task<IActionResult> Upload(IFormFile file)
{
    var path = Path.Combine("uploads", file.FileName);
    await using var stream = System.IO.File.Create(path);
    await file.CopyToAsync(stream);
    return Ok(new { file.FileName, file.Length });
}
While simple, this has two significant drawbacks. First, the model binder buffers the request body, in memory or in a temporary file on disk, before the controller ever sees it. Second, CopyToAsync then copies everything again. For very large files the process is not only inefficient but also prone to failure under pressure.
What you want instead is direct, streaming access to the body without double handling.
Request Body as a Stream
ASP.NET Core gives you access to the raw request body via HttpContext.Request.Body. This property exposes a Stream you can read from incrementally, bypassing model binding and buffering.
[HttpPost("upload-stream")]
public async Task<IActionResult> UploadStream()
{
    var filePath = Path.Combine("uploads", Guid.NewGuid() + ".bin");
    await using var target = System.IO.File.Create(filePath);
    await HttpContext.Request.Body.CopyToAsync(target);
    return Ok(new { FilePath = filePath });
}
Here, we’ve eliminated the double handling: the incoming request body is read directly and copied into the file stream in small chunks. Memory usage stays constant regardless of file size, and throughput is limited only by network and disk speeds.
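One practical caveat: streaming does not bypass server limits. Kestrel rejects request bodies larger than roughly 30 MB by default, so multi-gigabyte uploads also require raising or disabling the body size limit. A minimal sketch using the built-in options (apply whichever scope suits your deployment):

    // Per-action: allow bodies up to 10 GB on this endpoint only.
    [RequestSizeLimit(10L * 1024 * 1024 * 1024)]
    [HttpPost("upload-stream")]
    public async Task<IActionResult> UploadStream() { /* as above */ }

    // Or server-wide, in Program.cs:
    builder.WebHost.ConfigureKestrel(options =>
    {
        options.Limits.MaxRequestBodySize = null; // no limit; enforce quotas elsewhere
    });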
Multipart Uploads
Most uploads arrive as multipart/form-data, often containing metadata alongside binary data. ASP.NET Core provides the MultipartReader class (in the Microsoft.AspNetCore.WebUtilities namespace) for this scenario, which allows you to parse the stream part by part without loading it all into memory.
[HttpPost("upload-multipart")]
public async Task<IActionResult> UploadMultipart()
{
    var boundary = Request.GetMultipartBoundary();
    var reader = new MultipartReader(boundary, Request.Body);

    MultipartSection section;
    while ((section = await reader.ReadNextSectionAsync()) != null)
    {
        if (ContentDispositionHeaderValue.TryParse(section.ContentDisposition, out var cd)
            && cd.DispositionType.Equals("form-data")
            && cd.FileName.HasValue)
        {
            // Path.GetFileName guards against path traversal in the client-supplied name.
            var fileName = Path.GetFileName(cd.FileName.Value.Trim('"'));
            var filePath = Path.Combine("uploads", fileName);

            await using var target = System.IO.File.Create(filePath);
            await section.Body.CopyToAsync(target);
        }
    }
    return Ok("Upload complete");
}
This approach processes each section of the multipart message sequentially, never buffering more than the chunk currently being read. You can mix metadata extraction (from form fields) with file persistence in one pass, and the memory footprint remains constant.
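To illustrate the metadata side, a hypothetical extra branch inside the same loop could read an ordinary form field. Form fields are small, so reading one fully into a string here is deliberate, not a buffering regression:

    else if (cd.DispositionType.Equals("form-data") && cd.Name.HasValue)
    {
        // A non-file section: read the field value as text.
        using var bodyReader = new StreamReader(section.Body);
        var value = await bodyReader.ReadToEndAsync();
        var key = cd.Name.Value.Trim('"');
        // e.g. stash metadata such as a description or owner for later use
    }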
Downloads Without Buffering
Streaming is just as important for downloads. A naïve implementation might read a file into a byte array and then return it:
[HttpGet("download/{name}")]
public IActionResult Download(string name)
{
    var bytes = System.IO.File.ReadAllBytes(Path.Combine("uploads", name));
    return File(bytes, "application/octet-stream", name);
}
This loads the entire file into memory before a single byte reaches the client, which will kill your memory usage if the file is large. Instead, ASP.NET Core supports streaming directly from disk using FileStreamResult:
[HttpGet("download/{name}")]
public IActionResult DownloadStream(string name)
{
    var path = Path.Combine("uploads", name);
    var stream = System.IO.File.OpenRead(path);
    return File(stream, "application/octet-stream", name);
}
Now the file is read in chunks and flushed directly to the response body. Clients can begin downloading immediately, and your server never holds the full file in memory.
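If clients need to resume interrupted downloads or seek within a file, the same File helper can honour HTTP Range requests. A minimal variant using the enableRangeProcessing overload (the download-range route name is illustrative):

    [HttpGet("download-range/{name}")]
    public IActionResult DownloadRange(string name)
    {
        // Path.GetFileName guards against path traversal in the route value.
        var path = Path.Combine("uploads", Path.GetFileName(name));
        var stream = System.IO.File.OpenRead(path);

        // enableRangeProcessing lets clients request byte ranges;
        // the server still streams each range in chunks.
        return File(stream, "application/octet-stream", name, enableRangeProcessing: true);
    }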
Pipelines for Maximum Throughput
The .NET System.IO.Pipelines library provides an even lower-level abstraction for high-throughput scenarios. Unlike plain streams, pipelines manage their own buffers, letting you read and write data without repeated allocations while handling large payloads efficiently; ASP.NET Core exposes the request body as a PipeReader via HttpContext.Request.BodyReader.
The same streaming mindset applies one level up. For example, you could design a proxy service that accepts an upload stream and forwards it to an upstream API without ever persisting it locally:
[HttpPost("proxy-upload")]
public async Task<IActionResult> ProxyUpload([FromServices] IHttpClientFactory factory)
{
    var client = factory.CreateClient("upstream");
    using var request = new HttpRequestMessage(HttpMethod.Post, "/api/upload")
    {
        // StreamContent wraps the live request body; bytes flow straight through.
        Content = new StreamContent(HttpContext.Request.Body)
    };

    // ResponseHeadersRead returns as soon as the upstream headers arrive,
    // instead of buffering the upstream response body.
    using var response = await client.SendAsync(request, HttpCompletionOption.ResponseHeadersRead);
    return StatusCode((int)response.StatusCode);
}
Here, the incoming request stream is piped directly into the outgoing request, effectively creating a transparent streaming proxy. No buffering, no disk, no reallocation: just pure pass-through.
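To use the pipeline API itself, read from Request.BodyReader instead of the stream. A minimal sketch of the read/advance loop, persisting an upload to disk (the upload-pipe route is illustrative):

    [HttpPost("upload-pipe")]
    public async Task<IActionResult> UploadPipe()
    {
        var filePath = Path.Combine("uploads", Guid.NewGuid() + ".bin");
        await using var target = System.IO.File.Create(filePath);

        var reader = HttpContext.Request.BodyReader;
        while (true)
        {
            ReadResult result = await reader.ReadAsync();

            // The buffer may span several non-contiguous segments;
            // write each one without copying it into a new array.
            foreach (var segment in result.Buffer)
                await target.WriteAsync(segment);

            // Tell the pipe how far we consumed so it can recycle the memory.
            reader.AdvanceTo(result.Buffer.End);

            if (result.IsCompleted)
                break;
        }
        await reader.CompleteAsync();

        return Ok(new { FilePath = filePath });
    }

The payoff over a plain Stream is buffer management: the pipe rents, reuses, and returns memory rather than allocating a fresh array per read.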
Checksums, Filters, and Transformation in Flight
Sometimes you do need to process the data as it streams. A good example is computing a checksum while persisting an upload.
[HttpPost("upload-checksum")]
public async Task<IActionResult> UploadChecksum()
{
    var filePath = Path.Combine("uploads", Guid.NewGuid() + ".bin");
    await using var target = System.IO.File.Create(filePath);
    using var sha256 = SHA256.Create();

    // 80 KB buffer: large enough for throughput, small enough to stay off the large object heap.
    var buffer = new byte[81920];
    int read;
    while ((read = await Request.Body.ReadAsync(buffer)) > 0)
    {
        // Feed the hash and persist the same chunk; no second pass over the data.
        sha256.TransformBlock(buffer, 0, read, null, 0);
        await target.WriteAsync(buffer.AsMemory(0, read));
    }
    sha256.TransformFinalBlock(Array.Empty<byte>(), 0, 0);

    var hash = BitConverter.ToString(sha256.Hash!).Replace("-", "").ToLowerInvariant();
    return Ok(new { FilePath = filePath, Sha256 = hash });
}
This technique allows you to enrich your pipeline with checksums, virus scans, or even data transformations without sacrificing streaming behaviour.
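The same shape covers transformations: wrap the target stream, and bytes are transformed as they pass through. A hypothetical variant that compresses an upload on its way to disk, using System.IO.Compression:

    [HttpPost("upload-compressed")]
    public async Task<IActionResult> UploadCompressed()
    {
        var filePath = Path.Combine("uploads", Guid.NewGuid() + ".bin.gz");
        await using var target = System.IO.File.Create(filePath);

        // GZipStream compresses each chunk as it is written;
        // the raw payload is never held in memory in full.
        await using var gzip = new GZipStream(target, CompressionLevel.Fastest);
        await Request.Body.CopyToAsync(gzip);

        return Ok(new { FilePath = filePath });
    }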
Why It Matters in Distributed Systems
When you scale out into microservices or cloud environments, zero-copy streaming becomes more than an optimisation; it is a requirement. If you are building an API gateway, a BFF, or an ingestion pipeline, you may be handling streams of data that must pass through multiple services, and buffering at each hop quickly compounds the inefficiency. By designing streaming pipelines, where each service reads only the bytes it needs, transforms them if necessary, and forwards them, you avoid bottlenecks and build systems that scale with confidence. In cloud environments like Azure Container Apps or Kubernetes, this means smaller pods, less memory pressure, and predictable costs.
Zero-copy streaming is not about clever optimisation tricks. It is about designing applications that respect the reality of data movement in distributed systems. ASP.NET Core gives you all the primitives (raw request body streams, multipart readers, FileStreamResult, and System.IO.Pipelines) to build upload and download pipelines that never buffer unnecessarily. The challenge is to adopt these tools early, before buffering becomes an architectural liability. Whether you are handling a few large uploads, designing a streaming proxy, or orchestrating file ingestion in a microservices environment, zero-copy streaming ensures your system remains fast, efficient, and reliable under load.