
Container Security Beyond Scanners


Most engineers underestimate how much risk hides inside a container image. They assume a clean scan equals a secure base. They trust the registry. They trust the pipeline. They trust upstream maintainers. That assumption is dangerous. Containers sit at the edge of a hostile supply chain. Every layer you build, every dependency you fetch, every base image you pull, every instruction in your Dockerfile, and every environment you deploy into introduces a new attack surface. A scanner will show the symptoms. It will never show the path that created them. Real security depends on deeper thinking.

A trustworthy container begins with predictability. Predictable images are reproducible. You can build them today. You can build them a year from now. You can build them across different machines and environments. You get the same artefact every time. Predictability gives you auditability. Auditability gives you trust. Most people break this property without noticing. They use floating tags. They assume that pulling an image named dotnet or node or ubuntu produces a known result. It does not. The tag is mutable. You might pull a patch release today and a compromised release tomorrow. Nothing in the build will warn you. The image silently drifts. A secure container locks the base image by digest. You pull by hash. You build with certainty. If the upstream image mutates, your digest no longer matches. The build fails. Failure becomes a safety mechanism.

Predictable images are necessary but not enough. You also need to reduce surface area. Large images increase your exposure to vulnerabilities. Every extra package introduces functions you do not use and attack paths you do not expect. When you trim the runtime environment, you remove entire classes of exploits. A .NET application built with ReadyToRun, linked with trimming enabled, and packaged into a distro-less image removes shells, package managers, and most of the filesystem tools an attacker normally uses. The container becomes a narrow execution environment. Your process runs with the minimum set of dependencies required for survival. The container also becomes easier to reason about. When something behaves strangely, you know it is the application, not hidden system utilities that slipped in during a build.

A healthy container also starts the same way every time. Deterministic startup is an underrated security control. When a container boots, it should create the same environment, evaluate the same configuration, begin listening on the same ports, initialise the same services, and expose the same health probes. Predictable startup reduces the number of unknown states your platform must defend. In secure environments, you rely heavily on repeatable behaviour. When a service fails a readiness probe for reasons unrelated to load, that deviation is a signal. A security threat often expresses itself as an operational anomaly long before it triggers an alarm. Predictable behaviour turns anomalies into red flags.
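In Kubernetes, for example, you can make that expectation explicit with probes. This is a sketch of a pod-spec excerpt; the service name, port, and paths are illustrative.

```yaml
# Excerpt from a Deployment pod spec; names, port, and paths are examples.
containers:
  - name: yourservice
    image: registry.example.com/yourservice@sha256:<digest>
    ports:
      - containerPort: 8080
    readinessProbe:          # gate traffic until startup completes the same way every time
      httpGet:
        path: /healthz/ready
        port: 8080
      initialDelaySeconds: 5
      periodSeconds: 10
    livenessProbe:           # restart the container if it leaves its known-good state
      httpGet:
        path: /healthz/live
        port: 8080
      periodSeconds: 15
```

A probe that starts failing without a corresponding load change is exactly the kind of operational anomaly worth treating as a security signal.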

Networking is another critical surface. Containers run inside virtual networks. These networks are created by Docker, containerd, or the orchestrator. Many clusters allow containers to talk to everything by default. This creates an internal attack mesh that an attacker can traverse once they compromise a single service. It is more dangerous than developers realise. When you restrict the network so each service can only reach its legitimate dependencies, you force attackers to fight the platform before they fight your application. In a container environment that restricts outbound traffic, a malicious payload that tries to call home immediately fails. Security becomes a property of the network rather than a property of the code.

The kernel boundary sits at the heart of container isolation. Containers do not have their own kernels. They share the host kernel. Namespaces, cgroups, seccomp filters, and capabilities decide how far a process can reach. The default configuration is permissive. Engineers run containers with the full set of capabilities because it is easy. The choice turns a container breakout from difficult to trivial. A hardened container drops everything. It opts into the exact capabilities the process needs. A .NET service rarely needs to interact with device files or perform system administration tasks. It needs memory, CPU, sockets, and the ability to open files inside its own filesystem. Everything else becomes unnecessary. When you remove a capability, you delete an attack vector.
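Both controls can be declared in Kubernetes manifests. The sketch below drops every capability at the container level and denies all egress by default, then opens only one named dependency; the labels, names, and port are placeholders.

```yaml
# Container-level securityContext: no capabilities, no privilege escalation,
# read-only root filesystem, non-root user.
securityContext:
  runAsNonRoot: true
  allowPrivilegeEscalation: false
  readOnlyRootFilesystem: true
  capabilities:
    drop: ["ALL"]
---
# Deny-by-default egress, then allow only the service's legitimate dependency.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: yourservice-egress
spec:
  podSelector:
    matchLabels:
      app: yourservice
  policyTypes: ["Egress"]
  egress:
    - to:
        - podSelector:
            matchLabels:
              app: yourservice-db   # the one dependency this service may reach
      ports:
        - protocol: TCP
          port: 5432
```

With this in place, a compromised pod that tries to call home or scan its neighbours hits the network policy before it hits anything of value.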

Security scanners still have a role, but they do not define your posture. They validate the composition of the image. They compare the embedded libraries to known vulnerabilities. They show you when you drift from baseline. The scanner does not know if the image came from a trusted pipeline or a compromised mirror. That is where signatures matter. When you sign an image, you attach a cryptographic identity to it. When you enforce signature verification in your orchestrator, you deny images that do not originate from your trusted source. You stop trusting the registry. You start trusting the signature.
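With Sigstore's cosign, for example, signing and verification are single commands. The key-based flow is shown here (cosign also supports keyless signing); the key paths and image reference are placeholders.

```shell
# Sign the image as part of the pipeline.
cosign sign --key cosign.key registry.example.com/yourservice@sha256:<digest>

# Verify before (or at) deploy; this fails if the signature is missing
# or was not produced by the matching private key.
cosign verify --key cosign.pub registry.example.com/yourservice@sha256:<digest>
```

In a cluster, the same verification is typically enforced by an admission controller so that unsigned images never schedule at all.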

The supply chain that feeds your image is larger than most people realise. Dependencies come from package registries. Base images come from public mirrors. Build agents run temporary operating systems. Each step exposes you to a hostile environment. Software bills of materials help you navigate this landscape. They record every artefact that enters the image. They create a chain of custody. When a library is compromised or a CVE is published, you can trace the affected images with confidence. A signed SBOM gives you verifiable lineage. It becomes a contract between your pipeline and your platform.
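Tools such as syft can generate the SBOM, and cosign can attach it to the image as a signed attestation. The image reference and file names below are illustrative.

```shell
# Generate an SPDX SBOM for the built image.
syft registry.example.com/yourservice@sha256:<digest> -o spdx-json > sbom.spdx.json

# Attach the SBOM to the image as a signed attestation.
cosign attest --key cosign.key --type spdxjson \
  --predicate sbom.spdx.json \
  registry.example.com/yourservice@sha256:<digest>
```

Because the attestation is bound to the image digest, anyone who can verify the signature can also trust the inventory it describes.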

Many of the most damaging security incidents begin with indirect compromise. An attacker poisons a package version. They push a malicious update to a high-volume library. They inject code into a repository that your build pipeline trusts. Containers that pull dependencies during the build are vulnerable. You reduce this exposure by caching dependencies in a controlled environment or by locking dependency versions with deterministic restore commands. .NET’s NuGet lock files decrease uncertainty. They let you restore the exact same dependencies used in previous builds. They also allow you to audit the delta between versions before accepting updates.

The boundary between build and runtime also matters. Build containers should never run in production. They contain tools, compilers, shells, and utilities that you do not need once the application is built. Multi-stage builds let you separate these contexts cleanly. The build stage gathers dependencies and creates the artefact. The runtime stage receives only the final compiled output. You copy as little as possible between stages. This separation keeps build tools out of production. It also makes your runtime environment smaller and easier to reason about.
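The NuGet lock-file workflow described above is a project-file property plus a restore flag.

```xml
<!-- In the .csproj: generate and use packages.lock.json on restore. -->
<PropertyGroup>
  <RestorePackagesWithLockFile>true</RestorePackagesWithLockFile>
</PropertyGroup>
```

In CI, `dotnet restore --locked-mode` then fails the build if any resolved package differs from the committed lock file, which is exactly the drift you want to catch before it reaches an image.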

A hardened runtime environment avoids state. When a container is replaced, it should leave nothing behind. Any state that matters should move out to external storage that follows backup and audit rules. If a container writes sensitive information to an internal filesystem, an attacker who compromises the node may gain access to those files.

A safer design writes secrets to memory only. Your platform injects them securely. They never appear in environment variables. They never appear in logs. They never appear on disk.
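One common approximation in Kubernetes: Secret volumes are backed by tmpfs, so even when the application needs a file interface, the material lives in memory and never touches the node's disk. The names in this pod-spec excerpt are placeholders.

```yaml
# Pod spec excerpt: the secret is projected into a memory-backed (tmpfs) mount,
# not an environment variable, and never reaches the node's disk.
volumes:
  - name: service-credentials
    secret:
      secretName: yourservice-credentials   # placeholder name
containers:
  - name: yourservice
    image: registry.example.com/yourservice@sha256:<digest>
    volumeMounts:
      - name: service-credentials
        mountPath: /var/run/secrets/yourservice
        readOnly: true
```

When the pod is deleted, the tmpfs mount disappears with it, which matches the stateless-replacement model described above.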

At this point, it helps to make the trust boundary explicit. The pipeline produces an artefact that is signed and documented. The orchestrator verifies the signature. The container runs with the least privileges possible. Trust moves along that chain, and you eliminate ambiguity at each stage. You identify every artefact. You reject anything you did not sign.

Practical defence begins with the Dockerfile. You design it so every instruction earns its place. A trimmed .NET application might look like this example.

# Build stage
FROM mcr.microsoft.com/dotnet/sdk:9.0@sha256:<digest> AS build
WORKDIR /src
COPY . .
# Trimming requires a self-contained, runtime-specific publish.
RUN dotnet publish -c Release -o /app -r linux-x64 \
    --self-contained true \
    /p:PublishSingleFile=true \
    /p:PublishTrimmed=true

# Runtime stage: distroless-style chiseled image with no shell or package manager
FROM mcr.microsoft.com/dotnet/runtime-deps:9.0-noble-chiseled@sha256:<digest>
WORKDIR /app
COPY --from=build /app .
USER 1001
ENTRYPOINT ["./YourService"]

The example builds a trimmed, single-file .NET service. It runs without a shell. It runs as a non-root user. It makes the container easier to audit.

You enforce these standards at scale by shifting responsibility from developers to infrastructure. Developers write secure code. Platform engineers enforce secure boundaries. Cloud architects design networks and orchestration rules that remove dangerous defaults. Together they build a system where the security of a container does not depend on any single human decision.

A hardened container platform is quiet. It does not surprise you. It behaves the same in development, staging, and production. When something anomalous appears, you treat it as a security signal. When a container tries to perform a blocked operation, you inspect it. When a service makes an unexpected outbound request, you review it. Security becomes part of day-to-day engineering.

Building trust in container environments is slow work. You refine images. You lock every dependency. You enforce least privilege. You observe every deviation. Over time, the platform becomes predictable. Predictability is the foundation of security. Attackers thrive in uncertainty. Remove the uncertainty and you force them into narrow paths that you can monitor and disrupt.

A system becomes secure when you understand how it behaves in every state. When you can reproduce its images. When you can verify its signatures. When you can explain every file inside an image and every dependency pulled during build.