Kubernetes for .NET Developers: From Docker to Production
Kubernetes isn't inherently complicated for .NET developers, but there's a translation gap between what you know about ASP.NET Core and what Kubernetes expects. Health checks become probes, configuration becomes ConfigMaps, scaling becomes HPAs. Let's bridge that gap.
Dockerfile Best Practices
Your Dockerfile is the foundation. Here's a pattern that works for most .NET APIs:
FROM mcr.microsoft.com/dotnet/sdk:9.0-alpine AS build
WORKDIR /src
COPY ["src/OrderApi/OrderApi.csproj", "src/OrderApi/"]
COPY ["src/OrderApi.Domain/OrderApi.Domain.csproj", "src/OrderApi.Domain/"]
RUN dotnet restore "src/OrderApi/OrderApi.csproj"
COPY . .
WORKDIR "/src/src/OrderApi"
RUN dotnet publish -c Release -o /app/publish --no-restore /p:UseAppHost=false
FROM mcr.microsoft.com/dotnet/aspnet:9.0-alpine AS runtime
WORKDIR /app
RUN addgroup -S appgroup && adduser -S appuser -G appgroup
USER appuser
COPY --from=build /app/publish .
EXPOSE 8080
ENTRYPOINT ["dotnet", "OrderApi.dll"]
Key decisions: Alpine images cut size from ~210MB to ~85MB. Multi-stage build keeps the SDK out of runtime. Non-root user satisfies PodSecurityStandards. Layer caching with .csproj first means restores only rerun on dependency changes. Note: ASP.NET Core 8+ defaults to port 8080.
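The layer-caching benefit only holds if the build context stays clean. A minimal .dockerignore sketch (entries are typical for a .NET repo; adjust to your layout):

```
# .dockerignore — keep build output and VCS noise out of the build context,
# so COPY . . doesn't invalidate the cache on every local build
bin/
obj/
.git/
.vs/
**/*.user
```

Without this, locally built bin/ and obj/ directories get copied into the image and bust the restore cache on unrelated changes.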
Health Probes
Map ASP.NET Core health checks to Kubernetes probes — but keep liveness simple:
builder.Services.AddHealthChecks()
    .AddCheck("self", () => HealthCheckResult.Healthy(), tags: ["live"])
    .AddNpgSql(connectionString, name: "database", tags: ["ready"])
    .AddRedis(redisConnection, name: "cache", tags: ["ready"]);

// Liveness — only "is the process healthy?"
app.MapHealthChecks("/alive", new HealthCheckOptions
{
    Predicate = check => check.Tags.Contains("live")
});

// Readiness — checks all dependencies
app.MapHealthChecks("/health", new HealthCheckOptions
{
    Predicate = check => check.Tags.Contains("ready") || check.Tags.Contains("live")
});
Critical: Don't check database health in the liveness probe. A brief DB hiccup would cause Kubernetes to restart every pod simultaneously, turning a minor issue into a full outage.
Deployment Essentials
A production-ready deployment manifest:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: order-api
spec:
  replicas: 3
  selector:
    matchLabels:
      app: order-api
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0  # Zero downtime
  template:
    metadata:
      labels:
        app: order-api
    spec:
      terminationGracePeriodSeconds: 30
      containers:
        - name: order-api
          image: myregistry.azurecr.io/order-api:1.0.0
          ports:
            - containerPort: 8080
          resources:
            requests:
              cpu: "100m"
              memory: "256Mi"
            limits:
              cpu: "500m"
              memory: "512Mi"
          livenessProbe:
            httpGet: { path: /alive, port: 8080 }
            initialDelaySeconds: 10
            periodSeconds: 15
          readinessProbe:
            httpGet: { path: /health, port: 8080 }
            initialDelaySeconds: 5
            periodSeconds: 10
          startupProbe:
            httpGet: { path: /alive, port: 8080 }
            failureThreshold: 12
Use maxUnavailable: 0 for zero-downtime deployments. For secrets, use Azure Key Vault with the Secrets Store CSI Driver on AKS, so no secrets ever land in Git.
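The Key Vault integration is declared as a SecretProviderClass. A sketch of what that looks like with the Azure provider (the vault name, tenant ID, client ID, and secret name are placeholders, not values from this article):

```yaml
apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
metadata:
  name: order-api-secrets
spec:
  provider: azure
  parameters:
    usePodIdentity: "false"
    clientID: "<workload-identity-client-id>"  # placeholder: your managed identity
    keyvaultName: "my-keyvault"                # placeholder: your vault name
    tenantId: "<tenant-id>"                    # placeholder: your Azure AD tenant
    objects: |
      array:
        - |
          objectName: OrderDb--ConnectionString
          objectType: secret
```

The pod then mounts this class via a csi volume, and the secret appears as a file that ASP.NET Core configuration can read.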
AKS Tips
- Enable workload identity (not pod identity): DefaultAzureCredential picks up the federated token automatically
- Set resource requests from real data: run kubectl top pods under load before setting limits
- Configure graceful shutdown to handle SIGTERM:

  var lifetime = app.Services.GetRequiredService<IHostApplicationLifetime>();
  // Block briefly so in-flight requests drain while the pod is removed from Service endpoints
  lifetime.ApplicationStopping.Register(() => Thread.Sleep(TimeSpan.FromSeconds(10)));

- Use Helm charts from day one with environment-specific values-{env}.yaml overrides
- Set HPA scale-down stabilization to 5 minutes to prevent flapping after traffic spikes
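The scale-down stabilization tip translates to an HPA manifest like the following. The min/max replica counts and the 70% CPU target are illustrative choices, not values from this article:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: order-api
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: order-api
  minReplicas: 3
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
  behavior:
    scaleDown:
      stabilizationWindowSeconds: 300  # 5 minutes: ignore dips shorter than this
```

The stabilization window makes the HPA use the highest recommended replica count over the last 5 minutes before scaling down, which is what prevents flapping after a spike.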
Key Takeaways
Start with a solid Dockerfile, get your health probes right, and use Helm charts from day one. These three foundations save more debugging time than any other Kubernetes investment. The mental model transfers well from ASP.NET Core once you understand the mapping — the goal isn't to become a Kubernetes expert, it's to ship reliable .NET applications that scale.
Ajit Gangurde
Software Engineer II at Microsoft | 15+ years in .NET & Azure