Helm gives you a lot of rope. Here are the patterns we used that backfired, what we replaced them with, and what to skip if you're starting today.
We have ~25 Helm charts in production. Some are first-party (we wrote them); some are third-party (upstream charts we use with values overrides). After enough iterations, this is the list of patterns we used and abandoned, plus what we use instead.
Before the anti-patterns: Helm earns its place for a specific shape of work, namely distributing reusable Kubernetes manifests with configurable parameters, with a release lifecycle (upgrade, rollback) on top. For shipping software to other teams to install in their clusters, Helm is the standard. For deploying your own services to your own cluster, Helm is one option among several (kustomize, raw YAML in Argo CD, jsonnet, etc.).
We use Helm for the cases where third-party software comes packaged as a chart (e.g. cert-manager, kube-prometheus-stack, sealed-secrets) and for our internal "platform" charts that get reused across many namespaces. For per-service application manifests, we use plain YAML in Argo CD.
The most common mistake. You start with a Deployment, then realize you want to change replicas per environment, so it's {{ .Values.replicas }}. Then the image tag, so {{ .Values.image.tag }}. Then resource requests, ConfigMap names, labels, annotations, environment variables...
A year later your values.yaml has 200 lines and your templates are 60% Go template syntax. To make a change you have to chase values through 4 layers of indirection.
What we do instead: only template what actually varies. Most of a Deployment is the same across environments. We keep static YAML for the parts that don't change; template only the parts that do. Our typical chart's values.yaml fits on one screen.
Specifically, things that almost never need to be templated:

- apiVersion and kind (apiVersion: apps/v1 does not vary per environment)
- labels and selectors (keep them static; use _helpers.tpl for the few that are dynamic)

If you're templating a value that's the same in every environment you've ever deployed, just hardcode it. The flexibility you're "preserving" is imaginary.
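As a concrete sketch (the chart name myapp and the registry URL are hypothetical), a Deployment template where only the fields that actually vary are templated:

```yaml
# templates/deployment.yaml -- a minimal sketch; "myapp" and the registry are hypothetical
apiVersion: apps/v1                  # static: never templated
kind: Deployment
metadata:
  name: myapp                        # static: one chart, one service
spec:
  replicas: {{ .Values.replicas }}   # varies per environment
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          image: "registry.example.com/myapp:{{ .Values.image.tag }}"  # varies per deploy
          resources:
            requests:                # static defaults; promote to values only if they diverge
              cpu: 100m
              memory: 128Mi
```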
```yaml
# Hard to navigate
service:
  api:
    deployment:
      replicas:
        production: 5
        staging: 2
      resources:
        limits:
          cpu: "1"
          memory: 1Gi
```
In the template: {{ .Values.service.api.deployment.replicas.production }}. Long. Brittle. Get the path wrong and Helm silently renders empty.
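Whatever the values shape, Helm's built-in required function turns that silent empty render into a loud failure:

```yaml
# Fails `helm template` / `helm install` with a clear message if the value is missing,
# instead of rendering an empty field.
replicas: {{ required "replicas must be set in the values file" .Values.replicas }}
```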
What we do instead: flatter values, more explicit values files per environment. Instead of one nested values.yaml, have values-production.yaml and values-staging.yaml with flat structures:
```yaml
# values-production.yaml
replicas: 5
resources_cpu_limit: "1"
resources_memory_limit: 1Gi
```
Templates are shorter; mistakes are obvious; per-env differences are visible at a glance.
The temptation: every service has its own subchart, so you can helm install my-platform and get the whole stack. Each subchart references the others via values overrides.
What goes wrong: values flow gets complicated. Overriding a subchart's value requires parent.subchart.thing. Multi-level dependencies create version coupling. A bug in one subchart blocks the parent release. Dependency version pinning becomes its own problem.
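For instance, assuming a bundled redis subchart, a single override in the parent's values file has to be keyed by the subchart's name, one level deeper per dependency level:

```yaml
# Parent chart's values.yaml -- every subchart override is keyed by subchart name.
redis:
  auth:
    enabled: false   # reaches the redis subchart's .Values.auth.enabled
```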
What we do instead: one chart per service, deployed independently. If they need to share data (a shared ConfigMap, a service URL), they read it from Kubernetes directly. Each chart's release is independent; no parent-child coupling.
The exception: bundled third-party software (a chart that ships an app + its required Redis + Postgres) can reasonably be a single chart with subcharts. But that's for distribution; if you're running these in your own cluster, deploy them as separate charts.
## helm.sh/hook for everything

Helm hooks run jobs at specific lifecycle phases (pre-install, post-upgrade, etc.). Useful for database migrations and one-shot setup tasks.
What goes wrong: people use hooks for things that don't fit the model. "Run this Job whenever we deploy" becomes a hook; rollbacks don't re-trigger the hook; the hook fails silently and the rest of the release succeeds; debugging is awkward.
What we do instead:
- hooks only for tasks tied to the release lifecycle, like database migrations as pre-upgrade Jobs (sketched below), never "run this on every deploy" Jobs
- helm.sh/hook-delete-policy: before-hook-creation on every hook, to avoid leftover Jobs

The hook semantics are confusing. Use them deliberately, not as a catch-all.
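A sketch of the migration-Job shape (the image and migrate command are hypothetical; the annotations are the standard Helm hook annotations):

```yaml
# templates/migrate-job.yaml -- hypothetical app and command; the annotations are the point
apiVersion: batch/v1
kind: Job
metadata:
  name: myapp-migrate
  annotations:
    "helm.sh/hook": pre-install,pre-upgrade
    "helm.sh/hook-weight": "0"
    "helm.sh/hook-delete-policy": before-hook-creation  # replace the old Job on each run
spec:
  backoffLimit: 2
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: migrate
          image: "registry.example.com/myapp:{{ .Values.image.tag }}"
          command: ["./migrate", "up"]   # hypothetical migration entrypoint
```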
## _helpers.tpl

Every Helm chart has a templates/_helpers.tpl file with snippets like:
{{- define "myapp.fullname" -}}
{{- if .Values.fullnameOverride -}}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- printf "%s-%s" .Release.Name .Chart.Name | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{- end -}}
These are necessary for some boilerplate but they accumulate. Charts end up with 30+ helpers, half of them used once.
What we do instead: start from helm create output and ruthlessly prune. helm create gives you a starter chart with the standard helpers. We keep the ones we actually use, delete the rest. Most charts end up with 5-10 helpers.
## latest or floating versions

In Chart.yaml:
```yaml
dependencies:
  - name: redis
    version: ">=14.0.0"
```
Or worse, version: "latest". Result: deploys behave differently on different days because the dependency resolved to different versions.
What we do instead: exact version pins everywhere. version: "17.3.7". When we want to upgrade, we update the Chart.yaml deliberately in a PR. Each deploy of a given Helm chart resolves to identical dependencies.
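The pinned form looks like this in Chart.yaml (Bitnami's repository shown as an example source):

```yaml
# Chart.yaml
dependencies:
  - name: redis
    version: "17.3.7"                              # exact pin; bumped deliberately in a PR
    repository: https://charts.bitnami.com/bitnami
```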
For Helm itself, we pin to a specific minor version too (in our Argo CD config) for the same reason.
The hard rule: secrets never live in values files. Sounds obvious; teams still do it.
What we do instead: External Secrets Operator + AWS Secrets Manager. The chart includes an ExternalSecret resource that references the secret name in AWS; the operator syncs it into a Kubernetes Secret at deploy time. Actual secret material never touches the values files or the Git repo.
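A sketch of the shape, assuming the external-secrets.io/v1beta1 API; the store name and the AWS secret path are hypothetical:

```yaml
# templates/externalsecret.yaml -- names and the AWS secret path are hypothetical
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: myapp-secrets
spec:
  refreshInterval: 1h
  secretStoreRef:
    name: aws-secrets-manager        # a ClusterSecretStore configured elsewhere
    kind: ClusterSecretStore
  target:
    name: myapp-secrets              # the Kubernetes Secret the operator creates
  data:
    - secretKey: DATABASE_URL
      remoteRef:
        key: prod/myapp/database-url # path in AWS Secrets Manager
```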
For some legacy charts that haven't been migrated, we use sealed-secrets — secrets are encrypted in Git with a key only the cluster has. Less ideal than ESO (sealed-secrets requires re-encrypting on every change) but better than plaintext.
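For reference, what lives in Git with sealed-secrets is a SealedSecret whose values are ciphertext (blob truncated here; the name is hypothetical):

```yaml
apiVersion: bitnami.com/v1alpha1
kind: SealedSecret
metadata:
  name: myapp-secrets
spec:
  encryptedData:
    DATABASE_URL: AgBy3i...   # ciphertext produced by kubeseal; only the cluster can decrypt
```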
The chart accepts extraEnv, extraVolumes, extraVolumeMounts, podAnnotations, extraLabels, extraInitContainers, extraSidecars, ad infinitum. Pretty soon the chart can produce any Deployment shape, at which point it adds zero value over plain YAML.
What we do instead: opinionated charts that do one thing well. If a use case requires features the chart doesn't support, the answer is sometimes "use a different chart" or "write plain YAML for this specific case." The chart's value is its opinionatedness; "infinitely flexible" charts give you the worst of both worlds.
For our internal services, we mostly use plain YAML with Kustomize overlays, applied by Argo CD. This works because: (a) we deploy to our own clusters, not other people's, (b) we don't need release lifecycle management (Argo CD handles that), (c) Kustomize's overlay model fits our "small differences per environment" use case better than Helm's templating model.
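A minimal sketch of that layout, with hypothetical directory and file names:

```yaml
# overlays/production/kustomization.yaml -- the base directory holds the shared manifests
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base
patches:
  - path: replicas-patch.yaml

# overlays/production/replicas-patch.yaml -- strategic-merge patch for the prod-only diff
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 5
```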
We still use Helm for: bundled third-party software (most charts come from Bitnami, the operator authors, etc.), our internal "platform" charts that other teams install, and historical charts we haven't migrated.
- Template only what actually varies. Most of a Deployment is the same across environments.
- Flatter values, per-env files. values-production.yaml beats deeply nested objects.
- Exact version pins on dependencies. Floating versions break reproducibility.
- No secrets in values. External Secrets Operator or sealed-secrets, period.
- One chart per service, deployed independently. Subcharts add coupling without proportional benefit.
- Consider Kustomize for internal app manifests. Helm shines for shipping software; Kustomize fits operating your own services.
- Resist "infinitely flexible" charts. Opinionated beats flexible at the chart layer.
Helm is fine for what it's good at. The anti-patterns above are what happens when teams stretch Helm to fit problems it doesn't solve cleanly. Recognizing which problems Helm is the right tool for — and which it isn't — is the actual skill.