A curated list of shell one-liners that earn their place in real ops work — the ones I reach for weekly, not the trick-shot variety.
Most "bash one-liner" posts are full of clever tricks that nobody actually runs in production. This is the opposite: the small set of one-liners I genuinely reach for during real ops work. Each one solves a specific problem that comes up regularly. Nothing here is exotic.
sudo du -sh /var/* 2>/dev/null | sort -h
Sums up sizes of immediate subdirectories of /var (or any directory), sorted by size. -h for human-readable; sort -h understands the suffixes. Always the first thing I run when a disk-full alert fires. Then I drill into the biggest one with the same pattern.
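Drilling in looks like this — assuming /var/lib came out on top; substitute whatever the first pass flagged:

```shell
# Same pattern, one level deeper; tail -5 keeps just the biggest offenders
sudo du -sh /var/lib/* 2>/dev/null | sort -h | tail -5
```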
A faster alternative when the directory is huge:
sudo ncdu /var
If ncdu is installed, it's the right tool. Interactive, navigable, much faster on large filesystems.
find /var/log -type f -mtime +30 -size +10M
Files older than 30 days and bigger than 10MB. Adjust thresholds. Combine with -delete once you trust it:
find /var/log -type f -mtime +30 -size +10M -delete
For dry-run safety, run without -delete first; eyeball the output; add -delete when sure.
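A middle step between eyeballing bare paths and deleting: -ls prints size and mtime for each match, which makes the sanity check faster:

```shell
# Same predicates, but with size/date details per match
find /var/log -type f -mtime +30 -size +10M -ls
```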
ps aux --sort=-%mem | head
Top processes by memory. Same flag with --sort=-%cpu for CPU. Old but solid; works on virtually every Linux host and in most containers. --sort requires GNU procps (Linux), not BSD ps.
For sustained monitoring, top -o %MEM is interactive and lets you re-sort.
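When the full ps aux rows are too noisy, -o trims the output to the columns you actually read (GNU procps syntax again):

```shell
# PID, command, memory and CPU share — sorted by CPU this time
ps -eo pid,comm,%mem,%cpu --sort=-%cpu | head
```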
watch -n 1 'kubectl get pods -A | grep -v Running'
Re-runs the inner command every second; shows you what's NOT running. Useful for watching a deploy progress, or for catching a pod that's flapping. -n 1 is the interval; -d (highlight differences) is nice when the output is long.
journalctl -u myservice --since "10 min ago" --until "5 min ago"
journald is the right tool for systemd-managed services. The time arguments are flexible: "2025-04-29 14:00", "yesterday", "1 hour ago". The --until flag narrows the window.
For grepping inside that window:
journalctl -u myservice --since "10 min ago" | grep -i error
curl -s https://api.example.com/v1/things | jq .
jq . reads JSON on stdin and emits it pretty-printed. The . is the identity filter. jq is the right tool for any JSON wrangling beyond pretty-printing; learn it.
A common variant: extract a field across an array:
curl -s https://api.example.com/v1/things | jq -r '.items[].name'
-r strips quotes from string output. .items[].name iterates and pulls the name.
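jq also filters before it extracts. A sketch with inline JSON (the field names are hypothetical) — select keeps only the array elements matching a condition:

```shell
echo '{"items":[{"name":"a","status":"ok"},{"name":"b","status":"failed"}]}' \
  | jq -r '.items[] | select(.status == "failed") | .name'
# → b
```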
sudo lsof | grep /path/to/file
Useful for "I can't unmount this disk, what's still holding it?" or "what process opened this socket?" lsof is slow but exhaustive. For fast process-specific lookup:
sudo lsof -p 12345
Lists all files (including network sockets) open by PID 12345.
sudo ss -tlnp | grep :443
ss (not netstat — that's deprecated). -t for TCP, -l for listening, -n for numeric ports, -p to show the process. Tells you exactly which process is listening on 443.
For the reverse direction — what is a process connected to:
sudo ss -tnp state established | grep 'pid=12345'
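A related sanity check — roughly how many established TCP connections the box is carrying (the count includes ss's header line):

```shell
ss -tan state established | wc -l
```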
sudo tail -f /var/log/syslog /var/log/nginx/access.log
Multiple files, single follow. tail prints a ==> file <== header whenever the output switches between files. For Kubernetes:
kubectl logs -f -l app=myservice --all-containers --max-log-requests=20 --prefix
Tails logs from all pods matching app=myservice, prefixed with pod name. The --prefix is the part most people miss.
grep -rIn 'TODO' --include='*.ts' .
-r recursive, -I skip binary files, -n line numbers, --include filter by glob. Faster than naive grep -r because it skips binaries.
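On JS/TS repos, one more flag pays for itself — skipping dependency directories (node_modules is the usual offender):

```shell
grep -rIn 'TODO' --include='*.ts' --exclude-dir=node_modules .
```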
For better defaults on a codebase, ripgrep (rg) is what I actually use:
rg -n TODO -t ts
Same result; faster; respects .gitignore. Install ripgrep everywhere you do dev work.
find . -name '*.md' -type f -exec sed -i.bak 's/oldterm/newterm/g' {} +
-i.bak makes backups (on macOS/BSD sed, -i requires an argument, so use -i '' for no backup). The {} + form is faster than \; (one sed invocation for many files).
For complex multi-pattern edits, write a small Python script instead. sed-with-regex starts feeling fragile past simple substitutions.
curl -sS -o /dev/null -w "%{http_code} %{time_total}s\n" https://api.example.com/health
Output: status code + time. Run in a loop for crude latency check:
for i in {1..10}; do
curl -sS -o /dev/null -w "%{http_code} %{time_total}s\n" https://api.example.com/health
sleep 1
done
Cheap "is this endpoint slow right now?" probe.
echo "aGVsbG8=" | base64 -d
Two-way: base64 encodes, base64 -d decodes. Inevitable when dealing with Kubernetes Secrets (which are base64-encoded by convention) or JWT tokens.
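The encoding direction — what you'd paste into a Secret manifest. printf rather than echo, so a trailing newline doesn't sneak into the encoded value:

```shell
printf 'hello' | base64
# → aGVsbG8=
```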
For JWTs specifically, split on . and decode the middle part:
TOKEN="eyJ...long.token...sig"
echo "$TOKEN" | cut -d. -f2 | base64 -d 2>/dev/null | jq .
Useful for "what's actually in this JWT?" debugging.
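The header (field 1) decodes the same way and tells you the signing algorithm:

```shell
echo "$TOKEN" | cut -d. -f1 | base64 -d 2>/dev/null | jq .
```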
openssl rand -hex 32
64-char hex string. Good for: API keys, signing secrets, anything where you want cryptographic randomness. -base64 for base64 output.
openssl rand -base64 24
diff <(some_command) <(sleep 1; some_command)
Process substitution shows the diff between two runs of a command. Useful for "did anything change?" sanity checks.
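A concrete use — comparing two directory listings without temp files (the paths here are placeholders):

```shell
diff <(ls /etc/nginx/sites-enabled) <(ls /etc/nginx/sites-available)
```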
ssh prod 'df -h | grep -E "^/dev"'
Single quotes keep the local shell from expanding the command, so it runs untouched on the remote host. Useful for cron-style remote checks.
date -u +"%Y-%m-%dT%H:%M:%SZ" # ISO 8601 UTC
date -u +%s # Epoch seconds
date -u -d "@1714564800" # Convert epoch back (GNU date)
Date math:
date -u -d "yesterday" +"%Y-%m-%d"
date -u -d "now - 1 hour" +"%H:%M"
On macOS, gdate (from brew install coreutils) is the GNU equivalent. Pure macOS date has different syntax.
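Epoch arithmetic is the reliable way to do durations (GNU date again; the timestamps here are arbitrary examples):

```shell
start=$(date -u -d "2025-04-29 14:00" +%s)
end=$(date -u -d "2025-04-29 15:30" +%s)
echo $(( (end - start) / 60 ))   # minutes elapsed
# → 90
```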
Some patterns I see in "bash tricks" posts that I rarely or never write inline:
awk programs. Once awk gets past 2-3 actions, it's a script file, not a one-liner.
xargs -P for parallelism. Works, but parallel or a small Python script is clearer.
find -exec rm with complex predicates. Cleanup at this level of complexity should be a reviewed script, not a one-liner typed at 2am.
The principle: a one-liner you'd be comfortable typing fresh at 2am. If you'd want to read it carefully first, it's not really a one-liner anymore.
ShellCheck even on one-liners. shellcheck - accepts stdin. Quick sanity check before running something destructive.
Always dry-run destructive operations. Run the find without -delete first; eyeball; then add -delete.
ss not netstat. ip not ifconfig. rg not grep -r. Modern tools are faster and clearer.
Save the patterns that work. Keep a ~/.shell-notes.md of one-liners that solved real problems. Future-you will thank you.
These aren't impressive. They're the patterns that survive being typed weekly without thinking. The exotic stuff gets retired; these stay.