Quickstart
This guide walks you through managing your first workload with Hybernate in under 5 minutes.
1. Deploy a Sample Workload
If you don't already have a workload to manage, create a simple Deployment:
```shell
kubectl create namespace sandbox

kubectl create deployment my-api \
  --image=nginx:latest \
  --replicas=3 \
  -n sandbox
```
Wait for the pods to be ready:
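One standard way to do this with plain kubectl:

```shell
kubectl rollout status deployment/my-api -n sandbox --timeout=120s
```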
2. Create a WorkloadPolicy
Apply a WorkloadPolicy to auto-discover and manage workloads in the namespace:
Create a `workloadpolicy.yaml` manifest and apply it with `kubectl apply -f workloadpolicy.yaml`.
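A minimal sketch of such a policy is shown below. The API group, version, and field names (`mode`, `dryRun`) are illustrative assumptions; consult the WorkloadPolicy reference for the actual schema:

```yaml
# Hypothetical WorkloadPolicy manifest; field names are illustrative.
apiVersion: hybernate.io/v1alpha1   # assumed API group/version
kind: WorkloadPolicy
metadata:
  name: sandbox-policy
  namespace: sandbox
spec:
  mode: auto-manage   # scan the namespace and auto-create ManagedWorkloads
  dryRun: true        # log decisions without acting
```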
The policy scans the namespace, classifies each workload as Active, Idle, or Wasteful, and auto-creates a ManagedWorkload for each one with sensible defaults.
Three ways to manage workloads

- WorkloadPolicy with `auto-manage` (this quickstart): scans the namespace and auto-creates ManagedWorkloads for discovered workloads. Best for getting started quickly.
- WorkloadPolicy with `suggest` + `kubectl hybernate export`: scans and classifies workloads but doesn't create anything. You review the results and export the ones you want as ManagedWorkload manifests for GitOps.
- ManagedWorkload directly: create a ManagedWorkload CR yourself with full control over every field. Best when you know exactly what you want.
3. Check What Was Discovered
You should see your workload classified:
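Assuming the policy surfaces its classification in its status (the exact field layout may differ), inspect it with:

```shell
kubectl get workloadpolicy -n sandbox -o yaml
```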
Check the auto-created ManagedWorkload:
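For example:

```shell
kubectl get managedworkload -n sandbox
```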
View its status:
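Dump the full resource, including its status:

```shell
kubectl get managedworkload my-api -n sandbox -o yaml
```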
In the output, look at the status section.
View events on the resource:
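`kubectl describe` includes recent events at the end of its output:

```shell
kubectl describe managedworkload my-api -n sandbox
```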
At this point, Hybernate is already working. The forecast engine progresses through phases independently, regardless of dryRun:
- Observing — collecting data, no decisions yet. The engine needs at least 24 hours of data before it starts making predictions.
- Suggesting — the engine has enough data to predict daily patterns and starts evaluating idle and scale policies, but only logs what it would do. This is always dry run, even if `dryRun: false`.
- Active — the engine's confidence has crossed the threshold (default 85%). If `dryRun: false`, it now takes real action: pausing, scaling, or destroying workloads. If `dryRun: true`, it continues to log decisions without acting.
You can track which phase the engine is in:
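One way, assuming the engine phase is surfaced somewhere in the ManagedWorkload status (the exact field path isn't shown here), is to dump the whole status object:

```shell
kubectl get managedworkload my-api -n sandbox -o jsonpath='{.status}{"\n"}'
```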
Since dryRun is enabled and the engine starts in Observing, nothing will be touched. You can follow the events to watch it progress:
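For example, stream the namespace events as they arrive:

```shell
kubectl get events -n sandbox --watch
```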
To see what happens when Hybernate actually takes action, you can bypass the automation and manually trigger a pause.
4. Manually Pause the Workload
Set the desired state to override automation and force a pause:
```shell
kubectl patch managedworkload my-api -n sandbox \
  --type merge -p '{"spec":{"desiredState":"Paused"}}'
```
Hybernate will:
- Capture the current replica count (3)
- Scale the Deployment to 0
- Set the phase to `Paused`
Verify:
```shell
kubectl get deployment my-api -n sandbox
# READY: 0/0

kubectl get managedworkload my-api -n sandbox -o jsonpath='{.status.phase}'
# Paused
```
5. Resume the Workload
```shell
kubectl patch managedworkload my-api -n sandbox \
  --type merge -p '{"spec":{"desiredState":"Running"}}'
```
Hybernate restores the Deployment to 3 replicas and waits for readiness.
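You can verify this the same way as the pause:

```shell
kubectl get deployment my-api -n sandbox
# READY: 3/3 once the rollout completes
```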
6. Enable Automation
Once you're comfortable with what you see in dry run, disable it to let Hybernate act:
```shell
kubectl patch managedworkload my-api -n sandbox \
  --type json -p '[
    {"op": "remove", "path": "/spec/desiredState"},
    {"op": "replace", "path": "/spec/dryRun", "value": false}
  ]'
```
Hybernate will now:
- Monitor CPU and memory usage against their percentage-of-request thresholds
- Wait for all signals to confirm idle
- Apply the grace period
- Check the forecast engine before acting
- Pause the workload if everything agrees
- Auto-resume when demand returns
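One way to follow its decisions from here is to watch the resource with standard kubectl (not a Hybernate-specific command):

```shell
kubectl get managedworkload my-api -n sandbox -w
```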
What's Next?
- ManagedWorkload Guide: full spec reference with examples
- Idle Detection: how signals and grace periods work
- Prometheus Signals: add custom PromQL checks
- WorkloadPolicy: discovery, classification, and auto-manage
- GitOps Export: export discovered workloads for ArgoCD/Flux