ResourceQuota = "Don't let this namespace use more than X total resources"
LimitRange = "Each container in this namespace should have resources between Y and Z"
They work together to provide both macro-level (namespace) and micro-level (container) resource management in your Kubernetes cluster.
ResourceQuota vs LimitRange - Key Differences
| Aspect | ResourceQuota | LimitRange |
| --- | --- | --- |
| Purpose | Enforces total resource limits for a namespace | Sets defaults and constraints for individual containers |
| Scope | Namespace-level (affects all resources in the namespace) | Container/Pod-level (affects individual containers) |
| What it controls | Aggregate resource consumption across all pods | Resource requests/limits per container |
| Enforcement | Prevents the namespace from exceeding its total quota | Validates individual pod specs at admission time |
apiVersion: v1
kind: ResourceQuota
metadata:
  name: dev-quota
  namespace: dev
spec:
  hard:
    requests.cpu: "1"             # Total CPU requests in namespace ≤ 1 core
    requests.memory: 1Gi          # Total memory requests in namespace ≤ 1GiB
    limits.cpu: "2"               # Total CPU limits in namespace ≤ 2 cores
    limits.memory: 2Gi            # Total memory limits in namespace ≤ 2GiB
    pods: "10"                    # Max 10 pods in namespace
    services: "5"                 # Max 5 services in namespace
    secrets: "10"                 # Max 10 secrets in namespace
    configmaps: "10"              # Max 10 configmaps in namespace
    persistentvolumeclaims: "5"   # Max 5 PVCs in namespace
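In effect, the API server's quota admission controller sums the requests and limits of every pod in the namespace and rejects any new pod that would push a total past its `hard` value. A minimal Python sketch of that aggregate check (values and names here are illustrative, not the controller's actual code):

```python
# Sketch of the aggregate check a ResourceQuota performs.
# Quantities are in millicores; the real controller parses
# Kubernetes quantity strings like "500m" and "1" itself.

HARD_REQUESTS_CPU = 1000  # requests.cpu: "1" -> 1000m

def admits(current_total_m, new_pod_request_m):
    """Reject the pod if namespace-wide requests.cpu would exceed the quota."""
    return current_total_m + new_pod_request_m <= HARD_REQUESTS_CPU

# Namespace already uses 800m of CPU requests:
print(admits(800, 100))  # a 100m pod still fits
print(admits(800, 400))  # a 400m pod would exceed the 1-core quota
```

Note that the check is on the *would-be* total, so a pod can be rejected even when the namespace is currently under quota.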
apiVersion: v1
kind: LimitRange
metadata:
  name: dev-limits
  namespace: dev
spec:
  limits:
  - default:              # Applied when no limits are specified
      cpu: 500m           # Default CPU limit = 0.5 cores
      memory: 512Mi       # Default memory limit = 512MiB
    defaultRequest:       # Applied when no requests are specified
      cpu: 100m           # Default CPU request = 0.1 cores
      memory: 128Mi       # Default memory request = 128MiB
    type: Container
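Conceptually, the LimitRange admission plugin fills in any missing requests and limits before the pod is persisted. A rough Python sketch of that defaulting step (field names mirror the YAML above; this is an illustration, not the plugin's code):

```python
# Defaults taken from the dev-limits LimitRange above.
DEFAULT_LIMIT = {"cpu": "500m", "memory": "512Mi"}
DEFAULT_REQUEST = {"cpu": "100m", "memory": "128Mi"}

def apply_defaults(container):
    """Fill in missing resources the way the LimitRange plugin would."""
    resources = container.setdefault("resources", {})
    resources.setdefault("limits", dict(DEFAULT_LIMIT))
    resources.setdefault("requests", dict(DEFAULT_REQUEST))
    return container

c = apply_defaults({"name": "app", "image": "nginx"})
print(c["resources"]["requests"]["cpu"])   # defaulted to 100m
print(c["resources"]["limits"]["memory"])  # defaulted to 512Mi
```

Containers that already declare their own requests or limits are left untouched; only missing sections are defaulted.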
Practical Examples
Scenario 1: Pod without resource specifications
apiVersion: v1
kind: Pod
metadata:
  name: test-pod
  namespace: dev
spec:
  containers:
  - name: app
    image: nginx
    # no resources specified
Here is what happens:
LimitRange applies defaults:
  requests.cpu: 100m, requests.memory: 128Mi
  limits.cpu: 500m, limits.memory: 512Mi
ResourceQuota then counts these defaulted values toward the namespace totals.
Scenario 2: Multiple pods and quota enforcement
Let's see how they work together:
# Check current usage
kubectl describe resourcequota dev-quota -n dev
Name: dev-quota
Namespace: dev
Resource Used Hard
-------- ---- ----
limits.cpu 500m 2
limits.memory 512Mi 2Gi
requests.cpu 100m 1
requests.memory 128Mi 1Gi
pods 1 10
Real-world Interaction Examples
Example 1: Pod creation within limits
apiVersion: v1
kind: Pod
metadata:
  name: pod-1
  namespace: dev
spec:
  containers:
  - name: app
    image: nginx
    resources:
      requests:
        cpu: 200m
        memory: 256Mi
      limits:
        cpu: 400m
        memory: 512Mi
LimitRange: No validation issues (within min/max bounds)
ResourceQuota: Sufficient quota remaining
Example 2: Pod creation exceeding quota
apiVersion: v1
kind: Pod
metadata:
  name: pod-large
  namespace: dev
spec:
  containers:
  - name: app
    image: nginx
    resources:
      requests:
        cpu: 2        # 2 cores
        memory: 2Gi
      limits:
        cpu: 4        # 4 cores
        memory: 4Gi
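This pod is rejected: its requests.cpu of 2 cores alone already exceeds the namespace's hard requests.cpu of 1, so the quota check fails regardless of current usage. A quick sanity check of the arithmetic (illustrative Python, using the dev-quota values from above in millicores/MiB):

```python
# dev-quota hard limits, in millicores and MiB.
HARD = {"requests.cpu": 1000, "requests.memory": 1024,
        "limits.cpu": 2000, "limits.memory": 2048}

# pod-large, converted to the same units.
POD = {"requests.cpu": 2000, "requests.memory": 2048,
       "limits.cpu": 4000, "limits.memory": 4096}

# Even with zero current usage, every dimension is over quota:
violations = [k for k in HARD if POD[k] > HARD[k]]
print(violations)
```

Since all four dimensions exceed the quota, the API server refuses the pod at creation time with an "exceeded quota: dev-quota" error.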
Example 3: Too many pods
After creating 10 pods, the 11th pod fails:
kubectl run pod-11 --image=nginx -n dev
# Error from server (Forbidden): pods "pod-11" is forbidden: exceeded quota: dev-quota
Common Use Cases
ResourceQuota Use Cases:
Multi-tenant clusters - Prevent one team from consuming all resources
Cost control - Limit resource consumption per project/environment
Resource isolation - Ensure fair sharing of cluster resources
LimitRange Use Cases:
Prevent resource hogging - Set maximum limits per container
Ensure quality of service - Set minimum guarantees per container
Developer convenience - Provide sensible defaults
Resource validation - Catch misconfigured pods early
Advanced LimitRange Features
You can enhance your LimitRange with more constraints:
apiVersion: v1
kind: LimitRange
metadata:
  name: advanced-limits
  namespace: dev
spec:
  limits:
  - type: Container
    max:
      cpu: "1"
      memory: "1Gi"
    min:
      cpu: "10m"
      memory: "4Mi"
    default:
      cpu: "500m"
      memory: "512Mi"
    defaultRequest:
      cpu: "100m"
      memory: "128Mi"
  - type: Pod
    max:
      cpu: "2"
      memory: "2Gi"
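With min/max in place, a container whose resources fall outside those bounds is rejected at admission time, before any quota accounting happens. A small Python sketch of that per-container bounds check (millicores, illustrative values from advanced-limits):

```python
# Container-level CPU bounds from advanced-limits, in millicores.
MIN_CPU_M, MAX_CPU_M = 10, 1000  # min cpu "10m", max cpu "1"

def cpu_limit_allowed(limit_m):
    """Mirror the LimitRange min/max check for a container's CPU limit."""
    return MIN_CPU_M <= limit_m <= MAX_CPU_M

print(cpu_limit_allowed(500))   # within bounds
print(cpu_limit_allowed(1500))  # exceeds the 1-core max, pod is rejected
```

The `type: Pod` entry applies the same idea to the *sum* of all containers in a pod, which is useful when individual containers are within bounds but the pod as a whole is not.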
Debugging Commands
# Check quota usage
kubectl describe resourcequota dev-quota -n dev
# Check limit ranges
kubectl describe limitrange dev-limits -n dev
# See what defaults are applied to a pod
kubectl get pod <pod-name> -n dev -o yaml
# Check if pods are failing due to quotas
kubectl get events -n dev --field-selector reason=FailedCreate