This section covers:
- Container resource limits: requests, limits, and LimitRange
- CustomResourceDefinitions: defining custom resource types
If only limits is set and requests is not, requests defaults to the same value as limits.
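A hypothetical container fragment illustrating this defaulting (values are made up for the example):

```yaml
# requests is omitted, so the API server defaults it to limits
resources:
  limits:
    cpu: 500m
    memory: 100Mi
  # effectively the same as also writing:
  # requests:
  #   cpu: 500m
  #   memory: 100Mi
```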
If pod A and pod B request CPU in a 1:5 ratio, then when both try to use as much CPU as they can, the CPU they actually receive is also split 1:5.
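The proportional split can be checked with a small calculation (hypothetical helper and numbers):

```python
def cpu_shares(requests_millicores, total_millicores):
    """Split a node's CPU among always-busy pods in proportion to their requests."""
    total_requested = sum(requests_millicores.values())
    return {pod: total_millicores * req / total_requested
            for pod, req in requests_millicores.items()}

# podA : podB request CPU in a 1:5 ratio on a fully loaded 2-core (2000m) node
shares = cpu_shares({"podA": 200, "podB": 1000}, 2000)
# podA gets ~333m, podB gets ~1667m -- the same 1:5 ratio
```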
After a pod is OOMKilled, the kubelet restarts it with an increasing delay: 10s, 20s, 40s, ... doubling each time up to 300s, after which it keeps retrying every 5 minutes.
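The doubling-and-capping of the restart delay can be sketched as (hypothetical helper):

```python
def restart_delays(n, base=10, cap=300):
    """CrashLoopBackOff-style delays: double each retry, capped at 5 minutes."""
    delays, d = [], base
    for _ in range(n):
        delays.append(min(d, cap))
        d *= 2
    return delays

restart_delays(7)  # [10, 20, 40, 80, 160, 300, 300]
```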
Even when limits and requests are set on a pod, the CPU and memory amounts visible inside the container are still the node's totals.
Kubernetes assigns each pod to one of 3 QoS classes: BestEffort, Burstable, and Guaranteed.
apiVersion: v1
kind: Pod
metadata:
  name: limited-pod
spec:
  containers:
  - image: busybox
    command: ["dd", "if=/dev/zero", "of=/dev/null"]
    name: main
    resources:
      requests:
        cpu: 200m
        memory: 10Mi
      limits:
        cpu: 1
        memory: 20Mi
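The pod above requests less than its limits on both resources, which puts it in the Burstable class. A minimal sketch of the classification rules for the single-container case (hypothetical helper, not the full Kubernetes logic):

```python
def qos_class(requests, limits):
    """Simplified QoS classification for a single-container pod."""
    if not requests and not limits:
        return "BestEffort"
    # unset requests default to the corresponding limits
    effective_requests = {**limits, **requests}
    if ("cpu" in limits and "memory" in limits
            and all(effective_requests[r] == limits[r] for r in ("cpu", "memory"))):
        return "Guaranteed"
    return "Burstable"

qos_class({"cpu": "200m", "memory": "10Mi"},
          {"cpu": "1", "memory": "20Mi"})   # "Burstable", like limited-pod above
```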
kubectl create -f limits.yaml -n foo
kubectl create -f limits-pod-too-big.yaml -n foo
The Pod "too-big" is invalid: spec.containers[0].resources.requests: Invalid value: "2": must be less than or equal to cpu limit
If a pod's resources are not configured, the defaults configured in the LimitRange are applied.
apiVersion: v1
kind: LimitRange
metadata:
  name: example
spec:
  limits:
  - type: Pod
    min:
      cpu: 50m
      memory: 5Mi
    max:
      cpu: 1
      memory: 1Gi
  - type: Container
    defaultRequest:
      cpu: 100m
      memory: 10Mi
    default:
      cpu: 200m
      memory: 100Mi
    min:
      cpu: 50m
      memory: 5Mi
    max:
      cpu: 1
      memory: 1Gi
    maxLimitRequestRatio:   # maximum ratio of limit to request
      cpu: 4
      memory: 10
  - type: PersistentVolumeClaim
    min:
      storage: 1Gi
    max:
      storage: 10Gi
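LimitRange rules are checked at admission time; a rough sketch of the per-container checks (hypothetical helper, CPU expressed as integer millicores):

```python
def check_container(requests, limits, lr):
    """Validate one container against simplified LimitRange container rules."""
    errors = []
    for res, req in requests.items():
        if req < lr["min"][res]:
            errors.append(f"{res} request {req} below min {lr['min'][res]}")
        if req > lr["max"][res]:
            errors.append(f"{res} request {req} above max {lr['max'][res]}")
    for res, lim in limits.items():
        if res in requests and lim > requests[res] * lr["maxLimitRequestRatio"][res]:
            errors.append(f"{res} limit/request ratio exceeds {lr['maxLimitRequestRatio'][res]}")
    return errors

lr = {"min": {"cpu": 50}, "max": {"cpu": 1000}, "maxLimitRequestRatio": {"cpu": 4}}
# like the too-big pod: a 2-CPU (2000m) request against a 1-CPU max is rejected
check_container({"cpu": 2000}, {}, lr)  # ['cpu request 2000 above max 1000']
```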
apiVersion: v1
kind: Pod
metadata:
  name: too-big
spec:
  containers:
  - image: busybox
    args: ["sleep", "9999999"]
    name: main
    resources:
      requests:
        cpu: 2
kubectl create -f quota-cpu-memory.yaml -n foo
kubectl create -f ../Chapter03/kubia-manual.yaml -n foo
Error from server (Forbidden): error when creating "../Chapter03/kubia-manual.yaml": pods "kubia-manual" is forbidden: failed quota: cpu-and-mem: must specify limits.cpu,limits.memory,requests.cpu,requests.memory
apiVersion: v1
kind: ResourceQuota
metadata:
  name: cpu-and-mem
spec:
  hard:                      # total resource amounts for the whole namespace
    requests.cpu: 400m
    requests.memory: 200Mi
    limits.cpu: 600m
    limits.memory: 500Mi
    requests.storage: 500Gi  # total storage that may be claimed
    ssd.storageclass.storage.k8s.io/requests.storage: 300Gi   # claimable from the ssd StorageClass
    standard.storageclass.storage.k8s.io/requests.storage: 1Ti
    # maximum object counts in the namespace
    pods: 10
    replicationcontrollers: 5
    secrets: 10
    configmaps: 10
    persistentvolumeclaims: 5
    services: 5
    services.loadbalancers: 1
    services.nodeports: 2
    ssd.storageclass.storage.k8s.io/persistentvolumeclaims: 2
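Quota enforcement is essentially a sum check at admission time; a minimal sketch with hypothetical numbers (millicores / MiB):

```python
def fits_quota(used, new_pod, hard):
    """Would admitting new_pod keep namespace usage within the ResourceQuota?"""
    return all(used.get(k, 0) + new_pod.get(k, 0) <= limit
               for k, limit in hard.items())

hard = {"requests.cpu": 400, "requests.memory": 200}
used = {"requests.cpu": 300, "requests.memory": 100}
fits_quota(used, {"requests.cpu": 100, "requests.memory": 50}, hard)  # True
fits_quota(used, {"requests.cpu": 200, "requests.memory": 50}, hard)  # False
```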
Because we created a ResourceQuota, a LimitRange is also needed; otherwise any pod that does not explicitly configure requests and limits can no longer be created.
LeastRequestedPriority: prefers scheduling pods onto nodes with fewer requested resources (i.e., nodes with more unallocated resources).
MostRequestedPriority: prefers nodes with more requested resources (less unallocated); use case: packing pods as tightly as possible so that nodes can be freed up, e.g. to keep the machine count, and cost, low on cloud infrastructure.
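The two priority functions score nodes in opposite directions; a sketch of the classic 0-10 scoring formulas (CPU only and integer math, hypothetical helpers; the real scheduler also averages in memory):

```python
def least_requested_score(requested, capacity):
    """More free resources -> higher score (spread pods out)."""
    return (capacity - requested) * 10 // capacity

def most_requested_score(requested, capacity):
    """Fuller nodes -> higher score (pack pods tightly)."""
    return requested * 10 // capacity

# node with 2000m of 8000m CPU requested:
least_requested_score(2000, 8000)  # 7 -> attractive when spreading
most_requested_score(2000, 8000)   # 2 -> unattractive when packing
```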
There are currently 4 quota scopes:
BestEffort: pods in the BestEffort QoS class
NotBestEffort: pods in the Burstable and Guaranteed QoS classes
Terminating: pods with activeDeadlineSeconds configured
NotTerminating: pods without activeDeadlineSeconds
apiVersion: v1
kind: ResourceQuota
metadata:
  name: besteffort-notterminating-pods
spec:
  scopes:           # applies only to pods that are BestEffort QoS
  - BestEffort      # and have no activeDeadlineSeconds configured
  - NotTerminating
  hard:
    pods: 4         # at most 4 such pods may exist
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: websites.extensions.example.com
spec:
  scope: Namespaced
  group: extensions.example.com
  versions:
  - name: "v1"
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec:
            type: object
            properties:
              gitRepo:
                type: string
  names:
    kind: Website
    # singular form, used as an alias on the command line and in output
    singular: website
    # plural form, used in the URL: /apis/<group>/<version>/<plural>
    plural: websites
    # shorter strings that match the resource on the command line
    shortNames:
    - ws
apiVersion: extensions.example.com/v1
kind: Website
metadata:
  name: kubia
spec:
  gitRepo: https://github.com/luksa/kubia-website-example.git
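Objects of the custom type are served under a URL built from the CRD's group, version, and plural name; a small sketch of that path construction (hypothetical helper):

```python
def resource_path(group, version, plural, namespace=None, name=None):
    """Build the API path a CRD's objects are served at."""
    parts = ["/apis", group, version]
    if namespace:
        parts += ["namespaces", namespace]
    parts.append(plural)
    if name:
        parts.append(name)
    return "/".join(parts)

resource_path("extensions.example.com", "v1", "websites", "default", "kubia")
# -> /apis/extensions.example.com/v1/namespaces/default/websites/kubia
```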