SCHEDULER
What is Kube-Scheduler?
Kube-scheduler is the default scheduler for Kubernetes, responsible for assigning newly created pods to suitable nodes within the cluster. It makes these decisions based on a variety of factors to ensure optimal performance, resource utilisation, and adherence to any constraints or policies defined by the user.
Key Responsibilities of Kube-Scheduler
1. Node Selection: Identifying the best node for each pod based on resource requirements and availability.
2. Resource Allocation: Ensuring that CPU, memory, and other resources are adequately allocated.
3. Constraints Handling: Considering constraints like node affinity, taints, and tolerations.
4. Prioritization: Ranking nodes based on various criteria to find the most suitable one.
Example: The Restaurant Table Scheduler
Imagine a busy restaurant where guests arrive without reservations and need to be seated at the appropriate tables. The restaurant has a host (analogous to the kube-scheduler) whose job is to seat guests at the best available table based on several factors. Let’s explore how this restaurant scenario parallels the functioning of the kube-scheduler.
Step-by-Step Scheduling Process
1. Guest Arrival (Pod Creation)
In Kubernetes, a pod represents one or more containers that need to run on a node. When a new pod is created, it’s similar to a new group of guests arriving at the restaurant.
2. Checking Table Availability (Node Filtering)
The host first checks which tables are available. Similarly, the kube-scheduler filters out nodes that cannot accommodate the pod due to insufficient resources or other constraints.
3. Considering Guest Preferences (Node Affinity and Anti-Affinity)
Some guests may prefer to sit in a specific area of the restaurant (near the window, away from the kitchen). The host considers these preferences. In Kubernetes, this is managed through node affinity and anti-affinity rules that guide the scheduler toward preferred nodes or away from unsuitable ones.
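A preference like “near the window” can be expressed with a node affinity rule. The sketch below is illustrative: it assumes the cluster’s nodes carry a hypothetical `zone` label, and the pod name is made up for the example.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: window-pod            # hypothetical pod name
spec:
  affinity:
    nodeAffinity:
      # Hard requirement: only schedule onto nodes matching the rule
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: zone         # assumed node label, e.g. set via `kubectl label nodes <node> zone=window`
            operator: In
            values:
            - window
  containers:
  - name: nginx
    image: nginx
```

Using `preferredDuringSchedulingIgnoredDuringExecution` instead would make the rule a soft preference: the scheduler favours matching nodes but can still place the pod elsewhere.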
4. Matching Table Size to Party Size (Resource Requests)
The host matches the table size with the number of guests. In Kubernetes, the kube-scheduler looks at the resource requests (CPU, memory) specified for the pod and matches them with the available resources on the nodes.
5. Special Requests (Taints and Tolerations)
Some guests might have special requests, like requiring a high chair or a quiet corner. The host must ensure these needs are met. Similarly, nodes can carry taints that repel pods; only pods with a matching toleration can be scheduled onto those nodes, ensuring special conditions are respected.
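A taint and its matching toleration can be sketched as a pair of manifests. The taint key `quiet-corner` and the names below are invented for illustration; in practice the taint would usually be applied with `kubectl taint nodes <node> quiet-corner=true:NoSchedule` rather than by editing the Node object.

```yaml
# Node with a taint: pods without a matching toleration are not scheduled here
apiVersion: v1
kind: Node
metadata:
  name: node-1                # hypothetical node name
spec:
  taints:
  - key: quiet-corner         # assumed taint key
    value: "true"
    effect: NoSchedule
---
# Pod that tolerates the taint and may therefore land on node-1
apiVersion: v1
kind: Pod
metadata:
  name: quiet-pod             # hypothetical pod name
spec:
  tolerations:
  - key: "quiet-corner"
    operator: "Equal"
    value: "true"
    effect: "NoSchedule"
  containers:
  - name: nginx
    image: nginx
```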
6. Selecting the Best Table (Node Prioritization)
Once suitable tables are identified, the host prioritizes them based on factors like proximity to the kitchen for quicker service or distance from noisy areas. The kube-scheduler ranks the nodes using various scoring algorithms to choose the best fit for the pod.
7. Seating the Guests (Binding the Pod)
Finally, the host seats the guests at the selected table. The kube-scheduler assigns the pod to the chosen node, officially binding it.
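Under the hood, this final step is recorded by creating a Binding object in the API server, which sets the pod’s `nodeName`. A minimal sketch (the pod and node names are placeholders):

```yaml
apiVersion: v1
kind: Binding
metadata:
  name: test-pod              # the pod being scheduled
target:
  apiVersion: v1
  kind: Node
  name: node-1                # hypothetical node chosen by the scheduler
```

Once bound, the kubelet on the target node notices the assignment and starts the pod’s containers.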
Example in Kubernetes Terms
Let’s consider a concrete example in Kubernetes:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: test-pod
spec:
  containers:
  - name: nginx
    image: nginx
    resources:
      requests:
        memory: "64Mi"
        cpu: "250m"
      limits:
        memory: "128Mi"
        cpu: "500m"
  nodeSelector:
    disktype: ssd
  tolerations:
  - key: "key1"
    operator: "Equal"
    value: "value1"
    effect: "NoSchedule"
```

Kube-Scheduler Components
Scheduling Algorithm
The kube-scheduler follows a two-step process: filtering and scoring.
1. Filtering: The scheduler filters out nodes that do not meet the pod’s requirements. This includes checks for resource availability, node conditions, taints, and affinity/anti-affinity rules.
2. Scoring: After filtering, the scheduler scores the remaining nodes to find the most suitable one. Various plugins and scoring functions are used, considering factors like resource utilization, pod topology spread, and custom user-defined rules.
Plugins and Extensibility
Kube-scheduler is highly extensible, allowing custom scheduling policies and plugins. This flexibility enables users to tailor the scheduling process to meet specific needs and optimize resource allocation and performance.
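Such customizations are expressed in a KubeSchedulerConfiguration file passed to kube-scheduler via its `--config` flag. The sketch below uses real built-in plugin names; the weight value is arbitrary and chosen purely for illustration.

```yaml
apiVersion: kubescheduler.config.k8s.io/v1
kind: KubeSchedulerConfiguration
profiles:
- schedulerName: default-scheduler
  plugins:
    score:
      # Turn off one default scoring plugin...
      disabled:
      - name: NodeResourcesBalancedAllocation
      # ...and give another extra influence on the final ranking
      enabled:
      - name: NodeResourcesFit
        weight: 5             # assumed weight, for illustration
```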
Conclusion
Properly configuring and understanding the kube-scheduler is crucial for several scenarios:
- High-Density Clusters: When running a large number of pods, efficient scheduling ensures optimal resource usage and performance.
- Resource-Constrained Environments: In environments with limited resources, effective scheduling prevents resource contention and ensures stable operation.
- Complex Workloads: Applications with specific placement needs, like affinity/anti-affinity rules and resource constraints, benefit from customized scheduling policies.
---
View the labels of the knative-operator deployment
kubectl get deployment knative-operator -n knative-operator --show-labels
View the labels of the operator-webhook deployment
kubectl get deployment operator-webhook -n knative-operator --show-labels
View both deployments at once with their labels
kubectl get deployments -n knative-operator --show-labels
More detail in YAML format to see all the labels
kubectl get deployment knative-operator -n knative-operator -o yaml | grep -A 5 "labels:"
kubectl get deployment operator-webhook -n knative-operator -o yaml | grep -A 5 "labels:"
Table format with specific labels
kubectl get deployments -n knative-operator \
  -o custom-columns=NAME:.metadata.name,APP:.metadata.labels.app,ENV:.metadata.labels.environment
Check whether the app and environment labels exist
kubectl get deployments -n knative-operator -o json | jq '.items[].metadata | {name: .name, app: .labels.app, environment: .labels.environment}'
A simpler version to see all the labels
kubectl get deployments -n knative-operator -o yaml | yq '.items[].metadata | {"name": .name, "labels": .labels}'