The CKA Exam Changed After February 18 — Here’s What You Actually Need to Practice Now
For the Certified Kubernetes Administrator (CKA) exam in 2025, memorizing commands is not enough. The exam changed on February 18, and the new version is more hands-on, scenario-based, and realistic than ever. I’ve selected four example scenarios that capture the spirit of the new CKA.
👉 If you’re not a Medium member, read this story for free, here.
Table of Contents
· Scenario 1: Leave Room on the Node with CPU/Memory Calculations
∘ Context
∘ Always Remember
∘ Step 1: Calculate Per-Container Requests
∘ Step 2: Apply Resource Requests in Manifest
∘ Step 3: Validate Allocation
∘ Explanation
∘ Expected Outcome
∘ Exam Tip
· Scenario 2: Managing Pod Scheduling with PriorityClass
∘ Key Concepts
∘ Step 1: Create PriorityClasses
∘ Step 2: Deploy Low-Priority Workload
∘ Step 3: Deploy High-Priority Workload
∘ Step 4: Observe Preemption
∘ Step 5: Troubleshooting
∘ Summary
∘ Exam Tip
· Scenario 3: Installing and Testing Gateway API with NGINX Gateway
∘ Step 1: Install Gateway API CRDs
∘ Step 2: Deploy NGINX Gateway Fabric Controller
∘ Step 3: Check GatewayClass
∘ Step 4: Deploy Backend Application
∘ Step 5: Create Gateway
∘ Step 6: Create HTTPRoute
∘ Step 7: Validate Gateway and Route
∘ Step 8: Test Routing
∘ Explanation
∘ Expected Outcome
∘ Exam Tip
· Scenario 4: Creating and Using a DatabaseBackup CRD
∘ Step 1: Create the CRD
∘ Step 2: Create Valid Instance
∘ Step 3: Test Invalid Instance
∘ Explanation
∘ Expected Outcome
∘ Exam Tip
· Final Advice for Exam Day
Scenario 1: Leave Room on the Node with CPU/Memory Calculations
Domain: Workloads & Scheduling (15%)
Objective: Distribute CPU and memory requests across multiple containers while intentionally leaving capacity free for other workloads.
Context
You have a node with:
- 4Gi allocatable memory
- 2 CPU (2000 millicores)
You must deploy 3 Pods, each with 2 containers (total of 6 containers), while using only about two-thirds of the node’s capacity.
Always Remember
- “Leave room” in exam tasks means you should not allocate full node capacity.
- Use quick mental math to divide resources evenly.
- Always check kubectl describe node for allocatable values before calculating.
Step 1: Calculate Per-Container Requests
Target usage: ≈ 66% of total capacity
- Memory per container: 4Gi × 0.66 ÷ 6 ≈ 450Mi
- CPU per container: 2000m × 0.66 ÷ 6 ≈ 220m
Total usage: ≈ 2.64Gi memory and 1320m CPU
Remaining: ≈ 1.36Gi memory and 680m CPU
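The arithmetic above can be sanity-checked with a couple of lines of shell. The 4Gi / 2-CPU figures come from this scenario; on the exam you would read the real values from kubectl describe node:

```shell
# Per-container requests targeting ~66% of allocatable capacity.
TOTAL_MEM_MI=4096   # 4Gi allocatable memory, in Mi
TOTAL_CPU_M=2000    # 2 CPUs, in millicores
CONTAINERS=6        # 3 Pods x 2 containers each

MEM_PER=$(( TOTAL_MEM_MI * 66 / 100 / CONTAINERS ))
CPU_PER=$(( TOTAL_CPU_M * 66 / 100 / CONTAINERS ))
echo "per-container requests: memory=${MEM_PER}Mi cpu=${CPU_PER}m"
echo "total requested: memory=$(( MEM_PER * CONTAINERS ))Mi cpu=$(( CPU_PER * CONTAINERS ))m"
```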
Step 2: Apply Resource Requests in Manifest
resources:
  requests:
    memory: "450Mi"
    cpu: "220m"
Apply this block to each container.
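As a sketch, one of the three Pods could look like this; the Pod name, container names, and images are illustrative, not part of the task:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: balanced-pod-1       # illustrative name
spec:
  containers:
    - name: app
      image: nginx:1.27      # illustrative image
      resources:
        requests:
          memory: "450Mi"
          cpu: "220m"
    - name: sidecar
      image: busybox:1.36    # illustrative image
      command: ["sleep", "infinity"]
      resources:
        requests:
          memory: "450Mi"
          cpu: "220m"
```

Repeat the same pattern for the other two Pods so all six containers carry identical requests.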
Step 3: Validate Allocation
kubectl describe node <node-name>
kubectl get pod <pod-name> -o jsonpath='{.spec.containers[*].resources}'
Explanation
- Overcommitting nodes causes throttling or eviction.
- Requests affect scheduling; limits affect usage.
- “Leave room” requires manual calculation.
Expected Outcome
- All Pods scheduled successfully.
- Node retains unallocated capacity.
Exam Tip
- Convert CPUs to millicores and Gi to Mi.
- If Pods are Pending, recheck requests.
- No calculators are allowed, so practice mental math.
Scenario 2: Managing Pod Scheduling with PriorityClass
Domain: Scheduling (5%)
Objective: Influence Kubernetes scheduler using PriorityClass, ensuring higher-priority workloads are scheduled first and can preempt lower-priority Pods.
Key Concepts
- value: Higher number = higher priority.
- preemptionPolicy: PreemptLowerPriority allows eviction of lower-priority pods.
- PriorityClass is cluster-scoped.
- Preemption only happens when absolutely needed.
Step 1: Create PriorityClasses
File: priorityclasses.yaml
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: high-priority
value: 100000
globalDefault: false
preemptionPolicy: PreemptLowerPriority
description: "High priority class"
---
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: low-priority
value: 100
globalDefault: false
preemptionPolicy: PreemptLowerPriority
description: "Low priority class"
kubectl apply -f priorityclasses.yaml
kubectl get priorityclass
Step 2: Deploy Low-Priority Workload
File: nginx-low.yaml
...
priorityClassName: low-priority
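The elided manifest might be filled out roughly like this; the Deployment name, replica count, image, and request sizes are assumptions for illustration:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-low            # illustrative name
spec:
  replicas: 3                # enough replicas to fill the node
  selector:
    matchLabels:
      app: nginx-low
  template:
    metadata:
      labels:
        app: nginx-low
    spec:
      priorityClassName: low-priority
      containers:
        - name: nginx
          image: nginx:1.27  # illustrative image
          resources:
            requests:
              cpu: "500m"    # sized so several replicas saturate the node
              memory: "512Mi"
```

The high-priority Deployment in Step 3 is identical apart from its name and priorityClassName: high-priority.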
Step 3: Deploy High-Priority Workload
File: nginx-high.yaml
...
priorityClassName: high-priority
kubectl apply -f nginx-low.yaml
kubectl apply -f nginx-high.yaml
Step 4: Observe Preemption
kubectl get deployments
kubectl get pods -o wide
kubectl describe pod <pod-name>
Expect: Low-priority pods evicted or Pending. High-priority pods running.
Step 5: Troubleshooting
- If the node has enough free resources for both workloads, no preemption occurs; adjust requests or replica counts to create pressure.
- Preemption is not instant; re-check pod status after a few moments.
- Confirm preemptionPolicy is set to PreemptLowerPriority.
Summary
- Use PriorityClass to control pod scheduling.
- High-priority workloads can evict lower-priority ones.
- Check kubectl describe to confirm preemption.
Exam Tip
- Stuck Pods with 0/1 nodes available = no pods to evict.
- Fastest solution: high-value PriorityClass + attach to Pod.
- Look for preemption events to verify behavior.
Scenario 3: Installing and Testing Gateway API with NGINX Gateway
Domain: Services & Networking (20%)
Objective: Deploy NGINX Gateway Fabric, configure Gateway API (Gateway, HTTPRoute), and verify routing to backend application.
Step 1: Install Gateway API CRDs
kubectl apply -f https://github.com/kubernetes-sigs/gateway-api/releases/download/v1.1.0/standard-install.yaml
kubectl get crd | grep gateway
Step 2: Deploy NGINX Gateway Fabric Controller
kubectl create namespace nginx-gateway
kubectl apply -k https://github.com/nginxinc/nginx-gateway-fabric/tree/main/deploy/default
kubectl get pods -n nginx-gateway
Step 3: Check GatewayClass
kubectl get gatewayclass
Expect: nginx
Step 4: Deploy Backend Application
File: whoami.yaml
apiVersion: apps/v1
...
kind: Deployment
...
image: docker.io/containous/whoami:v1.5.0
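Filled out, whoami.yaml might look like this. The image is the one from the excerpt; the Service name and port are assumptions chosen to match the HTTPRoute later:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: whoami
spec:
  replicas: 1
  selector:
    matchLabels:
      app: whoami
  template:
    metadata:
      labels:
        app: whoami
    spec:
      containers:
        - name: whoami
          image: docker.io/containous/whoami:v1.5.0
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: whoami       # assumed Service name
spec:
  selector:
    app: whoami
  ports:
    - port: 80
      targetPort: 80
```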
kubectl apply -f whoami.yaml
kubectl get pods,svc
Step 5: Create Gateway
File: gateway.yaml
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
...
gatewayClassName: nginx
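A minimal Gateway for this setup could look like the following; the Gateway name and listener hostname are illustrative:

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: whoami-gateway     # illustrative name
spec:
  gatewayClassName: nginx  # binds to the NGINX Gateway Fabric controller
  listeners:
    - name: http
      protocol: HTTP
      port: 80
      hostname: example.com
```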
Step 6: Create HTTPRoute
File: route.yaml
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
...
hostnames:
- example.com
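Completed, route.yaml might look like this. The route name whoami-route matches the name queried in Step 7; the parent Gateway name is illustrative, and the backendRef assumes a whoami Service on port 80 from Step 4:

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: whoami-route
spec:
  parentRefs:
    - name: whoami-gateway   # illustrative Gateway name
  hostnames:
    - example.com
  rules:
    - matches:
        - path:
            type: PathPrefix
            value: /
      backendRefs:
        - name: whoami       # assumed Service name
          port: 80
```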
kubectl apply -f gateway.yaml
kubectl apply -f route.yaml
Step 7: Validate Gateway and Route
kubectl get gateway
kubectl describe httproute whoami-route
Expect: Accepted=True, Programmed=True
Step 8: Test Routing
Map in /etc/hosts:
<EXTERNAL-IP> example.com
Test:
curl http://example.com
Explanation
- GatewayClass ties to a controller.
- Gateway exposes cluster traffic.
- HTTPRoute maps requests to services.
- Troubleshoot top-down: CRDs → Controller → Gateway → Route → Backend.
Expected Outcome
- Gateway and HTTPRoute accepted.
- Curl returns valid app response.
Exam Tip
- No DNS? Use /etc/hosts.
- Always test with curl.
- If the controller already exists, skip to resource creation.
Scenario 4: Creating and Using a DatabaseBackup CRD
Domain: Cluster Architecture, Installation & Configuration (25%)
Objective: Create a CustomResourceDefinition for database backups with validation, and test schema enforcement.
Step 1: Create the CRD
File: databasebackup-crd.yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: databasebackups.dbadmin.com
spec:
  group: dbadmin.com
  ...
  versions:
    - name: v1
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                database:
                  type: string
                schedule:
                  type: string
                retentionDays:
                  type: integer
                  minimum: 1
              required:
                - database
                - schedule
                - retentionDays
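The ... in the excerpt hides the scope and names sections. Reconstructed from metadata.name and the DatabaseBackup kind used in Step 2, they would look roughly like this (the scope and short name are assumptions):

```yaml
scope: Namespaced              # assumed; CRDs may also be Cluster-scoped
names:
  plural: databasebackups     # from metadata.name
  singular: databasebackup
  kind: DatabaseBackup        # matches the kind used in Step 2
  shortNames:
    - dbb                     # illustrative short name
```

Note that each entry under versions also needs served: true and storage: true before the CRD will apply.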
kubectl apply -f databasebackup-crd.yaml
Step 2: Create Valid Instance
File: prod-db-backup.yaml
apiVersion: dbadmin.com/v1
kind: DatabaseBackup
metadata:
  name: prod-db-backup
spec:
  database: prod-db
  schedule: "0 2 * * *"
  retentionDays: 7
kubectl apply -f prod-db-backup.yaml
kubectl get databasebackups
Step 3: Test Invalid Instance
File: invalid-db-backup.yaml
retentionDays: 0
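Filled out, the invalid manifest mirrors the valid one from Step 2 with only retentionDays changed; the metadata name is assumed from the filename:

```yaml
apiVersion: dbadmin.com/v1
kind: DatabaseBackup
metadata:
  name: invalid-db-backup   # name assumed from the filename
spec:
  database: prod-db
  schedule: "0 2 * * *"
  retentionDays: 0          # violates minimum: 1 in the CRD schema
```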
kubectl apply -f invalid-db-backup.yaml
Expect:
spec.retentionDays: Invalid value: 0: must be greater than or equal to 1
Explanation
- required enforces presence.
- minimum enforces range.
- CRDs store and validate data only; no logic unless a controller exists.
Expected Outcome
- Valid instance works.
- Invalid instance fails with schema error.
Exam Tip
- Use kubectl explain crd to explore structure.
- Existing CRD? Skip to creating instances.
- Schema nesting errors are common.
A Small Recommendation
For anyone planning the CKA in 2026, I maintain a practical guide that’s continuously refined with real feedback and updated labs.
It’s available on Gumroad or Payhip with full details there.
This weekend only (January 17–18), there’s 40% off with the code JANUARY26.
You can grab the free one here if you want:
Final Advice for Exam Day
The CKA exam tests not only your Kubernetes knowledge but also your ability to handle stress. Be prepared for things to go wrong: your terminal may freeze, your browser may lag, or the YAML you wrote yesterday may suddenly refuse to apply. Practice realistic situations, including the inconveniences: switching between SSH sessions, a strange DNS failure, even a low screen resolution. Set up your environment with a few aliases, but don’t over-optimize; a tangle of dotfiles will only confuse you under pressure. Finally, reserve time for review. Mistakes happen in a hurry, so aim to finish with at least 10–15 minutes left to double-check your work.
If you want to get some practice in without setting anything up, KodeKloud has a bunch of free hands-on Kubernetes labs you can run straight from your browser: https://kodekloud.com/pages/free-labs/kubernetes
They’re great for getting familiar with the basics or quickly simulating exam-style tasks.
But honestly?
The best way that worked for me, and for a lot of people I know, is setting up your own cluster. Nothing beats hitting real issues, debugging them, and figuring things out under pressure. That’s where the real learning happens.
📘 Conquer the CKA Exam 🔥 40% OFF with JANUARY26 (valid January 17–18 only) Gumroad: devopsdynamo.gumroad.com/l/Conquer-cka-exam Payhip: payhip.com/b/3iAsH
