KWOK: set up a cluster of thousands of nodes in seconds …

Karim
33 min read · Nov 20, 2022



This GitHub repository hosts a toolkit that lets you set up a cluster of thousands of nodes in seconds. All the nodes are simulated to behave like real ones, so the overall approach has a resource footprint low enough that you can easily run it on your laptop:

Launching an Ubuntu 22.04 LTS instance in Linode:

followed by a very quick k0s installation to form a Kubernetes cluster:

root@localhost:~# curl -sSLf https://get.k0s.sh | sudo sh
Downloading k0s from URL: https://github.com/k0sproject/k0s/releases/download/v1.25.4+k0s.0/k0s-v1.25.4+k0s.0-amd64
k0s is now executable in /usr/local/bin
root@localhost:~# k0s install controller --single
root@localhost:~# k0s start
root@localhost:~# k0s status
Version: v1.25.4+k0s.0
Process ID: 1064
Role: controller
Workloads: true
SingleNode: true
Kube-api probing successful: true
Kube-api probing last error:
root@localhost:~# k0s kubectl cluster-info
Kubernetes control plane is running at https://localhost:6443
CoreDNS is running at https://localhost:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
root@localhost:~# k0s kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
localhost Ready control-plane 100s v1.25.4+k0s 172.105.131.23 <none> Ubuntu 22.04.1 LTS 5.15.0-47-generic containerd://1.6.9
root@localhost:~# curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.25.4/bin/linux/amd64/kubectl && chmod +x kubectl && mv kubectl /usr/bin/
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 42.9M 100 42.9M 0 0 75.2M 0 --:--:-- --:--:-- --:--:-- 75.3M

root@localhost:~# k0s kubeconfig admin > ~/.kube/config
root@localhost:~# type kubectl
kubectl is hashed (/usr/bin/kubectl)
root@localhost:~# kubectl get po,svc -A
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system pod/kube-proxy-clxh7 1/1 Running 0 3m56s
kube-system pod/kube-router-88x25 1/1 Running 0 3m56s
kube-system pod/coredns-5d5b5b96f9-4xzsl 1/1 Running 0 4m3s
kube-system pod/metrics-server-69d9d66ff8-fxrt7 1/1 Running 0 4m2s

NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 4m20s
kube-system service/kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP,9153/TCP 4m8s
kube-system service/metrics-server ClusterIP 10.98.18.100 <none> 443/TCP 4m2s

or with Kui:

KWOK can then be deployed into this small cluster:

root@localhost:~# apt install jq -y
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
jq is already the newest version (1.6-2.1ubuntu3).
0 upgraded, 0 newly installed, 0 to remove and 75 not upgraded.
root@localhost:~# KWOK_WORK_DIR=$(mktemp -d)
root@localhost:~# KWOK_REPO=kubernetes-sigs/kwok
root@localhost:~# KWOK_LATEST_RELEASE=$(curl "https://api.github.com/repos/${KWOK_REPO}/releases/latest" | jq -r '.tag_name')
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 27817 0 27817 0 0 109k 0 --:--:-- --:--:-- --:--:-- 109k

Generating a kustomization YAML template in the temporary directory defined above:

root@localhost:~# cat <<EOF > "${KWOK_WORK_DIR}/kustomization.yaml"
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
images:
- name: registry.k8s.io/kwok/kwok
  newTag: "${KWOK_LATEST_RELEASE}"
resources:
- "https://github.com/${KWOK_REPO}/kustomize/kwok?ref=${KWOK_LATEST_RELEASE}"
EOF
root@localhost:~# kubectl kustomize "${KWOK_WORK_DIR}" > "${KWOK_WORK_DIR}/kwok.yaml"

Deploying KWOK into the cluster:

root@localhost:~# kubectl apply -f "${KWOK_WORK_DIR}/kwok.yaml"

serviceaccount/kwok-controller created
clusterrole.rbac.authorization.k8s.io/kwok-controller created
clusterrolebinding.rbac.authorization.k8s.io/kwok-controller created
deployment.apps/kwok-controller created
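
A quick check that the controller actually came up (a minimal sketch; it assumes the kustomize base deploys kwok-controller into kube-system with the default app=kwok-controller label):

# Wait for the KWOK controller rollout to complete
kubectl -n kube-system rollout status deployment/kwok-controller
kubectl -n kube-system get pods -l app=kwok-controller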

This is visualized here with Kubeshark:

With KWOK, I can create a first (fake) node in the k0s cluster:

root@localhost:~# kubectl apply -f - <<EOF
apiVersion: v1
kind: Node
metadata:
  annotations:
    node.alpha.kubernetes.io/ttl: "0"
    kwok.x-k8s.io/node: fake
  labels:
    beta.kubernetes.io/arch: amd64
    beta.kubernetes.io/os: linux
    kubernetes.io/arch: amd64
    kubernetes.io/hostname: kwok-node-0
    kubernetes.io/os: linux
    kubernetes.io/role: agent
    node-role.kubernetes.io/agent: ""
    type: kwok
  name: kwok-node-0
spec:
  taints: # Avoid scheduling actual running pods to fake Node
  - effect: NoSchedule
    key: kwok.x-k8s.io/node
    value: fake
status:
  allocatable:
    cpu: 32
    memory: 256Gi
    pods: 110
  capacity:
    cpu: 32
    memory: 256Gi
    pods: 110
  nodeInfo:
    architecture: amd64
    bootID: ""
    containerRuntimeVersion: ""
    kernelVersion: ""
    kubeProxyVersion: fake
    kubeletVersion: fake
    machineID: ""
    operatingSystem: linux
    osImage: ""
    systemUUID: ""
  phase: Running
EOF
node/kwok-node-0 created
root@localhost:~# kubectl get node -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
localhost Ready control-plane 20m v1.25.4+k0s 172.105.131.23 <none> Ubuntu 22.04.1 LTS 5.15.0-47-generic containerd://1.6.9
kwok-node-0 Ready agent 10s fake 10.244.0.4 <none> <unknown> <unknown> <unknown>

But why not a thousand more nodes …

root@localhost:~# for i in {1..1000}; do kubectl apply -f - <<EOF
apiVersion: v1
kind: Node
metadata:
  annotations:
    node.alpha.kubernetes.io/ttl: "0"
    kwok.x-k8s.io/node: fake
  labels:
    beta.kubernetes.io/arch: amd64
    beta.kubernetes.io/os: linux
    kubernetes.io/arch: amd64
    kubernetes.io/hostname: kwok-node-$i
    kubernetes.io/os: linux
    kubernetes.io/role: agent
    node-role.kubernetes.io/agent: ""
    type: kwok
  name: kwok-node-$i
spec:
  taints: # Avoid scheduling actual running pods to fake Node
  - effect: NoSchedule
    key: kwok.x-k8s.io/node
    value: fake
status:
  allocatable:
    cpu: 32
    memory: 256Gi
    pods: 110
  capacity:
    cpu: 32
    memory: 256Gi
    pods: 110
  nodeInfo:
    architecture: amd64
    bootID: ""
    containerRuntimeVersion: ""
    kernelVersion: ""
    kubeProxyVersion: fake
    kubeletVersion: fake
    machineID: ""
    operatingSystem: linux
    osImage: ""
    systemUUID: ""
  phase: Running
EOF
done
node/kwok-node-955 created
node/kwok-node-956 created
node/kwok-node-957 created
node/kwok-node-958 created
node/kwok-node-959 created
node/kwok-node-960 created
node/kwok-node-961 created
node/kwok-node-962 created
node/kwok-node-963 created
node/kwok-node-964 created
node/kwok-node-965 created
node/kwok-node-966 created
node/kwok-node-967 created
node/kwok-node-968 created
node/kwok-node-969 created
node/kwok-node-970 created
node/kwok-node-971 created
node/kwok-node-972 created
node/kwok-node-973 created
node/kwok-node-974 created
node/kwok-node-975 created
node/kwok-node-976 created
node/kwok-node-977 created
node/kwok-node-978 created
node/kwok-node-979 created
node/kwok-node-980 created
node/kwok-node-981 created
node/kwok-node-982 created
node/kwok-node-983 created
node/kwok-node-984 created
node/kwok-node-985 created
node/kwok-node-986 created
node/kwok-node-987 created
node/kwok-node-988 created
node/kwok-node-989 created
node/kwok-node-990 created
node/kwok-node-991 created
node/kwok-node-992 created
node/kwok-node-993 created
node/kwok-node-994 created
node/kwok-node-995 created
node/kwok-node-996 created
node/kwok-node-997 created
node/kwok-node-998 created
node/kwok-node-999 created
node/kwok-node-1000 created

Voilà! 1,000 nodes in this cluster …
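
A quick sanity check on the count (a sketch relying on the type=kwok label set in the manifest above):

# All simulated nodes carry the type=kwok label
kubectl get nodes -l type=kwok --no-headers | wc -l
# Or simply count by name prefix
kubectl get nodes --no-headers | grep -c kwok-node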

And more than a thousand Pods as well:

root@localhost:~# kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: fake-pod
  namespace: default
spec:
  replicas: 1000
  selector:
    matchLabels:
      app: fake-pod
  template:
    metadata:
      labels:
        app: fake-pod
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: type
                operator: In
                values:
                - kwok
      # A taint was added to the automatically created Node.
      # You can remove the Node's taint or add this toleration.
      tolerations:
      - key: "kwok.x-k8s.io/node"
        operator: "Exists"
        effect: "NoSchedule"
      containers:
      - name: fake-container
        image: fake-image
EOF
deployment.apps/fake-pod created
fake-pod-6cf6574478-7jsc9   1/1     Running       0          60s     10.0.10.210   kwok-node-227    <none>           <none>
fake-pod-6cf6574478-kw4mb 1/1 Running 0 7m36s 10.0.10.213 kwok-node-477 <none> <none>
fake-pod-6cf6574478-rwcc4 1/1 Running 0 7m31s 10.0.10.216 kwok-node-525 <none> <none>
fake-pod-6cf6574478-2z2d9 1/1 Running 0 7m56s 10.0.10.218 kwok-node-794 <none> <none>
fake-pod-6cf6574478-jb9mq 1/1 Running 0 8m18s 10.0.10.221 kwok-node-483 <none> <none>
fake-pod-6cf6574478-shhgb 1/1 Running 0 7m31s 10.0.10.224 kwok-node-851 <none> <none>
fake-pod-6cf6574478-4x5xw 1/1 Running 0 7m46s 10.0.10.227 kwok-node-604 <none> <none>
fake-pod-6cf6574478-sdgpd 1/1 Running 0 8m3s 10.0.10.231 kwok-node-803 <none> <none>
fake-pod-6cf6574478-6vd9z 1/1 Running 0 7m45s 10.0.10.234 kwok-node-659 <none> <none>
fake-pod-6cf6574478-gr6rz 1/1 Running 0 7m45s 10.0.10.237 kwok-node-956 <none> <none>
fake-pod-6cf6574478-cv9t7 1/1 Running 0 7m45s 10.0.10.239 kwok-node-681 <none> <none>
fake-pod-6cf6574478-wp6sr 1/1 Running 0 8m11s 10.0.10.243 kwok-node-975 <none> <none>
fake-pod-6cf6574478-j6v2r 1/1 Running 0 8m8s 10.0.10.245 kwok-node-569 <none> <none>
fake-pod-6cf6574478-jk6vx 1/1 Running 0 7m26s 10.0.10.249 kwok-node-470 <none> <none>
fake-pod-6cf6574478-ttph6 1/1 Running 0 8m1s 10.0.10.252 kwok-node-467 <none> <none>
fake-pod-6cf6574478-wwvm6 1/1 Running 0 7m23s 10.0.10.255 kwok-node-726 <none> <none>
fake-pod-6cf6574478-4zlfk 1/1 Running 0 8m5s 10.0.11.2 kwok-node-622 <none> <none>
fake-pod-6cf6574478-dgq62 1/1 Running 0 7m26s 10.0.11.5 kwok-node-942 <none> <none>
fake-pod-6cf6574478-n48q5 1/1 Running 0 7m26s 10.0.11.7 kwok-node-928 <none> <none>
fake-pod-6cf6574478-c6mm8 1/1 Running 0 7m34s 10.0.11.11 kwok-node-860 <none> <none>
fake-pod-6cf6574478-wfcmz 1/1 Running 0 7m23s 10.0.11.14 kwok-node-674 <none> <none>
fake-pod-6cf6574478-zsl84 1/1 Running 0 8m1s 10.0.11.17 kwok-node-824 <none> <none>
fake-pod-6cf6574478-t42jl 1/1 Running 0 8m17s 10.0.11.20 kwok-node-623 <none> <none>
fake-pod-6cf6574478-45hlz 1/1 Running 0 7m44s 10.0.11.23 kwok-node-934 <none> <none>
fake-pod-6cf6574478-szxwr 1/1 Running 0 7m26s 10.0.11.25 kwok-node-811 <none> <none>
fake-pod-6cf6574478-mvfhk 1/1 Running 0 7m24s 10.0.11.29 kwok-node-590 <none> <none>
fake-pod-6cf6574478-bdckc 1/1 Running 0 7m37s 10.0.11.32 kwok-node-703 <none> <none>
fake-pod-6cf6574478-wpttc 1/1 Running 0 8m17s 10.0.11.38 kwok-node-977 <none> <none>
fake-pod-6cf6574478-c5gwd 1/1 Running 0 7m24s 10.0.11.39 kwok-node-963 <none> <none>
fake-pod-6cf6574478-gp7zh 1/1 Running 0 8m8s 10.0.11.44 kwok-node-708 <none> <none>
fake-pod-6cf6574478-mtr9m 1/1 Running 0 7m57s 10.0.11.47 kwok-node-715 <none> <none>
fake-pod-6cf6574478-nrvfj 1/1 Running 0 8m17s 10.0.11.50 kwok-node-650 <none> <none>
fake-pod-6cf6574478-wlgsg 1/1 Running 0 7m38s 10.0.11.53 kwok-node-456 <none> <none>
fake-pod-6cf6574478-jjr5w 1/1 Running 0 8m10s 10.0.11.55 kwok-node-799 <none> <none>
fake-pod-6cf6574478-mx5p5 1/1 Running 0 7m46s 10.0.11.59 kwok-node-805 <none> <none>
fake-pod-6cf6574478-7rk75 1/1 Running 0 5s 10.0.11.60 kwok-node-375 <none> <none>
fake-pod-6cf6574478-qk2bf 1/1 Running 0 7m47s 10.0.11.62 kwok-node-523 <none> <none>
fake-pod-6cf6574478-g44hq 1/1 Running 0 7m42s 10.0.11.64 kwok-node-981 <none> <none>
fake-pod-6cf6574478-r4xtc 1/1 Running 0 2s 10.0.11.67 kwok-node-831 <none> <none>

Without bringing the Ubuntu instance to its knees …
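
One way to back that claim up is to compare the host's real footprint with the apparent size of the cluster (a sketch; the figures obviously depend on the instance):

# Memory actually consumed on the Linode instance
free -h
# Apparent size of the simulated cluster as seen by the API server
kubectl get nodes --no-headers | wc -l
kubectl get pods -A --no-headers | wc -l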

The GitHub project also provides the kwokctl client:

I delete the k0s cluster and install Docker on the instance:

root@localhost:~# systemctl stop k0scontroller
root@localhost:~# k0s reset
WARN[2022-11-20 22:02:24] To ensure a full reset, a node reboot is recommended.
root@localhost:~# curl -fsSL https://get.docker.com | sh -
# Executing docker install script, commit: 4f282167c425347a931ccfd95cc91fab041d414f
+ sh -c apt-get update -qq >/dev/null
+ sh -c DEBIAN_FRONTEND=noninteractive apt-get install -y -qq apt-transport-https ca-certificates curl >/dev/null
+ sh -c mkdir -p /etc/apt/keyrings && chmod -R 0755 /etc/apt/keyrings
+ sh -c curl -fsSL "https://download.docker.com/linux/ubuntu/gpg" | gpg --dearmor --yes -o /etc/apt/keyrings/docker.gpg
+ sh -c chmod a+r /etc/apt/keyrings/docker.gpg
+ sh -c echo "deb [arch=amd64 signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu jammy stable" > /etc/apt/sources.list.d/docker.list
+ sh -c apt-get update -qq >/dev/null
+ sh -c DEBIAN_FRONTEND=noninteractive apt-get install -y -qq --no-install-recommends docker-ce docker-ce-cli containerd.io docker-compose-plugin docker-scan-plugin >/dev/null
+ version_gte 20.10
+ [ -z ]
+ return 0
+ sh -c DEBIAN_FRONTEND=noninteractive apt-get install -y -qq docker-ce-rootless-extras >/dev/null
+ sh -c docker version
Client: Docker Engine - Community
Version: 20.10.21
API version: 1.41
Go version: go1.18.7
Git commit: baeda1f
Built: Tue Oct 25 18:01:58 2022
OS/Arch: linux/amd64
Context: default
Experimental: true

Server: Docker Engine - Community
Engine:
Version: 20.10.21
API version: 1.41 (minimum version 1.12)
Go version: go1.18.7
Git commit: 3056208
Built: Tue Oct 25 17:59:49 2022
OS/Arch: linux/amd64
Experimental: false
containerd:
Version: 1.6.10
GitCommit: 770bd0108c32f3fb5c73ae1264f7e503fe7b2661
runc:
Version: 1.1.4
GitCommit: v1.1.4-0-g5fd4c4d
docker-init:
Version: 0.19.0
GitCommit: de40ad0

================================================================================

To run Docker as a non-privileged user, consider setting up the
Docker daemon in rootless mode for your user:

dockerd-rootless-setuptool.sh install

Visit https://docs.docker.com/go/rootless/ to learn about rootless mode.


To run the Docker daemon as a fully privileged service, but granting non-root
users access, refer to https://docs.docker.com/go/daemon-access/

WARNING: Access to the remote API on a privileged Docker daemon is equivalent
to root access on the host. Refer to the 'Docker daemon attack surface'
documentation for details: https://docs.docker.com/go/attack-surface/

================================================================================

root@localhost:~# docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES

Fetching kwokctl:

root@localhost:~# KWOK_REPO=kubernetes-sigs/kwok
root@localhost:~# KWOK_LATEST_RELEASE=$(curl "https://api.github.com/repos/${KWOK_REPO}/releases/latest" | jq -r '.tag_name')
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 27817 0 27817 0 0 113k 0 --:--:-- --:--:-- --:--:-- 113k

root@localhost:~# snap install go --classic
2022-11-20T22:10:54Z INFO Waiting for automatic snapd restart...
go 1.18.8 from Michael Hudson-Doyle (mwhudson) installed

root@localhost:~# wget -O kwokctl -c "https://github.com/${KWOK_REPO}/releases/download/${KWOK_LATEST_RELEASE}/kwokctl-$(go env GOOS)-$(go env GOARCH)"

2022-11-20 22:11:22 (36.8 MB/s) - ‘kwokctl’ saved [10983856/10983856]

root@localhost:~# chmod +x kwokctl
root@localhost:~# mv kwokctl /usr/local/bin/kwokctl
root@localhost:~# kwokctl --help
Kwokctl is a Kwok cluster management tool

Usage:
kwokctl [command] [flags]
kwokctl [command]

Available Commands:
completion Generate the autocompletion script for the specified shell
create Creates one of [cluster]
delete Deletes one of [cluster]
get Gets one of [artifacts, clusters, kubeconfig]
help Help about any command
kubectl kubectl in cluster
logs Logs one of [audit, etcd, kube-apiserver, kube-controller-manager, kube-scheduler, kwok-controller, prometheus]
snapshot Snapshot [save, restore] one of cluster

Flags:
-h, --help help for kwokctl
--name string cluster name (default "kwok")

Use "kwokctl [command] --help" for more information about a command.

I can then create a first fake cluster locally via Docker:

root@localhost:~# kwokctl create cluster --name=kwok1

Creating cluster "kwok-kwok1"
Pull image registry.k8s.io/etcd:3.5.4-0
3.5.4-0: Pulling from etcd
36698cfa5275: Pull complete
218162c73ec6: Pull complete
942a5aeb4815: Pull complete
3ac2be597443: Pull complete
4f1961be28e9: Pull complete
Digest: sha256:6f72b851544986cb0921b53ea655ec04c36131248f16d4ad110cb3ca0c369dc1
Status: Downloaded newer image for registry.k8s.io/etcd:3.5.4-0
registry.k8s.io/etcd:3.5.4-0
Pull image registry.k8s.io/kube-apiserver:v1.25.2
v1.25.2: Pulling from kube-apiserver
0a602d5f6ca3: Pull complete
c7517d8a12eb: Pull complete
bcebda00fc90: Pull complete
Digest: sha256:86e7b79379dddf58d7b7189d02ca96cc7e07d18efa4eb42adcaa4cf94531b96e
Status: Downloaded newer image for registry.k8s.io/kube-apiserver:v1.25.2
registry.k8s.io/kube-apiserver:v1.25.2
Pull image registry.k8s.io/kube-controller-manager:v1.25.2
v1.25.2: Pulling from kube-controller-manager
0a602d5f6ca3: Already exists
c7517d8a12eb: Already exists
0f45f4a2df25: Pull complete
Digest: sha256:f961aee35fd2e9a5ee057365e56c5bf40a39bfef91f785f312e51891db41876b
Status: Downloaded newer image for registry.k8s.io/kube-controller-manager:v1.25.2
registry.k8s.io/kube-controller-manager:v1.25.2
Pull image registry.k8s.io/kube-scheduler:v1.25.2
v1.25.2: Pulling from kube-scheduler
0a602d5f6ca3: Already exists
c7517d8a12eb: Already exists
82a2de4c4465: Pull complete
Digest: sha256:ef2e24a920a7432aff5b435562301dde3beb528b0c7bbec58ddf0a9af64d5fce
Status: Downloaded newer image for registry.k8s.io/kube-scheduler:v1.25.2
registry.k8s.io/kube-scheduler:v1.25.2
Pull image registry.k8s.io/kwok/kwok:v0.0.1
v0.0.1: Pulling from kwok/kwok
213ec9aee27d: Pull complete
59d214cbc96c: Pull complete
Digest: sha256:8f03c71a0c4b20c7b2ba00a5864efb85dc8cec948d880b3f6cf0d653d037e383
Status: Downloaded newer image for registry.k8s.io/kwok/kwok:v0.0.1
registry.k8s.io/kwok/kwok:v0.0.1
Starting cluster "kwok-kwok1"
[+] Running 6/6
⠿ Network kwok-kwok1 Created 0.0s
⠿ Container kwok-kwok1-etcd Started 0.9s
⠿ Container kwok-kwok1-kube-apiserver Started 1.2s
⠿ Container kwok-kwok1-kwok-controller Started 2.9s
⠿ Container kwok-kwok1-kube-controller-manager Started 2.9s
⠿ Container kwok-kwok1-kube-scheduler Started 3.0s
Cluster "kwok-kwok1" is ready
You can now use your cluster with:

kubectl config use-context kwok-kwok1

Thanks for using kwok!

The fake cluster is there, even though it has no real resources …

root@localhost:~# kubectl config use-context kwok-kwok1
Switched to context "kwok-kwok1".
root@localhost:~# kubectl cluster-info
Kubernetes control plane is running at https://127.0.0.1:32766

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
root@localhost:~# kubectl get nodes -o wide
No resources found
root@localhost:~# kubectl get po,svc -A
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default service/kubernetes ClusterIP 10.0.0.1 <none> 443/TCP 91s

with these Docker containers running locally:

root@localhost:~# docker ps -a

CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
c723c69db233 registry.k8s.io/kube-scheduler:v1.25.2 "/go-runner kube-sch…" 2 minutes ago Up 2 minutes kwok-kwok1-kube-scheduler
4efb63dc359d registry.k8s.io/kube-controller-manager:v1.25.2 "/go-runner kube-con…" 2 minutes ago Up 2 minutes kwok-kwok1-kube-controller-manager
4db85e36b637 registry.k8s.io/kwok/kwok:v0.0.1 "/usr/local/bin/kwok…" 2 minutes ago Up 2 minutes kwok-kwok1-kwok-controller
130c7ea2193f registry.k8s.io/kube-apiserver:v1.25.2 "/go-runner kube-api…" 2 minutes ago Up 2 minutes 0.0.0.0:32766->6443/tcp, :::32766->6443/tcp kwok-kwok1-kube-apiserver
5965ab1f6224 registry.k8s.io/etcd:3.5.4-0 "etcd --data-dir /et…" 2 minutes ago Up 2 minutes 2379-2380/tcp, 4001/tcp, 7001/tcp kwok-kwok1-etcd

And why not create ten or so fake clusters here:

root@localhost:~# for i in {2..10}; do kwokctl create cluster --name=kwok$i; done
Creating cluster "kwok-kwok2"
Starting cluster "kwok-kwok2"
[+] Running 6/6
⠿ Network kwok-kwok2 Created 0.0s
⠿ Container kwok-kwok2-etcd Started 0.4s
⠿ Container kwok-kwok2-kube-apiserver Started 0.6s
⠿ Container kwok-kwok2-kwok-controller Started 1.2s
⠿ Container kwok-kwok2-kube-controller-manager Started 1.3s
⠿ Container kwok-kwok2-kube-scheduler Started 1.2s
Cluster "kwok-kwok2" is ready
You can now use your cluster with:

kubectl config use-context kwok-kwok2

Thanks for using kwok!
Creating cluster "kwok-kwok3"
Starting cluster "kwok-kwok3"
[+] Running 6/6
⠿ Network kwok-kwok3 Created 0.0s
⠿ Container kwok-kwok3-etcd Started 0.4s
⠿ Container kwok-kwok3-kube-apiserver Started 0.6s
⠿ Container kwok-kwok3-kube-scheduler Started 1.2s
⠿ Container kwok-kwok3-kube-controller-manager Started 1.3s
⠿ Container kwok-kwok3-kwok-controller Started 1.1s
Cluster "kwok-kwok3" is ready
You can now use your cluster with:

kubectl config use-context kwok-kwok3

Thanks for using kwok!
.
.
.
.
Cluster "kwok-kwok8" is ready
You can now use your cluster with:

kubectl config use-context kwok-kwok8

Thanks for using kwok!
Creating cluster "kwok-kwok9"
Starting cluster "kwok-kwok9"
[+] Running 6/6
⠿ Network kwok-kwok9 Created 0.0s
⠿ Container kwok-kwok9-etcd Started 0.4s
⠿ Container kwok-kwok9-kube-apiserver Started 0.8s
⠿ Container kwok-kwok9-kwok-controller Started 1.2s
⠿ Container kwok-kwok9-kube-controller-manager Started 1.4s
⠿ Container kwok-kwok9-kube-scheduler Started 1.4s
Cluster "kwok-kwok9" is ready
You can now use your cluster with:

kubectl config use-context kwok-kwok9

Thanks for using kwok!
Creating cluster "kwok-kwok10"
Starting cluster "kwok-kwok10"
[+] Running 6/6
⠿ Network kwok-kwok10 Created 0.1s
⠿ Container kwok-kwok10-etcd Started 0.4s
⠿ Container kwok-kwok10-kube-apiserver Started 0.7s
⠿ Container kwok-kwok10-kube-scheduler Started 1.5s
⠿ Container kwok-kwok10-kube-controller-manager Started 1.5s
⠿ Container kwok-kwok10-kwok-controller Started 1.2s
Cluster "kwok-kwok10" is ready
You can now use your cluster with:

kubectl config use-context kwok-kwok10

Thanks for using kwok!
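
The get subcommand shown in the help above lists all the clusters just created:

kwokctl get clusters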

Even if the price to pay is a large number of running containers here:

and therefore a higher memory load …
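
That overhead can be measured on the Docker side (a sketch; each kwokctl cluster runs five containers, as the 6/6 progress above suggests once the network is counted):

# One-shot snapshot of per-container CPU and memory usage
docker stats --no-stream
# Total number of running containers spawned by the ten clusters
docker ps -q | wc -l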

They are just as easy to delete …

root@localhost:~# for i in {2..10}; do kwokctl delete cluster --name=kwok$i; done
Stopping cluster "kwok-kwok2"
[+] Running 6/6
⠿ Container kwok-kwok2-kwok-controller Removed 0.4s
⠿ Container kwok-kwok2-kube-controller-manager Removed 0.4s
⠿ Container kwok-kwok2-kube-scheduler Removed 0.4s
⠿ Container kwok-kwok2-kube-apiserver Removed 1.3s
⠿ Container kwok-kwok2-etcd Removed 0.2s
⠿ Network kwok-kwok2 Removed 0.1s
Deleting cluster "kwok-kwok2"
Cluster "kwok-kwok2" deleted
Stopping cluster "kwok-kwok3"
[+] Running 6/6
⠿ Container kwok-kwok3-kube-controller-manager Removed 0.3s
⠿ Container kwok-kwok3-kube-scheduler Removed 0.3s
⠿ Container kwok-kwok3-kwok-controller Removed 0.3s
⠿ Container kwok-kwok3-kube-apiserver Removed 1.3s
⠿ Container kwok-kwok3-etcd Removed 0.2s
⠿ Network kwok-kwok3 Removed 0.1s
Deleting cluster "kwok-kwok3"
Cluster "kwok-kwok3" deleted
Stopping cluster "kwok-kwok4"

Creating a hundred fake nodes inside the fake cluster itself …

root@localhost:~# for i in {1..100}; do kubectl apply -f - <<EOF
apiVersion: v1
kind: Node
metadata:
  annotations:
    node.alpha.kubernetes.io/ttl: "0"
    kwok.x-k8s.io/node: fake
  labels:
    beta.kubernetes.io/arch: amd64
    beta.kubernetes.io/os: linux
    kubernetes.io/arch: amd64
    kubernetes.io/hostname: kwok-node-$i
    kubernetes.io/os: linux
    kubernetes.io/role: agent
    node-role.kubernetes.io/agent: ""
    type: kwok
  name: kwok-node-$i
spec:
  taints: # Avoid scheduling actual running pods to fake Node
  - effect: NoSchedule
    key: kwok.x-k8s.io/node
    value: fake
status:
  allocatable:
    cpu: 32
    memory: 256Gi
    pods: 110
  capacity:
    cpu: 32
    memory: 256Gi
    pods: 110
  nodeInfo:
    architecture: amd64
    bootID: ""
    containerRuntimeVersion: ""
    kernelVersion: ""
    kubeProxyVersion: fake
    kubeletVersion: fake
    machineID: ""
    operatingSystem: linux
    osImage: ""
    systemUUID: ""
  phase: Running
EOF
done
node/kwok-node-51 created
node/kwok-node-52 created
node/kwok-node-53 created
node/kwok-node-54 created
node/kwok-node-55 created
node/kwok-node-56 created
node/kwok-node-57 created
node/kwok-node-58 created
node/kwok-node-59 created
node/kwok-node-60 created
node/kwok-node-61 created
node/kwok-node-62 created
node/kwok-node-63 created
node/kwok-node-64 created
node/kwok-node-65 created
node/kwok-node-66 created
node/kwok-node-67 created
node/kwok-node-68 created
node/kwok-node-69 created
node/kwok-node-70 created
node/kwok-node-71 created
node/kwok-node-72 created
node/kwok-node-73 created
node/kwok-node-74 created
node/kwok-node-75 created
node/kwok-node-76 created
node/kwok-node-77 created
node/kwok-node-78 created
node/kwok-node-79 created
node/kwok-node-80 created
node/kwok-node-81 created
node/kwok-node-82 created
node/kwok-node-83 created
node/kwok-node-84 created
node/kwok-node-85 created
node/kwok-node-86 created
node/kwok-node-87 created
node/kwok-node-88 created
node/kwok-node-89 created
node/kwok-node-90 created
node/kwok-node-91 created
node/kwok-node-92 created
node/kwok-node-93 created
node/kwok-node-94 created
node/kwok-node-95 created
node/kwok-node-96 created
node/kwok-node-97 created
node/kwok-node-98 created
node/kwok-node-99 created
node/kwok-node-100 created
root@localhost:~# kubectl get nodes -o wide

NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
kwok-node-1 Ready agent 49s fake 196.168.0.1 <none> <unknown> <unknown> <unknown>
kwok-node-10 Ready agent 47s fake 196.168.0.1 <none> <unknown> <unknown> <unknown>
kwok-node-100 Ready agent 33s fake 196.168.0.1 <none> <unknown> <unknown> <unknown>
kwok-node-11 Ready agent 47s fake 196.168.0.1 <none> <unknown> <unknown> <unknown>
kwok-node-12 Ready agent 47s fake 196.168.0.1 <none> <unknown> <unknown> <unknown>
kwok-node-13 Ready agent 47s fake 196.168.0.1 <none> <unknown> <unknown> <unknown>
kwok-node-14 Ready agent 47s fake 196.168.0.1 <none> <unknown> <unknown> <unknown>
kwok-node-15 Ready agent 46s fake 196.168.0.1 <none> <unknown> <unknown> <unknown>
kwok-node-16 Ready agent 46s fake 196.168.0.1 <none> <unknown> <unknown> <unknown>
kwok-node-17 Ready agent 46s fake 196.168.0.1 <none> <unknown> <unknown> <unknown>
kwok-node-18 Ready agent 46s fake 196.168.0.1 <none> <unknown> <unknown> <unknown>
kwok-node-19 Ready agent 46s fake 196.168.0.1 <none> <unknown> <unknown> <unknown>
kwok-node-2 Ready agent 49s fake 196.168.0.1 <none> <unknown> <unknown> <unknown>
kwok-node-20 Ready agent 46s fake 196.168.0.1 <none> <unknown> <unknown> <unknown>
kwok-node-21 Ready agent 45s fake 196.168.0.1 <none> <unknown> <unknown> <unknown>
kwok-node-22 Ready agent 45s fake 196.168.0.1 <none> <unknown> <unknown> <unknown>
kwok-node-23 Ready agent 45s fake 196.168.0.1 <none> <unknown> <unknown> <unknown>
kwok-node-24 Ready agent 45s fake 196.168.0.1 <none> <unknown> <unknown> <unknown>
kwok-node-25 Ready agent 45s fake 196.168.0.1 <none> <unknown> <unknown> <unknown>
kwok-node-26 Ready agent 45s fake 196.168.0.1 <none> <unknown> <unknown> <unknown>
kwok-node-27 Ready agent 44s fake 196.168.0.1 <none> <unknown> <unknown> <unknown>
kwok-node-28 Ready agent 44s fake 196.168.0.1 <none> <unknown> <unknown> <unknown>
kwok-node-29 Ready agent 44s fake 196.168.0.1 <none> <unknown> <unknown> <unknown>
kwok-node-3 Ready agent 49s fake 196.168.0.1 <none> <unknown> <unknown> <unknown>
kwok-node-30 Ready agent 44s fake 196.168.0.1 <none> <unknown> <unknown> <unknown>
kwok-node-31 Ready agent 44s fake 196.168.0.1 <none> <unknown> <unknown> <unknown>
kwok-node-32 Ready agent 44s fake 196.168.0.1 <none> <unknown> <unknown> <unknown>
kwok-node-33 Ready agent 43s fake 196.168.0.1 <none> <unknown> <unknown> <unknown>
kwok-node-34 Ready agent 43s fake 196.168.0.1 <none> <unknown> <unknown> <unknown>
kwok-node-35 Ready agent 43s fake 196.168.0.1 <none> <unknown> <unknown> <unknown>
kwok-node-36 Ready agent 43s fake 196.168.0.1 <none> <unknown> <unknown> <unknown>
kwok-node-37 Ready agent 43s fake 196.168.0.1 <none> <unknown> <unknown> <unknown>
kwok-node-38 Ready agent 43s fake 196.168.0.1 <none> <unknown> <unknown> <unknown>
kwok-node-39 Ready agent 43s fake 196.168.0.1 <none> <unknown> <unknown> <unknown>
kwok-node-4 Ready agent 48s fake 196.168.0.1 <none> <unknown> <unknown> <unknown>
kwok-node-40 Ready agent 42s fake 196.168.0.1 <none> <unknown> <unknown> <unknown>
kwok-node-41 Ready agent 42s fake 196.168.0.1 <none> <unknown> <unknown> <unknown>
kwok-node-42 Ready agent 42s fake 196.168.0.1 <none> <unknown> <unknown> <unknown>
kwok-node-43 Ready agent 42s fake 196.168.0.1 <none> <unknown> <unknown> <unknown>
kwok-node-44 Ready agent 42s fake 196.168.0.1 <none> <unknown> <unknown> <unknown>
kwok-node-45 Ready agent 42s fake 196.168.0.1 <none> <unknown> <unknown> <unknown>
kwok-node-46 Ready agent 41s fake 196.168.0.1 <none> <unknown> <unknown> <unknown>
kwok-node-47 Ready agent 41s fake 196.168.0.1 <none> <unknown> <unknown> <unknown>
kwok-node-48 Ready agent 41s fake 196.168.0.1 <none> <unknown> <unknown> <unknown>
kwok-node-49 Ready agent 41s fake 196.168.0.1 <none> <unknown> <unknown> <unknown>
kwok-node-5 Ready agent 48s fake 196.168.0.1 <none> <unknown> <unknown> <unknown>
kwok-node-50 Ready agent 41s fake 196.168.0.1 <none> <unknown> <unknown> <unknown>
kwok-node-51 Ready agent 41s fake 196.168.0.1 <none> <unknown> <unknown> <unknown>
kwok-node-52 Ready agent 41s fake 196.168.0.1 <none> <unknown> <unknown> <unknown>
kwok-node-53 Ready agent 40s fake 196.168.0.1 <none> <unknown> <unknown> <unknown>
kwok-node-54 Ready agent 40s fake 196.168.0.1 <none> <unknown> <unknown> <unknown>
kwok-node-55 Ready agent 40s fake 196.168.0.1 <none> <unknown> <unknown> <unknown>
kwok-node-56 Ready agent 40s fake 196.168.0.1 <none> <unknown> <unknown> <unknown>
kwok-node-57 Ready agent 40s fake 196.168.0.1 <none> <unknown> <unknown> <unknown>
kwok-node-58 Ready agent 40s fake 196.168.0.1 <none> <unknown> <unknown> <unknown>
kwok-node-59 Ready agent 39s fake 196.168.0.1 <none> <unknown> <unknown> <unknown>
kwok-node-6 Ready agent 48s fake 196.168.0.1 <none> <unknown> <unknown> <unknown>
kwok-node-60 Ready agent 39s fake 196.168.0.1 <none> <unknown> <unknown> <unknown>
kwok-node-61 Ready agent 39s fake 196.168.0.1 <none> <unknown> <unknown> <unknown>
kwok-node-62 Ready agent 39s fake 196.168.0.1 <none> <unknown> <unknown> <unknown>
kwok-node-63 Ready agent 39s fake 196.168.0.1 <none> <unknown> <unknown> <unknown>
kwok-node-64 Ready agent 39s fake 196.168.0.1 <none> <unknown> <unknown> <unknown>
kwok-node-65 Ready agent 38s fake 196.168.0.1 <none> <unknown> <unknown> <unknown>
kwok-node-66 Ready agent 38s fake 196.168.0.1 <none> <unknown> <unknown> <unknown>
kwok-node-67 Ready agent 38s fake 196.168.0.1 <none> <unknown> <unknown> <unknown>
kwok-node-68 Ready agent 38s fake 196.168.0.1 <none> <unknown> <unknown> <unknown>
kwok-node-69 Ready agent 38s fake 196.168.0.1 <none> <unknown> <unknown> <unknown>
kwok-node-7 Ready agent 48s fake 196.168.0.1 <none> <unknown> <unknown> <unknown>
kwok-node-70 Ready agent 38s fake 196.168.0.1 <none> <unknown> <unknown> <unknown>
kwok-node-71 Ready agent 37s fake 196.168.0.1 <none> <unknown> <unknown> <unknown>
kwok-node-72 Ready agent 37s fake 196.168.0.1 <none> <unknown> <unknown> <unknown>
kwok-node-73 Ready agent 37s fake 196.168.0.1 <none> <unknown> <unknown> <unknown>
kwok-node-74 Ready agent 37s fake 196.168.0.1 <none> <unknown> <unknown> <unknown>
kwok-node-75 Ready agent 37s fake 196.168.0.1 <none> <unknown> <unknown> <unknown>
kwok-node-76 Ready agent 37s fake 196.168.0.1 <none> <unknown> <unknown> <unknown>
kwok-node-77 Ready agent 36s fake 196.168.0.1 <none> <unknown> <unknown> <unknown>
kwok-node-78 Ready agent 36s fake 196.168.0.1 <none> <unknown> <unknown> <unknown>
kwok-node-79 Ready agent 36s fake 196.168.0.1 <none> <unknown> <unknown> <unknown>
kwok-node-8 Ready agent 48s fake 196.168.0.1 <none> <unknown> <unknown> <unknown>
kwok-node-80 Ready agent 36s fake 196.168.0.1 <none> <unknown> <unknown> <unknown>
kwok-node-81 Ready agent 36s fake 196.168.0.1 <none> <unknown> <unknown> <unknown>
kwok-node-82 Ready agent 36s fake 196.168.0.1 <none> <unknown> <unknown> <unknown>
kwok-node-83 Ready agent 35s fake 196.168.0.1 <none> <unknown> <unknown> <unknown>
kwok-node-84 Ready agent 35s fake 196.168.0.1 <none> <unknown> <unknown> <unknown>
kwok-node-85 Ready agent 35s fake 196.168.0.1 <none> <unknown> <unknown> <unknown>
kwok-node-86 Ready agent 35s fake 196.168.0.1 <none> <unknown> <unknown> <unknown>
kwok-node-87 Ready agent 35s fake 196.168.0.1 <none> <unknown> <unknown> <unknown>
kwok-node-88 Ready agent 35s fake 196.168.0.1 <none> <unknown> <unknown> <unknown>
kwok-node-89 Ready agent 35s fake 196.168.0.1 <none> <unknown> <unknown> <unknown>
kwok-node-9 Ready agent 47s fake 196.168.0.1 <none> <unknown> <unknown> <unknown>
kwok-node-90 Ready agent 34s fake 196.168.0.1 <none> <unknown> <unknown> <unknown>
kwok-node-91 Ready agent 34s fake 196.168.0.1 <none> <unknown> <unknown> <unknown>
kwok-node-92 Ready agent 34s fake 196.168.0.1 <none> <unknown> <unknown> <unknown>
kwok-node-93 Ready agent 34s fake 196.168.0.1 <none> <unknown> <unknown> <unknown>
kwok-node-94 Ready agent 34s fake 196.168.0.1 <none> <unknown> <unknown> <unknown>
kwok-node-95 Ready agent 34s fake 196.168.0.1 <none> <unknown> <unknown> <unknown>
kwok-node-96 Ready agent 33s fake 196.168.0.1 <none> <unknown> <unknown> <unknown>
kwok-node-97 Ready agent 33s fake 196.168.0.1 <none> <unknown> <unknown> <unknown>
kwok-node-98 Ready agent 33s fake 196.168.0.1 <none> <unknown> <unknown> <unknown>
kwok-node-99 Ready agent 33s fake 196.168.0.1 <none> <unknown> <unknown> <unknown>

With a thousand fake Pods spread across this hundred fake nodes …

root@localhost:~# kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: fake-pod
  namespace: default
spec:
  replicas: 1000
  selector:
    matchLabels:
      app: fake-pod
  template:
    metadata:
      labels:
        app: fake-pod
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: type
                operator: In
                values:
                - kwok
      # A taint was added to the automatically created Node.
      # You can remove the Node's taint or add this toleration.
      tolerations:
      - key: "kwok.x-k8s.io/node"
        operator: "Exists"
        effect: "NoSchedule"
      containers:
      - name: fake-container
        image: fake-image
EOF
deployment.apps/fake-pod created
fake-pod-6cf6574478-wz4jc   1/1     Running   0          15s   10.0.1.150   kwok-node-89    <none>           <none>
fake-pod-6cf6574478-wzn6c 1/1 Running 0 17s 10.0.1.112 kwok-node-97 <none> <none>
fake-pod-6cf6574478-x2ptf 1/1 Running 0 24s 10.0.0.228 kwok-node-28 <none> <none>
fake-pod-6cf6574478-x2qp7 1/1 Running 0 4s 10.0.2.100 kwok-node-42 <none> <none>
fake-pod-6cf6574478-x56kp 1/1 Running 0 2s 10.0.2.140 kwok-node-68 <none> <none>
fake-pod-6cf6574478-x5c2p 1/1 Running 0 29s 10.0.0.117 kwok-node-25 <none> <none>
fake-pod-6cf6574478-x5q56 1/1 Running 0 14s 10.0.1.170 kwok-node-77 <none> <none>
fake-pod-6cf6574478-x6j68 1/1 Running 0 30s 10.0.0.94 kwok-node-78 <none> <none>
fake-pod-6cf6574478-x6qng 1/1 Running 0 34s 10.0.0.13 kwok-node-39 <none> <none>
fake-pod-6cf6574478-x77jl 1/1 Running 0 4s 10.0.2.95 kwok-node-49 <none> <none>
fake-pod-6cf6574478-x88tv 1/1 Running 0 28s 10.0.0.139 kwok-node-14 <none> <none>
fake-pod-6cf6574478-xbtzb 1/1 Running 0 30s 10.0.0.112 kwok-node-8 <none> <none>
fake-pod-6cf6574478-xbwwd 1/1 Running 0 6s 10.0.2.60 kwok-node-22 <none> <none>
fake-pod-6cf6574478-xbx4c 1/1 Running 0 33s 10.0.0.52 kwok-node-25 <none> <none>
fake-pod-6cf6574478-xc6c4 1/1 Running 0 9s 10.0.2.8 kwok-node-44 <none> <none>
fake-pod-6cf6574478-xfv5l 1/1 Running 0 9s 10.0.2.6 kwok-node-89 <none> <none>
fake-pod-6cf6574478-xgnh7 1/1 Running 0 18s 10.0.1.86 kwok-node-4 <none> <none>
fake-pod-6cf6574478-xk786 1/1 Running 0 27s 10.0.0.173 kwok-node-35 <none> <none>
fake-pod-6cf6574478-xkrtn 1/1 Running 0 34s 10.0.0.23 kwok-node-46 <none> <none>
fake-pod-6cf6574478-xmhgt 1/1 Running 0 10s 10.0.1.240 kwok-node-8 <none> <none>
fake-pod-6cf6574478-xn2zd 1/1 Running 0 5s 10.0.2.83 kwok-node-59 <none> <none>
fake-pod-6cf6574478-xp2nt 1/1 Running 0 26s 10.0.0.178 kwok-node-64 <none> <none>
fake-pod-6cf6574478-xq6dt 1/1 Running 0 21s 10.0.1.22 kwok-node-79 <none> <none>
fake-pod-6cf6574478-xqbnh 1/1 Running 0 25s 10.0.0.200 kwok-node-12 <none> <none>
fake-pod-6cf6574478-xs4gr 1/1 Running 0 8s 10.0.2.23 kwok-node-18 <none> <none>
fake-pod-6cf6574478-xtjp6 1/1 Running 0 14s 10.0.1.173 kwok-node-15 <none> <none>
fake-pod-6cf6574478-xwjxn 1/1 Running 0 8s 10.0.2.19 kwok-node-46 <none> <none>
fake-pod-6cf6574478-xwnh4 1/1 Running 0 15s 10.0.1.148 kwok-node-67 <none> <none>
fake-pod-6cf6574478-xx7nr 1/1 Running 0 16s 10.0.1.135 kwok-node-47 <none> <none>
fake-pod-6cf6574478-xxghz 1/1 Running 0 13s 10.0.1.189 kwok-node-23 <none> <none>
fake-pod-6cf6574478-xxkgl 1/1 Running 0 24s 10.0.0.215 kwok-node-54 <none> <none>
fake-pod-6cf6574478-xzx9w 1/1 Running 0 1s 10.0.2.156 kwok-node-48 <none> <none>
fake-pod-6cf6574478-z24kw 1/1 Running 0 9s 10.0.2.10 kwok-node-99 <none> <none>
fake-pod-6cf6574478-z4jcq 1/1 Running 0 20s 10.0.1.47 kwok-node-64 <none> <none>
fake-pod-6cf6574478-z4wmz 1/1 Running 0 7s 10.0.2.34 kwok-node-5 <none> <none>
fake-pod-6cf6574478-z8ngw 1/1 Running 0 34s 10.0.0.1 kwok-node-71 <none> <none>
fake-pod-6cf6574478-z9dgl 1/1 Running 0 25s 10.0.0.195 kwok-node-43 <none> <none>
fake-pod-6cf6574478-z9jdp 1/1 Running 0 21s 10.0.1.30 kwok-node-44 <none> <none>
fake-pod-6cf6574478-zckwd 1/1 Running 0 10s 10.0.1.251 kwok-node-81 <none> <none>
fake-pod-6cf6574478-zfmdj 1/1 Running 0 30s 10.0.0.105 kwok-node-91 <none> <none>
fake-pod-6cf6574478-zl8xb 1/1 Running 0 32s 10.0.0.61 kwok-node-3 <none> <none>
fake-pod-6cf6574478-zlrlv 1/1 Running 0 9s 10.0.2.2 kwok-node-50 <none> <none>
fake-pod-6cf6574478-zmq2b 1/1 Running 0 18s 10.0.1.88 kwok-node-1 <none> <none>
fake-pod-6cf6574478-znvn2 1/1 Running 0 31s 10.0.0.90 kwok-node-58 <none> <none>
fake-pod-6cf6574478-zpkg7 1/1 Running 0 14s 10.0.1.158 kwok-node-90 <none> <none>
fake-pod-6cf6574478-zqvw2 1/1 Running 0 7s 10.0.2.43 kwok-node-66 <none> <none>
fake-pod-6cf6574478-zqzqp 1/1 Running 0 34s 10.0.0.27 kwok-node-20 <none> <none>
fake-pod-6cf6574478-zqzx2 1/1 Running 0 17s 10.0.1.111 kwok-node-14 <none> <none>
fake-pod-6cf6574478-zwhn6 1/1 Running 0 10s 10.0.1.238 kwok-node-79 <none> <none>
fake-pod-6cf6574478-zxhxm 1/1 Running 0 14s 10.0.1.159 kwok-node-99 <none> <none>

Fetching Kind to test fake-kubelet:

A fake kubelet can simulate any number of nodes and maintain pods on those nodes; it is useful for testing the control plane. fake-kubelet is a simulation of a Kubernetes node.

It can be used as an alternative to Kind in scenarios where you do not actually need to run the pods.

root@localhost:~# curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.17.0/kind-linux-amd64 && chmod +x ./kind && mv ./kind /usr/local/bin/kind
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 97 100 97 0 0 925 0 --:--:-- --:--:-- --:--:-- 923
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
100 6766k 100 6766k 0 0 10.5M 0 --:--:-- --:--:-- --:--:-- 10.5M
root@localhost:~# kind
kind creates and manages local Kubernetes clusters using Docker container 'nodes'

Usage:
kind [command]

Available Commands:
build Build one of [node-image]
completion Output shell completion code for the specified shell (bash, zsh or fish)
create Creates one of [cluster]
delete Deletes one of [cluster]
export Exports one of [kubeconfig, logs]
get Gets one of [clusters, nodes, kubeconfig]
help Help about any command
load Loads images into nodes
version Prints the kind CLI version

Flags:
-h, --help help for kind
--loglevel string DEPRECATED: see -v instead
-q, --quiet silence all stderr output
-v, --verbosity int32 info log verbosity, higher value produces more output
--version version for kind

Use "kind [command] --help" for more information about a command.

Then fetching fake-k8s:

root@localhost:~# wget -c https://github.com/wzshiming/fake-k8s/releases/download/v0.2.0/fake-k8s_linux_amd64

root@localhost:~# chmod +x fake-k8s_linux_amd64 && mv fake-k8s_linux_amd64 /usr/local/bin/fake-k8s

fake-k8s is a tool for running fake Kubernetes clusters:

root@localhost:~# fake-k8s create cluster --name c1

Creating cluster "fake-k8s-c1"
Download https://dl.k8s.io/release/v1.24.1/bin/linux/amd64/kubectl
############################################################| 100% 0s
Download https://dl.k8s.io/release/v1.24.1/bin/linux/amd64/kube-apiserver
############################################################| 100% 1s
Download https://dl.k8s.io/release/v1.24.1/bin/linux/amd64/kube-controller-manager
############################################################| 100% 6s
Download https://dl.k8s.io/release/v1.24.1/bin/linux/amd64/kube-scheduler
############################################################| 100% 0s
Download https://github.com/wzshiming/fake-kubelet/releases/download/v0.7.4/fake-kubelet_linux_amd64
############################################################| 100% 2s
Download https://github.com/etcd-io/etcd/releases/download/v3.5.4/etcd-v3.5.4-linux-amd64.tar.gz
############################################################| 100% 1s
Starting cluster "fake-k8s-c1"
Wait for cluster "fake-k8s-c1" to be ready
Cluster "fake-k8s-c1" is ready
> kubectl --context fake-k8s-c1 get node
NAME STATUS ROLES AGE VERSION
fake-0 Ready agent 1s fake
fake-1 Ready agent 1s fake
fake-2 Ready agent 1s fake
fake-3 Ready agent 1s fake
fake-4 Ready agent 1s fake

or a second one …

root@localhost:~# fake-k8s create cluster --name c2
Creating cluster "fake-k8s-c2"
Starting cluster "fake-k8s-c2"
Wait for cluster "fake-k8s-c2" to be ready
Cluster "fake-k8s-c2" is ready
> kubectl --context fake-k8s-c2 get node
NAME STATUS ROLES AGE VERSION
fake-0 Ready agent 0s fake
fake-1 Ready agent 0s fake
fake-2 Ready agent 0s fake
fake-3 Ready agent 0s fake
fake-4 Ready agent 0s fake
root@localhost:~# fake-k8s get clusters 
c1
c2

and they are easily deleted …

root@localhost:~# fake-k8s delete cluster --name c1 && fake-k8s delete cluster --name c2
Stopping cluster "fake-k8s-c1"
Deleting cluster "fake-k8s-c1"
Cluster "fake-k8s-c1" deleted
Stopping cluster "fake-k8s-c2"
Deleting cluster "fake-k8s-c2"

We can also create fake pods …

root@localhost:~# fake-k8s create cluster
Creating cluster "fake-k8s-default"
Starting cluster "fake-k8s-default"
Wait for cluster "fake-k8s-default" to be ready
Cluster "fake-k8s-default" is ready
> kubectl --context fake-k8s-default get node
NAME STATUS ROLES AGE VERSION
fake-0 Ready agent 1s fake
fake-1 Ready agent 1s fake
fake-2 Ready agent 1s fake
fake-3 Ready agent 1s fake
fake-4 Ready agent 1s fake
root@localhost:~# kubectl --context=fake-k8s-default get node -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
fake-0 Ready agent 7s fake 196.168.0.1 <none> <unknown> <unknown> <unknown>
fake-1 Ready agent 7s fake 196.168.0.1 <none> <unknown> <unknown> <unknown>
fake-2 Ready agent 7s fake 196.168.0.1 <none> <unknown> <unknown> <unknown>
fake-3 Ready agent 7s fake 196.168.0.1 <none> <unknown> <unknown> <unknown>
fake-4 Ready agent 7s fake 196.168.0.1 <none> <unknown> <unknown> <unknown>
root@localhost:~# kubectl --context=fake-k8s-default apply -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: fake-pod
  namespace: default
spec:
  replicas: 100
  selector:
    matchLabels:
      app: fake-pod
  template:
    metadata:
      labels:
        app: fake-pod
    spec:
      containers:
      - name: fake-pod
        image: fake
EOF
deployment.apps/fake-pod created
root@localhost:~# kubectl --context=fake-k8s-default get pod -o wide

NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
fake-pod-6f5fffcbc-26m97 1/1 Running 0 38s 10.0.0.58 fake-2 <none> <none>
fake-pod-6f5fffcbc-287rc 1/1 Running 0 38s 10.0.0.64 fake-0 <none> <none>
fake-pod-6f5fffcbc-28s2q 1/1 Running 0 38s 10.0.0.60 fake-0 <none> <none>
fake-pod-6f5fffcbc-2df4j 1/1 Running 0 37s 10.0.0.81 fake-4 <none> <none>
fake-pod-6f5fffcbc-2xw4g 1/1 Running 0 37s 10.0.0.90 fake-4 <none> <none>
fake-pod-6f5fffcbc-4jwlx 1/1 Running 0 40s 10.0.0.13 fake-2 <none> <none>
fake-pod-6f5fffcbc-5dhxh 1/1 Running 0 37s 10.0.0.79 fake-0 <none> <none>
fake-pod-6f5fffcbc-5nd6d 1/1 Running 0 37s 10.0.0.82 fake-0 <none> <none>
.
.
.
.
fake-pod-6f5fffcbc-zbwf5 1/1 Running 0 40s 10.0.0.31 fake-4 <none> <none>
fake-pod-6f5fffcbc-zg4nx 1/1 Running 0 38s 10.0.0.68 fake-4 <none> <none>
fake-pod-6f5fffcbc-zgwln 1/1 Running 0 38s 10.0.0.65 fake-3 <none> <none>
fake-pod-6f5fffcbc-zmrjz 1/1 Running 0 40s 10.0.0.21 fake-2 <none> <none>
fake-pod-6f5fffcbc-zr9fv 1/1 Running 0 39s 10.0.0.51 fake-2 <none> <none>
fake-pod-6f5fffcbc-zrlx6 1/1 Running 0 39s 10.0.0.40 fake-4 <none> <none>

As at the very beginning of this article, I can run a simulation, but this time using fake-kubelet as mentioned above. For this, I use a cluster created with Kind:

root@localhost:~# kind create cluster

Creating cluster "kind" ...
✓ Ensuring node image (kindest/node:v1.25.3) 🖼
✓ Preparing nodes 📦
✓ Writing configuration 📜
✓ Starting control-plane 🕹️
✓ Installing CNI 🔌
✓ Installing StorageClass 💾
Set kubectl context to "kind-kind"
You can now use your cluster with:

kubectl cluster-info --context kind-kind

Not sure what to do next? 😅 Check out https://kind.sigs.k8s.io/docs/user/quick-start/
root@localhost:~# kubectl cluster-info --context kind-kind
Kubernetes control plane is running at https://127.0.0.1:38141
CoreDNS is running at https://127.0.0.1:38141/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

and the fake nodes appear with this YAML manifest:

root@localhost:~# kubectl apply -f https://raw.githubusercontent.com/wzshiming/fake-kubelet/master/deploy.yaml

serviceaccount/fake-kubelet created
clusterrole.rbac.authorization.k8s.io/fake-kubelet created
clusterrolebinding.rbac.authorization.k8s.io/fake-kubelet created
configmap/fake-kubelet created
deployment.apps/fake-kubelet created

root@localhost:~# kubectl get node -o wide

NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
fake-0 Ready agent 4s fake 10.244.0.3 <none> <unknown> <unknown> <unknown>
fake-1 Ready agent 4s fake 10.244.0.3 <none> <unknown> <unknown> <unknown>
fake-2 Ready agent 4s fake 10.244.0.3 <none> <unknown> <unknown> <unknown>
fake-3 Ready agent 4s fake 10.244.0.3 <none> <unknown> <unknown> <unknown>
fake-4 Ready agent 4s fake 10.244.0.3 <none> <unknown> <unknown> <unknown>
kind-control-plane Ready control-plane 31s v1.25.3 192.168.144.2 <none> Ubuntu 22.04.1 LTS 5.15.0-47-generic containerd://1.6.9
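
The default manifest generates five simulated nodes (fake-0 to fake-4). As a sketch, and assuming the GENERATE_REPLICAS environment variable exposed in the project's deploy.yaml controls that count, it could be raised like this:

# Hypothetical: ask fake-kubelet to generate 20 fake nodes instead of 5
kubectl -n kube-system set env deployment/fake-kubelet GENERATE_REPLICAS=20
kubectl get nodes -w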

and fake Pods too …

root@localhost:~# kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: fake-pod
  namespace: default
spec:
  replicas: 10
  selector:
    matchLabels:
      app: fake-pod
  template:
    metadata:
      labels:
        app: fake-pod
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: type
                operator: In
                values:
                - fake-kubelet
      tolerations: # A taint was added to the automatically created Node. You can remove the Node's taint or add this toleration
      - key: "fake-kubelet/provider"
        operator: "Exists"
        effect: "NoSchedule"
      # nodeName: fake-0 # Or schedule directly to a given fake node
      containers:
      - name: fake-pod
        image: fake
EOF
deployment.apps/fake-pod created
root@localhost:~# kubectl get pod -o wide

NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
fake-pod-7f8c95d884-8cd7w 1/1 Running 0 55s 10.0.0.15 fake-3 <none> <none>
fake-pod-7f8c95d884-bbtnr 1/1 Running 0 55s 10.0.0.20 fake-0 <none> <none>
fake-pod-7f8c95d884-f72sz 1/1 Running 0 55s 10.0.0.19 fake-4 <none> <none>
fake-pod-7f8c95d884-gkdz8 1/1 Running 0 55s 10.0.0.13 fake-2 <none> <none>
fake-pod-7f8c95d884-jvb7j 1/1 Running 0 55s 10.0.0.14 fake-4 <none> <none>
fake-pod-7f8c95d884-kgnzl 1/1 Running 0 55s 10.0.0.12 fake-0 <none> <none>
fake-pod-7f8c95d884-q8cxh 1/1 Running 0 55s 10.0.0.17 fake-3 <none> <none>
fake-pod-7f8c95d884-q8sqc 1/1 Running 0 55s 10.0.0.18 fake-2 <none> <none>
fake-pod-7f8c95d884-rgg67 1/1 Running 0 55s 10.0.0.11 fake-1 <none> <none>
fake-pod-7f8c95d884-zs9vf 1/1 Running 0 55s 10.0.0.16 fake-1 <none> <none>

As the GitHub repository suggests, why not create nodes in the cluster with a specific hardware architecture such as ARM64 …

root@localhost:~# for i in {10..15}; do kubectl apply -f - <<EOF
apiVersion: v1
kind: Node
metadata:
  annotations:
    node.alpha.kubernetes.io/ttl: "0"
  labels:
    app: fake-kubelet
    beta.kubernetes.io/arch: arm64
    beta.kubernetes.io/os: linux
    kubernetes.io/arch: arm64
    kubernetes.io/hostname: fake-arm-$i
    kubernetes.io/os: linux
    kubernetes.io/role: agent
    node-role.kubernetes.io/agent: ""
    type: fake-kubelet # Matches fake-kubelet's TAKE_OVER_LABELS_SELECTOR environment variable; this node will be taken over by fake-kubelet
  name: fake-arm-$i
spec:
  taints: # Avoid scheduling actual running pods to fake Node
  - effect: NoSchedule
    key: fake-kubelet/provider
    value: fake
status:
  allocatable:
    cpu: 32
    memory: 256Gi
    pods: 110
  capacity:
    cpu: 32
    memory: 256Gi
    pods: 110
  nodeInfo:
    architecture: arm64
    bootID: ""
    containerRuntimeVersion: ""
    kernelVersion: ""
    kubeProxyVersion: fake
    kubeletVersion: fake
    machineID: ""
    operatingSystem: linux
    osImage: ""
    systemUUID: ""
  phase: Running
EOF
done
node/fake-arm-10 created
node/fake-arm-11 created
node/fake-arm-12 created
node/fake-arm-13 created
node/fake-arm-14 created
node/fake-arm-15 created

and they do show up …

root@localhost:~# kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
fake-0 Ready agent 7m42s fake 10.244.0.3 <none> <unknown> <unknown> <unknown>
fake-1 Ready agent 7m42s fake 10.244.0.3 <none> <unknown> <unknown> <unknown>
fake-2 Ready agent 7m42s fake 10.244.0.3 <none> <unknown> <unknown> <unknown>
fake-3 Ready agent 7m42s fake 10.244.0.3 <none> <unknown> <unknown> <unknown>
fake-4 Ready agent 7m42s fake 10.244.0.3 <none> <unknown> <unknown> <unknown>
fake-arm-10 Ready agent 26s fake 10.244.0.3 <none> <unknown> <unknown> <unknown>
fake-arm-11 Ready agent 26s fake 10.244.0.3 <none> <unknown> <unknown> <unknown>
fake-arm-12 Ready agent 26s fake 10.244.0.3 <none> <unknown> <unknown> <unknown>
fake-arm-13 Ready agent 25s fake 10.244.0.3 <none> <unknown> <unknown> <unknown>
fake-arm-14 Ready agent 25s fake 10.244.0.3 <none> <unknown> <unknown> <unknown>
fake-arm-15 Ready agent 25s fake 10.244.0.3 <none> <unknown> <unknown> <unknown>
kind-control-plane Ready control-plane 8m9s v1.25.3 192.168.144.2 <none> Ubuntu 22.04.1 LTS 5.15.0-47-generic containerd://1.6.9

with moderate resources consumed on the node …

root@localhost:~# kubectl get po,svc -A

NAMESPACE NAME READY STATUS RESTARTS AGE
default pod/fake-pod-7f8c95d884-8cd7w 1/1 Running 0 5m44s
default pod/fake-pod-7f8c95d884-bbtnr 1/1 Running 0 5m44s
default pod/fake-pod-7f8c95d884-f72sz 1/1 Running 0 5m44s
default pod/fake-pod-7f8c95d884-gkdz8 1/1 Running 0 5m44s
default pod/fake-pod-7f8c95d884-jvb7j 1/1 Running 0 5m44s
default pod/fake-pod-7f8c95d884-kgnzl 1/1 Running 0 5m44s
default pod/fake-pod-7f8c95d884-q8cxh 1/1 Running 0 5m44s
default pod/fake-pod-7f8c95d884-q8sqc 1/1 Running 0 5m44s
default pod/fake-pod-7f8c95d884-rgg67 1/1 Running 0 5m44s
default pod/fake-pod-7f8c95d884-zs9vf 1/1 Running 0 5m44s
kube-system pod/coredns-565d847f94-65z2r 1/1 Running 0 8m47s
kube-system pod/coredns-565d847f94-srtcf 1/1 Running 0 8m47s
kube-system pod/etcd-kind-control-plane 1/1 Running 0 9m1s
kube-system pod/fake-kubelet-64dc9d99c7-9w924 1/1 Running 0 8m39s
kube-system pod/kindnet-6cf2q 1/1 Running 0 8m37s
kube-system pod/kindnet-bjnr9 1/1 Running 0 8m37s
kube-system pod/kindnet-c8jlf 1/1 Running 0 81s
kube-system pod/kindnet-j6k6g 1/1 Running 0 80s
kube-system pod/kindnet-jv5gw 1/1 Running 0 81s
kube-system pod/kindnet-ksz2q 1/1 Running 0 8m37s
kube-system pod/kindnet-lh9vb 1/1 Running 0 81s
kube-system pod/kindnet-plgv7 1/1 Running 0 8m37s
kube-system pod/kindnet-r8s7c 1/1 Running 0 80s
kube-system pod/kindnet-rtl4q 1/1 Running 0 8m48s
kube-system pod/kindnet-vlw7s 1/1 Running 0 80s
kube-system pod/kindnet-w7ld2 1/1 Running 0 8m37s
kube-system pod/kube-apiserver-kind-control-plane 1/1 Running 0 9m1s
kube-system pod/kube-controller-manager-kind-control-plane 1/1 Running 0 9m
kube-system pod/kube-proxy-69wbv 1/1 Running 0 80s
kube-system pod/kube-proxy-6n5kd 1/1 Running 0 81s
kube-system pod/kube-proxy-7c6qx 1/1 Running 0 8m37s
kube-system pod/kube-proxy-7p2gn 1/1 Running 0 81s
kube-system pod/kube-proxy-9mzbp 1/1 Running 0 8m37s
kube-system pod/kube-proxy-cbk8k 1/1 Running 0 8m37s
kube-system pod/kube-proxy-d48kp 1/1 Running 0 80s
kube-system pod/kube-proxy-g9klm 1/1 Running 0 81s
kube-system pod/kube-proxy-hns8c 1/1 Running 0 8m48s
kube-system pod/kube-proxy-ndq9x 1/1 Running 0 80s
kube-system pod/kube-proxy-p45rq 1/1 Running 0 8m37s
kube-system pod/kube-proxy-rkcbc 1/1 Running 0 8m37s
kube-system pod/kube-scheduler-kind-control-plane 1/1 Running 0 9m2s
local-path-storage pod/local-path-provisioner-684f458cdd-pvx6j 1/1 Running 0 8m47s

NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 9m2s
kube-system service/kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP,9153/TCP 9m1s
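
To actually place Pods on those simulated ARM64 nodes, a deployment can target them by architecture; a minimal sketch reusing the fake image and the fake-kubelet/provider toleration from above (the fake-arm-pod name is just for illustration):

root@localhost:~# kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: fake-arm-pod
  namespace: default
spec:
  replicas: 5
  selector:
    matchLabels:
      app: fake-arm-pod
  template:
    metadata:
      labels:
        app: fake-arm-pod
    spec:
      nodeSelector:
        kubernetes.io/arch: arm64 # Land only on the simulated ARM64 nodes
      tolerations:
      - key: "fake-kubelet/provider"
        operator: "Exists"
        effect: "NoSchedule"
      containers:
      - name: fake-pod
        image: fake
EOF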

So here, fake-kubelet is a simulation of a Kubernetes node. There is also fake-k8s, which can start a cluster using fake-kubelet's simulated nodes. It can be used as an alternative to Kind in scenarios where you do not actually need to run Pods …

Given Kubernetes' scalability limits, a simulation tool can be useful, even though it will not really let you test actual behavior and resource consumption, especially for large clusters:

As a reminder:

A cluster is a set of nodes (physical or virtual machines) running Kubernetes agents and managed by the control plane. Kubernetes v1.25 supports clusters of up to 5,000 nodes. More precisely, Kubernetes is designed to accommodate configurations that meet all of the following criteria:

  • No more than 110 Pods per node
  • No more than 5,000 nodes
  • No more than 150,000 Pods in total
  • No more than 300,000 containers in total

You can scale your cluster by adding or removing nodes; how you do so depends on how your cluster is deployed …
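
A quick way to see how far a (real or simulated) cluster sits from those thresholds (a sketch with plain kubectl and shell arithmetic):

# Compare current counts against the documented v1.25 scalability thresholds
nodes=$(kubectl get nodes --no-headers | wc -l)
pods=$(kubectl get pods -A --no-headers | wc -l)
echo "nodes: ${nodes} / 5000"
echo "pods:  ${pods} / 150000"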

To be continued!
