Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1621615143 - Will randomize all specs
Will run 5484 specs

Running in parallel across 10 nodes

May 21 16:39:05.126: INFO: >>> kubeConfig: /root/.kube/config
May 21 16:39:05.130: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
May 21 16:39:05.154: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
May 21 16:39:05.201: INFO: 21 / 21 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
May 21 16:39:05.201: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
May 21 16:39:05.201: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
May 21 16:39:05.209: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'create-loop-devs' (0 seconds elapsed)
May 21 16:39:05.209: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
May 21 16:39:05.209: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-multus-ds' (0 seconds elapsed)
May 21 16:39:05.209: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
May 21 16:39:05.209: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'tune-sysctls' (0 seconds elapsed)
May 21 16:39:05.209: INFO: e2e test version: v1.19.11
May 21 16:39:05.211: INFO: kube-apiserver version: v1.19.11
May 21 16:39:05.211: INFO: >>> kubeConfig: /root/.kube/config
May 21 16:39:05.217: INFO: Cluster IP family: ipv4
May 21 16:39:05.222: INFO: >>> kubeConfig: /root/.kube/config
May 21 16:39:05.242: INFO: Cluster IP family: ipv4
May 21 16:39:05.225: INFO: >>> kubeConfig: /root/.kube/config
May 21 16:39:05.250: INFO: Cluster IP family: ipv4
May 21 16:39:05.223: INFO: >>> kubeConfig: /root/.kube/config
May 21 16:39:05.252: INFO: Cluster IP family: ipv4
May 21 16:39:05.238: INFO: >>> kubeConfig: /root/.kube/config
May 21 
16:39:05.258: INFO: Cluster IP family: ipv4 May 21 16:39:05.244: INFO: >>> kubeConfig: /root/.kube/config May 21 16:39:05.262: INFO: Cluster IP family: ipv4 May 21 16:39:05.246: INFO: >>> kubeConfig: /root/.kube/config May 21 16:39:05.265: INFO: Cluster IP family: ipv4 May 21 16:39:05.253: INFO: >>> kubeConfig: /root/.kube/config May 21 16:39:05.270: INFO: Cluster IP family: ipv4 SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ May 21 16:39:05.295: INFO: >>> kubeConfig: /root/.kube/config May 21 16:39:05.314: INFO: Cluster IP family: ipv4 SSSSSSSSSSS ------------------------------ May 21 16:39:05.299: INFO: >>> kubeConfig: /root/.kube/config May 21 16:39:05.318: INFO: Cluster IP family: ipv4 SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 16:39:05.472: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl May 21 16:39:05.496: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled May 21 16:39:05.499: INFO: No PSP annotation exists on dry run pod; assuming PodSecurityPolicy is disabled STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247 [It] should reuse port when apply to an existing SVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:806 STEP: creating Agnhost SVC May 21 16:39:05.503: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=kubectl-1258 create -f -' May 21 16:39:05.770: INFO: stderr: "" May 21 16:39:05.770: INFO: stdout: "service/agnhost-primary created\n" STEP: getting the original port May 21 16:39:05.770: INFO: 
Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=kubectl-1258 get service agnhost-primary -o jsonpath={.spec.ports[0].port}'
May 21 16:39:05.883: INFO: stderr: ""
May 21 16:39:05.883: INFO: stdout: "6379"
STEP: applying the same configuration
May 21 16:39:05.883: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=kubectl-1258 apply -f -'
May 21 16:39:06.162: INFO: stderr: "Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply\n"
May 21 16:39:06.162: INFO: stdout: "service/agnhost-primary configured\n"
STEP: getting the port after applying configuration
May 21 16:39:06.162: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=kubectl-1258 get service agnhost-primary -o jsonpath={.spec.ports[0].port}'
May 21 16:39:06.283: INFO: stderr: ""
May 21 16:39:06.283: INFO: stdout: "6379"
STEP: checking the result
[AfterEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 21 16:39:06.283: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1258" for this suite.
• ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl apply should reuse port when apply to an existing SVC","total":-1,"completed":1,"skipped":139,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 16:39:05.499: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl May 21 16:39:05.523: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled May 21 16:39:05.526: INFO: No PSP annotation exists on dry run pod; assuming PodSecurityPolicy is disabled STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247 [It] should check if cluster-info dump succeeds /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1077 STEP: running cluster-info dump May 21 16:39:05.529: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=kubectl-4212 cluster-info dump' May 21 16:39:06.224: INFO: stderr: "" May 21 16:39:06.228: INFO: stdout: "{\n \"kind\": \"NodeList\",\n \"apiVersion\": \"v1\",\n \"metadata\": {\n \"selfLink\": \"/api/v1/nodes\",\n \"resourceVersion\": \"48998\"\n },\n \"items\": [\n {\n \"metadata\": {\n \"name\": \"kali-control-plane\",\n \"selfLink\": \"/api/v1/nodes/kali-control-plane\",\n \"uid\": \"3bc732d5-d94d-4ab3-a172-a75c133ce66c\",\n \"resourceVersion\": \"48956\",\n \"creationTimestamp\": \"2021-05-21T15:13:14Z\",\n \"labels\": {\n \"beta.kubernetes.io/arch\": \"amd64\",\n \"beta.kubernetes.io/os\": \"linux\",\n \"ingress-ready\": \"true\",\n 
\"kubernetes.io/arch\": \"amd64\",\n \"kubernetes.io/hostname\": \"kali-control-plane\",\n \"kubernetes.io/os\": \"linux\",\n \"node-role.kubernetes.io/master\": \"\"\n },\n \"annotations\": {\n \"kubeadm.alpha.kubernetes.io/cri-socket\": \"unix:///run/containerd/containerd.sock\",\n \"node.alpha.kubernetes.io/ttl\": \"0\",\n \"volumes.kubernetes.io/controller-managed-attach-detach\": \"true\"\n },\n \"managedFields\": [\n {\n \"manager\": \"kubelet\",\n \"operation\": \"Update\",\n \"apiVersion\": \"v1\",\n \"time\": \"2021-05-21T15:13:14Z\",\n \"fieldsType\": \"FieldsV1\",\n \"fieldsV1\": {\n \"f:metadata\": {\n \"f:annotations\": {\n \".\": {},\n \"f:volumes.kubernetes.io/controller-managed-attach-detach\": {}\n },\n \"f:labels\": {\n \".\": {},\n \"f:beta.kubernetes.io/arch\": {},\n \"f:beta.kubernetes.io/os\": {},\n \"f:ingress-ready\": {},\n \"f:kubernetes.io/arch\": {},\n \"f:kubernetes.io/hostname\": {},\n \"f:kubernetes.io/os\": {}\n }\n },\n \"f:spec\": {\n \"f:providerID\": {}\n },\n \"f:status\": {\n \"f:addresses\": {\n \".\": {},\n \"k:{\\\"type\\\":\\\"Hostname\\\"}\": {\n \".\": {},\n \"f:address\": {},\n \"f:type\": {}\n },\n \"k:{\\\"type\\\":\\\"InternalIP\\\"}\": {\n \".\": {},\n \"f:address\": {},\n \"f:type\": {}\n }\n },\n \"f:allocatable\": {\n \".\": {},\n \"f:cpu\": {},\n \"f:ephemeral-storage\": {},\n \"f:hugepages-1Gi\": {},\n \"f:hugepages-2Mi\": {},\n \"f:memory\": {},\n \"f:pods\": {}\n },\n \"f:capacity\": {\n \".\": {},\n \"f:cpu\": {},\n \"f:ephemeral-storage\": {},\n \"f:hugepages-1Gi\": {},\n \"f:hugepages-2Mi\": {},\n \"f:memory\": {},\n \"f:pods\": {}\n },\n \"f:conditions\": {\n \".\": {},\n \"k:{\\\"type\\\":\\\"DiskPressure\\\"}\": {\n \".\": {},\n \"f:lastHeartbeatTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:message\": {},\n \"f:reason\": {},\n \"f:status\": {},\n \"f:type\": {}\n },\n \"k:{\\\"type\\\":\\\"MemoryPressure\\\"}\": {\n \".\": {},\n \"f:lastHeartbeatTime\": {},\n \"f:lastTransitionTime\": {},\n 
\"f:message\": {},\n \"f:reason\": {},\n \"f:status\": {},\n \"f:type\": {}\n },\n \"k:{\\\"type\\\":\\\"PIDPressure\\\"}\": {\n \".\": {},\n \"f:lastHeartbeatTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:message\": {},\n \"f:reason\": {},\n \"f:status\": {},\n \"f:type\": {}\n },\n \"k:{\\\"type\\\":\\\"Ready\\\"}\": {\n \".\": {},\n \"f:lastHeartbeatTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:message\": {},\n \"f:reason\": {},\n \"f:status\": {},\n \"f:type\": {}\n }\n },\n \"f:daemonEndpoints\": {\n \"f:kubeletEndpoint\": {\n \"f:Port\": {}\n }\n },\n \"f:images\": {},\n \"f:nodeInfo\": {\n \"f:architecture\": {},\n \"f:bootID\": {},\n \"f:containerRuntimeVersion\": {},\n \"f:kernelVersion\": {},\n \"f:kubeProxyVersion\": {},\n \"f:kubeletVersion\": {},\n \"f:machineID\": {},\n \"f:operatingSystem\": {},\n \"f:osImage\": {},\n \"f:systemUUID\": {}\n }\n }\n }\n },\n {\n \"manager\": \"kubeadm\",\n \"operation\": \"Update\",\n \"apiVersion\": \"v1\",\n \"time\": \"2021-05-21T15:13:17Z\",\n \"fieldsType\": \"FieldsV1\",\n \"fieldsV1\": {\n \"f:metadata\": {\n \"f:annotations\": {\n \"f:kubeadm.alpha.kubernetes.io/cri-socket\": {}\n },\n \"f:labels\": {\n \"f:node-role.kubernetes.io/master\": {}\n }\n }\n }\n },\n {\n \"manager\": \"kube-controller-manager\",\n \"operation\": \"Update\",\n \"apiVersion\": \"v1\",\n \"time\": \"2021-05-21T15:13:49Z\",\n \"fieldsType\": \"FieldsV1\",\n \"fieldsV1\": {\n \"f:metadata\": {\n \"f:annotations\": {\n \"f:node.alpha.kubernetes.io/ttl\": {}\n }\n },\n \"f:spec\": {\n \"f:podCIDR\": {},\n \"f:podCIDRs\": {\n \".\": {},\n \"v:\\\"10.244.0.0/24\\\"\": {}\n },\n \"f:taints\": {}\n }\n }\n }\n ]\n },\n \"spec\": {\n \"podCIDR\": \"10.244.0.0/24\",\n \"podCIDRs\": [\n \"10.244.0.0/24\"\n ],\n \"providerID\": \"kind://docker/kali/kali-control-plane\",\n \"taints\": [\n {\n \"key\": \"node-role.kubernetes.io/master\",\n \"effect\": \"NoSchedule\"\n }\n ]\n },\n \"status\": {\n \"capacity\": {\n \"cpu\": \"88\",\n 
\"ephemeral-storage\": \"459602040Ki\",\n \"hugepages-1Gi\": \"0\",\n \"hugepages-2Mi\": \"0\",\n \"memory\": \"65849824Ki\",\n \"pods\": \"110\"\n },\n \"allocatable\": {\n \"cpu\": \"88\",\n \"ephemeral-storage\": \"459602040Ki\",\n \"hugepages-1Gi\": \"0\",\n \"hugepages-2Mi\": \"0\",\n \"memory\": \"65849824Ki\",\n \"pods\": \"110\"\n },\n \"conditions\": [\n {\n \"type\": \"MemoryPressure\",\n \"status\": \"False\",\n \"lastHeartbeatTime\": \"2021-05-21T16:39:02Z\",\n \"lastTransitionTime\": \"2021-05-21T15:13:08Z\",\n \"reason\": \"KubeletHasSufficientMemory\",\n \"message\": \"kubelet has sufficient memory available\"\n },\n {\n \"type\": \"DiskPressure\",\n \"status\": \"False\",\n \"lastHeartbeatTime\": \"2021-05-21T16:39:02Z\",\n \"lastTransitionTime\": \"2021-05-21T15:13:08Z\",\n \"reason\": \"KubeletHasNoDiskPressure\",\n \"message\": \"kubelet has no disk pressure\"\n },\n {\n \"type\": \"PIDPressure\",\n \"status\": \"False\",\n \"lastHeartbeatTime\": \"2021-05-21T16:39:02Z\",\n \"lastTransitionTime\": \"2021-05-21T15:13:08Z\",\n \"reason\": \"KubeletHasSufficientPID\",\n \"message\": \"kubelet has sufficient PID available\"\n },\n {\n \"type\": \"Ready\",\n \"status\": \"True\",\n \"lastHeartbeatTime\": \"2021-05-21T16:39:02Z\",\n \"lastTransitionTime\": \"2021-05-21T15:13:49Z\",\n \"reason\": \"KubeletReady\",\n \"message\": \"kubelet is posting ready status\"\n }\n ],\n \"addresses\": [\n {\n \"type\": \"InternalIP\",\n \"address\": \"172.18.0.3\"\n },\n {\n \"type\": \"Hostname\",\n \"address\": \"kali-control-plane\"\n }\n ],\n \"daemonEndpoints\": {\n \"kubeletEndpoint\": {\n \"Port\": 10250\n }\n },\n \"nodeInfo\": {\n \"machineID\": \"34385849da584383988461f411f72b36\",\n \"systemUUID\": \"8eda95c3-4d8b-4712-a84e-b5a53507e203\",\n \"bootID\": \"8e840902-9ac1-4acc-b00a-3731226c7bea\",\n \"kernelVersion\": \"5.4.0-73-generic\",\n \"osImage\": \"Ubuntu 20.10\",\n \"containerRuntimeVersion\": \"containerd://1.5.1\",\n \"kubeletVersion\": 
\"v1.19.11\",\n \"kubeProxyVersion\": \"v1.19.11\",\n \"operatingSystem\": \"linux\",\n \"architecture\": \"amd64\"\n },\n \"images\": [\n {\n \"names\": [\n \"k8s.gcr.io/kube-apiserver:v1.19.11\"\n ],\n \"sizeBytes\": 120059217\n },\n {\n \"names\": [\n \"k8s.gcr.io/kube-proxy:v1.19.11\"\n ],\n \"sizeBytes\": 119607500\n },\n {\n \"names\": [\n \"k8s.gcr.io/kube-controller-manager:v1.19.11\"\n ],\n \"sizeBytes\": 112022883\n },\n {\n \"names\": [\n \"ghcr.io/k8snetworkplumbingwg/multus-cni@sha256:e72aa733faf24d1f62b69ded1126b9b8da0144d35c8a410c4fa0a860006f9eed\",\n \"ghcr.io/k8snetworkplumbingwg/multus-cni:stable\"\n ],\n \"sizeBytes\": 104808100\n },\n {\n \"names\": [\n \"k8s.gcr.io/etcd:3.4.13-0\"\n ],\n \"sizeBytes\": 86742272\n },\n {\n \"names\": [\n \"docker.io/kindest/kindnetd:v20210326-1e038dc5\"\n ],\n \"sizeBytes\": 53960776\n },\n {\n \"names\": [\n \"docker.io/envoyproxy/envoy@sha256:55d35e368436519136dbd978fa0682c49d8ab99e4d768413510f226762b30b07\",\n \"docker.io/envoyproxy/envoy:v1.18.3\"\n ],\n \"sizeBytes\": 51364868\n },\n {\n \"names\": [\n \"k8s.gcr.io/kube-scheduler:v1.19.11\"\n ],\n \"sizeBytes\": 47723856\n },\n {\n \"names\": [\n \"quay.io/metallb/speaker@sha256:c150c6a26a29de43097918f08551bbd2a80de229225866a3814594798089e51c\",\n \"quay.io/metallb/speaker:main\"\n ],\n \"sizeBytes\": 39322460\n },\n {\n \"names\": [\n \"k8s.gcr.io/build-image/debian-base:v2.1.0\"\n ],\n \"sizeBytes\": 21086532\n },\n {\n \"names\": [\n \"k8s.gcr.io/coredns:1.7.0\"\n ],\n \"sizeBytes\": 13982350\n },\n {\n \"names\": [\n \"docker.io/rancher/local-path-provisioner:v0.0.14\"\n ],\n \"sizeBytes\": 13367922\n },\n {\n \"names\": [\n \"docker.io/projectcontour/contour@sha256:1b6849d5bda1f5b2f839dad799922a043b82debaba9fa907723b5eb4c49f2e9e\",\n \"docker.io/projectcontour/contour:v1.15.1\"\n ],\n \"sizeBytes\": 11888781\n },\n {\n \"names\": [\n \"docker.io/library/alpine@sha256:66790a2b79e1ea3e1dabac43990c54aca5d1ddf268d9a5a0285e4167c8b24475\",\n 
\"docker.io/library/alpine:3.6\"\n ],\n \"sizeBytes\": 2021226\n },\n {\n \"names\": [\n \"k8s.gcr.io/pause:3.4.1\"\n ],\n \"sizeBytes\": 301268\n },\n {\n \"names\": [\n \"k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f\",\n \"k8s.gcr.io/pause:3.2\"\n ],\n \"sizeBytes\": 299513\n }\n ]\n }\n },\n {\n \"metadata\": {\n \"name\": \"kali-worker\",\n \"selfLink\": \"/api/v1/nodes/kali-worker\",\n \"uid\": \"bf0fecb5-899d-46af-8fbb-4fe8a989e19a\",\n \"resourceVersion\": \"48414\",\n \"creationTimestamp\": \"2021-05-21T15:13:50Z\",\n \"labels\": {\n \"beta.kubernetes.io/arch\": \"amd64\",\n \"beta.kubernetes.io/os\": \"linux\",\n \"kubernetes.io/arch\": \"amd64\",\n \"kubernetes.io/hostname\": \"kali-worker\",\n \"kubernetes.io/os\": \"linux\"\n },\n \"annotations\": {\n \"kubeadm.alpha.kubernetes.io/cri-socket\": \"unix:///run/containerd/containerd.sock\",\n \"node.alpha.kubernetes.io/ttl\": \"0\",\n \"volumes.kubernetes.io/controller-managed-attach-detach\": \"true\"\n },\n \"managedFields\": [\n {\n \"manager\": \"kubeadm\",\n \"operation\": \"Update\",\n \"apiVersion\": \"v1\",\n \"time\": \"2021-05-21T15:13:50Z\",\n \"fieldsType\": \"FieldsV1\",\n \"fieldsV1\": {\n \"f:metadata\": {\n \"f:annotations\": {\n \"f:kubeadm.alpha.kubernetes.io/cri-socket\": {}\n }\n }\n }\n },\n {\n \"manager\": \"kube-controller-manager\",\n \"operation\": \"Update\",\n \"apiVersion\": \"v1\",\n \"time\": \"2021-05-21T15:14:00Z\",\n \"fieldsType\": \"FieldsV1\",\n \"fieldsV1\": {\n \"f:metadata\": {\n \"f:annotations\": {\n \"f:node.alpha.kubernetes.io/ttl\": {}\n }\n },\n \"f:spec\": {\n \"f:podCIDR\": {},\n \"f:podCIDRs\": {\n \".\": {},\n \"v:\\\"10.244.1.0/24\\\"\": {}\n }\n }\n }\n },\n {\n \"manager\": \"kubelet\",\n \"operation\": \"Update\",\n \"apiVersion\": \"v1\",\n \"time\": \"2021-05-21T16:18:23Z\",\n \"fieldsType\": \"FieldsV1\",\n \"fieldsV1\": {\n \"f:metadata\": {\n \"f:annotations\": {\n \".\": {},\n 
\"f:volumes.kubernetes.io/controller-managed-attach-detach\": {}\n },\n \"f:labels\": {\n \".\": {},\n \"f:beta.kubernetes.io/arch\": {},\n \"f:beta.kubernetes.io/os\": {},\n \"f:kubernetes.io/arch\": {},\n \"f:kubernetes.io/hostname\": {},\n \"f:kubernetes.io/os\": {}\n }\n },\n \"f:spec\": {\n \"f:providerID\": {}\n },\n \"f:status\": {\n \"f:addresses\": {\n \".\": {},\n \"k:{\\\"type\\\":\\\"Hostname\\\"}\": {\n \".\": {},\n \"f:address\": {},\n \"f:type\": {}\n },\n \"k:{\\\"type\\\":\\\"InternalIP\\\"}\": {\n \".\": {},\n \"f:address\": {},\n \"f:type\": {}\n }\n },\n \"f:allocatable\": {\n \".\": {},\n \"f:cpu\": {},\n \"f:ephemeral-storage\": {},\n \"f:example.com/fakecpu\": {},\n \"f:hugepages-1Gi\": {},\n \"f:hugepages-2Mi\": {},\n \"f:memory\": {},\n \"f:pods\": {}\n },\n \"f:capacity\": {\n \".\": {},\n \"f:cpu\": {},\n \"f:ephemeral-storage\": {},\n \"f:hugepages-1Gi\": {},\n \"f:hugepages-2Mi\": {},\n \"f:memory\": {},\n \"f:pods\": {}\n },\n \"f:conditions\": {\n \".\": {},\n \"k:{\\\"type\\\":\\\"DiskPressure\\\"}\": {\n \".\": {},\n \"f:lastHeartbeatTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:message\": {},\n \"f:reason\": {},\n \"f:status\": {},\n \"f:type\": {}\n },\n \"k:{\\\"type\\\":\\\"MemoryPressure\\\"}\": {\n \".\": {},\n \"f:lastHeartbeatTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:message\": {},\n \"f:reason\": {},\n \"f:status\": {},\n \"f:type\": {}\n },\n \"k:{\\\"type\\\":\\\"PIDPressure\\\"}\": {\n \".\": {},\n \"f:lastHeartbeatTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:message\": {},\n \"f:reason\": {},\n \"f:status\": {},\n \"f:type\": {}\n },\n \"k:{\\\"type\\\":\\\"Ready\\\"}\": {\n \".\": {},\n \"f:lastHeartbeatTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:message\": {},\n \"f:reason\": {},\n \"f:status\": {},\n \"f:type\": {}\n }\n },\n \"f:daemonEndpoints\": {\n \"f:kubeletEndpoint\": {\n \"f:Port\": {}\n }\n },\n \"f:images\": {},\n \"f:nodeInfo\": {\n \"f:architecture\": {},\n \"f:bootID\": {},\n 
\"f:containerRuntimeVersion\": {},\n \"f:kernelVersion\": {},\n \"f:kubeProxyVersion\": {},\n \"f:kubeletVersion\": {},\n \"f:machineID\": {},\n \"f:operatingSystem\": {},\n \"f:osImage\": {},\n \"f:systemUUID\": {}\n }\n }\n }\n },\n {\n \"manager\": \"e2e.test\",\n \"operation\": \"Update\",\n \"apiVersion\": \"v1\",\n \"time\": \"2021-05-21T16:38:19Z\",\n \"fieldsType\": \"FieldsV1\",\n \"fieldsV1\": {\n \"f:status\": {\n \"f:capacity\": {\n \"f:example.com/fakecpu\": {}\n }\n }\n }\n }\n ]\n },\n \"spec\": {\n \"podCIDR\": \"10.244.1.0/24\",\n \"podCIDRs\": [\n \"10.244.1.0/24\"\n ],\n \"providerID\": \"kind://docker/kali/kali-worker\"\n },\n \"status\": {\n \"capacity\": {\n \"cpu\": \"88\",\n \"ephemeral-storage\": \"459602040Ki\",\n \"example.com/fakecpu\": \"1k\",\n \"hugepages-1Gi\": \"0\",\n \"hugepages-2Mi\": \"0\",\n \"memory\": \"65849824Ki\",\n \"pods\": \"110\"\n },\n \"allocatable\": {\n \"cpu\": \"88\",\n \"ephemeral-storage\": \"459602040Ki\",\n \"example.com/fakecpu\": \"1k\",\n \"hugepages-1Gi\": \"0\",\n \"hugepages-2Mi\": \"0\",\n \"memory\": \"65849824Ki\",\n \"pods\": \"110\"\n },\n \"conditions\": [\n {\n \"type\": \"MemoryPressure\",\n \"status\": \"False\",\n \"lastHeartbeatTime\": \"2021-05-21T16:38:24Z\",\n \"lastTransitionTime\": \"2021-05-21T15:13:50Z\",\n \"reason\": \"KubeletHasSufficientMemory\",\n \"message\": \"kubelet has sufficient memory available\"\n },\n {\n \"type\": \"DiskPressure\",\n \"status\": \"False\",\n \"lastHeartbeatTime\": \"2021-05-21T16:38:24Z\",\n \"lastTransitionTime\": \"2021-05-21T15:13:50Z\",\n \"reason\": \"KubeletHasNoDiskPressure\",\n \"message\": \"kubelet has no disk pressure\"\n },\n {\n \"type\": \"PIDPressure\",\n \"status\": \"False\",\n \"lastHeartbeatTime\": \"2021-05-21T16:38:24Z\",\n \"lastTransitionTime\": \"2021-05-21T15:13:50Z\",\n \"reason\": \"KubeletHasSufficientPID\",\n \"message\": \"kubelet has sufficient PID available\"\n },\n {\n \"type\": \"Ready\",\n \"status\": \"True\",\n 
\"lastHeartbeatTime\": \"2021-05-21T16:38:24Z\",\n \"lastTransitionTime\": \"2021-05-21T15:14:10Z\",\n \"reason\": \"KubeletReady\",\n \"message\": \"kubelet is posting ready status\"\n }\n ],\n \"addresses\": [\n {\n \"type\": \"InternalIP\",\n \"address\": \"172.18.0.2\"\n },\n {\n \"type\": \"Hostname\",\n \"address\": \"kali-worker\"\n }\n ],\n \"daemonEndpoints\": {\n \"kubeletEndpoint\": {\n \"Port\": 10250\n }\n },\n \"nodeInfo\": {\n \"machineID\": \"98d2449126e74eecbb40badb4b2185ab\",\n \"systemUUID\": \"7ff5ff91-bd1d-40de-a20e-40120f9ccd57\",\n \"bootID\": \"8e840902-9ac1-4acc-b00a-3731226c7bea\",\n \"kernelVersion\": \"5.4.0-73-generic\",\n \"osImage\": \"Ubuntu 20.10\",\n \"containerRuntimeVersion\": \"containerd://1.5.1\",\n \"kubeletVersion\": \"v1.19.11\",\n \"kubeProxyVersion\": \"v1.19.11\",\n \"operatingSystem\": \"linux\",\n \"architecture\": \"amd64\"\n },\n \"images\": [\n {\n \"names\": [\n \"k8s.gcr.io/kube-apiserver:v1.19.11\"\n ],\n \"sizeBytes\": 120059217\n },\n {\n \"names\": [\n \"k8s.gcr.io/kube-proxy:v1.19.11\"\n ],\n \"sizeBytes\": 119607500\n },\n {\n \"names\": [\n \"k8s.gcr.io/kube-controller-manager:v1.19.11\"\n ],\n \"sizeBytes\": 112022883\n },\n {\n \"names\": [\n \"ghcr.io/k8snetworkplumbingwg/multus-cni@sha256:e72aa733faf24d1f62b69ded1126b9b8da0144d35c8a410c4fa0a860006f9eed\",\n \"ghcr.io/k8snetworkplumbingwg/multus-cni:stable\"\n ],\n \"sizeBytes\": 104808100\n },\n {\n \"names\": [\n \"k8s.gcr.io/etcd:3.4.13-0\"\n ],\n \"sizeBytes\": 86742272\n },\n {\n \"names\": [\n \"gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb\",\n \"gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0\"\n ],\n \"sizeBytes\": 85425365\n },\n {\n \"names\": [\n \"docker.io/kindest/kindnetd:v20210326-1e038dc5\"\n ],\n \"sizeBytes\": 53960776\n },\n {\n \"names\": [\n \"k8s.gcr.io/kube-scheduler:v1.19.11\"\n ],\n \"sizeBytes\": 47723856\n },\n {\n \"names\": [\n 
\"k8s.gcr.io/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0\",\n \"k8s.gcr.io/e2e-test-images/agnhost:2.20\"\n ],\n \"sizeBytes\": 46251412\n },\n {\n \"names\": [\n \"docker.io/library/httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a\",\n \"docker.io/library/httpd:2.4.39-alpine\"\n ],\n \"sizeBytes\": 41901429\n },\n {\n \"names\": [\n \"docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060\",\n \"docker.io/library/httpd:2.4.38-alpine\"\n ],\n \"sizeBytes\": 40765017\n },\n {\n \"names\": [\n \"quay.io/metallb/speaker@sha256:c150c6a26a29de43097918f08551bbd2a80de229225866a3814594798089e51c\",\n \"quay.io/metallb/speaker:main\"\n ],\n \"sizeBytes\": 39322460\n },\n {\n \"names\": [\n \"gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:ff02aacd9766d597883fabafc7ad604c719a57611db1bcc1564c69a45b000a55\",\n \"gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.17\"\n ],\n \"sizeBytes\": 25311280\n },\n {\n \"names\": [\n \"k8s.gcr.io/build-image/debian-base:v2.1.0\"\n ],\n \"sizeBytes\": 21086532\n },\n {\n \"names\": [\n \"gcr.io/kubernetes-e2e-test-images/nonroot@sha256:4bd7ae247de5c988700233c5a4b55e804ffe90f8c66ae64853f1dae37b847213\",\n \"gcr.io/kubernetes-e2e-test-images/nonroot:1.0\"\n ],\n \"sizeBytes\": 17747507\n },\n {\n \"names\": [\n \"docker.io/kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7\",\n \"docker.io/kubernetesui/metrics-scraper:v1.0.6\"\n ],\n \"sizeBytes\": 15079854\n },\n {\n \"names\": [\n \"k8s.gcr.io/coredns:1.7.0\"\n ],\n \"sizeBytes\": 13982350\n },\n {\n \"names\": [\n \"docker.io/rancher/local-path-provisioner:v0.0.14\"\n ],\n \"sizeBytes\": 13367922\n },\n {\n \"names\": [\n \"docker.io/projectcontour/contour@sha256:1b6849d5bda1f5b2f839dad799922a043b82debaba9fa907723b5eb4c49f2e9e\",\n \"docker.io/projectcontour/contour:v1.15.1\"\n ],\n \"sizeBytes\": 
11888781\n },\n {\n \"names\": [\n \"docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\",\n \"docker.io/library/nginx:1.14-alpine\"\n ],\n \"sizeBytes\": 6978806\n },\n {\n \"names\": [\n \"gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e\",\n \"gcr.io/google-samples/hello-go-gke:1.0\"\n ],\n \"sizeBytes\": 4381769\n },\n {\n \"names\": [\n \"gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411\",\n \"gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0\"\n ],\n \"sizeBytes\": 3054649\n },\n {\n \"names\": [\n \"gcr.io/authenticated-image-pulling/alpine@sha256:7ff177862cb50c602bfe81f805969412e619c054a2bbead977d0c276988aa4a0\",\n \"gcr.io/authenticated-image-pulling/alpine:3.7\"\n ],\n \"sizeBytes\": 2110879\n },\n {\n \"names\": [\n \"docker.io/library/alpine@sha256:66790a2b79e1ea3e1dabac43990c54aca5d1ddf268d9a5a0285e4167c8b24475\",\n \"docker.io/library/alpine:3.6\"\n ],\n \"sizeBytes\": 2021226\n },\n {\n \"names\": [\n \"gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc\",\n \"gcr.io/kubernetes-e2e-test-images/nautilus:1.0\"\n ],\n \"sizeBytes\": 1804628\n },\n {\n \"names\": [\n \"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796\",\n \"docker.io/library/busybox:1.29\"\n ],\n \"sizeBytes\": 732685\n },\n {\n \"names\": [\n \"docker.io/library/busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47\",\n \"docker.io/library/busybox:1.28\"\n ],\n \"sizeBytes\": 727869\n },\n {\n \"names\": [\n \"k8s.gcr.io/pause:3.4.1\"\n ],\n \"sizeBytes\": 301268\n },\n {\n \"names\": [\n \"k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f\",\n \"k8s.gcr.io/pause:3.2\"\n ],\n \"sizeBytes\": 299513\n },\n {\n \"names\": [\n 
\"k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa\",\n \"k8s.gcr.io/pause:3.3\"\n ],\n \"sizeBytes\": 299480\n }\n ]\n }\n },\n {\n \"metadata\": {\n \"name\": \"kali-worker2\",\n \"selfLink\": \"/api/v1/nodes/kali-worker2\",\n \"uid\": \"5234e2ff-cf16-452a-8b18-7d6d2790f051\",\n \"resourceVersion\": \"47866\",\n \"creationTimestamp\": \"2021-05-21T15:13:50Z\",\n \"labels\": {\n \"beta.kubernetes.io/arch\": \"amd64\",\n \"beta.kubernetes.io/os\": \"linux\",\n \"kubernetes.io/arch\": \"amd64\",\n \"kubernetes.io/hostname\": \"kali-worker2\",\n \"kubernetes.io/os\": \"linux\"\n },\n \"annotations\": {\n \"kubeadm.alpha.kubernetes.io/cri-socket\": \"unix:///run/containerd/containerd.sock\",\n \"node.alpha.kubernetes.io/ttl\": \"0\",\n \"volumes.kubernetes.io/controller-managed-attach-detach\": \"true\"\n },\n \"managedFields\": [\n {\n \"manager\": \"kubeadm\",\n \"operation\": \"Update\",\n \"apiVersion\": \"v1\",\n \"time\": \"2021-05-21T15:13:50Z\",\n \"fieldsType\": \"FieldsV1\",\n \"fieldsV1\": {\n \"f:metadata\": {\n \"f:annotations\": {\n \"f:kubeadm.alpha.kubernetes.io/cri-socket\": {}\n }\n }\n }\n },\n {\n \"manager\": \"kube-controller-manager\",\n \"operation\": \"Update\",\n \"apiVersion\": \"v1\",\n \"time\": \"2021-05-21T15:14:10Z\",\n \"fieldsType\": \"FieldsV1\",\n \"fieldsV1\": {\n \"f:metadata\": {\n \"f:annotations\": {\n \"f:node.alpha.kubernetes.io/ttl\": {}\n }\n },\n \"f:spec\": {\n \"f:podCIDR\": {},\n \"f:podCIDRs\": {\n \".\": {},\n \"v:\\\"10.244.2.0/24\\\"\": {}\n }\n }\n }\n },\n {\n \"manager\": \"kubelet\",\n \"operation\": \"Update\",\n \"apiVersion\": \"v1\",\n \"time\": \"2021-05-21T16:11:43Z\",\n \"fieldsType\": \"FieldsV1\",\n \"fieldsV1\": {\n \"f:metadata\": {\n \"f:annotations\": {\n \".\": {},\n \"f:volumes.kubernetes.io/controller-managed-attach-detach\": {}\n },\n \"f:labels\": {\n \".\": {},\n \"f:beta.kubernetes.io/arch\": {},\n \"f:beta.kubernetes.io/os\": {},\n 
\"f:kubernetes.io/arch\": {},\n \"f:kubernetes.io/hostname\": {},\n \"f:kubernetes.io/os\": {}\n }\n },\n \"f:spec\": {\n \"f:providerID\": {}\n },\n \"f:status\": {\n \"f:addresses\": {\n \".\": {},\n \"k:{\\\"type\\\":\\\"Hostname\\\"}\": {\n \".\": {},\n \"f:address\": {},\n \"f:type\": {}\n },\n \"k:{\\\"type\\\":\\\"InternalIP\\\"}\": {\n \".\": {},\n \"f:address\": {},\n \"f:type\": {}\n }\n },\n \"f:allocatable\": {\n \".\": {},\n \"f:cpu\": {},\n \"f:ephemeral-storage\": {},\n \"f:hugepages-1Gi\": {},\n \"f:hugepages-2Mi\": {},\n \"f:memory\": {},\n \"f:pods\": {}\n },\n \"f:capacity\": {\n \".\": {},\n \"f:cpu\": {},\n \"f:ephemeral-storage\": {},\n \"f:hugepages-1Gi\": {},\n \"f:hugepages-2Mi\": {},\n \"f:memory\": {},\n \"f:pods\": {}\n },\n \"f:conditions\": {\n \".\": {},\n \"k:{\\\"type\\\":\\\"DiskPressure\\\"}\": {\n \".\": {},\n \"f:lastHeartbeatTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:message\": {},\n \"f:reason\": {},\n \"f:status\": {},\n \"f:type\": {}\n },\n \"k:{\\\"type\\\":\\\"MemoryPressure\\\"}\": {\n \".\": {},\n \"f:lastHeartbeatTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:message\": {},\n \"f:reason\": {},\n \"f:status\": {},\n \"f:type\": {}\n },\n \"k:{\\\"type\\\":\\\"PIDPressure\\\"}\": {\n \".\": {},\n \"f:lastHeartbeatTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:message\": {},\n \"f:reason\": {},\n \"f:status\": {},\n \"f:type\": {}\n },\n \"k:{\\\"type\\\":\\\"Ready\\\"}\": {\n \".\": {},\n \"f:lastHeartbeatTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:message\": {},\n \"f:reason\": {},\n \"f:status\": {},\n \"f:type\": {}\n }\n },\n \"f:daemonEndpoints\": {\n \"f:kubeletEndpoint\": {\n \"f:Port\": {}\n }\n },\n \"f:images\": {},\n \"f:nodeInfo\": {\n \"f:architecture\": {},\n \"f:bootID\": {},\n \"f:containerRuntimeVersion\": {},\n \"f:kernelVersion\": {},\n \"f:kubeProxyVersion\": {},\n \"f:kubeletVersion\": {},\n \"f:machineID\": {},\n \"f:operatingSystem\": {},\n \"f:osImage\": {},\n \"f:systemUUID\": 
{}\n }\n }\n }\n }\n ]\n },\n \"spec\": {\n \"podCIDR\": \"10.244.2.0/24\",\n \"podCIDRs\": [\n \"10.244.2.0/24\"\n ],\n \"providerID\": \"kind://docker/kali/kali-worker2\"\n },\n \"status\": {\n \"capacity\": {\n \"cpu\": \"88\",\n \"ephemeral-storage\": \"459602040Ki\",\n \"hugepages-1Gi\": \"0\",\n \"hugepages-2Mi\": \"0\",\n \"memory\": \"65849824Ki\",\n \"pods\": \"110\"\n },\n \"allocatable\": {\n \"cpu\": \"88\",\n \"ephemeral-storage\": \"459602040Ki\",\n \"hugepages-1Gi\": \"0\",\n \"hugepages-2Mi\": \"0\",\n \"memory\": \"65849824Ki\",\n \"pods\": \"110\"\n },\n \"conditions\": [\n {\n \"type\": \"MemoryPressure\",\n \"status\": \"False\",\n \"lastHeartbeatTime\": \"2021-05-21T16:36:44Z\",\n \"lastTransitionTime\": \"2021-05-21T15:13:50Z\",\n \"reason\": \"KubeletHasSufficientMemory\",\n \"message\": \"kubelet has sufficient memory available\"\n },\n {\n \"type\": \"DiskPressure\",\n \"status\": \"False\",\n \"lastHeartbeatTime\": \"2021-05-21T16:36:44Z\",\n \"lastTransitionTime\": \"2021-05-21T15:13:50Z\",\n \"reason\": \"KubeletHasNoDiskPressure\",\n \"message\": \"kubelet has no disk pressure\"\n },\n {\n \"type\": \"PIDPressure\",\n \"status\": \"False\",\n \"lastHeartbeatTime\": \"2021-05-21T16:36:44Z\",\n \"lastTransitionTime\": \"2021-05-21T15:13:50Z\",\n \"reason\": \"KubeletHasSufficientPID\",\n \"message\": \"kubelet has sufficient PID available\"\n },\n {\n \"type\": \"Ready\",\n \"status\": \"True\",\n \"lastHeartbeatTime\": \"2021-05-21T16:36:44Z\",\n \"lastTransitionTime\": \"2021-05-21T15:14:10Z\",\n \"reason\": \"KubeletReady\",\n \"message\": \"kubelet is posting ready status\"\n }\n ],\n \"addresses\": [\n {\n \"type\": \"InternalIP\",\n \"address\": \"172.18.0.4\"\n },\n {\n \"type\": \"Hostname\",\n \"address\": \"kali-worker2\"\n }\n ],\n \"daemonEndpoints\": {\n \"kubeletEndpoint\": {\n \"Port\": 10250\n }\n },\n \"nodeInfo\": {\n \"machineID\": \"25bccfd396a54993943292be31875b17\",\n \"systemUUID\": 
\"7e1793f9-37c3-4045-9e0f-c6477f13ac6c\",\n \"bootID\": \"8e840902-9ac1-4acc-b00a-3731226c7bea\",\n \"kernelVersion\": \"5.4.0-73-generic\",\n \"osImage\": \"Ubuntu 20.10\",\n \"containerRuntimeVersion\": \"containerd://1.5.1\",\n \"kubeletVersion\": \"v1.19.11\",\n \"kubeProxyVersion\": \"v1.19.11\",\n \"operatingSystem\": \"linux\",\n \"architecture\": \"amd64\"\n },\n \"images\": [\n {\n \"names\": [\n \"k8s.gcr.io/kube-apiserver:v1.19.11\"\n ],\n \"sizeBytes\": 120059217\n },\n {\n \"names\": [\n \"k8s.gcr.io/kube-proxy:v1.19.11\"\n ],\n \"sizeBytes\": 119607500\n },\n {\n \"names\": [\n \"k8s.gcr.io/kube-controller-manager:v1.19.11\"\n ],\n \"sizeBytes\": 112022883\n },\n {\n \"names\": [\n \"ghcr.io/k8snetworkplumbingwg/multus-cni@sha256:e72aa733faf24d1f62b69ded1126b9b8da0144d35c8a410c4fa0a860006f9eed\",\n \"ghcr.io/k8snetworkplumbingwg/multus-cni:stable\"\n ],\n \"sizeBytes\": 104808100\n },\n {\n \"names\": [\n \"k8s.gcr.io/etcd:3.4.13-0\"\n ],\n \"sizeBytes\": 86742272\n },\n {\n \"names\": [\n \"gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb\",\n \"gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0\"\n ],\n \"sizeBytes\": 85425365\n },\n {\n \"names\": [\n \"docker.io/kubernetesui/dashboard@sha256:148991563e374c83b75e8c51bca75f512d4f006ddc791e96a91f1c7420b60bd9\",\n \"docker.io/kubernetesui/dashboard:v2.2.0\"\n ],\n \"sizeBytes\": 67775224\n },\n {\n \"names\": [\n \"docker.io/kindest/kindnetd:v20210326-1e038dc5\"\n ],\n \"sizeBytes\": 53960776\n },\n {\n \"names\": [\n \"k8s.gcr.io/kube-scheduler:v1.19.11\"\n ],\n \"sizeBytes\": 47723856\n },\n {\n \"names\": [\n \"k8s.gcr.io/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0\",\n \"k8s.gcr.io/e2e-test-images/agnhost:2.20\"\n ],\n \"sizeBytes\": 46251412\n },\n {\n \"names\": [\n \"docker.io/library/httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a\",\n 
\"docker.io/library/httpd:2.4.39-alpine\"\n ],\n \"sizeBytes\": 41901429\n },\n {\n \"names\": [\n \"docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060\",\n \"docker.io/library/httpd:2.4.38-alpine\"\n ],\n \"sizeBytes\": 40765017\n },\n {\n \"names\": [\n \"quay.io/metallb/speaker@sha256:c150c6a26a29de43097918f08551bbd2a80de229225866a3814594798089e51c\",\n \"quay.io/metallb/speaker:main\"\n ],\n \"sizeBytes\": 39322460\n },\n {\n \"names\": [\n \"quay.io/metallb/controller@sha256:68c52b5301b42cad0cbf497f3d83c2e18b82548a9c36690b99b2023c55cb715a\",\n \"quay.io/metallb/controller:main\"\n ],\n \"sizeBytes\": 35989620\n },\n {\n \"names\": [\n \"k8s.gcr.io/build-image/debian-base:v2.1.0\"\n ],\n \"sizeBytes\": 21086532\n },\n {\n \"names\": [\n \"gcr.io/kubernetes-e2e-test-images/nonroot@sha256:4bd7ae247de5c988700233c5a4b55e804ffe90f8c66ae64853f1dae37b847213\",\n \"gcr.io/kubernetes-e2e-test-images/nonroot:1.0\"\n ],\n \"sizeBytes\": 17747507\n },\n {\n \"names\": [\n \"k8s.gcr.io/coredns:1.7.0\"\n ],\n \"sizeBytes\": 13982350\n },\n {\n \"names\": [\n \"docker.io/rancher/local-path-provisioner:v0.0.14\"\n ],\n \"sizeBytes\": 13367922\n },\n {\n \"names\": [\n \"docker.io/projectcontour/contour@sha256:1b6849d5bda1f5b2f839dad799922a043b82debaba9fa907723b5eb4c49f2e9e\",\n \"docker.io/projectcontour/contour:v1.15.1\"\n ],\n \"sizeBytes\": 11888781\n },\n {\n \"names\": [\n \"docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\",\n \"docker.io/library/nginx:1.14-alpine\"\n ],\n \"sizeBytes\": 6978806\n },\n {\n \"names\": [\n \"gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e\",\n \"gcr.io/google-samples/hello-go-gke:1.0\"\n ],\n \"sizeBytes\": 4381769\n },\n {\n \"names\": [\n \"gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411\",\n 
\"gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0\"\n ],\n \"sizeBytes\": 3054649\n },\n {\n \"names\": [\n \"docker.io/appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb\",\n \"docker.io/appropriate/curl:edge\"\n ],\n \"sizeBytes\": 2854657\n },\n {\n \"names\": [\n \"docker.io/library/alpine@sha256:66790a2b79e1ea3e1dabac43990c54aca5d1ddf268d9a5a0285e4167c8b24475\",\n \"docker.io/library/alpine:3.6\"\n ],\n \"sizeBytes\": 2021226\n },\n {\n \"names\": [\n \"gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc\",\n \"gcr.io/kubernetes-e2e-test-images/nautilus:1.0\"\n ],\n \"sizeBytes\": 1804628\n },\n {\n \"names\": [\n \"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796\",\n \"docker.io/library/busybox:1.29\"\n ],\n \"sizeBytes\": 732685\n },\n {\n \"names\": [\n \"docker.io/library/busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47\",\n \"docker.io/library/busybox:1.28\"\n ],\n \"sizeBytes\": 727869\n },\n {\n \"names\": [\n \"k8s.gcr.io/pause:3.4.1\"\n ],\n \"sizeBytes\": 301268\n },\n {\n \"names\": [\n \"k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f\",\n \"k8s.gcr.io/pause:3.2\"\n ],\n \"sizeBytes\": 299513\n },\n {\n \"names\": [\n \"k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa\",\n \"k8s.gcr.io/pause:3.3\"\n ],\n \"sizeBytes\": 299480\n }\n ]\n }\n }\n ]\n}\n{\n \"kind\": \"EventList\",\n \"apiVersion\": \"v1\",\n \"metadata\": {\n \"selfLink\": \"/api/v1/namespaces/kube-system/events\",\n \"resourceVersion\": \"48999\"\n },\n \"items\": [\n {\n \"metadata\": {\n \"name\": \"critical-pod.168120dfa2db21c2\",\n \"namespace\": \"kube-system\",\n \"selfLink\": \"/api/v1/namespaces/kube-system/events/critical-pod.168120dfa2db21c2\",\n \"uid\": \"c6faa530-cf9d-4f86-837f-cadeb04a777d\",\n 
\"resourceVersion\": \"32509\",\n \"creationTimestamp\": \"2021-05-21T16:11:25Z\",\n \"managedFields\": [\n {\n \"manager\": \"kube-scheduler\",\n \"operation\": \"Update\",\n \"apiVersion\": \"v1\",\n \"time\": \"2021-05-21T16:11:25Z\",\n \"fieldsType\": \"FieldsV1\",\n \"fieldsV1\": {\n \"f:count\": {},\n \"f:firstTimestamp\": {},\n \"f:involvedObject\": {\n \"f:apiVersion\": {},\n \"f:kind\": {},\n \"f:name\": {},\n \"f:namespace\": {},\n \"f:resourceVersion\": {},\n \"f:uid\": {}\n },\n \"f:lastTimestamp\": {},\n \"f:message\": {},\n \"f:reason\": {},\n \"f:source\": {\n \"f:component\": {}\n },\n \"f:type\": {}\n }\n }\n ]\n },\n \"involvedObject\": {\n \"kind\": \"Pod\",\n \"namespace\": \"kube-system\",\n \"name\": \"critical-pod\",\n \"uid\": \"6009cfa9-7dc0-4358-a99a-4c7e5cdb0d1e\",\n \"apiVersion\": \"v1\",\n \"resourceVersion\": \"32500\"\n },\n \"reason\": \"FailedScheduling\",\n \"message\": \"0/3 nodes are available: 3 Insufficient scheduling.k8s.io/foo.\",\n \"source\": {\n \"component\": \"default-scheduler\"\n },\n \"firstTimestamp\": \"2021-05-21T16:11:25Z\",\n \"lastTimestamp\": \"2021-05-21T16:11:25Z\",\n \"count\": 2,\n \"type\": \"Warning\",\n \"eventTime\": null,\n \"reportingComponent\": \"\",\n \"reportingInstance\": \"\"\n },\n {\n \"metadata\": {\n \"name\": \"critical-pod.168120e302dc0827\",\n \"namespace\": \"kube-system\",\n \"selfLink\": \"/api/v1/namespaces/kube-system/events/critical-pod.168120e302dc0827\",\n \"uid\": \"b198ea03-5f1e-4219-9eb9-d506d360d99d\",\n \"resourceVersion\": \"32562\",\n \"creationTimestamp\": \"2021-05-21T16:11:40Z\",\n \"managedFields\": [\n {\n \"manager\": \"kube-scheduler\",\n \"operation\": \"Update\",\n \"apiVersion\": \"v1\",\n \"time\": \"2021-05-21T16:11:40Z\",\n \"fieldsType\": \"FieldsV1\",\n \"fieldsV1\": {\n \"f:count\": {},\n \"f:firstTimestamp\": {},\n \"f:involvedObject\": {\n \"f:apiVersion\": {},\n \"f:kind\": {},\n \"f:name\": {},\n \"f:namespace\": {},\n \"f:resourceVersion\": {},\n 
\"f:uid\": {}\n },\n \"f:lastTimestamp\": {},\n \"f:message\": {},\n \"f:reason\": {},\n \"f:source\": {\n \"f:component\": {}\n },\n \"f:type\": {}\n }\n }\n ]\n },\n \"involvedObject\": {\n \"kind\": \"Pod\",\n \"namespace\": \"kube-system\",\n \"name\": \"critical-pod\",\n \"uid\": \"6009cfa9-7dc0-4358-a99a-4c7e5cdb0d1e\",\n \"apiVersion\": \"v1\",\n \"resourceVersion\": \"32505\"\n },\n \"reason\": \"Scheduled\",\n \"message\": \"Successfully assigned kube-system/critical-pod to kali-worker\",\n \"source\": {\n \"component\": \"default-scheduler\"\n },\n \"firstTimestamp\": \"2021-05-21T16:11:40Z\",\n \"lastTimestamp\": \"2021-05-21T16:11:40Z\",\n \"count\": 1,\n \"type\": \"Normal\",\n \"eventTime\": null,\n \"reportingComponent\": \"\",\n \"reportingInstance\": \"\"\n },\n {\n \"metadata\": {\n \"name\": \"critical-pod.168120e31e484c10\",\n \"namespace\": \"kube-system\",\n \"selfLink\": \"/api/v1/namespaces/kube-system/events/critical-pod.168120e31e484c10\",\n \"uid\": \"68767029-8dca-48b3-beca-3f7b481c75fe\",\n \"resourceVersion\": \"32566\",\n \"creationTimestamp\": \"2021-05-21T16:11:40Z\",\n \"managedFields\": [\n {\n \"manager\": \"multus\",\n \"operation\": \"Update\",\n \"apiVersion\": \"v1\",\n \"time\": \"2021-05-21T16:11:40Z\",\n \"fieldsType\": \"FieldsV1\",\n \"fieldsV1\": {\n \"f:count\": {},\n \"f:firstTimestamp\": {},\n \"f:involvedObject\": {\n \"f:apiVersion\": {},\n \"f:kind\": {},\n \"f:name\": {},\n \"f:namespace\": {},\n \"f:resourceVersion\": {},\n \"f:uid\": {}\n },\n \"f:lastTimestamp\": {},\n \"f:message\": {},\n \"f:reason\": {},\n \"f:source\": {\n \"f:component\": {}\n },\n \"f:type\": {}\n }\n }\n ]\n },\n \"involvedObject\": {\n \"kind\": \"Pod\",\n \"namespace\": \"kube-system\",\n \"name\": \"critical-pod\",\n \"uid\": \"6009cfa9-7dc0-4358-a99a-4c7e5cdb0d1e\",\n \"apiVersion\": \"v1\",\n \"resourceVersion\": \"32563\"\n },\n \"reason\": \"AddedInterface\",\n \"message\": \"Add eth0 [10.244.1.20/24]\",\n \"source\": {\n 
\"component\": \"multus\"\n },\n \"firstTimestamp\": \"2021-05-21T16:11:40Z\",\n \"lastTimestamp\": \"2021-05-21T16:11:40Z\",\n \"count\": 1,\n \"type\": \"Normal\",\n \"eventTime\": null,\n \"reportingComponent\": \"\",\n \"reportingInstance\": \"\"\n },\n {\n \"metadata\": {\n \"name\": \"critical-pod.168120e32a347757\",\n \"namespace\": \"kube-system\",\n \"selfLink\": \"/api/v1/namespaces/kube-system/events/critical-pod.168120e32a347757\",\n \"uid\": \"5a214836-27ee-4ef2-a1be-0e2cfbcca5e7\",\n \"resourceVersion\": \"32568\",\n \"creationTimestamp\": \"2021-05-21T16:11:40Z\",\n \"managedFields\": [\n {\n \"manager\": \"kubelet\",\n \"operation\": \"Update\",\n \"apiVersion\": \"v1\",\n \"time\": \"2021-05-21T16:11:40Z\",\n \"fieldsType\": \"FieldsV1\",\n \"fieldsV1\": {\n \"f:count\": {},\n \"f:firstTimestamp\": {},\n \"f:involvedObject\": {\n \"f:apiVersion\": {},\n \"f:fieldPath\": {},\n \"f:kind\": {},\n \"f:name\": {},\n \"f:namespace\": {},\n \"f:resourceVersion\": {},\n \"f:uid\": {}\n },\n \"f:lastTimestamp\": {},\n \"f:message\": {},\n \"f:reason\": {},\n \"f:source\": {\n \"f:component\": {},\n \"f:host\": {}\n },\n \"f:type\": {}\n }\n }\n ]\n },\n \"involvedObject\": {\n \"kind\": \"Pod\",\n \"namespace\": \"kube-system\",\n \"name\": \"critical-pod\",\n \"uid\": \"6009cfa9-7dc0-4358-a99a-4c7e5cdb0d1e\",\n \"apiVersion\": \"v1\",\n \"resourceVersion\": \"32561\",\n \"fieldPath\": \"spec.containers{critical-pod}\"\n },\n \"reason\": \"Pulled\",\n \"message\": \"Container image \\\"k8s.gcr.io/pause:3.2\\\" already present on machine\",\n \"source\": {\n \"component\": \"kubelet\",\n \"host\": \"kali-worker\"\n },\n \"firstTimestamp\": \"2021-05-21T16:11:40Z\",\n \"lastTimestamp\": \"2021-05-21T16:11:40Z\",\n \"count\": 1,\n \"type\": \"Normal\",\n \"eventTime\": null,\n \"reportingComponent\": \"\",\n \"reportingInstance\": \"\"\n },\n {\n \"metadata\": {\n \"name\": \"critical-pod.168120e32b5ad2de\",\n \"namespace\": \"kube-system\",\n \"selfLink\": 
\"/api/v1/namespaces/kube-system/events/critical-pod.168120e32b5ad2de\",\n \"uid\": \"166a92aa-14ad-4727-ab1c-eabb09f545f0\",\n \"resourceVersion\": \"32569\",\n \"creationTimestamp\": \"2021-05-21T16:11:40Z\",\n \"managedFields\": [\n {\n \"manager\": \"kubelet\",\n \"operation\": \"Update\",\n \"apiVersion\": \"v1\",\n \"time\": \"2021-05-21T16:11:40Z\",\n \"fieldsType\": \"FieldsV1\",\n \"fieldsV1\": {\n \"f:count\": {},\n \"f:firstTimestamp\": {},\n \"f:involvedObject\": {\n \"f:apiVersion\": {},\n \"f:fieldPath\": {},\n \"f:kind\": {},\n \"f:name\": {},\n \"f:namespace\": {},\n \"f:resourceVersion\": {},\n \"f:uid\": {}\n },\n \"f:lastTimestamp\": {},\n \"f:message\": {},\n \"f:reason\": {},\n \"f:source\": {\n \"f:component\": {},\n \"f:host\": {}\n },\n \"f:type\": {}\n }\n }\n ]\n },\n \"involvedObject\": {\n \"kind\": \"Pod\",\n \"namespace\": \"kube-system\",\n \"name\": \"critical-pod\",\n \"uid\": \"6009cfa9-7dc0-4358-a99a-4c7e5cdb0d1e\",\n \"apiVersion\": \"v1\",\n \"resourceVersion\": \"32561\",\n \"fieldPath\": \"spec.containers{critical-pod}\"\n },\n \"reason\": \"Created\",\n \"message\": \"Created container critical-pod\",\n \"source\": {\n \"component\": \"kubelet\",\n \"host\": \"kali-worker\"\n },\n \"firstTimestamp\": \"2021-05-21T16:11:40Z\",\n \"lastTimestamp\": \"2021-05-21T16:11:40Z\",\n \"count\": 1,\n \"type\": \"Normal\",\n \"eventTime\": null,\n \"reportingComponent\": \"\",\n \"reportingInstance\": \"\"\n },\n {\n \"metadata\": {\n \"name\": \"critical-pod.168120e3332c98c8\",\n \"namespace\": \"kube-system\",\n \"selfLink\": \"/api/v1/namespaces/kube-system/events/critical-pod.168120e3332c98c8\",\n \"uid\": \"8d193b15-971a-4bc7-aa26-1c727ad71695\",\n \"resourceVersion\": \"32570\",\n \"creationTimestamp\": \"2021-05-21T16:11:41Z\",\n \"managedFields\": [\n {\n \"manager\": \"kubelet\",\n \"operation\": \"Update\",\n \"apiVersion\": \"v1\",\n \"time\": \"2021-05-21T16:11:41Z\",\n \"fieldsType\": \"FieldsV1\",\n \"fieldsV1\": {\n 
\"f:count\": {},\n \"f:firstTimestamp\": {},\n \"f:involvedObject\": {\n \"f:apiVersion\": {},\n \"f:fieldPath\": {},\n \"f:kind\": {},\n \"f:name\": {},\n \"f:namespace\": {},\n \"f:resourceVersion\": {},\n \"f:uid\": {}\n },\n \"f:lastTimestamp\": {},\n \"f:message\": {},\n \"f:reason\": {},\n \"f:source\": {\n \"f:component\": {},\n \"f:host\": {}\n },\n \"f:type\": {}\n }\n }\n ]\n },\n \"involvedObject\": {\n \"kind\": \"Pod\",\n \"namespace\": \"kube-system\",\n \"name\": \"critical-pod\",\n \"uid\": \"6009cfa9-7dc0-4358-a99a-4c7e5cdb0d1e\",\n \"apiVersion\": \"v1\",\n \"resourceVersion\": \"32561\",\n \"fieldPath\": \"spec.containers{critical-pod}\"\n },\n \"reason\": \"Started\",\n \"message\": \"Started container critical-pod\",\n \"source\": {\n \"component\": \"kubelet\",\n \"host\": \"kali-worker\"\n },\n \"firstTimestamp\": \"2021-05-21T16:11:41Z\",\n \"lastTimestamp\": \"2021-05-21T16:11:41Z\",\n \"count\": 1,\n \"type\": \"Normal\",\n \"eventTime\": null,\n \"reportingComponent\": \"\",\n \"reportingInstance\": \"\"\n },\n {\n \"metadata\": {\n \"name\": \"critical-pod.168120e35d99f8bd\",\n \"namespace\": \"kube-system\",\n \"selfLink\": \"/api/v1/namespaces/kube-system/events/critical-pod.168120e35d99f8bd\",\n \"uid\": \"b60523bf-a092-486d-941c-b4afc9fb10c3\",\n \"resourceVersion\": \"32575\",\n \"creationTimestamp\": \"2021-05-21T16:11:41Z\",\n \"managedFields\": [\n {\n \"manager\": \"kubelet\",\n \"operation\": \"Update\",\n \"apiVersion\": \"v1\",\n \"time\": \"2021-05-21T16:11:41Z\",\n \"fieldsType\": \"FieldsV1\",\n \"fieldsV1\": {\n \"f:count\": {},\n \"f:firstTimestamp\": {},\n \"f:involvedObject\": {\n \"f:apiVersion\": {},\n \"f:fieldPath\": {},\n \"f:kind\": {},\n \"f:name\": {},\n \"f:namespace\": {},\n \"f:resourceVersion\": {},\n \"f:uid\": {}\n },\n \"f:lastTimestamp\": {},\n \"f:message\": {},\n \"f:reason\": {},\n \"f:source\": {\n \"f:component\": {},\n \"f:host\": {}\n },\n \"f:type\": {}\n }\n }\n ]\n },\n \"involvedObject\": {\n 
\"kind\": \"Pod\",\n \"namespace\": \"kube-system\",\n \"name\": \"critical-pod\",\n \"uid\": \"6009cfa9-7dc0-4358-a99a-4c7e5cdb0d1e\",\n \"apiVersion\": \"v1\",\n \"resourceVersion\": \"32561\",\n \"fieldPath\": \"spec.containers{critical-pod}\"\n },\n \"reason\": \"Killing\",\n \"message\": \"Stopping container critical-pod\",\n \"source\": {\n \"component\": \"kubelet\",\n \"host\": \"kali-worker\"\n },\n \"firstTimestamp\": \"2021-05-21T16:11:41Z\",\n \"lastTimestamp\": \"2021-05-21T16:11:41Z\",\n \"count\": 1,\n \"type\": \"Normal\",\n \"eventTime\": null,\n \"reportingComponent\": \"\",\n \"reportingInstance\": \"\"\n },\n {\n \"metadata\": {\n \"name\": \"critical-pod.168120e386e49705\",\n \"namespace\": \"kube-system\",\n \"selfLink\": \"/api/v1/namespaces/kube-system/events/critical-pod.168120e386e49705\",\n \"uid\": \"d300f349-062d-497a-a00f-1eaddd9651db\",\n \"resourceVersion\": \"32618\",\n \"creationTimestamp\": \"2021-05-21T16:11:42Z\",\n \"managedFields\": [\n {\n \"manager\": \"kubelet\",\n \"operation\": \"Update\",\n \"apiVersion\": \"v1\",\n \"time\": \"2021-05-21T16:11:42Z\",\n \"fieldsType\": \"FieldsV1\",\n \"fieldsV1\": {\n \"f:count\": {},\n \"f:firstTimestamp\": {},\n \"f:involvedObject\": {\n \"f:apiVersion\": {},\n \"f:kind\": {},\n \"f:name\": {},\n \"f:namespace\": {},\n \"f:resourceVersion\": {},\n \"f:uid\": {}\n },\n \"f:lastTimestamp\": {},\n \"f:message\": {},\n \"f:reason\": {},\n \"f:source\": {\n \"f:component\": {},\n \"f:host\": {}\n },\n \"f:type\": {}\n }\n }\n ]\n },\n \"involvedObject\": {\n \"kind\": \"Pod\",\n \"namespace\": \"kube-system\",\n \"name\": \"critical-pod\",\n \"uid\": \"6009cfa9-7dc0-4358-a99a-4c7e5cdb0d1e\",\n \"apiVersion\": \"v1\",\n \"resourceVersion\": \"32561\"\n },\n \"reason\": \"SandboxChanged\",\n \"message\": \"Pod sandbox changed, it will be killed and re-created.\",\n \"source\": {\n \"component\": \"kubelet\",\n \"host\": \"kali-worker\"\n },\n \"firstTimestamp\": \"2021-05-21T16:11:42Z\",\n 
\"lastTimestamp\": \"2021-05-21T16:11:42Z\",\n \"count\": 1,\n \"type\": \"Normal\",\n \"eventTime\": null,\n \"reportingComponent\": \"\",\n \"reportingInstance\": \"\"\n },\n {\n \"metadata\": {\n \"name\": \"critical-pod.168120e38dfd5b16\",\n \"namespace\": \"kube-system\",\n \"selfLink\": \"/api/v1/namespaces/kube-system/events/critical-pod.168120e38dfd5b16\",\n \"uid\": \"1c7405c0-41c4-47e6-ba6c-ee5598e8b215\",\n \"resourceVersion\": \"32623\",\n \"creationTimestamp\": \"2021-05-21T16:11:42Z\",\n \"managedFields\": [\n {\n \"manager\": \"kubelet\",\n \"operation\": \"Update\",\n \"apiVersion\": \"v1\",\n \"time\": \"2021-05-21T16:11:42Z\",\n \"fieldsType\": \"FieldsV1\",\n \"fieldsV1\": {\n \"f:count\": {},\n \"f:firstTimestamp\": {},\n \"f:involvedObject\": {\n \"f:apiVersion\": {},\n \"f:kind\": {},\n \"f:name\": {},\n \"f:namespace\": {},\n \"f:resourceVersion\": {},\n \"f:uid\": {}\n },\n \"f:lastTimestamp\": {},\n \"f:message\": {},\n \"f:reason\": {},\n \"f:source\": {\n \"f:component\": {},\n \"f:host\": {}\n },\n \"f:type\": {}\n }\n }\n ]\n },\n \"involvedObject\": {\n \"kind\": \"Pod\",\n \"namespace\": \"kube-system\",\n \"name\": \"critical-pod\",\n \"uid\": \"6009cfa9-7dc0-4358-a99a-4c7e5cdb0d1e\",\n \"apiVersion\": \"v1\",\n \"resourceVersion\": \"32561\"\n },\n \"reason\": \"FailedCreatePodSandBox\",\n \"message\": \"Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox \\\"28d4a6ae95d139f8aa271a8f5f9365906c19a4872dc4980883334561c366298c\\\": Multus: [kube-system/critical-pod]: error getting pod: pods \\\"critical-pod\\\" not found\",\n \"source\": {\n \"component\": \"kubelet\",\n \"host\": \"kali-worker\"\n },\n \"firstTimestamp\": \"2021-05-21T16:11:42Z\",\n \"lastTimestamp\": \"2021-05-21T16:11:42Z\",\n \"count\": 1,\n \"type\": \"Warning\",\n \"eventTime\": null,\n \"reportingComponent\": \"\",\n \"reportingInstance\": \"\"\n }\n ]\n}\n{\n \"kind\": \"ReplicationControllerList\",\n 
\"apiVersion\": \"v1\",\n \"metadata\": {\n \"selfLink\": \"/api/v1/namespaces/kube-system/replicationcontrollers\",\n \"resourceVersion\": \"49000\"\n },\n \"items\": []\n}\n{\n \"kind\": \"ServiceList\",\n \"apiVersion\": \"v1\",\n \"metadata\": {\n \"selfLink\": \"/api/v1/namespaces/kube-system/services\",\n \"resourceVersion\": \"49000\"\n },\n \"items\": [\n {\n \"metadata\": {\n \"name\": \"kube-dns\",\n \"namespace\": \"kube-system\",\n \"selfLink\": \"/api/v1/namespaces/kube-system/services/kube-dns\",\n \"uid\": \"5bbe3a33-f71c-4606-ba35-a80b2e056509\",\n \"resourceVersion\": \"190\",\n \"creationTimestamp\": \"2021-05-21T15:13:17Z\",\n \"labels\": {\n \"k8s-app\": \"kube-dns\",\n \"kubernetes.io/cluster-service\": \"true\",\n \"kubernetes.io/name\": \"KubeDNS\"\n },\n \"annotations\": {\n \"prometheus.io/port\": \"9153\",\n \"prometheus.io/scrape\": \"true\"\n },\n \"managedFields\": [\n {\n \"manager\": \"kubeadm\",\n \"operation\": \"Update\",\n \"apiVersion\": \"v1\",\n \"time\": \"2021-05-21T15:13:17Z\",\n \"fieldsType\": \"FieldsV1\",\n \"fieldsV1\": {\n \"f:metadata\": {\n \"f:annotations\": {\n \".\": {},\n \"f:prometheus.io/port\": {},\n \"f:prometheus.io/scrape\": {}\n },\n \"f:labels\": {\n \".\": {},\n \"f:k8s-app\": {},\n \"f:kubernetes.io/cluster-service\": {},\n \"f:kubernetes.io/name\": {}\n }\n },\n \"f:spec\": {\n \"f:clusterIP\": {},\n \"f:ports\": {\n \".\": {},\n \"k:{\\\"port\\\":53,\\\"protocol\\\":\\\"TCP\\\"}\": {\n \".\": {},\n \"f:name\": {},\n \"f:port\": {},\n \"f:protocol\": {},\n \"f:targetPort\": {}\n },\n \"k:{\\\"port\\\":53,\\\"protocol\\\":\\\"UDP\\\"}\": {\n \".\": {},\n \"f:name\": {},\n \"f:port\": {},\n \"f:protocol\": {},\n \"f:targetPort\": {}\n },\n \"k:{\\\"port\\\":9153,\\\"protocol\\\":\\\"TCP\\\"}\": {\n \".\": {},\n \"f:name\": {},\n \"f:port\": {},\n \"f:protocol\": {},\n \"f:targetPort\": {}\n }\n },\n \"f:selector\": {\n \".\": {},\n \"f:k8s-app\": {}\n },\n \"f:sessionAffinity\": {},\n \"f:type\": {}\n 
}\n }\n }\n ]\n },\n \"spec\": {\n \"ports\": [\n {\n \"name\": \"dns\",\n \"protocol\": \"UDP\",\n \"port\": 53,\n \"targetPort\": 53\n },\n {\n \"name\": \"dns-tcp\",\n \"protocol\": \"TCP\",\n \"port\": 53,\n \"targetPort\": 53\n },\n {\n \"name\": \"metrics\",\n \"protocol\": \"TCP\",\n \"port\": 9153,\n \"targetPort\": 9153\n }\n ],\n \"selector\": {\n \"k8s-app\": \"kube-dns\"\n },\n \"clusterIP\": \"10.96.0.10\",\n \"type\": \"ClusterIP\",\n \"sessionAffinity\": \"None\"\n },\n \"status\": {\n \"loadBalancer\": {}\n }\n }\n ]\n}\n{\n \"kind\": \"DaemonSetList\",\n \"apiVersion\": \"apps/v1\",\n \"metadata\": {\n \"selfLink\": \"/apis/apps/v1/namespaces/kube-system/daemonsets\",\n \"resourceVersion\": \"49000\"\n },\n \"items\": [\n {\n \"metadata\": {\n \"name\": \"create-loop-devs\",\n \"namespace\": \"kube-system\",\n \"selfLink\": \"/apis/apps/v1/namespaces/kube-system/daemonsets/create-loop-devs\",\n \"uid\": \"14b411a5-28ad-4d2c-8713-6de6f2f844d8\",\n \"resourceVersion\": \"1390\",\n \"generation\": 1,\n \"creationTimestamp\": \"2021-05-21T15:16:01Z\",\n \"labels\": {\n \"app\": \"create-loop-devs\"\n },\n \"annotations\": {\n \"deprecated.daemonset.template.generation\": \"1\",\n \"kubectl.kubernetes.io/last-applied-configuration\": \"{\\\"apiVersion\\\":\\\"apps/v1\\\",\\\"kind\\\":\\\"DaemonSet\\\",\\\"metadata\\\":{\\\"annotations\\\":{},\\\"labels\\\":{\\\"app\\\":\\\"create-loop-devs\\\"},\\\"name\\\":\\\"create-loop-devs\\\",\\\"namespace\\\":\\\"kube-system\\\"},\\\"spec\\\":{\\\"selector\\\":{\\\"matchLabels\\\":{\\\"name\\\":\\\"create-loop-devs\\\"}},\\\"template\\\":{\\\"metadata\\\":{\\\"labels\\\":{\\\"name\\\":\\\"create-loop-devs\\\"}},\\\"spec\\\":{\\\"containers\\\":[{\\\"command\\\":[\\\"sh\\\",\\\"-c\\\",\\\"while true; do\\\\n for i in $(seq 0 1000); do\\\\n if ! 
[ -e /dev/loop$i ]; then\\\\n mknod /dev/loop$i b 7 $i\\\\n fi\\\\n done\\\\n sleep 100000000\\\\ndone\\\\n\\\"],\\\"image\\\":\\\"alpine:3.6\\\",\\\"imagePullPolicy\\\":\\\"IfNotPresent\\\",\\\"name\\\":\\\"loopdev\\\",\\\"resources\\\":{},\\\"securityContext\\\":{\\\"privileged\\\":true},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/dev\\\",\\\"name\\\":\\\"dev\\\"}]}],\\\"tolerations\\\":[{\\\"effect\\\":\\\"NoSchedule\\\",\\\"operator\\\":\\\"Exists\\\"}],\\\"volumes\\\":[{\\\"hostPath\\\":{\\\"path\\\":\\\"/dev\\\"},\\\"name\\\":\\\"dev\\\"}]}}}}\\n\"\n },\n \"managedFields\": [\n {\n \"manager\": \"kubectl-client-side-apply\",\n \"operation\": \"Update\",\n \"apiVersion\": \"apps/v1\",\n \"time\": \"2021-05-21T15:16:01Z\",\n \"fieldsType\": \"FieldsV1\",\n \"fieldsV1\": {\n \"f:metadata\": {\n \"f:annotations\": {\n \".\": {},\n \"f:deprecated.daemonset.template.generation\": {},\n \"f:kubectl.kubernetes.io/last-applied-configuration\": {}\n },\n \"f:labels\": {\n \".\": {},\n \"f:app\": {}\n }\n },\n \"f:spec\": {\n \"f:revisionHistoryLimit\": {},\n \"f:selector\": {\n \"f:matchLabels\": {\n \".\": {},\n \"f:name\": {}\n }\n },\n \"f:template\": {\n \"f:metadata\": {\n \"f:labels\": {\n \".\": {},\n \"f:name\": {}\n }\n },\n \"f:spec\": {\n \"f:containers\": {\n \"k:{\\\"name\\\":\\\"loopdev\\\"}\": {\n \".\": {},\n \"f:command\": {},\n \"f:image\": {},\n \"f:imagePullPolicy\": {},\n \"f:name\": {},\n \"f:resources\": {},\n \"f:securityContext\": {\n \".\": {},\n \"f:privileged\": {}\n },\n \"f:terminationMessagePath\": {},\n \"f:terminationMessagePolicy\": {},\n \"f:volumeMounts\": {\n \".\": {},\n \"k:{\\\"mountPath\\\":\\\"/dev\\\"}\": {\n \".\": {},\n \"f:mountPath\": {},\n \"f:name\": {}\n }\n }\n }\n },\n \"f:dnsPolicy\": {},\n \"f:restartPolicy\": {},\n \"f:schedulerName\": {},\n \"f:securityContext\": {},\n \"f:terminationGracePeriodSeconds\": {},\n \"f:tolerations\": {},\n \"f:volumes\": {\n \".\": {},\n \"k:{\\\"name\\\":\\\"dev\\\"}\": {\n \".\": 
{},\n \"f:hostPath\": {\n \".\": {},\n \"f:path\": {},\n \"f:type\": {}\n },\n \"f:name\": {}\n }\n }\n }\n },\n \"f:updateStrategy\": {\n \"f:rollingUpdate\": {\n \".\": {},\n \"f:maxUnavailable\": {}\n },\n \"f:type\": {}\n }\n }\n }\n },\n {\n \"manager\": \"kube-controller-manager\",\n \"operation\": \"Update\",\n \"apiVersion\": \"apps/v1\",\n \"time\": \"2021-05-21T15:16:06Z\",\n \"fieldsType\": \"FieldsV1\",\n \"fieldsV1\": {\n \"f:status\": {\n \"f:currentNumberScheduled\": {},\n \"f:desiredNumberScheduled\": {},\n \"f:numberAvailable\": {},\n \"f:numberReady\": {},\n \"f:observedGeneration\": {},\n \"f:updatedNumberScheduled\": {}\n }\n }\n }\n ]\n },\n \"spec\": {\n \"selector\": {\n \"matchLabels\": {\n \"name\": \"create-loop-devs\"\n }\n },\n \"template\": {\n \"metadata\": {\n \"creationTimestamp\": null,\n \"labels\": {\n \"name\": \"create-loop-devs\"\n }\n },\n \"spec\": {\n \"volumes\": [\n {\n \"name\": \"dev\",\n \"hostPath\": {\n \"path\": \"/dev\",\n \"type\": \"\"\n }\n }\n ],\n \"containers\": [\n {\n \"name\": \"loopdev\",\n \"image\": \"alpine:3.6\",\n \"command\": [\n \"sh\",\n \"-c\",\n \"while true; do\\n for i in $(seq 0 1000); do\\n if ! 
[ -e /dev/loop$i ]; then\\n mknod /dev/loop$i b 7 $i\\n fi\\n done\\n sleep 100000000\\ndone\\n\"\n ],\n \"resources\": {},\n \"volumeMounts\": [\n {\n \"name\": \"dev\",\n \"mountPath\": \"/dev\"\n }\n ],\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"securityContext\": {\n \"privileged\": true\n }\n }\n ],\n \"restartPolicy\": \"Always\",\n \"terminationGracePeriodSeconds\": 30,\n \"dnsPolicy\": \"ClusterFirst\",\n \"securityContext\": {},\n \"schedulerName\": \"default-scheduler\",\n \"tolerations\": [\n {\n \"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n }\n ]\n }\n },\n \"updateStrategy\": {\n \"type\": \"RollingUpdate\",\n \"rollingUpdate\": {\n \"maxUnavailable\": 1\n }\n },\n \"revisionHistoryLimit\": 10\n },\n \"status\": {\n \"currentNumberScheduled\": 3,\n \"numberMisscheduled\": 0,\n \"desiredNumberScheduled\": 3,\n \"numberReady\": 3,\n \"observedGeneration\": 1,\n \"updatedNumberScheduled\": 3,\n \"numberAvailable\": 3\n }\n },\n {\n \"metadata\": {\n \"name\": \"kindnet\",\n \"namespace\": \"kube-system\",\n \"selfLink\": \"/apis/apps/v1/namespaces/kube-system/daemonsets/kindnet\",\n \"uid\": \"4475fe22-8df5-4436-bc1d-18482df5a443\",\n \"resourceVersion\": \"646\",\n \"generation\": 1,\n \"creationTimestamp\": \"2021-05-21T15:13:19Z\",\n \"labels\": {\n \"app\": \"kindnet\",\n \"k8s-app\": \"kindnet\",\n \"tier\": \"node\"\n },\n \"annotations\": {\n \"deprecated.daemonset.template.generation\": \"1\"\n },\n \"managedFields\": [\n {\n \"manager\": \"kubectl-create\",\n \"operation\": \"Update\",\n \"apiVersion\": \"apps/v1\",\n \"time\": \"2021-05-21T15:13:19Z\",\n \"fieldsType\": \"FieldsV1\",\n \"fieldsV1\": {\n \"f:metadata\": {\n \"f:annotations\": {\n \".\": {},\n \"f:deprecated.daemonset.template.generation\": {}\n },\n \"f:labels\": {\n \".\": {},\n \"f:app\": {},\n \"f:k8s-app\": {},\n \"f:tier\": {}\n }\n },\n \"f:spec\": {\n 
\"f:revisionHistoryLimit\": {},\n \"f:selector\": {\n \"f:matchLabels\": {\n \".\": {},\n \"f:app\": {}\n }\n },\n \"f:template\": {\n \"f:metadata\": {\n \"f:labels\": {\n \".\": {},\n \"f:app\": {},\n \"f:k8s-app\": {},\n \"f:tier\": {}\n }\n },\n \"f:spec\": {\n \"f:containers\": {\n \"k:{\\\"name\\\":\\\"kindnet-cni\\\"}\": {\n \".\": {},\n \"f:env\": {\n \".\": {},\n \"k:{\\\"name\\\":\\\"CONTROL_PLANE_ENDPOINT\\\"}\": {\n \".\": {},\n \"f:name\": {},\n \"f:value\": {}\n },\n \"k:{\\\"name\\\":\\\"HOST_IP\\\"}\": {\n \".\": {},\n \"f:name\": {},\n \"f:valueFrom\": {\n \".\": {},\n \"f:fieldRef\": {\n \".\": {},\n \"f:apiVersion\": {},\n \"f:fieldPath\": {}\n }\n }\n },\n \"k:{\\\"name\\\":\\\"POD_IP\\\"}\": {\n \".\": {},\n \"f:name\": {},\n \"f:valueFrom\": {\n \".\": {},\n \"f:fieldRef\": {\n \".\": {},\n \"f:apiVersion\": {},\n \"f:fieldPath\": {}\n }\n }\n },\n \"k:{\\\"name\\\":\\\"POD_SUBNET\\\"}\": {\n \".\": {},\n \"f:name\": {},\n \"f:value\": {}\n }\n },\n \"f:image\": {},\n \"f:imagePullPolicy\": {},\n \"f:name\": {},\n \"f:resources\": {\n \".\": {},\n \"f:limits\": {\n \".\": {},\n \"f:cpu\": {},\n \"f:memory\": {}\n },\n \"f:requests\": {\n \".\": {},\n \"f:cpu\": {},\n \"f:memory\": {}\n }\n },\n \"f:securityContext\": {\n \".\": {},\n \"f:capabilities\": {\n \".\": {},\n \"f:add\": {}\n },\n \"f:privileged\": {}\n },\n \"f:terminationMessagePath\": {},\n \"f:terminationMessagePolicy\": {},\n \"f:volumeMounts\": {\n \".\": {},\n \"k:{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\"}\": {\n \".\": {},\n \"f:mountPath\": {},\n \"f:name\": {}\n },\n \"k:{\\\"mountPath\\\":\\\"/lib/modules\\\"}\": {\n \".\": {},\n \"f:mountPath\": {},\n \"f:name\": {},\n \"f:readOnly\": {}\n },\n \"k:{\\\"mountPath\\\":\\\"/run/xtables.lock\\\"}\": {\n \".\": {},\n \"f:mountPath\": {},\n \"f:name\": {}\n }\n }\n }\n },\n \"f:dnsPolicy\": {},\n \"f:hostNetwork\": {},\n \"f:restartPolicy\": {},\n \"f:schedulerName\": {},\n \"f:securityContext\": {},\n \"f:serviceAccount\": 
{},\n \"f:serviceAccountName\": {},\n \"f:terminationGracePeriodSeconds\": {},\n \"f:tolerations\": {},\n \"f:volumes\": {\n \".\": {},\n \"k:{\\\"name\\\":\\\"cni-cfg\\\"}\": {\n \".\": {},\n \"f:hostPath\": {\n \".\": {},\n \"f:path\": {},\n \"f:type\": {}\n },\n \"f:name\": {}\n },\n \"k:{\\\"name\\\":\\\"lib-modules\\\"}\": {\n \".\": {},\n \"f:hostPath\": {\n \".\": {},\n \"f:path\": {},\n \"f:type\": {}\n },\n \"f:name\": {}\n },\n \"k:{\\\"name\\\":\\\"xtables-lock\\\"}\": {\n \".\": {},\n \"f:hostPath\": {\n \".\": {},\n \"f:path\": {},\n \"f:type\": {}\n },\n \"f:name\": {}\n }\n }\n }\n },\n \"f:updateStrategy\": {\n \"f:rollingUpdate\": {\n \".\": {},\n \"f:maxUnavailable\": {}\n },\n \"f:type\": {}\n }\n }\n }\n },\n {\n \"manager\": \"kube-controller-manager\",\n \"operation\": \"Update\",\n \"apiVersion\": \"apps/v1\",\n \"time\": \"2021-05-21T15:13:55Z\",\n \"fieldsType\": \"FieldsV1\",\n \"fieldsV1\": {\n \"f:status\": {\n \"f:currentNumberScheduled\": {},\n \"f:desiredNumberScheduled\": {},\n \"f:numberAvailable\": {},\n \"f:numberReady\": {},\n \"f:observedGeneration\": {},\n \"f:updatedNumberScheduled\": {}\n }\n }\n }\n ]\n },\n \"spec\": {\n \"selector\": {\n \"matchLabels\": {\n \"app\": \"kindnet\"\n }\n },\n \"template\": {\n \"metadata\": {\n \"creationTimestamp\": null,\n \"labels\": {\n \"app\": \"kindnet\",\n \"k8s-app\": \"kindnet\",\n \"tier\": \"node\"\n }\n },\n \"spec\": {\n \"volumes\": [\n {\n \"name\": \"cni-cfg\",\n \"hostPath\": {\n \"path\": \"/etc/cni/net.d\",\n \"type\": \"\"\n }\n },\n {\n \"name\": \"xtables-lock\",\n \"hostPath\": {\n \"path\": \"/run/xtables.lock\",\n \"type\": \"FileOrCreate\"\n }\n },\n {\n \"name\": \"lib-modules\",\n \"hostPath\": {\n \"path\": \"/lib/modules\",\n \"type\": \"\"\n }\n }\n ],\n \"containers\": [\n {\n \"name\": \"kindnet-cni\",\n \"image\": \"docker.io/kindest/kindnetd:v20210326-1e038dc5\",\n \"env\": [\n {\n \"name\": \"HOST_IP\",\n \"valueFrom\": {\n \"fieldRef\": {\n 
\"apiVersion\": \"v1\",\n \"fieldPath\": \"status.hostIP\"\n }\n }\n },\n {\n \"name\": \"POD_IP\",\n \"valueFrom\": {\n \"fieldRef\": {\n \"apiVersion\": \"v1\",\n \"fieldPath\": \"status.podIP\"\n }\n }\n },\n {\n \"name\": \"POD_SUBNET\",\n \"value\": \"10.244.0.0/16\"\n },\n {\n \"name\": \"CONTROL_PLANE_ENDPOINT\",\n \"value\": \"kali-control-plane:6443\"\n }\n ],\n \"resources\": {\n \"limits\": {\n \"cpu\": \"100m\",\n \"memory\": \"50Mi\"\n },\n \"requests\": {\n \"cpu\": \"100m\",\n \"memory\": \"50Mi\"\n }\n },\n \"volumeMounts\": [\n {\n \"name\": \"cni-cfg\",\n \"mountPath\": \"/etc/cni/net.d\"\n },\n {\n \"name\": \"xtables-lock\",\n \"mountPath\": \"/run/xtables.lock\"\n },\n {\n \"name\": \"lib-modules\",\n \"readOnly\": true,\n \"mountPath\": \"/lib/modules\"\n }\n ],\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"securityContext\": {\n \"capabilities\": {\n \"add\": [\n \"NET_RAW\",\n \"NET_ADMIN\"\n ]\n },\n \"privileged\": false\n }\n }\n ],\n \"restartPolicy\": \"Always\",\n \"terminationGracePeriodSeconds\": 30,\n \"dnsPolicy\": \"ClusterFirst\",\n \"serviceAccountName\": \"kindnet\",\n \"serviceAccount\": \"kindnet\",\n \"hostNetwork\": true,\n \"securityContext\": {},\n \"schedulerName\": \"default-scheduler\",\n \"tolerations\": [\n {\n \"operator\": \"Exists\"\n }\n ]\n }\n },\n \"updateStrategy\": {\n \"type\": \"RollingUpdate\",\n \"rollingUpdate\": {\n \"maxUnavailable\": 1\n }\n },\n \"revisionHistoryLimit\": 10\n },\n \"status\": {\n \"currentNumberScheduled\": 3,\n \"numberMisscheduled\": 0,\n \"desiredNumberScheduled\": 3,\n \"numberReady\": 3,\n \"observedGeneration\": 1,\n \"updatedNumberScheduled\": 3,\n \"numberAvailable\": 3\n }\n },\n {\n \"metadata\": {\n \"name\": \"kube-multus-ds\",\n \"namespace\": \"kube-system\",\n \"selfLink\": \"/apis/apps/v1/namespaces/kube-system/daemonsets/kube-multus-ds\",\n \"uid\": 
\"928bc64f-c0c9-475a-b436-4ec77811dd11\",\n \"resourceVersion\": \"1615\",\n \"generation\": 1,\n \"creationTimestamp\": \"2021-05-21T15:16:02Z\",\n \"labels\": {\n \"app\": \"multus\",\n \"name\": \"multus\",\n \"tier\": \"node\"\n },\n \"annotations\": {\n \"deprecated.daemonset.template.generation\": \"1\",\n \"kubectl.kubernetes.io/last-applied-configuration\": \"{\\\"apiVersion\\\":\\\"apps/v1\\\",\\\"kind\\\":\\\"DaemonSet\\\",\\\"metadata\\\":{\\\"annotations\\\":{},\\\"labels\\\":{\\\"app\\\":\\\"multus\\\",\\\"name\\\":\\\"multus\\\",\\\"tier\\\":\\\"node\\\"},\\\"name\\\":\\\"kube-multus-ds\\\",\\\"namespace\\\":\\\"kube-system\\\"},\\\"spec\\\":{\\\"selector\\\":{\\\"matchLabels\\\":{\\\"name\\\":\\\"multus\\\"}},\\\"template\\\":{\\\"metadata\\\":{\\\"labels\\\":{\\\"app\\\":\\\"multus\\\",\\\"name\\\":\\\"multus\\\",\\\"tier\\\":\\\"node\\\"}},\\\"spec\\\":{\\\"containers\\\":[{\\\"args\\\":[\\\"--multus-conf-file=auto\\\",\\\"--cni-version=0.3.1\\\"],\\\"command\\\":[\\\"/entrypoint.sh\\\"],\\\"image\\\":\\\"ghcr.io/k8snetworkplumbingwg/multus-cni:stable\\\",\\\"name\\\":\\\"kube-multus\\\",\\\"resources\\\":{\\\"limits\\\":{\\\"cpu\\\":\\\"100m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"requests\\\":{\\\"cpu\\\":\\\"100m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"securityContext\\\":{\\\"privileged\\\":true},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"cni\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/tmp/multus-conf\\\",\\\"name\\\":\\\"multus-cfg\\\"}]}],\\\"hostNetwork\\\":true,\\\"serviceAccountName\\\":\\\"multus\\\",\\\"terminationGracePeriodSeconds\\\":10,\\\"tolerations\\\":[{\\\"effect\\\":\\\"NoSchedule\\\",\\\"operator\\\":\\\"Exists\\\"}],\\\"volumes\\\":[{\\\"hostPath\\\":{\\\"path\\\":\\\"/etc/cni/net.d\\\"},\\\"name\\\":\\\"cni\\\"},{\\\"hostPath\\\":{\\\"path\\\":\\\"/opt/cni/bin\\\"},\\\"name\\\":\\\"cnibin\\\"},{\\\"configMap\\\":{\\\"items\\\":[{\
\\"key\\\":\\\"cni-conf.json\\\",\\\"path\\\":\\\"70-multus.conf\\\"}],\\\"name\\\":\\\"multus-cni-config\\\"},\\\"name\\\":\\\"multus-cfg\\\"}]}},\\\"updateStrategy\\\":{\\\"type\\\":\\\"RollingUpdate\\\"}}}\\n\"\n },\n \"managedFields\": [\n {\n \"manager\": \"kubectl-client-side-apply\",\n \"operation\": \"Update\",\n \"apiVersion\": \"apps/v1\",\n \"time\": \"2021-05-21T15:16:02Z\",\n \"fieldsType\": \"FieldsV1\",\n \"fieldsV1\": {\n \"f:metadata\": {\n \"f:annotations\": {\n \".\": {},\n \"f:deprecated.daemonset.template.generation\": {},\n \"f:kubectl.kubernetes.io/last-applied-configuration\": {}\n },\n \"f:labels\": {\n \".\": {},\n \"f:app\": {},\n \"f:name\": {},\n \"f:tier\": {}\n }\n },\n \"f:spec\": {\n \"f:revisionHistoryLimit\": {},\n \"f:selector\": {\n \"f:matchLabels\": {\n \".\": {},\n \"f:name\": {}\n }\n },\n \"f:template\": {\n \"f:metadata\": {\n \"f:labels\": {\n \".\": {},\n \"f:app\": {},\n \"f:name\": {},\n \"f:tier\": {}\n }\n },\n \"f:spec\": {\n \"f:containers\": {\n \"k:{\\\"name\\\":\\\"kube-multus\\\"}\": {\n \".\": {},\n \"f:args\": {},\n \"f:command\": {},\n \"f:image\": {},\n \"f:imagePullPolicy\": {},\n \"f:name\": {},\n \"f:resources\": {\n \".\": {},\n \"f:limits\": {\n \".\": {},\n \"f:cpu\": {},\n \"f:memory\": {}\n },\n \"f:requests\": {\n \".\": {},\n \"f:cpu\": {},\n \"f:memory\": {}\n }\n },\n \"f:securityContext\": {\n \".\": {},\n \"f:privileged\": {}\n },\n \"f:terminationMessagePath\": {},\n \"f:terminationMessagePolicy\": {},\n \"f:volumeMounts\": {\n \".\": {},\n \"k:{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\"}\": {\n \".\": {},\n \"f:mountPath\": {},\n \"f:name\": {}\n },\n \"k:{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\"}\": {\n \".\": {},\n \"f:mountPath\": {},\n \"f:name\": {}\n },\n \"k:{\\\"mountPath\\\":\\\"/tmp/multus-conf\\\"}\": {\n \".\": {},\n \"f:mountPath\": {},\n \"f:name\": {}\n }\n }\n }\n },\n \"f:dnsPolicy\": {},\n \"f:hostNetwork\": {},\n \"f:restartPolicy\": {},\n \"f:schedulerName\": {},\n 
\"f:securityContext\": {},\n \"f:serviceAccount\": {},\n \"f:serviceAccountName\": {},\n \"f:terminationGracePeriodSeconds\": {},\n \"f:tolerations\": {},\n \"f:volumes\": {\n \".\": {},\n \"k:{\\\"name\\\":\\\"cni\\\"}\": {\n \".\": {},\n \"f:hostPath\": {\n \".\": {},\n \"f:path\": {},\n \"f:type\": {}\n },\n \"f:name\": {}\n },\n \"k:{\\\"name\\\":\\\"cnibin\\\"}\": {\n \".\": {},\n \"f:hostPath\": {\n \".\": {},\n \"f:path\": {},\n \"f:type\": {}\n },\n \"f:name\": {}\n },\n \"k:{\\\"name\\\":\\\"multus-cfg\\\"}\": {\n \".\": {},\n \"f:configMap\": {\n \".\": {},\n \"f:defaultMode\": {},\n \"f:items\": {},\n \"f:name\": {}\n },\n \"f:name\": {}\n }\n }\n }\n },\n \"f:updateStrategy\": {\n \"f:rollingUpdate\": {\n \".\": {},\n \"f:maxUnavailable\": {}\n },\n \"f:type\": {}\n }\n }\n }\n },\n {\n \"manager\": \"kube-controller-manager\",\n \"operation\": \"Update\",\n \"apiVersion\": \"apps/v1\",\n \"time\": \"2021-05-21T15:16:41Z\",\n \"fieldsType\": \"FieldsV1\",\n \"fieldsV1\": {\n \"f:status\": {\n \"f:currentNumberScheduled\": {},\n \"f:desiredNumberScheduled\": {},\n \"f:numberAvailable\": {},\n \"f:numberReady\": {},\n \"f:observedGeneration\": {},\n \"f:updatedNumberScheduled\": {}\n }\n }\n }\n ]\n },\n \"spec\": {\n \"selector\": {\n \"matchLabels\": {\n \"name\": \"multus\"\n }\n },\n \"template\": {\n \"metadata\": {\n \"creationTimestamp\": null,\n \"labels\": {\n \"app\": \"multus\",\n \"name\": \"multus\",\n \"tier\": \"node\"\n }\n },\n \"spec\": {\n \"volumes\": [\n {\n \"name\": \"cni\",\n \"hostPath\": {\n \"path\": \"/etc/cni/net.d\",\n \"type\": \"\"\n }\n },\n {\n \"name\": \"cnibin\",\n \"hostPath\": {\n \"path\": \"/opt/cni/bin\",\n \"type\": \"\"\n }\n },\n {\n \"name\": \"multus-cfg\",\n \"configMap\": {\n \"name\": \"multus-cni-config\",\n \"items\": [\n {\n \"key\": \"cni-conf.json\",\n \"path\": \"70-multus.conf\"\n }\n ],\n \"defaultMode\": 420\n }\n }\n ],\n \"containers\": [\n {\n \"name\": \"kube-multus\",\n \"image\": 
\"ghcr.io/k8snetworkplumbingwg/multus-cni:stable\",\n \"command\": [\n \"/entrypoint.sh\"\n ],\n \"args\": [\n \"--multus-conf-file=auto\",\n \"--cni-version=0.3.1\"\n ],\n \"resources\": {\n \"limits\": {\n \"cpu\": \"100m\",\n \"memory\": \"50Mi\"\n },\n \"requests\": {\n \"cpu\": \"100m\",\n \"memory\": \"50Mi\"\n }\n },\n \"volumeMounts\": [\n {\n \"name\": \"cni\",\n \"mountPath\": \"/host/etc/cni/net.d\"\n },\n {\n \"name\": \"cnibin\",\n \"mountPath\": \"/host/opt/cni/bin\"\n },\n {\n \"name\": \"multus-cfg\",\n \"mountPath\": \"/tmp/multus-conf\"\n }\n ],\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"securityContext\": {\n \"privileged\": true\n }\n }\n ],\n \"restartPolicy\": \"Always\",\n \"terminationGracePeriodSeconds\": 10,\n \"dnsPolicy\": \"ClusterFirst\",\n \"serviceAccountName\": \"multus\",\n \"serviceAccount\": \"multus\",\n \"hostNetwork\": true,\n \"securityContext\": {},\n \"schedulerName\": \"default-scheduler\",\n \"tolerations\": [\n {\n \"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n }\n ]\n }\n },\n \"updateStrategy\": {\n \"type\": \"RollingUpdate\",\n \"rollingUpdate\": {\n \"maxUnavailable\": 1\n }\n },\n \"revisionHistoryLimit\": 10\n },\n \"status\": {\n \"currentNumberScheduled\": 3,\n \"numberMisscheduled\": 0,\n \"desiredNumberScheduled\": 3,\n \"numberReady\": 3,\n \"observedGeneration\": 1,\n \"updatedNumberScheduled\": 3,\n \"numberAvailable\": 3\n }\n },\n {\n \"metadata\": {\n \"name\": \"kube-proxy\",\n \"namespace\": \"kube-system\",\n \"selfLink\": \"/apis/apps/v1/namespaces/kube-system/daemonsets/kube-proxy\",\n \"uid\": \"41b3104f-a576-4641-b321-1d0dfa73f9da\",\n \"resourceVersion\": \"613\",\n \"generation\": 1,\n \"creationTimestamp\": \"2021-05-21T15:13:17Z\",\n \"labels\": {\n \"k8s-app\": \"kube-proxy\"\n },\n \"annotations\": {\n \"deprecated.daemonset.template.generation\": \"1\"\n },\n \"managedFields\": [\n 
{\n \"manager\": \"kubeadm\",\n \"operation\": \"Update\",\n \"apiVersion\": \"apps/v1\",\n \"time\": \"2021-05-21T15:13:17Z\",\n \"fieldsType\": \"FieldsV1\",\n \"fieldsV1\": {\n \"f:metadata\": {\n \"f:annotations\": {\n \".\": {},\n \"f:deprecated.daemonset.template.generation\": {}\n },\n \"f:labels\": {\n \".\": {},\n \"f:k8s-app\": {}\n }\n },\n \"f:spec\": {\n \"f:revisionHistoryLimit\": {},\n \"f:selector\": {\n \"f:matchLabels\": {\n \".\": {},\n \"f:k8s-app\": {}\n }\n },\n \"f:template\": {\n \"f:metadata\": {\n \"f:labels\": {\n \".\": {},\n \"f:k8s-app\": {}\n }\n },\n \"f:spec\": {\n \"f:containers\": {\n \"k:{\\\"name\\\":\\\"kube-proxy\\\"}\": {\n \".\": {},\n \"f:command\": {},\n \"f:env\": {\n \".\": {},\n \"k:{\\\"name\\\":\\\"NODE_NAME\\\"}\": {\n \".\": {},\n \"f:name\": {},\n \"f:valueFrom\": {\n \".\": {},\n \"f:fieldRef\": {\n \".\": {},\n \"f:apiVersion\": {},\n \"f:fieldPath\": {}\n }\n }\n }\n },\n \"f:image\": {},\n \"f:imagePullPolicy\": {},\n \"f:name\": {},\n \"f:resources\": {},\n \"f:securityContext\": {\n \".\": {},\n \"f:privileged\": {}\n },\n \"f:terminationMessagePath\": {},\n \"f:terminationMessagePolicy\": {},\n \"f:volumeMounts\": {\n \".\": {},\n \"k:{\\\"mountPath\\\":\\\"/lib/modules\\\"}\": {\n \".\": {},\n \"f:mountPath\": {},\n \"f:name\": {},\n \"f:readOnly\": {}\n },\n \"k:{\\\"mountPath\\\":\\\"/run/xtables.lock\\\"}\": {\n \".\": {},\n \"f:mountPath\": {},\n \"f:name\": {}\n },\n \"k:{\\\"mountPath\\\":\\\"/var/lib/kube-proxy\\\"}\": {\n \".\": {},\n \"f:mountPath\": {},\n \"f:name\": {}\n }\n }\n }\n },\n \"f:dnsPolicy\": {},\n \"f:hostNetwork\": {},\n \"f:nodeSelector\": {\n \".\": {},\n \"f:kubernetes.io/os\": {}\n },\n \"f:priorityClassName\": {},\n \"f:restartPolicy\": {},\n \"f:schedulerName\": {},\n \"f:securityContext\": {},\n \"f:serviceAccount\": {},\n \"f:serviceAccountName\": {},\n \"f:terminationGracePeriodSeconds\": {},\n \"f:tolerations\": {},\n \"f:volumes\": {\n \".\": {},\n 
\"k:{\\\"name\\\":\\\"kube-proxy\\\"}\": {\n \".\": {},\n \"f:configMap\": {\n \".\": {},\n \"f:defaultMode\": {},\n \"f:name\": {}\n },\n \"f:name\": {}\n },\n \"k:{\\\"name\\\":\\\"lib-modules\\\"}\": {\n \".\": {},\n \"f:hostPath\": {\n \".\": {},\n \"f:path\": {},\n \"f:type\": {}\n },\n \"f:name\": {}\n },\n \"k:{\\\"name\\\":\\\"xtables-lock\\\"}\": {\n \".\": {},\n \"f:hostPath\": {\n \".\": {},\n \"f:path\": {},\n \"f:type\": {}\n },\n \"f:name\": {}\n }\n }\n }\n },\n \"f:updateStrategy\": {\n \"f:rollingUpdate\": {\n \".\": {},\n \"f:maxUnavailable\": {}\n },\n \"f:type\": {}\n }\n }\n }\n },\n {\n \"manager\": \"kube-controller-manager\",\n \"operation\": \"Update\",\n \"apiVersion\": \"apps/v1\",\n \"time\": \"2021-05-21T15:13:53Z\",\n \"fieldsType\": \"FieldsV1\",\n \"fieldsV1\": {\n \"f:status\": {\n \"f:currentNumberScheduled\": {},\n \"f:desiredNumberScheduled\": {},\n \"f:numberAvailable\": {},\n \"f:numberReady\": {},\n \"f:observedGeneration\": {},\n \"f:updatedNumberScheduled\": {}\n }\n }\n }\n ]\n },\n \"spec\": {\n \"selector\": {\n \"matchLabels\": {\n \"k8s-app\": \"kube-proxy\"\n }\n },\n \"template\": {\n \"metadata\": {\n \"creationTimestamp\": null,\n \"labels\": {\n \"k8s-app\": \"kube-proxy\"\n }\n },\n \"spec\": {\n \"volumes\": [\n {\n \"name\": \"kube-proxy\",\n \"configMap\": {\n \"name\": \"kube-proxy\",\n \"defaultMode\": 420\n }\n },\n {\n \"name\": \"xtables-lock\",\n \"hostPath\": {\n \"path\": \"/run/xtables.lock\",\n \"type\": \"FileOrCreate\"\n }\n },\n {\n \"name\": \"lib-modules\",\n \"hostPath\": {\n \"path\": \"/lib/modules\",\n \"type\": \"\"\n }\n }\n ],\n \"containers\": [\n {\n \"name\": \"kube-proxy\",\n \"image\": \"k8s.gcr.io/kube-proxy:v1.19.11\",\n \"command\": [\n \"/usr/local/bin/kube-proxy\",\n \"--config=/var/lib/kube-proxy/config.conf\",\n \"--hostname-override=$(NODE_NAME)\"\n ],\n \"env\": [\n {\n \"name\": \"NODE_NAME\",\n \"valueFrom\": {\n \"fieldRef\": {\n \"apiVersion\": \"v1\",\n \"fieldPath\": 
\"spec.nodeName\"\n }\n }\n }\n ],\n \"resources\": {},\n \"volumeMounts\": [\n {\n \"name\": \"kube-proxy\",\n \"mountPath\": \"/var/lib/kube-proxy\"\n },\n {\n \"name\": \"xtables-lock\",\n \"mountPath\": \"/run/xtables.lock\"\n },\n {\n \"name\": \"lib-modules\",\n \"readOnly\": true,\n \"mountPath\": \"/lib/modules\"\n }\n ],\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"securityContext\": {\n \"privileged\": true\n }\n }\n ],\n \"restartPolicy\": \"Always\",\n \"terminationGracePeriodSeconds\": 30,\n \"dnsPolicy\": \"ClusterFirst\",\n \"nodeSelector\": {\n \"kubernetes.io/os\": \"linux\"\n },\n \"serviceAccountName\": \"kube-proxy\",\n \"serviceAccount\": \"kube-proxy\",\n \"hostNetwork\": true,\n \"securityContext\": {},\n \"schedulerName\": \"default-scheduler\",\n \"tolerations\": [\n {\n \"key\": \"CriticalAddonsOnly\",\n \"operator\": \"Exists\"\n },\n {\n \"operator\": \"Exists\"\n }\n ],\n \"priorityClassName\": \"system-node-critical\"\n }\n },\n \"updateStrategy\": {\n \"type\": \"RollingUpdate\",\n \"rollingUpdate\": {\n \"maxUnavailable\": 1\n }\n },\n \"revisionHistoryLimit\": 10\n },\n \"status\": {\n \"currentNumberScheduled\": 3,\n \"numberMisscheduled\": 0,\n \"desiredNumberScheduled\": 3,\n \"numberReady\": 3,\n \"observedGeneration\": 1,\n \"updatedNumberScheduled\": 3,\n \"numberAvailable\": 3\n }\n },\n {\n \"metadata\": {\n \"name\": \"tune-sysctls\",\n \"namespace\": \"kube-system\",\n \"selfLink\": \"/apis/apps/v1/namespaces/kube-system/daemonsets/tune-sysctls\",\n \"uid\": \"6a0ccf82-b00e-4003-bf5f-ef1ddd0bf984\",\n \"resourceVersion\": \"1382\",\n \"generation\": 1,\n \"creationTimestamp\": \"2021-05-21T15:16:01Z\",\n \"labels\": {\n \"app\": \"tune-sysctls\"\n },\n \"annotations\": {\n \"deprecated.daemonset.template.generation\": \"1\",\n \"kubectl.kubernetes.io/last-applied-configuration\": 
\"{\\\"apiVersion\\\":\\\"apps/v1\\\",\\\"kind\\\":\\\"DaemonSet\\\",\\\"metadata\\\":{\\\"annotations\\\":{},\\\"labels\\\":{\\\"app\\\":\\\"tune-sysctls\\\"},\\\"name\\\":\\\"tune-sysctls\\\",\\\"namespace\\\":\\\"kube-system\\\"},\\\"spec\\\":{\\\"selector\\\":{\\\"matchLabels\\\":{\\\"name\\\":\\\"tune-sysctls\\\"}},\\\"template\\\":{\\\"metadata\\\":{\\\"labels\\\":{\\\"name\\\":\\\"tune-sysctls\\\"}},\\\"spec\\\":{\\\"containers\\\":[{\\\"command\\\":[\\\"sh\\\",\\\"-c\\\",\\\"while true; do\\\\n sysctl -w fs.inotify.max_user_watches=524288\\\\n sleep 10\\\\ndone\\\\n\\\"],\\\"image\\\":\\\"alpine:3.6\\\",\\\"imagePullPolicy\\\":\\\"IfNotPresent\\\",\\\"name\\\":\\\"setsysctls\\\",\\\"resources\\\":{},\\\"securityContext\\\":{\\\"privileged\\\":true},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/sys\\\",\\\"name\\\":\\\"sys\\\"}]}],\\\"hostIPC\\\":true,\\\"hostNetwork\\\":true,\\\"hostPID\\\":true,\\\"tolerations\\\":[{\\\"effect\\\":\\\"NoSchedule\\\",\\\"operator\\\":\\\"Exists\\\"}],\\\"volumes\\\":[{\\\"hostPath\\\":{\\\"path\\\":\\\"/sys\\\"},\\\"name\\\":\\\"sys\\\"}]}}}}\\n\"\n },\n \"managedFields\": [\n {\n \"manager\": \"kubectl-client-side-apply\",\n \"operation\": \"Update\",\n \"apiVersion\": \"apps/v1\",\n \"time\": \"2021-05-21T15:16:01Z\",\n \"fieldsType\": \"FieldsV1\",\n \"fieldsV1\": {\n \"f:metadata\": {\n \"f:annotations\": {\n \".\": {},\n \"f:deprecated.daemonset.template.generation\": {},\n \"f:kubectl.kubernetes.io/last-applied-configuration\": {}\n },\n \"f:labels\": {\n \".\": {},\n \"f:app\": {}\n }\n },\n \"f:spec\": {\n \"f:revisionHistoryLimit\": {},\n \"f:selector\": {\n \"f:matchLabels\": {\n \".\": {},\n \"f:name\": {}\n }\n },\n \"f:template\": {\n \"f:metadata\": {\n \"f:labels\": {\n \".\": {},\n \"f:name\": {}\n }\n },\n \"f:spec\": {\n \"f:containers\": {\n \"k:{\\\"name\\\":\\\"setsysctls\\\"}\": {\n \".\": {},\n \"f:command\": {},\n \"f:image\": {},\n \"f:imagePullPolicy\": {},\n \"f:name\": {},\n \"f:resources\": 
{},\n \"f:securityContext\": {\n \".\": {},\n \"f:privileged\": {}\n },\n \"f:terminationMessagePath\": {},\n \"f:terminationMessagePolicy\": {},\n \"f:volumeMounts\": {\n \".\": {},\n \"k:{\\\"mountPath\\\":\\\"/sys\\\"}\": {\n \".\": {},\n \"f:mountPath\": {},\n \"f:name\": {}\n }\n }\n }\n },\n \"f:dnsPolicy\": {},\n \"f:hostIPC\": {},\n \"f:hostNetwork\": {},\n \"f:hostPID\": {},\n \"f:restartPolicy\": {},\n \"f:schedulerName\": {},\n \"f:securityContext\": {},\n \"f:terminationGracePeriodSeconds\": {},\n \"f:tolerations\": {},\n \"f:volumes\": {\n \".\": {},\n \"k:{\\\"name\\\":\\\"sys\\\"}\": {\n \".\": {},\n \"f:hostPath\": {\n \".\": {},\n \"f:path\": {},\n \"f:type\": {}\n },\n \"f:name\": {}\n }\n }\n }\n },\n \"f:updateStrategy\": {\n \"f:rollingUpdate\": {\n \".\": {},\n \"f:maxUnavailable\": {}\n },\n \"f:type\": {}\n }\n }\n }\n },\n {\n \"manager\": \"kube-controller-manager\",\n \"operation\": \"Update\",\n \"apiVersion\": \"apps/v1\",\n \"time\": \"2021-05-21T15:16:06Z\",\n \"fieldsType\": \"FieldsV1\",\n \"fieldsV1\": {\n \"f:status\": {\n \"f:currentNumberScheduled\": {},\n \"f:desiredNumberScheduled\": {},\n \"f:numberAvailable\": {},\n \"f:numberReady\": {},\n \"f:observedGeneration\": {},\n \"f:updatedNumberScheduled\": {}\n }\n }\n }\n ]\n },\n \"spec\": {\n \"selector\": {\n \"matchLabels\": {\n \"name\": \"tune-sysctls\"\n }\n },\n \"template\": {\n \"metadata\": {\n \"creationTimestamp\": null,\n \"labels\": {\n \"name\": \"tune-sysctls\"\n }\n },\n \"spec\": {\n \"volumes\": [\n {\n \"name\": \"sys\",\n \"hostPath\": {\n \"path\": \"/sys\",\n \"type\": \"\"\n }\n }\n ],\n \"containers\": [\n {\n \"name\": \"setsysctls\",\n \"image\": \"alpine:3.6\",\n \"command\": [\n \"sh\",\n \"-c\",\n \"while true; do\\n sysctl -w fs.inotify.max_user_watches=524288\\n sleep 10\\ndone\\n\"\n ],\n \"resources\": {},\n \"volumeMounts\": [\n {\n \"name\": \"sys\",\n \"mountPath\": \"/sys\"\n }\n ],\n \"terminationMessagePath\": \"/dev/termination-log\",\n 
\"terminationMessagePolicy\": \"File\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"securityContext\": {\n \"privileged\": true\n }\n }\n ],\n \"restartPolicy\": \"Always\",\n \"terminationGracePeriodSeconds\": 30,\n \"dnsPolicy\": \"ClusterFirst\",\n \"hostNetwork\": true,\n \"hostPID\": true,\n \"hostIPC\": true,\n \"securityContext\": {},\n \"schedulerName\": \"default-scheduler\",\n \"tolerations\": [\n {\n \"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n }\n ]\n }\n },\n \"updateStrategy\": {\n \"type\": \"RollingUpdate\",\n \"rollingUpdate\": {\n \"maxUnavailable\": 1\n }\n },\n \"revisionHistoryLimit\": 10\n },\n \"status\": {\n \"currentNumberScheduled\": 3,\n \"numberMisscheduled\": 0,\n \"desiredNumberScheduled\": 3,\n \"numberReady\": 3,\n \"observedGeneration\": 1,\n \"updatedNumberScheduled\": 3,\n \"numberAvailable\": 3\n }\n }\n ]\n}\n{\n \"kind\": \"DeploymentList\",\n \"apiVersion\": \"apps/v1\",\n \"metadata\": {\n \"selfLink\": \"/apis/apps/v1/namespaces/kube-system/deployments\",\n \"resourceVersion\": \"49000\"\n },\n \"items\": [\n {\n \"metadata\": {\n \"name\": \"coredns\",\n \"namespace\": \"kube-system\",\n \"selfLink\": \"/apis/apps/v1/namespaces/kube-system/deployments/coredns\",\n \"uid\": \"479c98a9-6bae-4ed5-b08d-de4d0008b4de\",\n \"resourceVersion\": \"695\",\n \"generation\": 1,\n \"creationTimestamp\": \"2021-05-21T15:13:17Z\",\n \"labels\": {\n \"k8s-app\": \"kube-dns\"\n },\n \"annotations\": {\n \"deployment.kubernetes.io/revision\": \"1\"\n },\n \"managedFields\": [\n {\n \"manager\": \"kubeadm\",\n \"operation\": \"Update\",\n \"apiVersion\": \"apps/v1\",\n \"time\": \"2021-05-21T15:13:17Z\",\n \"fieldsType\": \"FieldsV1\",\n \"fieldsV1\": {\n \"f:metadata\": {\n \"f:labels\": {\n \".\": {},\n \"f:k8s-app\": {}\n }\n },\n \"f:spec\": {\n \"f:progressDeadlineSeconds\": {},\n \"f:replicas\": {},\n \"f:revisionHistoryLimit\": {},\n \"f:selector\": {\n \"f:matchLabels\": {\n \".\": {},\n \"f:k8s-app\": {}\n }\n },\n 
\"f:strategy\": {\n \"f:rollingUpdate\": {\n \".\": {},\n \"f:maxSurge\": {},\n \"f:maxUnavailable\": {}\n },\n \"f:type\": {}\n },\n \"f:template\": {\n \"f:metadata\": {\n \"f:labels\": {\n \".\": {},\n \"f:k8s-app\": {}\n }\n },\n \"f:spec\": {\n \"f:containers\": {\n \"k:{\\\"name\\\":\\\"coredns\\\"}\": {\n \".\": {},\n \"f:args\": {},\n \"f:image\": {},\n \"f:imagePullPolicy\": {},\n \"f:livenessProbe\": {\n \".\": {},\n \"f:failureThreshold\": {},\n \"f:httpGet\": {\n \".\": {},\n \"f:path\": {},\n \"f:port\": {},\n \"f:scheme\": {}\n },\n \"f:initialDelaySeconds\": {},\n \"f:periodSeconds\": {},\n \"f:successThreshold\": {},\n \"f:timeoutSeconds\": {}\n },\n \"f:name\": {},\n \"f:ports\": {\n \".\": {},\n \"k:{\\\"containerPort\\\":53,\\\"protocol\\\":\\\"TCP\\\"}\": {\n \".\": {},\n \"f:containerPort\": {},\n \"f:name\": {},\n \"f:protocol\": {}\n },\n \"k:{\\\"containerPort\\\":53,\\\"protocol\\\":\\\"UDP\\\"}\": {\n \".\": {},\n \"f:containerPort\": {},\n \"f:name\": {},\n \"f:protocol\": {}\n },\n \"k:{\\\"containerPort\\\":9153,\\\"protocol\\\":\\\"TCP\\\"}\": {\n \".\": {},\n \"f:containerPort\": {},\n \"f:name\": {},\n \"f:protocol\": {}\n }\n },\n \"f:readinessProbe\": {\n \".\": {},\n \"f:failureThreshold\": {},\n \"f:httpGet\": {\n \".\": {},\n \"f:path\": {},\n \"f:port\": {},\n \"f:scheme\": {}\n },\n \"f:periodSeconds\": {},\n \"f:successThreshold\": {},\n \"f:timeoutSeconds\": {}\n },\n \"f:resources\": {\n \".\": {},\n \"f:limits\": {\n \".\": {},\n \"f:memory\": {}\n },\n \"f:requests\": {\n \".\": {},\n \"f:cpu\": {},\n \"f:memory\": {}\n }\n },\n \"f:securityContext\": {\n \".\": {},\n \"f:allowPrivilegeEscalation\": {},\n \"f:capabilities\": {\n \".\": {},\n \"f:add\": {},\n \"f:drop\": {}\n },\n \"f:readOnlyRootFilesystem\": {}\n },\n \"f:terminationMessagePath\": {},\n \"f:terminationMessagePolicy\": {},\n \"f:volumeMounts\": {\n \".\": {},\n \"k:{\\\"mountPath\\\":\\\"/etc/coredns\\\"}\": {\n \".\": {},\n \"f:mountPath\": {},\n 
\"f:name\": {},\n \"f:readOnly\": {}\n }\n }\n }\n },\n \"f:dnsPolicy\": {},\n \"f:nodeSelector\": {\n \".\": {},\n \"f:kubernetes.io/os\": {}\n },\n \"f:priorityClassName\": {},\n \"f:restartPolicy\": {},\n \"f:schedulerName\": {},\n \"f:securityContext\": {},\n \"f:serviceAccount\": {},\n \"f:serviceAccountName\": {},\n \"f:terminationGracePeriodSeconds\": {},\n \"f:tolerations\": {},\n \"f:volumes\": {\n \".\": {},\n \"k:{\\\"name\\\":\\\"config-volume\\\"}\": {\n \".\": {},\n \"f:configMap\": {\n \".\": {},\n \"f:defaultMode\": {},\n \"f:items\": {},\n \"f:name\": {}\n },\n \"f:name\": {}\n }\n }\n }\n }\n }\n }\n },\n {\n \"manager\": \"kube-controller-manager\",\n \"operation\": \"Update\",\n \"apiVersion\": \"apps/v1\",\n \"time\": \"2021-05-21T15:14:02Z\",\n \"fieldsType\": \"FieldsV1\",\n \"fieldsV1\": {\n \"f:metadata\": {\n \"f:annotations\": {\n \".\": {},\n \"f:deployment.kubernetes.io/revision\": {}\n }\n },\n \"f:status\": {\n \"f:availableReplicas\": {},\n \"f:conditions\": {\n \".\": {},\n \"k:{\\\"type\\\":\\\"Available\\\"}\": {\n \".\": {},\n \"f:lastTransitionTime\": {},\n \"f:lastUpdateTime\": {},\n \"f:message\": {},\n \"f:reason\": {},\n \"f:status\": {},\n \"f:type\": {}\n },\n \"k:{\\\"type\\\":\\\"Progressing\\\"}\": {\n \".\": {},\n \"f:lastTransitionTime\": {},\n \"f:lastUpdateTime\": {},\n \"f:message\": {},\n \"f:reason\": {},\n \"f:status\": {},\n \"f:type\": {}\n }\n },\n \"f:observedGeneration\": {},\n \"f:readyReplicas\": {},\n \"f:replicas\": {},\n \"f:updatedReplicas\": {}\n }\n }\n }\n ]\n },\n \"spec\": {\n \"replicas\": 2,\n \"selector\": {\n \"matchLabels\": {\n \"k8s-app\": \"kube-dns\"\n }\n },\n \"template\": {\n \"metadata\": {\n \"creationTimestamp\": null,\n \"labels\": {\n \"k8s-app\": \"kube-dns\"\n }\n },\n \"spec\": {\n \"volumes\": [\n {\n \"name\": \"config-volume\",\n \"configMap\": {\n \"name\": \"coredns\",\n \"items\": [\n {\n \"key\": \"Corefile\",\n \"path\": \"Corefile\"\n }\n ],\n \"defaultMode\": 420\n 
}\n }\n ],\n \"containers\": [\n {\n \"name\": \"coredns\",\n \"image\": \"k8s.gcr.io/coredns:1.7.0\",\n \"args\": [\n \"-conf\",\n \"/etc/coredns/Corefile\"\n ],\n \"ports\": [\n {\n \"name\": \"dns\",\n \"containerPort\": 53,\n \"protocol\": \"UDP\"\n },\n {\n \"name\": \"dns-tcp\",\n \"containerPort\": 53,\n \"protocol\": \"TCP\"\n },\n {\n \"name\": \"metrics\",\n \"containerPort\": 9153,\n \"protocol\": \"TCP\"\n }\n ],\n \"resources\": {\n \"limits\": {\n \"memory\": \"170Mi\"\n },\n \"requests\": {\n \"cpu\": \"100m\",\n \"memory\": \"70Mi\"\n }\n },\n \"volumeMounts\": [\n {\n \"name\": \"config-volume\",\n \"readOnly\": true,\n \"mountPath\": \"/etc/coredns\"\n }\n ],\n \"livenessProbe\": {\n \"httpGet\": {\n \"path\": \"/health\",\n \"port\": 8080,\n \"scheme\": \"HTTP\"\n },\n \"initialDelaySeconds\": 60,\n \"timeoutSeconds\": 5,\n \"periodSeconds\": 10,\n \"successThreshold\": 1,\n \"failureThreshold\": 5\n },\n \"readinessProbe\": {\n \"httpGet\": {\n \"path\": \"/ready\",\n \"port\": 8181,\n \"scheme\": \"HTTP\"\n },\n \"timeoutSeconds\": 1,\n \"periodSeconds\": 10,\n \"successThreshold\": 1,\n \"failureThreshold\": 3\n },\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"securityContext\": {\n \"capabilities\": {\n \"add\": [\n \"NET_BIND_SERVICE\"\n ],\n \"drop\": [\n \"all\"\n ]\n },\n \"readOnlyRootFilesystem\": true,\n \"allowPrivilegeEscalation\": false\n }\n }\n ],\n \"restartPolicy\": \"Always\",\n \"terminationGracePeriodSeconds\": 30,\n \"dnsPolicy\": \"Default\",\n \"nodeSelector\": {\n \"kubernetes.io/os\": \"linux\"\n },\n \"serviceAccountName\": \"coredns\",\n \"serviceAccount\": \"coredns\",\n \"securityContext\": {},\n \"schedulerName\": \"default-scheduler\",\n \"tolerations\": [\n {\n \"key\": \"CriticalAddonsOnly\",\n \"operator\": \"Exists\"\n },\n {\n \"key\": \"node-role.kubernetes.io/master\",\n \"effect\": \"NoSchedule\"\n }\n ],\n 
\"priorityClassName\": \"system-cluster-critical\"\n }\n },\n \"strategy\": {\n \"type\": \"RollingUpdate\",\n \"rollingUpdate\": {\n \"maxUnavailable\": 1,\n \"maxSurge\": \"25%\"\n }\n },\n \"revisionHistoryLimit\": 10,\n \"progressDeadlineSeconds\": 600\n },\n \"status\": {\n \"observedGeneration\": 1,\n \"replicas\": 2,\n \"updatedReplicas\": 2,\n \"readyReplicas\": 2,\n \"availableReplicas\": 2,\n \"conditions\": [\n {\n \"type\": \"Available\",\n \"status\": \"True\",\n \"lastUpdateTime\": \"2021-05-21T15:14:01Z\",\n \"lastTransitionTime\": \"2021-05-21T15:14:01Z\",\n \"reason\": \"MinimumReplicasAvailable\",\n \"message\": \"Deployment has minimum availability.\"\n },\n {\n \"type\": \"Progressing\",\n \"status\": \"True\",\n \"lastUpdateTime\": \"2021-05-21T15:14:02Z\",\n \"lastTransitionTime\": \"2021-05-21T15:13:35Z\",\n \"reason\": \"NewReplicaSetAvailable\",\n \"message\": \"ReplicaSet \\\"coredns-f9fd979d6\\\" has successfully progressed.\"\n }\n ]\n }\n }\n ]\n}\n{\n \"kind\": \"ReplicaSetList\",\n \"apiVersion\": \"apps/v1\",\n \"metadata\": {\n \"selfLink\": \"/apis/apps/v1/namespaces/kube-system/replicasets\",\n \"resourceVersion\": \"49000\"\n },\n \"items\": [\n {\n \"metadata\": {\n \"name\": \"coredns-f9fd979d6\",\n \"namespace\": \"kube-system\",\n \"selfLink\": \"/apis/apps/v1/namespaces/kube-system/replicasets/coredns-f9fd979d6\",\n \"uid\": \"06796322-5a08-44d0-af68-4723a96a6342\",\n \"resourceVersion\": \"694\",\n \"generation\": 1,\n \"creationTimestamp\": \"2021-05-21T15:13:35Z\",\n \"labels\": {\n \"k8s-app\": \"kube-dns\",\n \"pod-template-hash\": \"f9fd979d6\"\n },\n \"annotations\": {\n \"deployment.kubernetes.io/desired-replicas\": \"2\",\n \"deployment.kubernetes.io/max-replicas\": \"3\",\n \"deployment.kubernetes.io/revision\": \"1\"\n },\n \"ownerReferences\": [\n {\n \"apiVersion\": \"apps/v1\",\n \"kind\": \"Deployment\",\n \"name\": \"coredns\",\n \"uid\": \"479c98a9-6bae-4ed5-b08d-de4d0008b4de\",\n \"controller\": true,\n 
\"blockOwnerDeletion\": true\n }\n ],\n \"managedFields\": [\n {\n \"manager\": \"kube-controller-manager\",\n \"operation\": \"Update\",\n \"apiVersion\": \"apps/v1\",\n \"time\": \"2021-05-21T15:14:01Z\",\n \"fieldsType\": \"FieldsV1\",\n \"fieldsV1\": {\n \"f:metadata\": {\n \"f:annotations\": {\n \".\": {},\n \"f:deployment.kubernetes.io/desired-replicas\": {},\n \"f:deployment.kubernetes.io/max-replicas\": {},\n \"f:deployment.kubernetes.io/revision\": {}\n },\n \"f:labels\": {\n \".\": {},\n \"f:k8s-app\": {},\n \"f:pod-template-hash\": {}\n },\n \"f:ownerReferences\": {\n \".\": {},\n \"k:{\\\"uid\\\":\\\"479c98a9-6bae-4ed5-b08d-de4d0008b4de\\\"}\": {\n \".\": {},\n \"f:apiVersion\": {},\n \"f:blockOwnerDeletion\": {},\n \"f:controller\": {},\n \"f:kind\": {},\n \"f:name\": {},\n \"f:uid\": {}\n }\n }\n },\n \"f:spec\": {\n \"f:replicas\": {},\n \"f:selector\": {\n \"f:matchLabels\": {\n \".\": {},\n \"f:k8s-app\": {},\n \"f:pod-template-hash\": {}\n }\n },\n \"f:template\": {\n \"f:metadata\": {\n \"f:labels\": {\n \".\": {},\n \"f:k8s-app\": {},\n \"f:pod-template-hash\": {}\n }\n },\n \"f:spec\": {\n \"f:containers\": {\n \"k:{\\\"name\\\":\\\"coredns\\\"}\": {\n \".\": {},\n \"f:args\": {},\n \"f:image\": {},\n \"f:imagePullPolicy\": {},\n \"f:livenessProbe\": {\n \".\": {},\n \"f:failureThreshold\": {},\n \"f:httpGet\": {\n \".\": {},\n \"f:path\": {},\n \"f:port\": {},\n \"f:scheme\": {}\n },\n \"f:initialDelaySeconds\": {},\n \"f:periodSeconds\": {},\n \"f:successThreshold\": {},\n \"f:timeoutSeconds\": {}\n },\n \"f:name\": {},\n \"f:ports\": {\n \".\": {},\n \"k:{\\\"containerPort\\\":53,\\\"protocol\\\":\\\"TCP\\\"}\": {\n \".\": {},\n \"f:containerPort\": {},\n \"f:name\": {},\n \"f:protocol\": {}\n },\n \"k:{\\\"containerPort\\\":53,\\\"protocol\\\":\\\"UDP\\\"}\": {\n \".\": {},\n \"f:containerPort\": {},\n \"f:name\": {},\n \"f:protocol\": {}\n },\n \"k:{\\\"containerPort\\\":9153,\\\"protocol\\\":\\\"TCP\\\"}\": {\n \".\": {},\n 
\"f:containerPort\": {},\n \"f:name\": {},\n \"f:protocol\": {}\n }\n },\n \"f:readinessProbe\": {\n \".\": {},\n \"f:failureThreshold\": {},\n \"f:httpGet\": {\n \".\": {},\n \"f:path\": {},\n \"f:port\": {},\n \"f:scheme\": {}\n },\n \"f:periodSeconds\": {},\n \"f:successThreshold\": {},\n \"f:timeoutSeconds\": {}\n },\n \"f:resources\": {\n \".\": {},\n \"f:limits\": {\n \".\": {},\n \"f:memory\": {}\n },\n \"f:requests\": {\n \".\": {},\n \"f:cpu\": {},\n \"f:memory\": {}\n }\n },\n \"f:securityContext\": {\n \".\": {},\n \"f:allowPrivilegeEscalation\": {},\n \"f:capabilities\": {\n \".\": {},\n \"f:add\": {},\n \"f:drop\": {}\n },\n \"f:readOnlyRootFilesystem\": {}\n },\n \"f:terminationMessagePath\": {},\n \"f:terminationMessagePolicy\": {},\n \"f:volumeMounts\": {\n \".\": {},\n \"k:{\\\"mountPath\\\":\\\"/etc/coredns\\\"}\": {\n \".\": {},\n \"f:mountPath\": {},\n \"f:name\": {},\n \"f:readOnly\": {}\n }\n }\n }\n },\n \"f:dnsPolicy\": {},\n \"f:nodeSelector\": {\n \".\": {},\n \"f:kubernetes.io/os\": {}\n },\n \"f:priorityClassName\": {},\n \"f:restartPolicy\": {},\n \"f:schedulerName\": {},\n \"f:securityContext\": {},\n \"f:serviceAccount\": {},\n \"f:serviceAccountName\": {},\n \"f:terminationGracePeriodSeconds\": {},\n \"f:tolerations\": {},\n \"f:volumes\": {\n \".\": {},\n \"k:{\\\"name\\\":\\\"config-volume\\\"}\": {\n \".\": {},\n \"f:configMap\": {\n \".\": {},\n \"f:defaultMode\": {},\n \"f:items\": {},\n \"f:name\": {}\n },\n \"f:name\": {}\n }\n }\n }\n }\n },\n \"f:status\": {\n \"f:availableReplicas\": {},\n \"f:fullyLabeledReplicas\": {},\n \"f:observedGeneration\": {},\n \"f:readyReplicas\": {},\n \"f:replicas\": {}\n }\n }\n }\n ]\n },\n \"spec\": {\n \"replicas\": 2,\n \"selector\": {\n \"matchLabels\": {\n \"k8s-app\": \"kube-dns\",\n \"pod-template-hash\": \"f9fd979d6\"\n }\n },\n \"template\": {\n \"metadata\": {\n \"creationTimestamp\": null,\n \"labels\": {\n \"k8s-app\": \"kube-dns\",\n \"pod-template-hash\": \"f9fd979d6\"\n }\n 
},\n \"spec\": {\n \"volumes\": [\n {\n \"name\": \"config-volume\",\n \"configMap\": {\n \"name\": \"coredns\",\n \"items\": [\n {\n \"key\": \"Corefile\",\n \"path\": \"Corefile\"\n }\n ],\n \"defaultMode\": 420\n }\n }\n ],\n \"containers\": [\n {\n \"name\": \"coredns\",\n \"image\": \"k8s.gcr.io/coredns:1.7.0\",\n \"args\": [\n \"-conf\",\n \"/etc/coredns/Corefile\"\n ],\n \"ports\": [\n {\n \"name\": \"dns\",\n \"containerPort\": 53,\n \"protocol\": \"UDP\"\n },\n {\n \"name\": \"dns-tcp\",\n \"containerPort\": 53,\n \"protocol\": \"TCP\"\n },\n {\n \"name\": \"metrics\",\n \"containerPort\": 9153,\n \"protocol\": \"TCP\"\n }\n ],\n \"resources\": {\n \"limits\": {\n \"memory\": \"170Mi\"\n },\n \"requests\": {\n \"cpu\": \"100m\",\n \"memory\": \"70Mi\"\n }\n },\n \"volumeMounts\": [\n {\n \"name\": \"config-volume\",\n \"readOnly\": true,\n \"mountPath\": \"/etc/coredns\"\n }\n ],\n \"livenessProbe\": {\n \"httpGet\": {\n \"path\": \"/health\",\n \"port\": 8080,\n \"scheme\": \"HTTP\"\n },\n \"initialDelaySeconds\": 60,\n \"timeoutSeconds\": 5,\n \"periodSeconds\": 10,\n \"successThreshold\": 1,\n \"failureThreshold\": 5\n },\n \"readinessProbe\": {\n \"httpGet\": {\n \"path\": \"/ready\",\n \"port\": 8181,\n \"scheme\": \"HTTP\"\n },\n \"timeoutSeconds\": 1,\n \"periodSeconds\": 10,\n \"successThreshold\": 1,\n \"failureThreshold\": 3\n },\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"securityContext\": {\n \"capabilities\": {\n \"add\": [\n \"NET_BIND_SERVICE\"\n ],\n \"drop\": [\n \"all\"\n ]\n },\n \"readOnlyRootFilesystem\": true,\n \"allowPrivilegeEscalation\": false\n }\n }\n ],\n \"restartPolicy\": \"Always\",\n \"terminationGracePeriodSeconds\": 30,\n \"dnsPolicy\": \"Default\",\n \"nodeSelector\": {\n \"kubernetes.io/os\": \"linux\"\n },\n \"serviceAccountName\": \"coredns\",\n \"serviceAccount\": \"coredns\",\n \"securityContext\": {},\n 
\"schedulerName\": \"default-scheduler\",\n \"tolerations\": [\n {\n \"key\": \"CriticalAddonsOnly\",\n \"operator\": \"Exists\"\n },\n {\n \"key\": \"node-role.kubernetes.io/master\",\n \"effect\": \"NoSchedule\"\n }\n ],\n \"priorityClassName\": \"system-cluster-critical\"\n }\n }\n },\n \"status\": {\n \"replicas\": 2,\n \"fullyLabeledReplicas\": 2,\n \"readyReplicas\": 2,\n \"availableReplicas\": 2,\n \"observedGeneration\": 1\n }\n }\n ]\n}\n{\n \"kind\": \"PodList\",\n \"apiVersion\": \"v1\",\n \"metadata\": {\n \"selfLink\": \"/api/v1/namespaces/kube-system/pods\",\n \"resourceVersion\": \"49000\"\n },\n \"items\": [\n {\n \"metadata\": {\n \"name\": \"coredns-f9fd979d6-mpnsm\",\n \"generateName\": \"coredns-f9fd979d6-\",\n \"namespace\": \"kube-system\",\n \"selfLink\": \"/api/v1/namespaces/kube-system/pods/coredns-f9fd979d6-mpnsm\",\n \"uid\": \"f29f4f80-1201-488f-801f-65d5cbf16b8c\",\n \"resourceVersion\": \"691\",\n \"creationTimestamp\": \"2021-05-21T15:13:35Z\",\n \"labels\": {\n \"k8s-app\": \"kube-dns\",\n \"pod-template-hash\": \"f9fd979d6\"\n },\n \"ownerReferences\": [\n {\n \"apiVersion\": \"apps/v1\",\n \"kind\": \"ReplicaSet\",\n \"name\": \"coredns-f9fd979d6\",\n \"uid\": \"06796322-5a08-44d0-af68-4723a96a6342\",\n \"controller\": true,\n \"blockOwnerDeletion\": true\n }\n ],\n \"managedFields\": [\n {\n \"manager\": \"kube-controller-manager\",\n \"operation\": \"Update\",\n \"apiVersion\": \"v1\",\n \"time\": \"2021-05-21T15:13:35Z\",\n \"fieldsType\": \"FieldsV1\",\n \"fieldsV1\": {\n \"f:metadata\": {\n \"f:generateName\": {},\n \"f:labels\": {\n \".\": {},\n \"f:k8s-app\": {},\n \"f:pod-template-hash\": {}\n },\n \"f:ownerReferences\": {\n \".\": {},\n \"k:{\\\"uid\\\":\\\"06796322-5a08-44d0-af68-4723a96a6342\\\"}\": {\n \".\": {},\n \"f:apiVersion\": {},\n \"f:blockOwnerDeletion\": {},\n \"f:controller\": {},\n \"f:kind\": {},\n \"f:name\": {},\n \"f:uid\": {}\n }\n }\n },\n \"f:spec\": {\n \"f:containers\": {\n 
\"k:{\\\"name\\\":\\\"coredns\\\"}\": {\n \".\": {},\n \"f:args\": {},\n \"f:image\": {},\n \"f:imagePullPolicy\": {},\n \"f:livenessProbe\": {\n \".\": {},\n \"f:failureThreshold\": {},\n \"f:httpGet\": {\n \".\": {},\n \"f:path\": {},\n \"f:port\": {},\n \"f:scheme\": {}\n },\n \"f:initialDelaySeconds\": {},\n \"f:periodSeconds\": {},\n \"f:successThreshold\": {},\n \"f:timeoutSeconds\": {}\n },\n \"f:name\": {},\n \"f:ports\": {\n \".\": {},\n \"k:{\\\"containerPort\\\":53,\\\"protocol\\\":\\\"TCP\\\"}\": {\n \".\": {},\n \"f:containerPort\": {},\n \"f:name\": {},\n \"f:protocol\": {}\n },\n \"k:{\\\"containerPort\\\":53,\\\"protocol\\\":\\\"UDP\\\"}\": {\n \".\": {},\n \"f:containerPort\": {},\n \"f:name\": {},\n \"f:protocol\": {}\n },\n \"k:{\\\"containerPort\\\":9153,\\\"protocol\\\":\\\"TCP\\\"}\": {\n \".\": {},\n \"f:containerPort\": {},\n \"f:name\": {},\n \"f:protocol\": {}\n }\n },\n \"f:readinessProbe\": {\n \".\": {},\n \"f:failureThreshold\": {},\n \"f:httpGet\": {\n \".\": {},\n \"f:path\": {},\n \"f:port\": {},\n \"f:scheme\": {}\n },\n \"f:periodSeconds\": {},\n \"f:successThreshold\": {},\n \"f:timeoutSeconds\": {}\n },\n \"f:resources\": {\n \".\": {},\n \"f:limits\": {\n \".\": {},\n \"f:memory\": {}\n },\n \"f:requests\": {\n \".\": {},\n \"f:cpu\": {},\n \"f:memory\": {}\n }\n },\n \"f:securityContext\": {\n \".\": {},\n \"f:allowPrivilegeEscalation\": {},\n \"f:capabilities\": {\n \".\": {},\n \"f:add\": {},\n \"f:drop\": {}\n },\n \"f:readOnlyRootFilesystem\": {}\n },\n \"f:terminationMessagePath\": {},\n \"f:terminationMessagePolicy\": {},\n \"f:volumeMounts\": {\n \".\": {},\n \"k:{\\\"mountPath\\\":\\\"/etc/coredns\\\"}\": {\n \".\": {},\n \"f:mountPath\": {},\n \"f:name\": {},\n \"f:readOnly\": {}\n }\n }\n }\n },\n \"f:dnsPolicy\": {},\n \"f:enableServiceLinks\": {},\n \"f:nodeSelector\": {\n \".\": {},\n \"f:kubernetes.io/os\": {}\n },\n \"f:priorityClassName\": {},\n \"f:restartPolicy\": {},\n \"f:schedulerName\": {},\n 
\"f:securityContext\": {},\n \"f:serviceAccount\": {},\n \"f:serviceAccountName\": {},\n \"f:terminationGracePeriodSeconds\": {},\n \"f:tolerations\": {},\n \"f:volumes\": {\n \".\": {},\n \"k:{\\\"name\\\":\\\"config-volume\\\"}\": {\n \".\": {},\n \"f:configMap\": {\n \".\": {},\n \"f:defaultMode\": {},\n \"f:items\": {},\n \"f:name\": {}\n },\n \"f:name\": {}\n }\n }\n }\n }\n },\n {\n \"manager\": \"kube-scheduler\",\n \"operation\": \"Update\",\n \"apiVersion\": \"v1\",\n \"time\": \"2021-05-21T15:13:35Z\",\n \"fieldsType\": \"FieldsV1\",\n \"fieldsV1\": {\n \"f:status\": {\n \"f:conditions\": {\n \".\": {},\n \"k:{\\\"type\\\":\\\"PodScheduled\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:message\": {},\n \"f:reason\": {},\n \"f:status\": {},\n \"f:type\": {}\n }\n }\n }\n }\n },\n {\n \"manager\": \"kubelet\",\n \"operation\": \"Update\",\n \"apiVersion\": \"v1\",\n \"time\": \"2021-05-21T15:14:02Z\",\n \"fieldsType\": \"FieldsV1\",\n \"fieldsV1\": {\n \"f:status\": {\n \"f:conditions\": {\n \"k:{\\\"type\\\":\\\"ContainersReady\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n },\n \"k:{\\\"type\\\":\\\"Initialized\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n },\n \"k:{\\\"type\\\":\\\"Ready\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n }\n },\n \"f:containerStatuses\": {},\n \"f:hostIP\": {},\n \"f:phase\": {},\n \"f:podIP\": {},\n \"f:podIPs\": {\n \".\": {},\n \"k:{\\\"ip\\\":\\\"10.244.0.3\\\"}\": {\n \".\": {},\n \"f:ip\": {}\n }\n },\n \"f:startTime\": {}\n }\n }\n }\n ]\n },\n \"spec\": {\n \"volumes\": [\n {\n \"name\": \"config-volume\",\n \"configMap\": {\n \"name\": \"coredns\",\n \"items\": [\n {\n \"key\": \"Corefile\",\n \"path\": \"Corefile\"\n }\n ],\n \"defaultMode\": 420\n }\n 
},\n {\n \"name\": \"coredns-token-gv47j\",\n \"secret\": {\n \"secretName\": \"coredns-token-gv47j\",\n \"defaultMode\": 420\n }\n }\n ],\n \"containers\": [\n {\n \"name\": \"coredns\",\n \"image\": \"k8s.gcr.io/coredns:1.7.0\",\n \"args\": [\n \"-conf\",\n \"/etc/coredns/Corefile\"\n ],\n \"ports\": [\n {\n \"name\": \"dns\",\n \"containerPort\": 53,\n \"protocol\": \"UDP\"\n },\n {\n \"name\": \"dns-tcp\",\n \"containerPort\": 53,\n \"protocol\": \"TCP\"\n },\n {\n \"name\": \"metrics\",\n \"containerPort\": 9153,\n \"protocol\": \"TCP\"\n }\n ],\n \"resources\": {\n \"limits\": {\n \"memory\": \"170Mi\"\n },\n \"requests\": {\n \"cpu\": \"100m\",\n \"memory\": \"70Mi\"\n }\n },\n \"volumeMounts\": [\n {\n \"name\": \"config-volume\",\n \"readOnly\": true,\n \"mountPath\": \"/etc/coredns\"\n },\n {\n \"name\": \"coredns-token-gv47j\",\n \"readOnly\": true,\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\"\n }\n ],\n \"livenessProbe\": {\n \"httpGet\": {\n \"path\": \"/health\",\n \"port\": 8080,\n \"scheme\": \"HTTP\"\n },\n \"initialDelaySeconds\": 60,\n \"timeoutSeconds\": 5,\n \"periodSeconds\": 10,\n \"successThreshold\": 1,\n \"failureThreshold\": 5\n },\n \"readinessProbe\": {\n \"httpGet\": {\n \"path\": \"/ready\",\n \"port\": 8181,\n \"scheme\": \"HTTP\"\n },\n \"timeoutSeconds\": 1,\n \"periodSeconds\": 10,\n \"successThreshold\": 1,\n \"failureThreshold\": 3\n },\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"securityContext\": {\n \"capabilities\": {\n \"add\": [\n \"NET_BIND_SERVICE\"\n ],\n \"drop\": [\n \"all\"\n ]\n },\n \"readOnlyRootFilesystem\": true,\n \"allowPrivilegeEscalation\": false\n }\n }\n ],\n \"restartPolicy\": \"Always\",\n \"terminationGracePeriodSeconds\": 30,\n \"dnsPolicy\": \"Default\",\n \"nodeSelector\": {\n \"kubernetes.io/os\": \"linux\"\n },\n \"serviceAccountName\": \"coredns\",\n \"serviceAccount\": 
\"coredns\",\n \"nodeName\": \"kali-control-plane\",\n \"securityContext\": {},\n \"schedulerName\": \"default-scheduler\",\n \"tolerations\": [\n {\n \"key\": \"CriticalAddonsOnly\",\n \"operator\": \"Exists\"\n },\n {\n \"key\": \"node-role.kubernetes.io/master\",\n \"effect\": \"NoSchedule\"\n },\n {\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"effect\": \"NoExecute\",\n \"tolerationSeconds\": 300\n },\n {\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"effect\": \"NoExecute\",\n \"tolerationSeconds\": 300\n }\n ],\n \"priorityClassName\": \"system-cluster-critical\",\n \"priority\": 2000000000,\n \"enableServiceLinks\": true,\n \"preemptionPolicy\": \"PreemptLowerPriority\"\n },\n \"status\": {\n \"phase\": \"Running\",\n \"conditions\": [\n {\n \"type\": \"Initialized\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-05-21T15:13:53Z\"\n },\n {\n \"type\": \"Ready\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-05-21T15:14:02Z\"\n },\n {\n \"type\": \"ContainersReady\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-05-21T15:14:02Z\"\n },\n {\n \"type\": \"PodScheduled\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-05-21T15:13:53Z\"\n }\n ],\n \"hostIP\": \"172.18.0.3\",\n \"podIP\": \"10.244.0.3\",\n \"podIPs\": [\n {\n \"ip\": \"10.244.0.3\"\n }\n ],\n \"startTime\": \"2021-05-21T15:13:53Z\",\n \"containerStatuses\": [\n {\n \"name\": \"coredns\",\n \"state\": {\n \"running\": {\n \"startedAt\": \"2021-05-21T15:13:55Z\"\n }\n },\n \"lastState\": {},\n \"ready\": true,\n \"restartCount\": 0,\n \"image\": \"k8s.gcr.io/coredns:1.7.0\",\n \"imageID\": \"sha256:bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16\",\n \"containerID\": \"containerd://f57ab21bee2094b48f8ba3295b0f04e97e7cae6ce799ce7a03409b94e5b9f472\",\n \"started\": true\n 
}\n ],\n \"qosClass\": \"Burstable\"\n }\n },\n {\n \"metadata\": {\n \"name\": \"coredns-f9fd979d6-nfqfd\",\n \"generateName\": \"coredns-f9fd979d6-\",\n \"namespace\": \"kube-system\",\n \"selfLink\": \"/api/v1/namespaces/kube-system/pods/coredns-f9fd979d6-nfqfd\",\n \"uid\": \"6827196c-0e5d-45ea-a439-1570598dcd22\",\n \"resourceVersion\": \"686\",\n \"creationTimestamp\": \"2021-05-21T15:13:35Z\",\n \"labels\": {\n \"k8s-app\": \"kube-dns\",\n \"pod-template-hash\": \"f9fd979d6\"\n },\n \"ownerReferences\": [\n {\n \"apiVersion\": \"apps/v1\",\n \"kind\": \"ReplicaSet\",\n \"name\": \"coredns-f9fd979d6\",\n \"uid\": \"06796322-5a08-44d0-af68-4723a96a6342\",\n \"controller\": true,\n \"blockOwnerDeletion\": true\n }\n ],\n \"managedFields\": [\n {\n \"manager\": \"kube-controller-manager\",\n \"operation\": \"Update\",\n \"apiVersion\": \"v1\",\n \"time\": \"2021-05-21T15:13:35Z\",\n \"fieldsType\": \"FieldsV1\",\n \"fieldsV1\": {\n \"f:metadata\": {\n \"f:generateName\": {},\n \"f:labels\": {\n \".\": {},\n \"f:k8s-app\": {},\n \"f:pod-template-hash\": {}\n },\n \"f:ownerReferences\": {\n \".\": {},\n \"k:{\\\"uid\\\":\\\"06796322-5a08-44d0-af68-4723a96a6342\\\"}\": {\n \".\": {},\n \"f:apiVersion\": {},\n \"f:blockOwnerDeletion\": {},\n \"f:controller\": {},\n \"f:kind\": {},\n \"f:name\": {},\n \"f:uid\": {}\n }\n }\n },\n \"f:spec\": {\n \"f:containers\": {\n \"k:{\\\"name\\\":\\\"coredns\\\"}\": {\n \".\": {},\n \"f:args\": {},\n \"f:image\": {},\n \"f:imagePullPolicy\": {},\n \"f:livenessProbe\": {\n \".\": {},\n \"f:failureThreshold\": {},\n \"f:httpGet\": {\n \".\": {},\n \"f:path\": {},\n \"f:port\": {},\n \"f:scheme\": {}\n },\n \"f:initialDelaySeconds\": {},\n \"f:periodSeconds\": {},\n \"f:successThreshold\": {},\n \"f:timeoutSeconds\": {}\n },\n \"f:name\": {},\n \"f:ports\": {\n \".\": {},\n \"k:{\\\"containerPort\\\":53,\\\"protocol\\\":\\\"TCP\\\"}\": {\n \".\": {},\n \"f:containerPort\": {},\n \"f:name\": {},\n \"f:protocol\": {}\n },\n 
\"k:{\\\"containerPort\\\":53,\\\"protocol\\\":\\\"UDP\\\"}\": {\n \".\": {},\n \"f:containerPort\": {},\n \"f:name\": {},\n \"f:protocol\": {}\n },\n \"k:{\\\"containerPort\\\":9153,\\\"protocol\\\":\\\"TCP\\\"}\": {\n \".\": {},\n \"f:containerPort\": {},\n \"f:name\": {},\n \"f:protocol\": {}\n }\n },\n \"f:readinessProbe\": {\n \".\": {},\n \"f:failureThreshold\": {},\n \"f:httpGet\": {\n \".\": {},\n \"f:path\": {},\n \"f:port\": {},\n \"f:scheme\": {}\n },\n \"f:periodSeconds\": {},\n \"f:successThreshold\": {},\n \"f:timeoutSeconds\": {}\n },\n \"f:resources\": {\n \".\": {},\n \"f:limits\": {\n \".\": {},\n \"f:memory\": {}\n },\n \"f:requests\": {\n \".\": {},\n \"f:cpu\": {},\n \"f:memory\": {}\n }\n },\n \"f:securityContext\": {\n \".\": {},\n \"f:allowPrivilegeEscalation\": {},\n \"f:capabilities\": {\n \".\": {},\n \"f:add\": {},\n \"f:drop\": {}\n },\n \"f:readOnlyRootFilesystem\": {}\n },\n \"f:terminationMessagePath\": {},\n \"f:terminationMessagePolicy\": {},\n \"f:volumeMounts\": {\n \".\": {},\n \"k:{\\\"mountPath\\\":\\\"/etc/coredns\\\"}\": {\n \".\": {},\n \"f:mountPath\": {},\n \"f:name\": {},\n \"f:readOnly\": {}\n }\n }\n }\n },\n \"f:dnsPolicy\": {},\n \"f:enableServiceLinks\": {},\n \"f:nodeSelector\": {\n \".\": {},\n \"f:kubernetes.io/os\": {}\n },\n \"f:priorityClassName\": {},\n \"f:restartPolicy\": {},\n \"f:schedulerName\": {},\n \"f:securityContext\": {},\n \"f:serviceAccount\": {},\n \"f:serviceAccountName\": {},\n \"f:terminationGracePeriodSeconds\": {},\n \"f:tolerations\": {},\n \"f:volumes\": {\n \".\": {},\n \"k:{\\\"name\\\":\\\"config-volume\\\"}\": {\n \".\": {},\n \"f:configMap\": {\n \".\": {},\n \"f:defaultMode\": {},\n \"f:items\": {},\n \"f:name\": {}\n },\n \"f:name\": {}\n }\n }\n }\n }\n },\n {\n \"manager\": \"kube-scheduler\",\n \"operation\": \"Update\",\n \"apiVersion\": \"v1\",\n \"time\": \"2021-05-21T15:13:35Z\",\n \"fieldsType\": \"FieldsV1\",\n \"fieldsV1\": {\n \"f:status\": {\n \"f:conditions\": {\n 
\".\": {},\n \"k:{\\\"type\\\":\\\"PodScheduled\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:message\": {},\n \"f:reason\": {},\n \"f:status\": {},\n \"f:type\": {}\n }\n }\n }\n }\n },\n {\n \"manager\": \"kubelet\",\n \"operation\": \"Update\",\n \"apiVersion\": \"v1\",\n \"time\": \"2021-05-21T15:14:01Z\",\n \"fieldsType\": \"FieldsV1\",\n \"fieldsV1\": {\n \"f:status\": {\n \"f:conditions\": {\n \"k:{\\\"type\\\":\\\"ContainersReady\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n },\n \"k:{\\\"type\\\":\\\"Initialized\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n },\n \"k:{\\\"type\\\":\\\"Ready\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n }\n },\n \"f:containerStatuses\": {},\n \"f:hostIP\": {},\n \"f:phase\": {},\n \"f:podIP\": {},\n \"f:podIPs\": {\n \".\": {},\n \"k:{\\\"ip\\\":\\\"10.244.0.2\\\"}\": {\n \".\": {},\n \"f:ip\": {}\n }\n },\n \"f:startTime\": {}\n }\n }\n }\n ]\n },\n \"spec\": {\n \"volumes\": [\n {\n \"name\": \"config-volume\",\n \"configMap\": {\n \"name\": \"coredns\",\n \"items\": [\n {\n \"key\": \"Corefile\",\n \"path\": \"Corefile\"\n }\n ],\n \"defaultMode\": 420\n }\n },\n {\n \"name\": \"coredns-token-gv47j\",\n \"secret\": {\n \"secretName\": \"coredns-token-gv47j\",\n \"defaultMode\": 420\n }\n }\n ],\n \"containers\": [\n {\n \"name\": \"coredns\",\n \"image\": \"k8s.gcr.io/coredns:1.7.0\",\n \"args\": [\n \"-conf\",\n \"/etc/coredns/Corefile\"\n ],\n \"ports\": [\n {\n \"name\": \"dns\",\n \"containerPort\": 53,\n \"protocol\": \"UDP\"\n },\n {\n \"name\": \"dns-tcp\",\n \"containerPort\": 53,\n \"protocol\": \"TCP\"\n },\n {\n \"name\": \"metrics\",\n \"containerPort\": 9153,\n \"protocol\": \"TCP\"\n }\n ],\n \"resources\": {\n \"limits\": {\n \"memory\": 
\"170Mi\"\n },\n \"requests\": {\n \"cpu\": \"100m\",\n \"memory\": \"70Mi\"\n }\n },\n \"volumeMounts\": [\n {\n \"name\": \"config-volume\",\n \"readOnly\": true,\n \"mountPath\": \"/etc/coredns\"\n },\n {\n \"name\": \"coredns-token-gv47j\",\n \"readOnly\": true,\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\"\n }\n ],\n \"livenessProbe\": {\n \"httpGet\": {\n \"path\": \"/health\",\n \"port\": 8080,\n \"scheme\": \"HTTP\"\n },\n \"initialDelaySeconds\": 60,\n \"timeoutSeconds\": 5,\n \"periodSeconds\": 10,\n \"successThreshold\": 1,\n \"failureThreshold\": 5\n },\n \"readinessProbe\": {\n \"httpGet\": {\n \"path\": \"/ready\",\n \"port\": 8181,\n \"scheme\": \"HTTP\"\n },\n \"timeoutSeconds\": 1,\n \"periodSeconds\": 10,\n \"successThreshold\": 1,\n \"failureThreshold\": 3\n },\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"securityContext\": {\n \"capabilities\": {\n \"add\": [\n \"NET_BIND_SERVICE\"\n ],\n \"drop\": [\n \"all\"\n ]\n },\n \"readOnlyRootFilesystem\": true,\n \"allowPrivilegeEscalation\": false\n }\n }\n ],\n \"restartPolicy\": \"Always\",\n \"terminationGracePeriodSeconds\": 30,\n \"dnsPolicy\": \"Default\",\n \"nodeSelector\": {\n \"kubernetes.io/os\": \"linux\"\n },\n \"serviceAccountName\": \"coredns\",\n \"serviceAccount\": \"coredns\",\n \"nodeName\": \"kali-control-plane\",\n \"securityContext\": {},\n \"schedulerName\": \"default-scheduler\",\n \"tolerations\": [\n {\n \"key\": \"CriticalAddonsOnly\",\n \"operator\": \"Exists\"\n },\n {\n \"key\": \"node-role.kubernetes.io/master\",\n \"effect\": \"NoSchedule\"\n },\n {\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"effect\": \"NoExecute\",\n \"tolerationSeconds\": 300\n },\n {\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"effect\": \"NoExecute\",\n \"tolerationSeconds\": 300\n }\n ],\n \"priorityClassName\": 
\"system-cluster-critical\",\n \"priority\": 2000000000,\n \"enableServiceLinks\": true,\n \"preemptionPolicy\": \"PreemptLowerPriority\"\n },\n \"status\": {\n \"phase\": \"Running\",\n \"conditions\": [\n {\n \"type\": \"Initialized\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-05-21T15:13:53Z\"\n },\n {\n \"type\": \"Ready\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-05-21T15:14:01Z\"\n },\n {\n \"type\": \"ContainersReady\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-05-21T15:14:01Z\"\n },\n {\n \"type\": \"PodScheduled\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-05-21T15:13:53Z\"\n }\n ],\n \"hostIP\": \"172.18.0.3\",\n \"podIP\": \"10.244.0.2\",\n \"podIPs\": [\n {\n \"ip\": \"10.244.0.2\"\n }\n ],\n \"startTime\": \"2021-05-21T15:13:53Z\",\n \"containerStatuses\": [\n {\n \"name\": \"coredns\",\n \"state\": {\n \"running\": {\n \"startedAt\": \"2021-05-21T15:13:55Z\"\n }\n },\n \"lastState\": {},\n \"ready\": true,\n \"restartCount\": 0,\n \"image\": \"k8s.gcr.io/coredns:1.7.0\",\n \"imageID\": \"sha256:bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16\",\n \"containerID\": \"containerd://6f0722f16094340d9b6edf39199cd282bb63b119beee475b2e2f819cc67292c6\",\n \"started\": true\n }\n ],\n \"qosClass\": \"Burstable\"\n }\n },\n {\n \"metadata\": {\n \"name\": \"create-loop-devs-26xt8\",\n \"generateName\": \"create-loop-devs-\",\n \"namespace\": \"kube-system\",\n \"selfLink\": \"/api/v1/namespaces/kube-system/pods/create-loop-devs-26xt8\",\n \"uid\": \"6e227dd7-08f3-4ad7-ad61-0350d57bfda1\",\n \"resourceVersion\": \"1385\",\n \"creationTimestamp\": \"2021-05-21T15:16:01Z\",\n \"labels\": {\n \"controller-revision-hash\": \"69d76dbff8\",\n \"name\": \"create-loop-devs\",\n \"pod-template-generation\": \"1\"\n },\n \"ownerReferences\": [\n {\n \"apiVersion\": \"apps/v1\",\n \"kind\": 
\"DaemonSet\",\n \"name\": \"create-loop-devs\",\n \"uid\": \"14b411a5-28ad-4d2c-8713-6de6f2f844d8\",\n \"controller\": true,\n \"blockOwnerDeletion\": true\n }\n ],\n \"managedFields\": [\n {\n \"manager\": \"kube-controller-manager\",\n \"operation\": \"Update\",\n \"apiVersion\": \"v1\",\n \"time\": \"2021-05-21T15:16:01Z\",\n \"fieldsType\": \"FieldsV1\",\n \"fieldsV1\": {\n \"f:metadata\": {\n \"f:generateName\": {},\n \"f:labels\": {\n \".\": {},\n \"f:controller-revision-hash\": {},\n \"f:name\": {},\n \"f:pod-template-generation\": {}\n },\n \"f:ownerReferences\": {\n \".\": {},\n \"k:{\\\"uid\\\":\\\"14b411a5-28ad-4d2c-8713-6de6f2f844d8\\\"}\": {\n \".\": {},\n \"f:apiVersion\": {},\n \"f:blockOwnerDeletion\": {},\n \"f:controller\": {},\n \"f:kind\": {},\n \"f:name\": {},\n \"f:uid\": {}\n }\n }\n },\n \"f:spec\": {\n \"f:affinity\": {\n \".\": {},\n \"f:nodeAffinity\": {\n \".\": {},\n \"f:requiredDuringSchedulingIgnoredDuringExecution\": {\n \".\": {},\n \"f:nodeSelectorTerms\": {}\n }\n }\n },\n \"f:containers\": {\n \"k:{\\\"name\\\":\\\"loopdev\\\"}\": {\n \".\": {},\n \"f:command\": {},\n \"f:image\": {},\n \"f:imagePullPolicy\": {},\n \"f:name\": {},\n \"f:resources\": {},\n \"f:securityContext\": {\n \".\": {},\n \"f:privileged\": {}\n },\n \"f:terminationMessagePath\": {},\n \"f:terminationMessagePolicy\": {},\n \"f:volumeMounts\": {\n \".\": {},\n \"k:{\\\"mountPath\\\":\\\"/dev\\\"}\": {\n \".\": {},\n \"f:mountPath\": {},\n \"f:name\": {}\n }\n }\n }\n },\n \"f:dnsPolicy\": {},\n \"f:enableServiceLinks\": {},\n \"f:restartPolicy\": {},\n \"f:schedulerName\": {},\n \"f:securityContext\": {},\n \"f:terminationGracePeriodSeconds\": {},\n \"f:tolerations\": {},\n \"f:volumes\": {\n \".\": {},\n \"k:{\\\"name\\\":\\\"dev\\\"}\": {\n \".\": {},\n \"f:hostPath\": {\n \".\": {},\n \"f:path\": {},\n \"f:type\": {}\n },\n \"f:name\": {}\n }\n }\n }\n }\n },\n {\n \"manager\": \"kubelet\",\n \"operation\": \"Update\",\n \"apiVersion\": \"v1\",\n 
\"time\": \"2021-05-21T15:16:06Z\",\n \"fieldsType\": \"FieldsV1\",\n \"fieldsV1\": {\n \"f:status\": {\n \"f:conditions\": {\n \"k:{\\\"type\\\":\\\"ContainersReady\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n },\n \"k:{\\\"type\\\":\\\"Initialized\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n },\n \"k:{\\\"type\\\":\\\"Ready\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n }\n },\n \"f:containerStatuses\": {},\n \"f:hostIP\": {},\n \"f:phase\": {},\n \"f:podIP\": {},\n \"f:podIPs\": {\n \".\": {},\n \"k:{\\\"ip\\\":\\\"10.244.2.2\\\"}\": {\n \".\": {},\n \"f:ip\": {}\n }\n },\n \"f:startTime\": {}\n }\n }\n }\n ]\n },\n \"spec\": {\n \"volumes\": [\n {\n \"name\": \"dev\",\n \"hostPath\": {\n \"path\": \"/dev\",\n \"type\": \"\"\n }\n },\n {\n \"name\": \"default-token-96nkg\",\n \"secret\": {\n \"secretName\": \"default-token-96nkg\",\n \"defaultMode\": 420\n }\n }\n ],\n \"containers\": [\n {\n \"name\": \"loopdev\",\n \"image\": \"alpine:3.6\",\n \"command\": [\n \"sh\",\n \"-c\",\n \"while true; do\\n for i in $(seq 0 1000); do\\n if ! 
[ -e /dev/loop$i ]; then\\n mknod /dev/loop$i b 7 $i\\n fi\\n done\\n sleep 100000000\\ndone\\n\"\n ],\n \"resources\": {},\n \"volumeMounts\": [\n {\n \"name\": \"dev\",\n \"mountPath\": \"/dev\"\n },\n {\n \"name\": \"default-token-96nkg\",\n \"readOnly\": true,\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\"\n }\n ],\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"securityContext\": {\n \"privileged\": true\n }\n }\n ],\n \"restartPolicy\": \"Always\",\n \"terminationGracePeriodSeconds\": 30,\n \"dnsPolicy\": \"ClusterFirst\",\n \"serviceAccountName\": \"default\",\n \"serviceAccount\": \"default\",\n \"nodeName\": \"kali-worker2\",\n \"securityContext\": {},\n \"affinity\": {\n \"nodeAffinity\": {\n \"requiredDuringSchedulingIgnoredDuringExecution\": {\n \"nodeSelectorTerms\": [\n {\n \"matchFields\": [\n {\n \"key\": \"metadata.name\",\n \"operator\": \"In\",\n \"values\": [\n \"kali-worker2\"\n ]\n }\n ]\n }\n ]\n }\n }\n },\n \"schedulerName\": \"default-scheduler\",\n \"tolerations\": [\n {\n \"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n },\n {\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"effect\": \"NoExecute\"\n },\n {\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"effect\": \"NoExecute\"\n },\n {\n \"key\": \"node.kubernetes.io/disk-pressure\",\n \"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n },\n {\n \"key\": \"node.kubernetes.io/memory-pressure\",\n \"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n },\n {\n \"key\": \"node.kubernetes.io/pid-pressure\",\n \"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n },\n {\n \"key\": \"node.kubernetes.io/unschedulable\",\n \"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n }\n ],\n \"priority\": 0,\n \"enableServiceLinks\": true,\n \"preemptionPolicy\": \"PreemptLowerPriority\"\n },\n \"status\": {\n 
\"phase\": \"Running\",\n \"conditions\": [\n {\n \"type\": \"Initialized\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-05-21T15:16:01Z\"\n },\n {\n \"type\": \"Ready\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-05-21T15:16:06Z\"\n },\n {\n \"type\": \"ContainersReady\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-05-21T15:16:06Z\"\n },\n {\n \"type\": \"PodScheduled\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-05-21T15:16:01Z\"\n }\n ],\n \"hostIP\": \"172.18.0.4\",\n \"podIP\": \"10.244.2.2\",\n \"podIPs\": [\n {\n \"ip\": \"10.244.2.2\"\n }\n ],\n \"startTime\": \"2021-05-21T15:16:01Z\",\n \"containerStatuses\": [\n {\n \"name\": \"loopdev\",\n \"state\": {\n \"running\": {\n \"startedAt\": \"2021-05-21T15:16:06Z\"\n }\n },\n \"lastState\": {},\n \"ready\": true,\n \"restartCount\": 0,\n \"image\": \"docker.io/library/alpine:3.6\",\n \"imageID\": \"docker.io/library/alpine@sha256:66790a2b79e1ea3e1dabac43990c54aca5d1ddf268d9a5a0285e4167c8b24475\",\n \"containerID\": \"containerd://e4f90e14b6826936bc191de0609a48836b752ccfaaed68bae16a70fe31ea18a7\",\n \"started\": true\n }\n ],\n \"qosClass\": \"BestEffort\"\n }\n },\n {\n \"metadata\": {\n \"name\": \"create-loop-devs-8l686\",\n \"generateName\": \"create-loop-devs-\",\n \"namespace\": \"kube-system\",\n \"selfLink\": \"/api/v1/namespaces/kube-system/pods/create-loop-devs-8l686\",\n \"uid\": \"7e1a5020-a3db-42c1-ae73-f2a4ba40cd20\",\n \"resourceVersion\": \"1389\",\n \"creationTimestamp\": \"2021-05-21T15:16:01Z\",\n \"labels\": {\n \"controller-revision-hash\": \"69d76dbff8\",\n \"name\": \"create-loop-devs\",\n \"pod-template-generation\": \"1\"\n },\n \"ownerReferences\": [\n {\n \"apiVersion\": \"apps/v1\",\n \"kind\": \"DaemonSet\",\n \"name\": \"create-loop-devs\",\n \"uid\": \"14b411a5-28ad-4d2c-8713-6de6f2f844d8\",\n \"controller\": true,\n 
\"blockOwnerDeletion\": true\n }\n ],\n \"managedFields\": [\n {\n \"manager\": \"kube-controller-manager\",\n \"operation\": \"Update\",\n \"apiVersion\": \"v1\",\n \"time\": \"2021-05-21T15:16:01Z\",\n \"fieldsType\": \"FieldsV1\",\n \"fieldsV1\": {\n \"f:metadata\": {\n \"f:generateName\": {},\n \"f:labels\": {\n \".\": {},\n \"f:controller-revision-hash\": {},\n \"f:name\": {},\n \"f:pod-template-generation\": {}\n },\n \"f:ownerReferences\": {\n \".\": {},\n \"k:{\\\"uid\\\":\\\"14b411a5-28ad-4d2c-8713-6de6f2f844d8\\\"}\": {\n \".\": {},\n \"f:apiVersion\": {},\n \"f:blockOwnerDeletion\": {},\n \"f:controller\": {},\n \"f:kind\": {},\n \"f:name\": {},\n \"f:uid\": {}\n }\n }\n },\n \"f:spec\": {\n \"f:affinity\": {\n \".\": {},\n \"f:nodeAffinity\": {\n \".\": {},\n \"f:requiredDuringSchedulingIgnoredDuringExecution\": {\n \".\": {},\n \"f:nodeSelectorTerms\": {}\n }\n }\n },\n \"f:containers\": {\n \"k:{\\\"name\\\":\\\"loopdev\\\"}\": {\n \".\": {},\n \"f:command\": {},\n \"f:image\": {},\n \"f:imagePullPolicy\": {},\n \"f:name\": {},\n \"f:resources\": {},\n \"f:securityContext\": {\n \".\": {},\n \"f:privileged\": {}\n },\n \"f:terminationMessagePath\": {},\n \"f:terminationMessagePolicy\": {},\n \"f:volumeMounts\": {\n \".\": {},\n \"k:{\\\"mountPath\\\":\\\"/dev\\\"}\": {\n \".\": {},\n \"f:mountPath\": {},\n \"f:name\": {}\n }\n }\n }\n },\n \"f:dnsPolicy\": {},\n \"f:enableServiceLinks\": {},\n \"f:restartPolicy\": {},\n \"f:schedulerName\": {},\n \"f:securityContext\": {},\n \"f:terminationGracePeriodSeconds\": {},\n \"f:tolerations\": {},\n \"f:volumes\": {\n \".\": {},\n \"k:{\\\"name\\\":\\\"dev\\\"}\": {\n \".\": {},\n \"f:hostPath\": {\n \".\": {},\n \"f:path\": {},\n \"f:type\": {}\n },\n \"f:name\": {}\n }\n }\n }\n }\n },\n {\n \"manager\": \"kubelet\",\n \"operation\": \"Update\",\n \"apiVersion\": \"v1\",\n \"time\": \"2021-05-21T15:16:06Z\",\n \"fieldsType\": \"FieldsV1\",\n \"fieldsV1\": {\n \"f:status\": {\n \"f:conditions\": {\n 
\"k:{\\\"type\\\":\\\"ContainersReady\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n },\n \"k:{\\\"type\\\":\\\"Initialized\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n },\n \"k:{\\\"type\\\":\\\"Ready\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n }\n },\n \"f:containerStatuses\": {},\n \"f:hostIP\": {},\n \"f:phase\": {},\n \"f:podIP\": {},\n \"f:podIPs\": {\n \".\": {},\n \"k:{\\\"ip\\\":\\\"10.244.1.2\\\"}\": {\n \".\": {},\n \"f:ip\": {}\n }\n },\n \"f:startTime\": {}\n }\n }\n }\n ]\n },\n \"spec\": {\n \"volumes\": [\n {\n \"name\": \"dev\",\n \"hostPath\": {\n \"path\": \"/dev\",\n \"type\": \"\"\n }\n },\n {\n \"name\": \"default-token-96nkg\",\n \"secret\": {\n \"secretName\": \"default-token-96nkg\",\n \"defaultMode\": 420\n }\n }\n ],\n \"containers\": [\n {\n \"name\": \"loopdev\",\n \"image\": \"alpine:3.6\",\n \"command\": [\n \"sh\",\n \"-c\",\n \"while true; do\\n for i in $(seq 0 1000); do\\n if ! 
[ -e /dev/loop$i ]; then\\n mknod /dev/loop$i b 7 $i\\n fi\\n done\\n sleep 100000000\\ndone\\n\"\n ],\n \"resources\": {},\n \"volumeMounts\": [\n {\n \"name\": \"dev\",\n \"mountPath\": \"/dev\"\n },\n {\n \"name\": \"default-token-96nkg\",\n \"readOnly\": true,\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\"\n }\n ],\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"securityContext\": {\n \"privileged\": true\n }\n }\n ],\n \"restartPolicy\": \"Always\",\n \"terminationGracePeriodSeconds\": 30,\n \"dnsPolicy\": \"ClusterFirst\",\n \"serviceAccountName\": \"default\",\n \"serviceAccount\": \"default\",\n \"nodeName\": \"kali-worker\",\n \"securityContext\": {},\n \"affinity\": {\n \"nodeAffinity\": {\n \"requiredDuringSchedulingIgnoredDuringExecution\": {\n \"nodeSelectorTerms\": [\n {\n \"matchFields\": [\n {\n \"key\": \"metadata.name\",\n \"operator\": \"In\",\n \"values\": [\n \"kali-worker\"\n ]\n }\n ]\n }\n ]\n }\n }\n },\n \"schedulerName\": \"default-scheduler\",\n \"tolerations\": [\n {\n \"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n },\n {\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"effect\": \"NoExecute\"\n },\n {\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"effect\": \"NoExecute\"\n },\n {\n \"key\": \"node.kubernetes.io/disk-pressure\",\n \"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n },\n {\n \"key\": \"node.kubernetes.io/memory-pressure\",\n \"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n },\n {\n \"key\": \"node.kubernetes.io/pid-pressure\",\n \"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n },\n {\n \"key\": \"node.kubernetes.io/unschedulable\",\n \"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n }\n ],\n \"priority\": 0,\n \"enableServiceLinks\": true,\n \"preemptionPolicy\": \"PreemptLowerPriority\"\n },\n \"status\": {\n 
\"phase\": \"Running\",\n \"conditions\": [\n {\n \"type\": \"Initialized\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-05-21T15:16:01Z\"\n },\n {\n \"type\": \"Ready\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-05-21T15:16:06Z\"\n },\n {\n \"type\": \"ContainersReady\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-05-21T15:16:06Z\"\n },\n {\n \"type\": \"PodScheduled\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-05-21T15:16:01Z\"\n }\n ],\n \"hostIP\": \"172.18.0.2\",\n \"podIP\": \"10.244.1.2\",\n \"podIPs\": [\n {\n \"ip\": \"10.244.1.2\"\n }\n ],\n \"startTime\": \"2021-05-21T15:16:01Z\",\n \"containerStatuses\": [\n {\n \"name\": \"loopdev\",\n \"state\": {\n \"running\": {\n \"startedAt\": \"2021-05-21T15:16:05Z\"\n }\n },\n \"lastState\": {},\n \"ready\": true,\n \"restartCount\": 0,\n \"image\": \"docker.io/library/alpine:3.6\",\n \"imageID\": \"docker.io/library/alpine@sha256:66790a2b79e1ea3e1dabac43990c54aca5d1ddf268d9a5a0285e4167c8b24475\",\n \"containerID\": \"containerd://ca227e4c6715ac213cbd2ba39c0d599b2c3210415bf1e7cdb0d2f924ed780c1f\",\n \"started\": true\n }\n ],\n \"qosClass\": \"BestEffort\"\n }\n },\n {\n \"metadata\": {\n \"name\": \"create-loop-devs-cwbn4\",\n \"generateName\": \"create-loop-devs-\",\n \"namespace\": \"kube-system\",\n \"selfLink\": \"/api/v1/namespaces/kube-system/pods/create-loop-devs-cwbn4\",\n \"uid\": \"a80ef051-1371-483f-96c4-ac8846fdebb0\",\n \"resourceVersion\": \"1301\",\n \"creationTimestamp\": \"2021-05-21T15:16:01Z\",\n \"labels\": {\n \"controller-revision-hash\": \"69d76dbff8\",\n \"name\": \"create-loop-devs\",\n \"pod-template-generation\": \"1\"\n },\n \"ownerReferences\": [\n {\n \"apiVersion\": \"apps/v1\",\n \"kind\": \"DaemonSet\",\n \"name\": \"create-loop-devs\",\n \"uid\": \"14b411a5-28ad-4d2c-8713-6de6f2f844d8\",\n \"controller\": true,\n 
\"blockOwnerDeletion\": true\n }\n ],\n \"managedFields\": [\n {\n \"manager\": \"kube-controller-manager\",\n \"operation\": \"Update\",\n \"apiVersion\": \"v1\",\n \"time\": \"2021-05-21T15:16:01Z\",\n \"fieldsType\": \"FieldsV1\",\n \"fieldsV1\": {\n \"f:metadata\": {\n \"f:generateName\": {},\n \"f:labels\": {\n \".\": {},\n \"f:controller-revision-hash\": {},\n \"f:name\": {},\n \"f:pod-template-generation\": {}\n },\n \"f:ownerReferences\": {\n \".\": {},\n \"k:{\\\"uid\\\":\\\"14b411a5-28ad-4d2c-8713-6de6f2f844d8\\\"}\": {\n \".\": {},\n \"f:apiVersion\": {},\n \"f:blockOwnerDeletion\": {},\n \"f:controller\": {},\n \"f:kind\": {},\n \"f:name\": {},\n \"f:uid\": {}\n }\n }\n },\n \"f:spec\": {\n \"f:affinity\": {\n \".\": {},\n \"f:nodeAffinity\": {\n \".\": {},\n \"f:requiredDuringSchedulingIgnoredDuringExecution\": {\n \".\": {},\n \"f:nodeSelectorTerms\": {}\n }\n }\n },\n \"f:containers\": {\n \"k:{\\\"name\\\":\\\"loopdev\\\"}\": {\n \".\": {},\n \"f:command\": {},\n \"f:image\": {},\n \"f:imagePullPolicy\": {},\n \"f:name\": {},\n \"f:resources\": {},\n \"f:securityContext\": {\n \".\": {},\n \"f:privileged\": {}\n },\n \"f:terminationMessagePath\": {},\n \"f:terminationMessagePolicy\": {},\n \"f:volumeMounts\": {\n \".\": {},\n \"k:{\\\"mountPath\\\":\\\"/dev\\\"}\": {\n \".\": {},\n \"f:mountPath\": {},\n \"f:name\": {}\n }\n }\n }\n },\n \"f:dnsPolicy\": {},\n \"f:enableServiceLinks\": {},\n \"f:restartPolicy\": {},\n \"f:schedulerName\": {},\n \"f:securityContext\": {},\n \"f:terminationGracePeriodSeconds\": {},\n \"f:tolerations\": {},\n \"f:volumes\": {\n \".\": {},\n \"k:{\\\"name\\\":\\\"dev\\\"}\": {\n \".\": {},\n \"f:hostPath\": {\n \".\": {},\n \"f:path\": {},\n \"f:type\": {}\n },\n \"f:name\": {}\n }\n }\n }\n }\n },\n {\n \"manager\": \"kubelet\",\n \"operation\": \"Update\",\n \"apiVersion\": \"v1\",\n \"time\": \"2021-05-21T15:16:05Z\",\n \"fieldsType\": \"FieldsV1\",\n \"fieldsV1\": {\n \"f:status\": {\n \"f:conditions\": {\n 
\"k:{\\\"type\\\":\\\"ContainersReady\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n },\n \"k:{\\\"type\\\":\\\"Initialized\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n },\n \"k:{\\\"type\\\":\\\"Ready\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n }\n },\n \"f:containerStatuses\": {},\n \"f:hostIP\": {},\n \"f:phase\": {},\n \"f:podIP\": {},\n \"f:podIPs\": {\n \".\": {},\n \"k:{\\\"ip\\\":\\\"10.244.0.5\\\"}\": {\n \".\": {},\n \"f:ip\": {}\n }\n },\n \"f:startTime\": {}\n }\n }\n }\n ]\n },\n \"spec\": {\n \"volumes\": [\n {\n \"name\": \"dev\",\n \"hostPath\": {\n \"path\": \"/dev\",\n \"type\": \"\"\n }\n },\n {\n \"name\": \"default-token-96nkg\",\n \"secret\": {\n \"secretName\": \"default-token-96nkg\",\n \"defaultMode\": 420\n }\n }\n ],\n \"containers\": [\n {\n \"name\": \"loopdev\",\n \"image\": \"alpine:3.6\",\n \"command\": [\n \"sh\",\n \"-c\",\n \"while true; do\\n for i in $(seq 0 1000); do\\n if ! 
[ -e /dev/loop$i ]; then\\n mknod /dev/loop$i b 7 $i\\n fi\\n done\\n sleep 100000000\\ndone\\n\"\n ],\n \"resources\": {},\n \"volumeMounts\": [\n {\n \"name\": \"dev\",\n \"mountPath\": \"/dev\"\n },\n {\n \"name\": \"default-token-96nkg\",\n \"readOnly\": true,\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\"\n }\n ],\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"securityContext\": {\n \"privileged\": true\n }\n }\n ],\n \"restartPolicy\": \"Always\",\n \"terminationGracePeriodSeconds\": 30,\n \"dnsPolicy\": \"ClusterFirst\",\n \"serviceAccountName\": \"default\",\n \"serviceAccount\": \"default\",\n \"nodeName\": \"kali-control-plane\",\n \"securityContext\": {},\n \"affinity\": {\n \"nodeAffinity\": {\n \"requiredDuringSchedulingIgnoredDuringExecution\": {\n \"nodeSelectorTerms\": [\n {\n \"matchFields\": [\n {\n \"key\": \"metadata.name\",\n \"operator\": \"In\",\n \"values\": [\n \"kali-control-plane\"\n ]\n }\n ]\n }\n ]\n }\n }\n },\n \"schedulerName\": \"default-scheduler\",\n \"tolerations\": [\n {\n \"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n },\n {\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"effect\": \"NoExecute\"\n },\n {\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"effect\": \"NoExecute\"\n },\n {\n \"key\": \"node.kubernetes.io/disk-pressure\",\n \"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n },\n {\n \"key\": \"node.kubernetes.io/memory-pressure\",\n \"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n },\n {\n \"key\": \"node.kubernetes.io/pid-pressure\",\n \"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n },\n {\n \"key\": \"node.kubernetes.io/unschedulable\",\n \"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n }\n ],\n \"priority\": 0,\n \"enableServiceLinks\": true,\n \"preemptionPolicy\": \"PreemptLowerPriority\"\n },\n 
\"status\": {\n \"phase\": \"Running\",\n \"conditions\": [\n {\n \"type\": \"Initialized\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-05-21T15:16:01Z\"\n },\n {\n \"type\": \"Ready\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-05-21T15:16:05Z\"\n },\n {\n \"type\": \"ContainersReady\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-05-21T15:16:05Z\"\n },\n {\n \"type\": \"PodScheduled\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-05-21T15:16:01Z\"\n }\n ],\n \"hostIP\": \"172.18.0.3\",\n \"podIP\": \"10.244.0.5\",\n \"podIPs\": [\n {\n \"ip\": \"10.244.0.5\"\n }\n ],\n \"startTime\": \"2021-05-21T15:16:01Z\",\n \"containerStatuses\": [\n {\n \"name\": \"loopdev\",\n \"state\": {\n \"running\": {\n \"startedAt\": \"2021-05-21T15:16:05Z\"\n }\n },\n \"lastState\": {},\n \"ready\": true,\n \"restartCount\": 0,\n \"image\": \"docker.io/library/alpine:3.6\",\n \"imageID\": \"docker.io/library/alpine@sha256:66790a2b79e1ea3e1dabac43990c54aca5d1ddf268d9a5a0285e4167c8b24475\",\n \"containerID\": \"containerd://6d03f9d6d3f58e947c15be22fe8e57c0205c0ec9238f13a9316bb4dcca85e622\",\n \"started\": true\n }\n ],\n \"qosClass\": \"BestEffort\"\n }\n },\n {\n \"metadata\": {\n \"name\": \"etcd-kali-control-plane\",\n \"namespace\": \"kube-system\",\n \"selfLink\": \"/api/v1/namespaces/kube-system/pods/etcd-kali-control-plane\",\n \"uid\": \"d254d3f5-b938-41e3-aa4d-f0e093d10c38\",\n \"resourceVersion\": \"785\",\n \"creationTimestamp\": \"2021-05-21T15:13:29Z\",\n \"labels\": {\n \"component\": \"etcd\",\n \"tier\": \"control-plane\"\n },\n \"annotations\": {\n \"kubeadm.kubernetes.io/etcd.advertise-client-urls\": \"https://172.18.0.3:2379\",\n \"kubernetes.io/config.hash\": \"55b651534b53fa0d1ab155d44d29ea41\",\n \"kubernetes.io/config.mirror\": \"55b651534b53fa0d1ab155d44d29ea41\",\n \"kubernetes.io/config.seen\": 
\"2021-05-21T15:13:22.926425789Z\",\n \"kubernetes.io/config.source\": \"file\"\n },\n \"ownerReferences\": [\n {\n \"apiVersion\": \"v1\",\n \"kind\": \"Node\",\n \"name\": \"kali-control-plane\",\n \"uid\": \"3bc732d5-d94d-4ab3-a172-a75c133ce66c\",\n \"controller\": true\n }\n ],\n \"managedFields\": [\n {\n \"manager\": \"kubelet\",\n \"operation\": \"Update\",\n \"apiVersion\": \"v1\",\n \"time\": \"2021-05-21T15:14:31Z\",\n \"fieldsType\": \"FieldsV1\",\n \"fieldsV1\": {\n \"f:metadata\": {\n \"f:annotations\": {\n \".\": {},\n \"f:kubeadm.kubernetes.io/etcd.advertise-client-urls\": {},\n \"f:kubernetes.io/config.hash\": {},\n \"f:kubernetes.io/config.mirror\": {},\n \"f:kubernetes.io/config.seen\": {},\n \"f:kubernetes.io/config.source\": {}\n },\n \"f:labels\": {\n \".\": {},\n \"f:component\": {},\n \"f:tier\": {}\n },\n \"f:ownerReferences\": {\n \".\": {},\n \"k:{\\\"uid\\\":\\\"3bc732d5-d94d-4ab3-a172-a75c133ce66c\\\"}\": {\n \".\": {},\n \"f:apiVersion\": {},\n \"f:controller\": {},\n \"f:kind\": {},\n \"f:name\": {},\n \"f:uid\": {}\n }\n }\n },\n \"f:spec\": {\n \"f:containers\": {\n \"k:{\\\"name\\\":\\\"etcd\\\"}\": {\n \".\": {},\n \"f:command\": {},\n \"f:image\": {},\n \"f:imagePullPolicy\": {},\n \"f:livenessProbe\": {\n \".\": {},\n \"f:failureThreshold\": {},\n \"f:httpGet\": {\n \".\": {},\n \"f:host\": {},\n \"f:path\": {},\n \"f:port\": {},\n \"f:scheme\": {}\n },\n \"f:initialDelaySeconds\": {},\n \"f:periodSeconds\": {},\n \"f:successThreshold\": {},\n \"f:timeoutSeconds\": {}\n },\n \"f:name\": {},\n \"f:resources\": {},\n \"f:startupProbe\": {\n \".\": {},\n \"f:failureThreshold\": {},\n \"f:httpGet\": {\n \".\": {},\n \"f:host\": {},\n \"f:path\": {},\n \"f:port\": {},\n \"f:scheme\": {}\n },\n \"f:initialDelaySeconds\": {},\n \"f:periodSeconds\": {},\n \"f:successThreshold\": {},\n \"f:timeoutSeconds\": {}\n },\n \"f:terminationMessagePath\": {},\n \"f:terminationMessagePolicy\": {},\n \"f:volumeMounts\": {\n \".\": {},\n 
\"k:{\\\"mountPath\\\":\\\"/etc/kubernetes/pki/etcd\\\"}\": {\n \".\": {},\n \"f:mountPath\": {},\n \"f:name\": {}\n },\n \"k:{\\\"mountPath\\\":\\\"/var/lib/etcd\\\"}\": {\n \".\": {},\n \"f:mountPath\": {},\n \"f:name\": {}\n }\n }\n }\n },\n \"f:dnsPolicy\": {},\n \"f:enableServiceLinks\": {},\n \"f:hostNetwork\": {},\n \"f:nodeName\": {},\n \"f:priorityClassName\": {},\n \"f:restartPolicy\": {},\n \"f:schedulerName\": {},\n \"f:securityContext\": {},\n \"f:terminationGracePeriodSeconds\": {},\n \"f:tolerations\": {},\n \"f:volumes\": {\n \".\": {},\n \"k:{\\\"name\\\":\\\"etcd-certs\\\"}\": {\n \".\": {},\n \"f:hostPath\": {\n \".\": {},\n \"f:path\": {},\n \"f:type\": {}\n },\n \"f:name\": {}\n },\n \"k:{\\\"name\\\":\\\"etcd-data\\\"}\": {\n \".\": {},\n \"f:hostPath\": {\n \".\": {},\n \"f:path\": {},\n \"f:type\": {}\n },\n \"f:name\": {}\n }\n }\n },\n \"f:status\": {\n \"f:conditions\": {\n \".\": {},\n \"k:{\\\"type\\\":\\\"ContainersReady\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n },\n \"k:{\\\"type\\\":\\\"Initialized\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n },\n \"k:{\\\"type\\\":\\\"PodScheduled\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n },\n \"k:{\\\"type\\\":\\\"Ready\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n }\n },\n \"f:containerStatuses\": {},\n \"f:hostIP\": {},\n \"f:phase\": {},\n \"f:podIP\": {},\n \"f:podIPs\": {\n \".\": {},\n \"k:{\\\"ip\\\":\\\"172.18.0.3\\\"}\": {\n \".\": {},\n \"f:ip\": {}\n }\n },\n \"f:startTime\": {}\n }\n }\n }\n ]\n },\n \"spec\": {\n \"volumes\": [\n {\n \"name\": \"etcd-certs\",\n \"hostPath\": {\n \"path\": \"/etc/kubernetes/pki/etcd\",\n \"type\": \"DirectoryOrCreate\"\n }\n },\n {\n \"name\": 
\"etcd-data\",\n \"hostPath\": {\n \"path\": \"/var/lib/etcd\",\n \"type\": \"DirectoryOrCreate\"\n }\n }\n ],\n \"containers\": [\n {\n \"name\": \"etcd\",\n \"image\": \"k8s.gcr.io/etcd:3.4.13-0\",\n \"command\": [\n \"etcd\",\n \"--advertise-client-urls=https://172.18.0.3:2379\",\n \"--cert-file=/etc/kubernetes/pki/etcd/server.crt\",\n \"--client-cert-auth=true\",\n \"--data-dir=/var/lib/etcd\",\n \"--initial-advertise-peer-urls=https://172.18.0.3:2380\",\n \"--initial-cluster=kali-control-plane=https://172.18.0.3:2380\",\n \"--key-file=/etc/kubernetes/pki/etcd/server.key\",\n \"--listen-client-urls=https://127.0.0.1:2379,https://172.18.0.3:2379\",\n \"--listen-metrics-urls=http://127.0.0.1:2381\",\n \"--listen-peer-urls=https://172.18.0.3:2380\",\n \"--name=kali-control-plane\",\n \"--peer-cert-file=/etc/kubernetes/pki/etcd/peer.crt\",\n \"--peer-client-cert-auth=true\",\n \"--peer-key-file=/etc/kubernetes/pki/etcd/peer.key\",\n \"--peer-trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt\",\n \"--snapshot-count=10000\",\n \"--trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt\"\n ],\n \"resources\": {},\n \"volumeMounts\": [\n {\n \"name\": \"etcd-data\",\n \"mountPath\": \"/var/lib/etcd\"\n },\n {\n \"name\": \"etcd-certs\",\n \"mountPath\": \"/etc/kubernetes/pki/etcd\"\n }\n ],\n \"livenessProbe\": {\n \"httpGet\": {\n \"path\": \"/health\",\n \"port\": 2381,\n \"host\": \"127.0.0.1\",\n \"scheme\": \"HTTP\"\n },\n \"initialDelaySeconds\": 10,\n \"timeoutSeconds\": 15,\n \"periodSeconds\": 10,\n \"successThreshold\": 1,\n \"failureThreshold\": 8\n },\n \"startupProbe\": {\n \"httpGet\": {\n \"path\": \"/health\",\n \"port\": 2381,\n \"host\": \"127.0.0.1\",\n \"scheme\": \"HTTP\"\n },\n \"initialDelaySeconds\": 10,\n \"timeoutSeconds\": 15,\n \"periodSeconds\": 10,\n \"successThreshold\": 1,\n \"failureThreshold\": 24\n },\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"imagePullPolicy\": \"IfNotPresent\"\n }\n 
],\n \"restartPolicy\": \"Always\",\n \"terminationGracePeriodSeconds\": 30,\n \"dnsPolicy\": \"ClusterFirst\",\n \"nodeName\": \"kali-control-plane\",\n \"hostNetwork\": true,\n \"securityContext\": {},\n \"schedulerName\": \"default-scheduler\",\n \"tolerations\": [\n {\n \"operator\": \"Exists\",\n \"effect\": \"NoExecute\"\n }\n ],\n \"priorityClassName\": \"system-node-critical\",\n \"priority\": 2000001000,\n \"enableServiceLinks\": true,\n \"preemptionPolicy\": \"PreemptLowerPriority\"\n },\n \"status\": {\n \"phase\": \"Running\",\n \"conditions\": [\n {\n \"type\": \"Initialized\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-05-21T15:13:29Z\"\n },\n {\n \"type\": \"Ready\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-05-21T15:14:31Z\"\n },\n {\n \"type\": \"ContainersReady\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-05-21T15:14:31Z\"\n },\n {\n \"type\": \"PodScheduled\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-05-21T15:13:29Z\"\n }\n ],\n \"hostIP\": \"172.18.0.3\",\n \"podIP\": \"172.18.0.3\",\n \"podIPs\": [\n {\n \"ip\": \"172.18.0.3\"\n }\n ],\n \"startTime\": \"2021-05-21T15:13:29Z\",\n \"containerStatuses\": [\n {\n \"name\": \"etcd\",\n \"state\": {\n \"running\": {\n \"startedAt\": \"2021-05-21T15:13:09Z\"\n }\n },\n \"lastState\": {},\n \"ready\": true,\n \"restartCount\": 0,\n \"image\": \"k8s.gcr.io/etcd:3.4.13-0\",\n \"imageID\": \"sha256:0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934\",\n \"containerID\": \"containerd://c259e6cd481abb32c22f2522c4e207b6c277b4d7cfad68724973a50244dc193d\",\n \"started\": true\n }\n ],\n \"qosClass\": \"BestEffort\"\n }\n },\n {\n \"metadata\": {\n \"name\": \"kindnet-7b2zs\",\n \"generateName\": \"kindnet-\",\n \"namespace\": \"kube-system\",\n \"selfLink\": \"/api/v1/namespaces/kube-system/pods/kindnet-7b2zs\",\n \"uid\": 
\"5fa67879-2470-42da-8248-afb9f60717ad\",\n \"resourceVersion\": \"484\",\n \"creationTimestamp\": \"2021-05-21T15:13:35Z\",\n \"labels\": {\n \"app\": \"kindnet\",\n \"controller-revision-hash\": \"b85b97576\",\n \"k8s-app\": \"kindnet\",\n \"pod-template-generation\": \"1\",\n \"tier\": \"node\"\n },\n \"ownerReferences\": [\n {\n \"apiVersion\": \"apps/v1\",\n \"kind\": \"DaemonSet\",\n \"name\": \"kindnet\",\n \"uid\": \"4475fe22-8df5-4436-bc1d-18482df5a443\",\n \"controller\": true,\n \"blockOwnerDeletion\": true\n }\n ],\n \"managedFields\": [\n {\n \"manager\": \"kube-controller-manager\",\n \"operation\": \"Update\",\n \"apiVersion\": \"v1\",\n \"time\": \"2021-05-21T15:13:35Z\",\n \"fieldsType\": \"FieldsV1\",\n \"fieldsV1\": {\n \"f:metadata\": {\n \"f:generateName\": {},\n \"f:labels\": {\n \".\": {},\n \"f:app\": {},\n \"f:controller-revision-hash\": {},\n \"f:k8s-app\": {},\n \"f:pod-template-generation\": {},\n \"f:tier\": {}\n },\n \"f:ownerReferences\": {\n \".\": {},\n \"k:{\\\"uid\\\":\\\"4475fe22-8df5-4436-bc1d-18482df5a443\\\"}\": {\n \".\": {},\n \"f:apiVersion\": {},\n \"f:blockOwnerDeletion\": {},\n \"f:controller\": {},\n \"f:kind\": {},\n \"f:name\": {},\n \"f:uid\": {}\n }\n }\n },\n \"f:spec\": {\n \"f:affinity\": {\n \".\": {},\n \"f:nodeAffinity\": {\n \".\": {},\n \"f:requiredDuringSchedulingIgnoredDuringExecution\": {\n \".\": {},\n \"f:nodeSelectorTerms\": {}\n }\n }\n },\n \"f:containers\": {\n \"k:{\\\"name\\\":\\\"kindnet-cni\\\"}\": {\n \".\": {},\n \"f:env\": {\n \".\": {},\n \"k:{\\\"name\\\":\\\"CONTROL_PLANE_ENDPOINT\\\"}\": {\n \".\": {},\n \"f:name\": {},\n \"f:value\": {}\n },\n \"k:{\\\"name\\\":\\\"HOST_IP\\\"}\": {\n \".\": {},\n \"f:name\": {},\n \"f:valueFrom\": {\n \".\": {},\n \"f:fieldRef\": {\n \".\": {},\n \"f:apiVersion\": {},\n \"f:fieldPath\": {}\n }\n }\n },\n \"k:{\\\"name\\\":\\\"POD_IP\\\"}\": {\n \".\": {},\n \"f:name\": {},\n \"f:valueFrom\": {\n \".\": {},\n \"f:fieldRef\": {\n \".\": {},\n 
\"f:apiVersion\": {},\n \"f:fieldPath\": {}\n }\n }\n },\n \"k:{\\\"name\\\":\\\"POD_SUBNET\\\"}\": {\n \".\": {},\n \"f:name\": {},\n \"f:value\": {}\n }\n },\n \"f:image\": {},\n \"f:imagePullPolicy\": {},\n \"f:name\": {},\n \"f:resources\": {\n \".\": {},\n \"f:limits\": {\n \".\": {},\n \"f:cpu\": {},\n \"f:memory\": {}\n },\n \"f:requests\": {\n \".\": {},\n \"f:cpu\": {},\n \"f:memory\": {}\n }\n },\n \"f:securityContext\": {\n \".\": {},\n \"f:capabilities\": {\n \".\": {},\n \"f:add\": {}\n },\n \"f:privileged\": {}\n },\n \"f:terminationMessagePath\": {},\n \"f:terminationMessagePolicy\": {},\n \"f:volumeMounts\": {\n \".\": {},\n \"k:{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\"}\": {\n \".\": {},\n \"f:mountPath\": {},\n \"f:name\": {}\n },\n \"k:{\\\"mountPath\\\":\\\"/lib/modules\\\"}\": {\n \".\": {},\n \"f:mountPath\": {},\n \"f:name\": {},\n \"f:readOnly\": {}\n },\n \"k:{\\\"mountPath\\\":\\\"/run/xtables.lock\\\"}\": {\n \".\": {},\n \"f:mountPath\": {},\n \"f:name\": {}\n }\n }\n }\n },\n \"f:dnsPolicy\": {},\n \"f:enableServiceLinks\": {},\n \"f:hostNetwork\": {},\n \"f:restartPolicy\": {},\n \"f:schedulerName\": {},\n \"f:securityContext\": {},\n \"f:serviceAccount\": {},\n \"f:serviceAccountName\": {},\n \"f:terminationGracePeriodSeconds\": {},\n \"f:tolerations\": {},\n \"f:volumes\": {\n \".\": {},\n \"k:{\\\"name\\\":\\\"cni-cfg\\\"}\": {\n \".\": {},\n \"f:hostPath\": {\n \".\": {},\n \"f:path\": {},\n \"f:type\": {}\n },\n \"f:name\": {}\n },\n \"k:{\\\"name\\\":\\\"lib-modules\\\"}\": {\n \".\": {},\n \"f:hostPath\": {\n \".\": {},\n \"f:path\": {},\n \"f:type\": {}\n },\n \"f:name\": {}\n },\n \"k:{\\\"name\\\":\\\"xtables-lock\\\"}\": {\n \".\": {},\n \"f:hostPath\": {\n \".\": {},\n \"f:path\": {},\n \"f:type\": {}\n },\n \"f:name\": {}\n }\n }\n }\n }\n },\n {\n \"manager\": \"kubelet\",\n \"operation\": \"Update\",\n \"apiVersion\": \"v1\",\n \"time\": \"2021-05-21T15:13:39Z\",\n \"fieldsType\": \"FieldsV1\",\n \"fieldsV1\": {\n 
\"f:status\": {\n \"f:conditions\": {\n \"k:{\\\"type\\\":\\\"ContainersReady\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n },\n \"k:{\\\"type\\\":\\\"Initialized\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n },\n \"k:{\\\"type\\\":\\\"Ready\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n }\n },\n \"f:containerStatuses\": {},\n \"f:hostIP\": {},\n \"f:phase\": {},\n \"f:podIP\": {},\n \"f:podIPs\": {\n \".\": {},\n \"k:{\\\"ip\\\":\\\"172.18.0.3\\\"}\": {\n \".\": {},\n \"f:ip\": {}\n }\n },\n \"f:startTime\": {}\n }\n }\n }\n ]\n },\n \"spec\": {\n \"volumes\": [\n {\n \"name\": \"cni-cfg\",\n \"hostPath\": {\n \"path\": \"/etc/cni/net.d\",\n \"type\": \"\"\n }\n },\n {\n \"name\": \"xtables-lock\",\n \"hostPath\": {\n \"path\": \"/run/xtables.lock\",\n \"type\": \"FileOrCreate\"\n }\n },\n {\n \"name\": \"lib-modules\",\n \"hostPath\": {\n \"path\": \"/lib/modules\",\n \"type\": \"\"\n }\n },\n {\n \"name\": \"kindnet-token-nkq4h\",\n \"secret\": {\n \"secretName\": \"kindnet-token-nkq4h\",\n \"defaultMode\": 420\n }\n }\n ],\n \"containers\": [\n {\n \"name\": \"kindnet-cni\",\n \"image\": \"docker.io/kindest/kindnetd:v20210326-1e038dc5\",\n \"env\": [\n {\n \"name\": \"HOST_IP\",\n \"valueFrom\": {\n \"fieldRef\": {\n \"apiVersion\": \"v1\",\n \"fieldPath\": \"status.hostIP\"\n }\n }\n },\n {\n \"name\": \"POD_IP\",\n \"valueFrom\": {\n \"fieldRef\": {\n \"apiVersion\": \"v1\",\n \"fieldPath\": \"status.podIP\"\n }\n }\n },\n {\n \"name\": \"POD_SUBNET\",\n \"value\": \"10.244.0.0/16\"\n },\n {\n \"name\": \"CONTROL_PLANE_ENDPOINT\",\n \"value\": \"kali-control-plane:6443\"\n }\n ],\n \"resources\": {\n \"limits\": {\n \"cpu\": \"100m\",\n \"memory\": \"50Mi\"\n },\n \"requests\": {\n \"cpu\": \"100m\",\n \"memory\": \"50Mi\"\n }\n },\n 
\"volumeMounts\": [\n {\n \"name\": \"cni-cfg\",\n \"mountPath\": \"/etc/cni/net.d\"\n },\n {\n \"name\": \"xtables-lock\",\n \"mountPath\": \"/run/xtables.lock\"\n },\n {\n \"name\": \"lib-modules\",\n \"readOnly\": true,\n \"mountPath\": \"/lib/modules\"\n },\n {\n \"name\": \"kindnet-token-nkq4h\",\n \"readOnly\": true,\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\"\n }\n ],\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"securityContext\": {\n \"capabilities\": {\n \"add\": [\n \"NET_RAW\",\n \"NET_ADMIN\"\n ]\n },\n \"privileged\": false\n }\n }\n ],\n \"restartPolicy\": \"Always\",\n \"terminationGracePeriodSeconds\": 30,\n \"dnsPolicy\": \"ClusterFirst\",\n \"serviceAccountName\": \"kindnet\",\n \"serviceAccount\": \"kindnet\",\n \"nodeName\": \"kali-control-plane\",\n \"hostNetwork\": true,\n \"securityContext\": {},\n \"affinity\": {\n \"nodeAffinity\": {\n \"requiredDuringSchedulingIgnoredDuringExecution\": {\n \"nodeSelectorTerms\": [\n {\n \"matchFields\": [\n {\n \"key\": \"metadata.name\",\n \"operator\": \"In\",\n \"values\": [\n \"kali-control-plane\"\n ]\n }\n ]\n }\n ]\n }\n }\n },\n \"schedulerName\": \"default-scheduler\",\n \"tolerations\": [\n {\n \"operator\": \"Exists\"\n },\n {\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"effect\": \"NoExecute\"\n },\n {\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"effect\": \"NoExecute\"\n },\n {\n \"key\": \"node.kubernetes.io/disk-pressure\",\n \"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n },\n {\n \"key\": \"node.kubernetes.io/memory-pressure\",\n \"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n },\n {\n \"key\": \"node.kubernetes.io/pid-pressure\",\n \"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n },\n {\n \"key\": \"node.kubernetes.io/unschedulable\",\n \"operator\": \"Exists\",\n \"effect\": 
\"NoSchedule\"\n },\n {\n \"key\": \"node.kubernetes.io/network-unavailable\",\n \"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n }\n ],\n \"priority\": 0,\n \"enableServiceLinks\": true,\n \"preemptionPolicy\": \"PreemptLowerPriority\"\n },\n \"status\": {\n \"phase\": \"Running\",\n \"conditions\": [\n {\n \"type\": \"Initialized\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-05-21T15:13:35Z\"\n },\n {\n \"type\": \"Ready\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-05-21T15:13:39Z\"\n },\n {\n \"type\": \"ContainersReady\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-05-21T15:13:39Z\"\n },\n {\n \"type\": \"PodScheduled\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-05-21T15:13:35Z\"\n }\n ],\n \"hostIP\": \"172.18.0.3\",\n \"podIP\": \"172.18.0.3\",\n \"podIPs\": [\n {\n \"ip\": \"172.18.0.3\"\n }\n ],\n \"startTime\": \"2021-05-21T15:13:35Z\",\n \"containerStatuses\": [\n {\n \"name\": \"kindnet-cni\",\n \"state\": {\n \"running\": {\n \"startedAt\": \"2021-05-21T15:13:38Z\"\n }\n },\n \"lastState\": {},\n \"ready\": true,\n \"restartCount\": 0,\n \"image\": \"docker.io/kindest/kindnetd:v20210326-1e038dc5\",\n \"imageID\": \"sha256:6de166512aa223315ff9cfd49bd4f13aab1591cd8fc57e31270f0e4aa34129cb\",\n \"containerID\": \"containerd://93e3d42b88e11dfc7f45498cc46753fa272cd7af0e0907ed69a646133205c29c\",\n \"started\": true\n }\n ],\n \"qosClass\": \"Guaranteed\"\n }\n },\n {\n \"metadata\": {\n \"name\": \"kindnet-n7f64\",\n \"generateName\": \"kindnet-\",\n \"namespace\": \"kube-system\",\n \"selfLink\": \"/api/v1/namespaces/kube-system/pods/kindnet-n7f64\",\n \"uid\": \"8eb627e4-032b-481d-bc16-abbb6f977025\",\n \"resourceVersion\": \"645\",\n \"creationTimestamp\": \"2021-05-21T15:13:50Z\",\n \"labels\": {\n \"app\": \"kindnet\",\n \"controller-revision-hash\": \"b85b97576\",\n \"k8s-app\": 
\"kindnet\",\n \"pod-template-generation\": \"1\",\n \"tier\": \"node\"\n },\n \"ownerReferences\": [\n {\n \"apiVersion\": \"apps/v1\",\n \"kind\": \"DaemonSet\",\n \"name\": \"kindnet\",\n \"uid\": \"4475fe22-8df5-4436-bc1d-18482df5a443\",\n \"controller\": true,\n \"blockOwnerDeletion\": true\n }\n ],\n \"managedFields\": [\n {\n \"manager\": \"kube-controller-manager\",\n \"operation\": \"Update\",\n \"apiVersion\": \"v1\",\n \"time\": \"2021-05-21T15:13:50Z\",\n \"fieldsType\": \"FieldsV1\",\n \"fieldsV1\": {\n \"f:metadata\": {\n \"f:generateName\": {},\n \"f:labels\": {\n \".\": {},\n \"f:app\": {},\n \"f:controller-revision-hash\": {},\n \"f:k8s-app\": {},\n \"f:pod-template-generation\": {},\n \"f:tier\": {}\n },\n \"f:ownerReferences\": {\n \".\": {},\n \"k:{\\\"uid\\\":\\\"4475fe22-8df5-4436-bc1d-18482df5a443\\\"}\": {\n \".\": {},\n \"f:apiVersion\": {},\n \"f:blockOwnerDeletion\": {},\n \"f:controller\": {},\n \"f:kind\": {},\n \"f:name\": {},\n \"f:uid\": {}\n }\n }\n },\n \"f:spec\": {\n \"f:affinity\": {\n \".\": {},\n \"f:nodeAffinity\": {\n \".\": {},\n \"f:requiredDuringSchedulingIgnoredDuringExecution\": {\n \".\": {},\n \"f:nodeSelectorTerms\": {}\n }\n }\n },\n \"f:containers\": {\n \"k:{\\\"name\\\":\\\"kindnet-cni\\\"}\": {\n \".\": {},\n \"f:env\": {\n \".\": {},\n \"k:{\\\"name\\\":\\\"CONTROL_PLANE_ENDPOINT\\\"}\": {\n \".\": {},\n \"f:name\": {},\n \"f:value\": {}\n },\n \"k:{\\\"name\\\":\\\"HOST_IP\\\"}\": {\n \".\": {},\n \"f:name\": {},\n \"f:valueFrom\": {\n \".\": {},\n \"f:fieldRef\": {\n \".\": {},\n \"f:apiVersion\": {},\n \"f:fieldPath\": {}\n }\n }\n },\n \"k:{\\\"name\\\":\\\"POD_IP\\\"}\": {\n \".\": {},\n \"f:name\": {},\n \"f:valueFrom\": {\n \".\": {},\n \"f:fieldRef\": {\n \".\": {},\n \"f:apiVersion\": {},\n \"f:fieldPath\": {}\n }\n }\n },\n \"k:{\\\"name\\\":\\\"POD_SUBNET\\\"}\": {\n \".\": {},\n \"f:name\": {},\n \"f:value\": {}\n }\n },\n \"f:image\": {},\n \"f:imagePullPolicy\": {},\n \"f:name\": {},\n 
\"f:resources\": {\n \".\": {},\n \"f:limits\": {\n \".\": {},\n \"f:cpu\": {},\n \"f:memory\": {}\n },\n \"f:requests\": {\n \".\": {},\n \"f:cpu\": {},\n \"f:memory\": {}\n }\n },\n \"f:securityContext\": {\n \".\": {},\n \"f:capabilities\": {\n \".\": {},\n \"f:add\": {}\n },\n \"f:privileged\": {}\n },\n \"f:terminationMessagePath\": {},\n \"f:terminationMessagePolicy\": {},\n \"f:volumeMounts\": {\n \".\": {},\n \"k:{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\"}\": {\n \".\": {},\n \"f:mountPath\": {},\n \"f:name\": {}\n },\n \"k:{\\\"mountPath\\\":\\\"/lib/modules\\\"}\": {\n \".\": {},\n \"f:mountPath\": {},\n \"f:name\": {},\n \"f:readOnly\": {}\n },\n \"k:{\\\"mountPath\\\":\\\"/run/xtables.lock\\\"}\": {\n \".\": {},\n \"f:mountPath\": {},\n \"f:name\": {}\n }\n }\n }\n },\n \"f:dnsPolicy\": {},\n \"f:enableServiceLinks\": {},\n \"f:hostNetwork\": {},\n \"f:restartPolicy\": {},\n \"f:schedulerName\": {},\n \"f:securityContext\": {},\n \"f:serviceAccount\": {},\n \"f:serviceAccountName\": {},\n \"f:terminationGracePeriodSeconds\": {},\n \"f:tolerations\": {},\n \"f:volumes\": {\n \".\": {},\n \"k:{\\\"name\\\":\\\"cni-cfg\\\"}\": {\n \".\": {},\n \"f:hostPath\": {\n \".\": {},\n \"f:path\": {},\n \"f:type\": {}\n },\n \"f:name\": {}\n },\n \"k:{\\\"name\\\":\\\"lib-modules\\\"}\": {\n \".\": {},\n \"f:hostPath\": {\n \".\": {},\n \"f:path\": {},\n \"f:type\": {}\n },\n \"f:name\": {}\n },\n \"k:{\\\"name\\\":\\\"xtables-lock\\\"}\": {\n \".\": {},\n \"f:hostPath\": {\n \".\": {},\n \"f:path\": {},\n \"f:type\": {}\n },\n \"f:name\": {}\n }\n }\n }\n }\n },\n {\n \"manager\": \"kubelet\",\n \"operation\": \"Update\",\n \"apiVersion\": \"v1\",\n \"time\": \"2021-05-21T15:13:55Z\",\n \"fieldsType\": \"FieldsV1\",\n \"fieldsV1\": {\n \"f:status\": {\n \"f:conditions\": {\n \"k:{\\\"type\\\":\\\"ContainersReady\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n },\n 
\"k:{\\\"type\\\":\\\"Initialized\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n },\n \"k:{\\\"type\\\":\\\"Ready\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n }\n },\n \"f:containerStatuses\": {},\n \"f:hostIP\": {},\n \"f:phase\": {},\n \"f:podIP\": {},\n \"f:podIPs\": {\n \".\": {},\n \"k:{\\\"ip\\\":\\\"172.18.0.4\\\"}\": {\n \".\": {},\n \"f:ip\": {}\n }\n },\n \"f:startTime\": {}\n }\n }\n }\n ]\n },\n \"spec\": {\n \"volumes\": [\n {\n \"name\": \"cni-cfg\",\n \"hostPath\": {\n \"path\": \"/etc/cni/net.d\",\n \"type\": \"\"\n }\n },\n {\n \"name\": \"xtables-lock\",\n \"hostPath\": {\n \"path\": \"/run/xtables.lock\",\n \"type\": \"FileOrCreate\"\n }\n },\n {\n \"name\": \"lib-modules\",\n \"hostPath\": {\n \"path\": \"/lib/modules\",\n \"type\": \"\"\n }\n },\n {\n \"name\": \"kindnet-token-nkq4h\",\n \"secret\": {\n \"secretName\": \"kindnet-token-nkq4h\",\n \"defaultMode\": 420\n }\n }\n ],\n \"containers\": [\n {\n \"name\": \"kindnet-cni\",\n \"image\": \"docker.io/kindest/kindnetd:v20210326-1e038dc5\",\n \"env\": [\n {\n \"name\": \"HOST_IP\",\n \"valueFrom\": {\n \"fieldRef\": {\n \"apiVersion\": \"v1\",\n \"fieldPath\": \"status.hostIP\"\n }\n }\n },\n {\n \"name\": \"POD_IP\",\n \"valueFrom\": {\n \"fieldRef\": {\n \"apiVersion\": \"v1\",\n \"fieldPath\": \"status.podIP\"\n }\n }\n },\n {\n \"name\": \"POD_SUBNET\",\n \"value\": \"10.244.0.0/16\"\n },\n {\n \"name\": \"CONTROL_PLANE_ENDPOINT\",\n \"value\": \"kali-control-plane:6443\"\n }\n ],\n \"resources\": {\n \"limits\": {\n \"cpu\": \"100m\",\n \"memory\": \"50Mi\"\n },\n \"requests\": {\n \"cpu\": \"100m\",\n \"memory\": \"50Mi\"\n }\n },\n \"volumeMounts\": [\n {\n \"name\": \"cni-cfg\",\n \"mountPath\": \"/etc/cni/net.d\"\n },\n {\n \"name\": \"xtables-lock\",\n \"mountPath\": \"/run/xtables.lock\"\n },\n {\n \"name\": \"lib-modules\",\n 
\"readOnly\": true,\n \"mountPath\": \"/lib/modules\"\n },\n {\n \"name\": \"kindnet-token-nkq4h\",\n \"readOnly\": true,\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\"\n }\n ],\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"securityContext\": {\n \"capabilities\": {\n \"add\": [\n \"NET_RAW\",\n \"NET_ADMIN\"\n ]\n },\n \"privileged\": false\n }\n }\n ],\n \"restartPolicy\": \"Always\",\n \"terminationGracePeriodSeconds\": 30,\n \"dnsPolicy\": \"ClusterFirst\",\n \"serviceAccountName\": \"kindnet\",\n \"serviceAccount\": \"kindnet\",\n \"nodeName\": \"kali-worker2\",\n \"hostNetwork\": true,\n \"securityContext\": {},\n \"affinity\": {\n \"nodeAffinity\": {\n \"requiredDuringSchedulingIgnoredDuringExecution\": {\n \"nodeSelectorTerms\": [\n {\n \"matchFields\": [\n {\n \"key\": \"metadata.name\",\n \"operator\": \"In\",\n \"values\": [\n \"kali-worker2\"\n ]\n }\n ]\n }\n ]\n }\n }\n },\n \"schedulerName\": \"default-scheduler\",\n \"tolerations\": [\n {\n \"operator\": \"Exists\"\n },\n {\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"effect\": \"NoExecute\"\n },\n {\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"effect\": \"NoExecute\"\n },\n {\n \"key\": \"node.kubernetes.io/disk-pressure\",\n \"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n },\n {\n \"key\": \"node.kubernetes.io/memory-pressure\",\n \"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n },\n {\n \"key\": \"node.kubernetes.io/pid-pressure\",\n \"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n },\n {\n \"key\": \"node.kubernetes.io/unschedulable\",\n \"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n },\n {\n \"key\": \"node.kubernetes.io/network-unavailable\",\n \"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n }\n ],\n \"priority\": 0,\n \"enableServiceLinks\": true,\n 
\"preemptionPolicy\": \"PreemptLowerPriority\"\n },\n \"status\": {\n \"phase\": \"Running\",\n \"conditions\": [\n {\n \"type\": \"Initialized\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-05-21T15:13:50Z\"\n },\n {\n \"type\": \"Ready\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-05-21T15:13:55Z\"\n },\n {\n \"type\": \"ContainersReady\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-05-21T15:13:55Z\"\n },\n {\n \"type\": \"PodScheduled\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-05-21T15:13:50Z\"\n }\n ],\n \"hostIP\": \"172.18.0.4\",\n \"podIP\": \"172.18.0.4\",\n \"podIPs\": [\n {\n \"ip\": \"172.18.0.4\"\n }\n ],\n \"startTime\": \"2021-05-21T15:13:50Z\",\n \"containerStatuses\": [\n {\n \"name\": \"kindnet-cni\",\n \"state\": {\n \"running\": {\n \"startedAt\": \"2021-05-21T15:13:54Z\"\n }\n },\n \"lastState\": {},\n \"ready\": true,\n \"restartCount\": 0,\n \"image\": \"docker.io/kindest/kindnetd:v20210326-1e038dc5\",\n \"imageID\": \"sha256:6de166512aa223315ff9cfd49bd4f13aab1591cd8fc57e31270f0e4aa34129cb\",\n \"containerID\": \"containerd://a5db5ffeb30d717fd22cb3e01ec8f62968e864e93e9e98f82e52cf31170e20e6\",\n \"started\": true\n }\n ],\n \"qosClass\": \"Guaranteed\"\n }\n },\n {\n \"metadata\": {\n \"name\": \"kindnet-vlqfv\",\n \"generateName\": \"kindnet-\",\n \"namespace\": \"kube-system\",\n \"selfLink\": \"/api/v1/namespaces/kube-system/pods/kindnet-vlqfv\",\n \"uid\": \"d71cfbf4-d154-49e5-bfbc-fc9706a0f468\",\n \"resourceVersion\": \"631\",\n \"creationTimestamp\": \"2021-05-21T15:13:50Z\",\n \"labels\": {\n \"app\": \"kindnet\",\n \"controller-revision-hash\": \"b85b97576\",\n \"k8s-app\": \"kindnet\",\n \"pod-template-generation\": \"1\",\n \"tier\": \"node\"\n },\n \"ownerReferences\": [\n {\n \"apiVersion\": \"apps/v1\",\n \"kind\": \"DaemonSet\",\n \"name\": \"kindnet\",\n \"uid\": 
\"4475fe22-8df5-4436-bc1d-18482df5a443\",\n \"controller\": true,\n \"blockOwnerDeletion\": true\n }\n ],\n \"managedFields\": [\n {\n \"manager\": \"kube-controller-manager\",\n \"operation\": \"Update\",\n \"apiVersion\": \"v1\",\n \"time\": \"2021-05-21T15:13:50Z\",\n \"fieldsType\": \"FieldsV1\",\n \"fieldsV1\": {\n \"f:metadata\": {\n \"f:generateName\": {},\n \"f:labels\": {\n \".\": {},\n \"f:app\": {},\n \"f:controller-revision-hash\": {},\n \"f:k8s-app\": {},\n \"f:pod-template-generation\": {},\n \"f:tier\": {}\n },\n \"f:ownerReferences\": {\n \".\": {},\n \"k:{\\\"uid\\\":\\\"4475fe22-8df5-4436-bc1d-18482df5a443\\\"}\": {\n \".\": {},\n \"f:apiVersion\": {},\n \"f:blockOwnerDeletion\": {},\n \"f:controller\": {},\n \"f:kind\": {},\n \"f:name\": {},\n \"f:uid\": {}\n }\n }\n },\n \"f:spec\": {\n \"f:affinity\": {\n \".\": {},\n \"f:nodeAffinity\": {\n \".\": {},\n \"f:requiredDuringSchedulingIgnoredDuringExecution\": {\n \".\": {},\n \"f:nodeSelectorTerms\": {}\n }\n }\n },\n \"f:containers\": {\n \"k:{\\\"name\\\":\\\"kindnet-cni\\\"}\": {\n \".\": {},\n \"f:env\": {\n \".\": {},\n \"k:{\\\"name\\\":\\\"CONTROL_PLANE_ENDPOINT\\\"}\": {\n \".\": {},\n \"f:name\": {},\n \"f:value\": {}\n },\n \"k:{\\\"name\\\":\\\"HOST_IP\\\"}\": {\n \".\": {},\n \"f:name\": {},\n \"f:valueFrom\": {\n \".\": {},\n \"f:fieldRef\": {\n \".\": {},\n \"f:apiVersion\": {},\n \"f:fieldPath\": {}\n }\n }\n },\n \"k:{\\\"name\\\":\\\"POD_IP\\\"}\": {\n \".\": {},\n \"f:name\": {},\n \"f:valueFrom\": {\n \".\": {},\n \"f:fieldRef\": {\n \".\": {},\n \"f:apiVersion\": {},\n \"f:fieldPath\": {}\n }\n }\n },\n \"k:{\\\"name\\\":\\\"POD_SUBNET\\\"}\": {\n \".\": {},\n \"f:name\": {},\n \"f:value\": {}\n }\n },\n \"f:image\": {},\n \"f:imagePullPolicy\": {},\n \"f:name\": {},\n \"f:resources\": {\n \".\": {},\n \"f:limits\": {\n \".\": {},\n \"f:cpu\": {},\n \"f:memory\": {}\n },\n \"f:requests\": {\n \".\": {},\n \"f:cpu\": {},\n \"f:memory\": {}\n }\n },\n \"f:securityContext\": {\n 
\".\": {},\n \"f:capabilities\": {\n \".\": {},\n \"f:add\": {}\n },\n \"f:privileged\": {}\n },\n \"f:terminationMessagePath\": {},\n \"f:terminationMessagePolicy\": {},\n \"f:volumeMounts\": {\n \".\": {},\n \"k:{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\"}\": {\n \".\": {},\n \"f:mountPath\": {},\n \"f:name\": {}\n },\n \"k:{\\\"mountPath\\\":\\\"/lib/modules\\\"}\": {\n \".\": {},\n \"f:mountPath\": {},\n \"f:name\": {},\n \"f:readOnly\": {}\n },\n \"k:{\\\"mountPath\\\":\\\"/run/xtables.lock\\\"}\": {\n \".\": {},\n \"f:mountPath\": {},\n \"f:name\": {}\n }\n }\n }\n },\n \"f:dnsPolicy\": {},\n \"f:enableServiceLinks\": {},\n \"f:hostNetwork\": {},\n \"f:restartPolicy\": {},\n \"f:schedulerName\": {},\n \"f:securityContext\": {},\n \"f:serviceAccount\": {},\n \"f:serviceAccountName\": {},\n \"f:terminationGracePeriodSeconds\": {},\n \"f:tolerations\": {},\n \"f:volumes\": {\n \".\": {},\n \"k:{\\\"name\\\":\\\"cni-cfg\\\"}\": {\n \".\": {},\n \"f:hostPath\": {\n \".\": {},\n \"f:path\": {},\n \"f:type\": {}\n },\n \"f:name\": {}\n },\n \"k:{\\\"name\\\":\\\"lib-modules\\\"}\": {\n \".\": {},\n \"f:hostPath\": {\n \".\": {},\n \"f:path\": {},\n \"f:type\": {}\n },\n \"f:name\": {}\n },\n \"k:{\\\"name\\\":\\\"xtables-lock\\\"}\": {\n \".\": {},\n \"f:hostPath\": {\n \".\": {},\n \"f:path\": {},\n \"f:type\": {}\n },\n \"f:name\": {}\n }\n }\n }\n }\n },\n {\n \"manager\": \"kubelet\",\n \"operation\": \"Update\",\n \"apiVersion\": \"v1\",\n \"time\": \"2021-05-21T15:13:54Z\",\n \"fieldsType\": \"FieldsV1\",\n \"fieldsV1\": {\n \"f:status\": {\n \"f:conditions\": {\n \"k:{\\\"type\\\":\\\"ContainersReady\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n },\n \"k:{\\\"type\\\":\\\"Initialized\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n },\n \"k:{\\\"type\\\":\\\"Ready\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n 
\"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n }\n },\n \"f:containerStatuses\": {},\n \"f:hostIP\": {},\n \"f:phase\": {},\n \"f:podIP\": {},\n \"f:podIPs\": {\n \".\": {},\n \"k:{\\\"ip\\\":\\\"172.18.0.2\\\"}\": {\n \".\": {},\n \"f:ip\": {}\n }\n },\n \"f:startTime\": {}\n }\n }\n }\n ]\n },\n \"spec\": {\n \"volumes\": [\n {\n \"name\": \"cni-cfg\",\n \"hostPath\": {\n \"path\": \"/etc/cni/net.d\",\n \"type\": \"\"\n }\n },\n {\n \"name\": \"xtables-lock\",\n \"hostPath\": {\n \"path\": \"/run/xtables.lock\",\n \"type\": \"FileOrCreate\"\n }\n },\n {\n \"name\": \"lib-modules\",\n \"hostPath\": {\n \"path\": \"/lib/modules\",\n \"type\": \"\"\n }\n },\n {\n \"name\": \"kindnet-token-nkq4h\",\n \"secret\": {\n \"secretName\": \"kindnet-token-nkq4h\",\n \"defaultMode\": 420\n }\n }\n ],\n \"containers\": [\n {\n \"name\": \"kindnet-cni\",\n \"image\": \"docker.io/kindest/kindnetd:v20210326-1e038dc5\",\n \"env\": [\n {\n \"name\": \"HOST_IP\",\n \"valueFrom\": {\n \"fieldRef\": {\n \"apiVersion\": \"v1\",\n \"fieldPath\": \"status.hostIP\"\n }\n }\n },\n {\n \"name\": \"POD_IP\",\n \"valueFrom\": {\n \"fieldRef\": {\n \"apiVersion\": \"v1\",\n \"fieldPath\": \"status.podIP\"\n }\n }\n },\n {\n \"name\": \"POD_SUBNET\",\n \"value\": \"10.244.0.0/16\"\n },\n {\n \"name\": \"CONTROL_PLANE_ENDPOINT\",\n \"value\": \"kali-control-plane:6443\"\n }\n ],\n \"resources\": {\n \"limits\": {\n \"cpu\": \"100m\",\n \"memory\": \"50Mi\"\n },\n \"requests\": {\n \"cpu\": \"100m\",\n \"memory\": \"50Mi\"\n }\n },\n \"volumeMounts\": [\n {\n \"name\": \"cni-cfg\",\n \"mountPath\": \"/etc/cni/net.d\"\n },\n {\n \"name\": \"xtables-lock\",\n \"mountPath\": \"/run/xtables.lock\"\n },\n {\n \"name\": \"lib-modules\",\n \"readOnly\": true,\n \"mountPath\": \"/lib/modules\"\n },\n {\n \"name\": \"kindnet-token-nkq4h\",\n \"readOnly\": true,\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\"\n }\n ],\n \"terminationMessagePath\": 
\"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"securityContext\": {\n \"capabilities\": {\n \"add\": [\n \"NET_RAW\",\n \"NET_ADMIN\"\n ]\n },\n \"privileged\": false\n }\n }\n ],\n \"restartPolicy\": \"Always\",\n \"terminationGracePeriodSeconds\": 30,\n \"dnsPolicy\": \"ClusterFirst\",\n \"serviceAccountName\": \"kindnet\",\n \"serviceAccount\": \"kindnet\",\n \"nodeName\": \"kali-worker\",\n \"hostNetwork\": true,\n \"securityContext\": {},\n \"affinity\": {\n \"nodeAffinity\": {\n \"requiredDuringSchedulingIgnoredDuringExecution\": {\n \"nodeSelectorTerms\": [\n {\n \"matchFields\": [\n {\n \"key\": \"metadata.name\",\n \"operator\": \"In\",\n \"values\": [\n \"kali-worker\"\n ]\n }\n ]\n }\n ]\n }\n }\n },\n \"schedulerName\": \"default-scheduler\",\n \"tolerations\": [\n {\n \"operator\": \"Exists\"\n },\n {\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"effect\": \"NoExecute\"\n },\n {\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"effect\": \"NoExecute\"\n },\n {\n \"key\": \"node.kubernetes.io/disk-pressure\",\n \"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n },\n {\n \"key\": \"node.kubernetes.io/memory-pressure\",\n \"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n },\n {\n \"key\": \"node.kubernetes.io/pid-pressure\",\n \"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n },\n {\n \"key\": \"node.kubernetes.io/unschedulable\",\n \"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n },\n {\n \"key\": \"node.kubernetes.io/network-unavailable\",\n \"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n }\n ],\n \"priority\": 0,\n \"enableServiceLinks\": true,\n \"preemptionPolicy\": \"PreemptLowerPriority\"\n },\n \"status\": {\n \"phase\": \"Running\",\n \"conditions\": [\n {\n \"type\": \"Initialized\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-05-21T15:13:50Z\"\n 
},\n {\n \"type\": \"Ready\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-05-21T15:13:54Z\"\n },\n {\n \"type\": \"ContainersReady\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-05-21T15:13:54Z\"\n },\n {\n \"type\": \"PodScheduled\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-05-21T15:13:50Z\"\n }\n ],\n \"hostIP\": \"172.18.0.2\",\n \"podIP\": \"172.18.0.2\",\n \"podIPs\": [\n {\n \"ip\": \"172.18.0.2\"\n }\n ],\n \"startTime\": \"2021-05-21T15:13:50Z\",\n \"containerStatuses\": [\n {\n \"name\": \"kindnet-cni\",\n \"state\": {\n \"running\": {\n \"startedAt\": \"2021-05-21T15:13:54Z\"\n }\n },\n \"lastState\": {},\n \"ready\": true,\n \"restartCount\": 0,\n \"image\": \"docker.io/kindest/kindnetd:v20210326-1e038dc5\",\n \"imageID\": \"sha256:6de166512aa223315ff9cfd49bd4f13aab1591cd8fc57e31270f0e4aa34129cb\",\n \"containerID\": \"containerd://8947e9668ba01685c7b0b05c2cc7b608c25e9cb3211b7b4dc7a549e26f086d0a\",\n \"started\": true\n }\n ],\n \"qosClass\": \"Guaranteed\"\n }\n },\n {\n \"metadata\": {\n \"name\": \"kube-apiserver-kali-control-plane\",\n \"namespace\": \"kube-system\",\n \"selfLink\": \"/api/v1/namespaces/kube-system/pods/kube-apiserver-kali-control-plane\",\n \"uid\": \"53e89f37-ca2d-4dcc-96aa-b2ad181db6c9\",\n \"resourceVersion\": \"459\",\n \"creationTimestamp\": \"2021-05-21T15:13:29Z\",\n \"labels\": {\n \"component\": \"kube-apiserver\",\n \"tier\": \"control-plane\"\n },\n \"annotations\": {\n \"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint\": \"172.18.0.3:6443\",\n \"kubernetes.io/config.hash\": \"de70dfbae78885555074e8c9eec7f016\",\n \"kubernetes.io/config.mirror\": \"de70dfbae78885555074e8c9eec7f016\",\n \"kubernetes.io/config.seen\": \"2021-05-21T15:13:22.926431830Z\",\n \"kubernetes.io/config.source\": \"file\"\n },\n \"ownerReferences\": [\n {\n \"apiVersion\": \"v1\",\n \"kind\": \"Node\",\n 
\"name\": \"kali-control-plane\",\n \"uid\": \"3bc732d5-d94d-4ab3-a172-a75c133ce66c\",\n \"controller\": true\n }\n ],\n \"managedFields\": [\n {\n \"manager\": \"kubelet\",\n \"operation\": \"Update\",\n \"apiVersion\": \"v1\",\n \"time\": \"2021-05-21T15:13:35Z\",\n \"fieldsType\": \"FieldsV1\",\n \"fieldsV1\": {\n \"f:metadata\": {\n \"f:annotations\": {\n \".\": {},\n \"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint\": {},\n \"f:kubernetes.io/config.hash\": {},\n \"f:kubernetes.io/config.mirror\": {},\n \"f:kubernetes.io/config.seen\": {},\n \"f:kubernetes.io/config.source\": {}\n },\n \"f:labels\": {\n \".\": {},\n \"f:component\": {},\n \"f:tier\": {}\n },\n \"f:ownerReferences\": {\n \".\": {},\n \"k:{\\\"uid\\\":\\\"3bc732d5-d94d-4ab3-a172-a75c133ce66c\\\"}\": {\n \".\": {},\n \"f:apiVersion\": {},\n \"f:controller\": {},\n \"f:kind\": {},\n \"f:name\": {},\n \"f:uid\": {}\n }\n }\n },\n \"f:spec\": {\n \"f:containers\": {\n \"k:{\\\"name\\\":\\\"kube-apiserver\\\"}\": {\n \".\": {},\n \"f:command\": {},\n \"f:image\": {},\n \"f:imagePullPolicy\": {},\n \"f:livenessProbe\": {\n \".\": {},\n \"f:failureThreshold\": {},\n \"f:httpGet\": {\n \".\": {},\n \"f:host\": {},\n \"f:path\": {},\n \"f:port\": {},\n \"f:scheme\": {}\n },\n \"f:initialDelaySeconds\": {},\n \"f:periodSeconds\": {},\n \"f:successThreshold\": {},\n \"f:timeoutSeconds\": {}\n },\n \"f:name\": {},\n \"f:readinessProbe\": {\n \".\": {},\n \"f:failureThreshold\": {},\n \"f:httpGet\": {\n \".\": {},\n \"f:host\": {},\n \"f:path\": {},\n \"f:port\": {},\n \"f:scheme\": {}\n },\n \"f:periodSeconds\": {},\n \"f:successThreshold\": {},\n \"f:timeoutSeconds\": {}\n },\n \"f:resources\": {\n \".\": {},\n \"f:requests\": {\n \".\": {},\n \"f:cpu\": {}\n }\n },\n \"f:startupProbe\": {\n \".\": {},\n \"f:failureThreshold\": {},\n \"f:httpGet\": {\n \".\": {},\n \"f:host\": {},\n \"f:path\": {},\n \"f:port\": {},\n \"f:scheme\": {}\n },\n \"f:initialDelaySeconds\": {},\n 
\"f:periodSeconds\": {},\n \"f:successThreshold\": {},\n \"f:timeoutSeconds\": {}\n },\n \"f:terminationMessagePath\": {},\n \"f:terminationMessagePolicy\": {},\n \"f:volumeMounts\": {\n \".\": {},\n \"k:{\\\"mountPath\\\":\\\"/etc/ca-certificates\\\"}\": {\n \".\": {},\n \"f:mountPath\": {},\n \"f:name\": {},\n \"f:readOnly\": {}\n },\n \"k:{\\\"mountPath\\\":\\\"/etc/kubernetes/pki\\\"}\": {\n \".\": {},\n \"f:mountPath\": {},\n \"f:name\": {},\n \"f:readOnly\": {}\n },\n \"k:{\\\"mountPath\\\":\\\"/etc/ssl/certs\\\"}\": {\n \".\": {},\n \"f:mountPath\": {},\n \"f:name\": {},\n \"f:readOnly\": {}\n },\n \"k:{\\\"mountPath\\\":\\\"/usr/local/share/ca-certificates\\\"}\": {\n \".\": {},\n \"f:mountPath\": {},\n \"f:name\": {},\n \"f:readOnly\": {}\n },\n \"k:{\\\"mountPath\\\":\\\"/usr/share/ca-certificates\\\"}\": {\n \".\": {},\n \"f:mountPath\": {},\n \"f:name\": {},\n \"f:readOnly\": {}\n }\n }\n }\n },\n \"f:dnsPolicy\": {},\n \"f:enableServiceLinks\": {},\n \"f:hostNetwork\": {},\n \"f:nodeName\": {},\n \"f:priorityClassName\": {},\n \"f:restartPolicy\": {},\n \"f:schedulerName\": {},\n \"f:securityContext\": {},\n \"f:terminationGracePeriodSeconds\": {},\n \"f:tolerations\": {},\n \"f:volumes\": {\n \".\": {},\n \"k:{\\\"name\\\":\\\"ca-certs\\\"}\": {\n \".\": {},\n \"f:hostPath\": {\n \".\": {},\n \"f:path\": {},\n \"f:type\": {}\n },\n \"f:name\": {}\n },\n \"k:{\\\"name\\\":\\\"etc-ca-certificates\\\"}\": {\n \".\": {},\n \"f:hostPath\": {\n \".\": {},\n \"f:path\": {},\n \"f:type\": {}\n },\n \"f:name\": {}\n },\n \"k:{\\\"name\\\":\\\"k8s-certs\\\"}\": {\n \".\": {},\n \"f:hostPath\": {\n \".\": {},\n \"f:path\": {},\n \"f:type\": {}\n },\n \"f:name\": {}\n },\n \"k:{\\\"name\\\":\\\"usr-local-share-ca-certificates\\\"}\": {\n \".\": {},\n \"f:hostPath\": {\n \".\": {},\n \"f:path\": {},\n \"f:type\": {}\n },\n \"f:name\": {}\n },\n \"k:{\\\"name\\\":\\\"usr-share-ca-certificates\\\"}\": {\n \".\": {},\n \"f:hostPath\": {\n \".\": {},\n \"f:path\": 
{},\n \"f:type\": {}\n },\n \"f:name\": {}\n }\n }\n },\n \"f:status\": {\n \"f:conditions\": {\n \".\": {},\n \"k:{\\\"type\\\":\\\"ContainersReady\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n },\n \"k:{\\\"type\\\":\\\"Initialized\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n },\n \"k:{\\\"type\\\":\\\"PodScheduled\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n },\n \"k:{\\\"type\\\":\\\"Ready\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n }\n },\n \"f:containerStatuses\": {},\n \"f:hostIP\": {},\n \"f:phase\": {},\n \"f:podIP\": {},\n \"f:podIPs\": {\n \".\": {},\n \"k:{\\\"ip\\\":\\\"172.18.0.3\\\"}\": {\n \".\": {},\n \"f:ip\": {}\n }\n },\n \"f:startTime\": {}\n }\n }\n }\n ]\n },\n \"spec\": {\n \"volumes\": [\n {\n \"name\": \"ca-certs\",\n \"hostPath\": {\n \"path\": \"/etc/ssl/certs\",\n \"type\": \"DirectoryOrCreate\"\n }\n },\n {\n \"name\": \"etc-ca-certificates\",\n \"hostPath\": {\n \"path\": \"/etc/ca-certificates\",\n \"type\": \"DirectoryOrCreate\"\n }\n },\n {\n \"name\": \"k8s-certs\",\n \"hostPath\": {\n \"path\": \"/etc/kubernetes/pki\",\n \"type\": \"DirectoryOrCreate\"\n }\n },\n {\n \"name\": \"usr-local-share-ca-certificates\",\n \"hostPath\": {\n \"path\": \"/usr/local/share/ca-certificates\",\n \"type\": \"DirectoryOrCreate\"\n }\n },\n {\n \"name\": \"usr-share-ca-certificates\",\n \"hostPath\": {\n \"path\": \"/usr/share/ca-certificates\",\n \"type\": \"DirectoryOrCreate\"\n }\n }\n ],\n \"containers\": [\n {\n \"name\": \"kube-apiserver\",\n \"image\": \"k8s.gcr.io/kube-apiserver:v1.19.11\",\n \"command\": [\n \"kube-apiserver\",\n \"--advertise-address=172.18.0.3\",\n \"--allow-privileged=true\",\n \"--authorization-mode=Node,RBAC\",\n 
\"--client-ca-file=/etc/kubernetes/pki/ca.crt\",\n \"--enable-admission-plugins=NodeRestriction\",\n \"--enable-bootstrap-token-auth=true\",\n \"--etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt\",\n \"--etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt\",\n \"--etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key\",\n \"--etcd-servers=https://127.0.0.1:2379\",\n \"--insecure-port=0\",\n \"--kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt\",\n \"--kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key\",\n \"--kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname\",\n \"--proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt\",\n \"--proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key\",\n \"--requestheader-allowed-names=front-proxy-client\",\n \"--requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt\",\n \"--requestheader-extra-headers-prefix=X-Remote-Extra-\",\n \"--requestheader-group-headers=X-Remote-Group\",\n \"--requestheader-username-headers=X-Remote-User\",\n \"--runtime-config=\",\n \"--secure-port=6443\",\n \"--service-account-key-file=/etc/kubernetes/pki/sa.pub\",\n \"--service-cluster-ip-range=10.96.0.0/16\",\n \"--tls-cert-file=/etc/kubernetes/pki/apiserver.crt\",\n \"--tls-private-key-file=/etc/kubernetes/pki/apiserver.key\"\n ],\n \"resources\": {\n \"requests\": {\n \"cpu\": \"250m\"\n }\n },\n \"volumeMounts\": [\n {\n \"name\": \"ca-certs\",\n \"readOnly\": true,\n \"mountPath\": \"/etc/ssl/certs\"\n },\n {\n \"name\": \"etc-ca-certificates\",\n \"readOnly\": true,\n \"mountPath\": \"/etc/ca-certificates\"\n },\n {\n \"name\": \"k8s-certs\",\n \"readOnly\": true,\n \"mountPath\": \"/etc/kubernetes/pki\"\n },\n {\n \"name\": \"usr-local-share-ca-certificates\",\n \"readOnly\": true,\n \"mountPath\": \"/usr/local/share/ca-certificates\"\n },\n {\n \"name\": \"usr-share-ca-certificates\",\n \"readOnly\": true,\n \"mountPath\": 
\"/usr/share/ca-certificates\"\n }\n ],\n \"livenessProbe\": {\n \"httpGet\": {\n \"path\": \"/livez\",\n \"port\": 6443,\n \"host\": \"172.18.0.3\",\n \"scheme\": \"HTTPS\"\n },\n \"initialDelaySeconds\": 10,\n \"timeoutSeconds\": 15,\n \"periodSeconds\": 10,\n \"successThreshold\": 1,\n \"failureThreshold\": 8\n },\n \"readinessProbe\": {\n \"httpGet\": {\n \"path\": \"/readyz\",\n \"port\": 6443,\n \"host\": \"172.18.0.3\",\n \"scheme\": \"HTTPS\"\n },\n \"timeoutSeconds\": 15,\n \"periodSeconds\": 1,\n \"successThreshold\": 1,\n \"failureThreshold\": 3\n },\n \"startupProbe\": {\n \"httpGet\": {\n \"path\": \"/livez\",\n \"port\": 6443,\n \"host\": \"172.18.0.3\",\n \"scheme\": \"HTTPS\"\n },\n \"initialDelaySeconds\": 10,\n \"timeoutSeconds\": 15,\n \"periodSeconds\": 10,\n \"successThreshold\": 1,\n \"failureThreshold\": 24\n },\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"imagePullPolicy\": \"IfNotPresent\"\n }\n ],\n \"restartPolicy\": \"Always\",\n \"terminationGracePeriodSeconds\": 30,\n \"dnsPolicy\": \"ClusterFirst\",\n \"nodeName\": \"kali-control-plane\",\n \"hostNetwork\": true,\n \"securityContext\": {},\n \"schedulerName\": \"default-scheduler\",\n \"tolerations\": [\n {\n \"operator\": \"Exists\",\n \"effect\": \"NoExecute\"\n }\n ],\n \"priorityClassName\": \"system-node-critical\",\n \"priority\": 2000001000,\n \"enableServiceLinks\": true,\n \"preemptionPolicy\": \"PreemptLowerPriority\"\n },\n \"status\": {\n \"phase\": \"Running\",\n \"conditions\": [\n {\n \"type\": \"Initialized\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-05-21T15:13:29Z\"\n },\n {\n \"type\": \"Ready\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-05-21T15:13:35Z\"\n },\n {\n \"type\": \"ContainersReady\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-05-21T15:13:35Z\"\n },\n {\n \"type\": 
\"PodScheduled\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-05-21T15:13:29Z\"\n }\n ],\n \"hostIP\": \"172.18.0.3\",\n \"podIP\": \"172.18.0.3\",\n \"podIPs\": [\n {\n \"ip\": \"172.18.0.3\"\n }\n ],\n \"startTime\": \"2021-05-21T15:13:29Z\",\n \"containerStatuses\": [\n {\n \"name\": \"kube-apiserver\",\n \"state\": {\n \"running\": {\n \"startedAt\": \"2021-05-21T15:13:07Z\"\n }\n },\n \"lastState\": {},\n \"ready\": true,\n \"restartCount\": 0,\n \"image\": \"k8s.gcr.io/kube-apiserver:v1.19.11\",\n \"imageID\": \"sha256:6f01f4148afce285a444afe2d771b5793e5f4bc75413297a8dad6cdc58d7065a\",\n \"containerID\": \"containerd://f365cf6ce5ec4ac1327bf75dbee48f7cf2b5bf1740686cf37362f73c7298525a\",\n \"started\": true\n }\n ],\n \"qosClass\": \"Burstable\"\n }\n },\n {\n \"metadata\": {\n \"name\": \"kube-controller-manager-kali-control-plane\",\n \"namespace\": \"kube-system\",\n \"selfLink\": \"/api/v1/namespaces/kube-system/pods/kube-controller-manager-kali-control-plane\",\n \"uid\": \"fe241589-952e-4272-a7ef-5c132150893e\",\n \"resourceVersion\": \"861\",\n \"creationTimestamp\": \"2021-05-21T15:13:29Z\",\n \"labels\": {\n \"component\": \"kube-controller-manager\",\n \"tier\": \"control-plane\"\n },\n \"annotations\": {\n \"kubernetes.io/config.hash\": \"7cdb8173b858fe2eac20f747c22c95f2\",\n \"kubernetes.io/config.mirror\": \"7cdb8173b858fe2eac20f747c22c95f2\",\n \"kubernetes.io/config.seen\": \"2021-05-21T15:13:22.926433917Z\",\n \"kubernetes.io/config.source\": \"file\"\n },\n \"ownerReferences\": [\n {\n \"apiVersion\": \"v1\",\n \"kind\": \"Node\",\n \"name\": \"kali-control-plane\",\n \"uid\": \"3bc732d5-d94d-4ab3-a172-a75c133ce66c\",\n \"controller\": true\n }\n ],\n \"managedFields\": [\n {\n \"manager\": \"kubelet\",\n \"operation\": \"Update\",\n \"apiVersion\": \"v1\",\n \"time\": \"2021-05-21T15:14:57Z\",\n \"fieldsType\": \"FieldsV1\",\n \"fieldsV1\": {\n \"f:metadata\": {\n \"f:annotations\": {\n \".\": {},\n 
\"f:kubernetes.io/config.hash\": {},\n \"f:kubernetes.io/config.mirror\": {},\n \"f:kubernetes.io/config.seen\": {},\n \"f:kubernetes.io/config.source\": {}\n },\n \"f:labels\": {\n \".\": {},\n \"f:component\": {},\n \"f:tier\": {}\n },\n \"f:ownerReferences\": {\n \".\": {},\n \"k:{\\\"uid\\\":\\\"3bc732d5-d94d-4ab3-a172-a75c133ce66c\\\"}\": {\n \".\": {},\n \"f:apiVersion\": {},\n \"f:controller\": {},\n \"f:kind\": {},\n \"f:name\": {},\n \"f:uid\": {}\n }\n }\n },\n \"f:spec\": {\n \"f:containers\": {\n \"k:{\\\"name\\\":\\\"kube-controller-manager\\\"}\": {\n \".\": {},\n \"f:command\": {},\n \"f:image\": {},\n \"f:imagePullPolicy\": {},\n \"f:livenessProbe\": {\n \".\": {},\n \"f:failureThreshold\": {},\n \"f:httpGet\": {\n \".\": {},\n \"f:host\": {},\n \"f:path\": {},\n \"f:port\": {},\n \"f:scheme\": {}\n },\n \"f:initialDelaySeconds\": {},\n \"f:periodSeconds\": {},\n \"f:successThreshold\": {},\n \"f:timeoutSeconds\": {}\n },\n \"f:name\": {},\n \"f:resources\": {\n \".\": {},\n \"f:requests\": {\n \".\": {},\n \"f:cpu\": {}\n }\n },\n \"f:startupProbe\": {\n \".\": {},\n \"f:failureThreshold\": {},\n \"f:httpGet\": {\n \".\": {},\n \"f:host\": {},\n \"f:path\": {},\n \"f:port\": {},\n \"f:scheme\": {}\n },\n \"f:initialDelaySeconds\": {},\n \"f:periodSeconds\": {},\n \"f:successThreshold\": {},\n \"f:timeoutSeconds\": {}\n },\n \"f:terminationMessagePath\": {},\n \"f:terminationMessagePolicy\": {},\n \"f:volumeMounts\": {\n \".\": {},\n \"k:{\\\"mountPath\\\":\\\"/etc/ca-certificates\\\"}\": {\n \".\": {},\n \"f:mountPath\": {},\n \"f:name\": {},\n \"f:readOnly\": {}\n },\n \"k:{\\\"mountPath\\\":\\\"/etc/kubernetes/controller-manager.conf\\\"}\": {\n \".\": {},\n \"f:mountPath\": {},\n \"f:name\": {},\n \"f:readOnly\": {}\n },\n \"k:{\\\"mountPath\\\":\\\"/etc/kubernetes/pki\\\"}\": {\n \".\": {},\n \"f:mountPath\": {},\n \"f:name\": {},\n \"f:readOnly\": {}\n },\n \"k:{\\\"mountPath\\\":\\\"/etc/ssl/certs\\\"}\": {\n \".\": {},\n \"f:mountPath\": 
{},\n \"f:name\": {},\n \"f:readOnly\": {}\n },\n \"k:{\\\"mountPath\\\":\\\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\\\"}\": {\n \".\": {},\n \"f:mountPath\": {},\n \"f:name\": {}\n },\n \"k:{\\\"mountPath\\\":\\\"/usr/local/share/ca-certificates\\\"}\": {\n \".\": {},\n \"f:mountPath\": {},\n \"f:name\": {},\n \"f:readOnly\": {}\n },\n \"k:{\\\"mountPath\\\":\\\"/usr/share/ca-certificates\\\"}\": {\n \".\": {},\n \"f:mountPath\": {},\n \"f:name\": {},\n \"f:readOnly\": {}\n }\n }\n }\n },\n \"f:dnsPolicy\": {},\n \"f:enableServiceLinks\": {},\n \"f:hostNetwork\": {},\n \"f:nodeName\": {},\n \"f:priorityClassName\": {},\n \"f:restartPolicy\": {},\n \"f:schedulerName\": {},\n \"f:securityContext\": {},\n \"f:terminationGracePeriodSeconds\": {},\n \"f:tolerations\": {},\n \"f:volumes\": {\n \".\": {},\n \"k:{\\\"name\\\":\\\"ca-certs\\\"}\": {\n \".\": {},\n \"f:hostPath\": {\n \".\": {},\n \"f:path\": {},\n \"f:type\": {}\n },\n \"f:name\": {}\n },\n \"k:{\\\"name\\\":\\\"etc-ca-certificates\\\"}\": {\n \".\": {},\n \"f:hostPath\": {\n \".\": {},\n \"f:path\": {},\n \"f:type\": {}\n },\n \"f:name\": {}\n },\n \"k:{\\\"name\\\":\\\"flexvolume-dir\\\"}\": {\n \".\": {},\n \"f:hostPath\": {\n \".\": {},\n \"f:path\": {},\n \"f:type\": {}\n },\n \"f:name\": {}\n },\n \"k:{\\\"name\\\":\\\"k8s-certs\\\"}\": {\n \".\": {},\n \"f:hostPath\": {\n \".\": {},\n \"f:path\": {},\n \"f:type\": {}\n },\n \"f:name\": {}\n },\n \"k:{\\\"name\\\":\\\"kubeconfig\\\"}\": {\n \".\": {},\n \"f:hostPath\": {\n \".\": {},\n \"f:path\": {},\n \"f:type\": {}\n },\n \"f:name\": {}\n },\n \"k:{\\\"name\\\":\\\"usr-local-share-ca-certificates\\\"}\": {\n \".\": {},\n \"f:hostPath\": {\n \".\": {},\n \"f:path\": {},\n \"f:type\": {}\n },\n \"f:name\": {}\n },\n \"k:{\\\"name\\\":\\\"usr-share-ca-certificates\\\"}\": {\n \".\": {},\n \"f:hostPath\": {\n \".\": {},\n \"f:path\": {},\n \"f:type\": {}\n },\n \"f:name\": {}\n }\n }\n },\n \"f:status\": {\n \"f:conditions\": {\n \".\": 
{},\n \"k:{\\\"type\\\":\\\"ContainersReady\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n },\n \"k:{\\\"type\\\":\\\"Initialized\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n },\n \"k:{\\\"type\\\":\\\"PodScheduled\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n },\n \"k:{\\\"type\\\":\\\"Ready\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n }\n },\n \"f:containerStatuses\": {},\n \"f:hostIP\": {},\n \"f:phase\": {},\n \"f:podIP\": {},\n \"f:podIPs\": {\n \".\": {},\n \"k:{\\\"ip\\\":\\\"172.18.0.3\\\"}\": {\n \".\": {},\n \"f:ip\": {}\n }\n },\n \"f:startTime\": {}\n }\n }\n }\n ]\n },\n \"spec\": {\n \"volumes\": [\n {\n \"name\": \"ca-certs\",\n \"hostPath\": {\n \"path\": \"/etc/ssl/certs\",\n \"type\": \"DirectoryOrCreate\"\n }\n },\n {\n \"name\": \"etc-ca-certificates\",\n \"hostPath\": {\n \"path\": \"/etc/ca-certificates\",\n \"type\": \"DirectoryOrCreate\"\n }\n },\n {\n \"name\": \"flexvolume-dir\",\n \"hostPath\": {\n \"path\": \"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\n \"type\": \"DirectoryOrCreate\"\n }\n },\n {\n \"name\": \"k8s-certs\",\n \"hostPath\": {\n \"path\": \"/etc/kubernetes/pki\",\n \"type\": \"DirectoryOrCreate\"\n }\n },\n {\n \"name\": \"kubeconfig\",\n \"hostPath\": {\n \"path\": \"/etc/kubernetes/controller-manager.conf\",\n \"type\": \"FileOrCreate\"\n }\n },\n {\n \"name\": \"usr-local-share-ca-certificates\",\n \"hostPath\": {\n \"path\": \"/usr/local/share/ca-certificates\",\n \"type\": \"DirectoryOrCreate\"\n }\n },\n {\n \"name\": \"usr-share-ca-certificates\",\n \"hostPath\": {\n \"path\": \"/usr/share/ca-certificates\",\n \"type\": \"DirectoryOrCreate\"\n }\n }\n ],\n \"containers\": [\n {\n \"name\": 
\"kube-controller-manager\",\n \"image\": \"k8s.gcr.io/kube-controller-manager:v1.19.11\",\n \"command\": [\n \"kube-controller-manager\",\n \"--allocate-node-cidrs=true\",\n \"--authentication-kubeconfig=/etc/kubernetes/controller-manager.conf\",\n \"--authorization-kubeconfig=/etc/kubernetes/controller-manager.conf\",\n \"--bind-address=127.0.0.1\",\n \"--client-ca-file=/etc/kubernetes/pki/ca.crt\",\n \"--cluster-cidr=10.244.0.0/16\",\n \"--cluster-name=kali\",\n \"--cluster-signing-cert-file=/etc/kubernetes/pki/ca.crt\",\n \"--cluster-signing-key-file=/etc/kubernetes/pki/ca.key\",\n \"--controllers=*,bootstrapsigner,tokencleaner\",\n \"--enable-hostpath-provisioner=true\",\n \"--kubeconfig=/etc/kubernetes/controller-manager.conf\",\n \"--leader-elect=true\",\n \"--node-cidr-mask-size=24\",\n \"--port=0\",\n \"--requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt\",\n \"--root-ca-file=/etc/kubernetes/pki/ca.crt\",\n \"--service-account-private-key-file=/etc/kubernetes/pki/sa.key\",\n \"--service-cluster-ip-range=10.96.0.0/16\",\n \"--use-service-account-credentials=true\"\n ],\n \"resources\": {\n \"requests\": {\n \"cpu\": \"200m\"\n }\n },\n \"volumeMounts\": [\n {\n \"name\": \"ca-certs\",\n \"readOnly\": true,\n \"mountPath\": \"/etc/ssl/certs\"\n },\n {\n \"name\": \"etc-ca-certificates\",\n \"readOnly\": true,\n \"mountPath\": \"/etc/ca-certificates\"\n },\n {\n \"name\": \"flexvolume-dir\",\n \"mountPath\": \"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\"\n },\n {\n \"name\": \"k8s-certs\",\n \"readOnly\": true,\n \"mountPath\": \"/etc/kubernetes/pki\"\n },\n {\n \"name\": \"kubeconfig\",\n \"readOnly\": true,\n \"mountPath\": \"/etc/kubernetes/controller-manager.conf\"\n },\n {\n \"name\": \"usr-local-share-ca-certificates\",\n \"readOnly\": true,\n \"mountPath\": \"/usr/local/share/ca-certificates\"\n },\n {\n \"name\": \"usr-share-ca-certificates\",\n \"readOnly\": true,\n \"mountPath\": \"/usr/share/ca-certificates\"\n }\n ],\n 
\"livenessProbe\": {\n \"httpGet\": {\n \"path\": \"/healthz\",\n \"port\": 10257,\n \"host\": \"127.0.0.1\",\n \"scheme\": \"HTTPS\"\n },\n \"initialDelaySeconds\": 10,\n \"timeoutSeconds\": 15,\n \"periodSeconds\": 10,\n \"successThreshold\": 1,\n \"failureThreshold\": 8\n },\n \"startupProbe\": {\n \"httpGet\": {\n \"path\": \"/healthz\",\n \"port\": 10257,\n \"host\": \"127.0.0.1\",\n \"scheme\": \"HTTPS\"\n },\n \"initialDelaySeconds\": 10,\n \"timeoutSeconds\": 15,\n \"periodSeconds\": 10,\n \"successThreshold\": 1,\n \"failureThreshold\": 24\n },\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"imagePullPolicy\": \"IfNotPresent\"\n }\n ],\n \"restartPolicy\": \"Always\",\n \"terminationGracePeriodSeconds\": 30,\n \"dnsPolicy\": \"ClusterFirst\",\n \"nodeName\": \"kali-control-plane\",\n \"hostNetwork\": true,\n \"securityContext\": {},\n \"schedulerName\": \"default-scheduler\",\n \"tolerations\": [\n {\n \"operator\": \"Exists\",\n \"effect\": \"NoExecute\"\n }\n ],\n \"priorityClassName\": \"system-node-critical\",\n \"priority\": 2000001000,\n \"enableServiceLinks\": true,\n \"preemptionPolicy\": \"PreemptLowerPriority\"\n },\n \"status\": {\n \"phase\": \"Running\",\n \"conditions\": [\n {\n \"type\": \"Initialized\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-05-21T15:13:29Z\"\n },\n {\n \"type\": \"Ready\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-05-21T15:14:57Z\"\n },\n {\n \"type\": \"ContainersReady\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-05-21T15:14:57Z\"\n },\n {\n \"type\": \"PodScheduled\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-05-21T15:13:29Z\"\n }\n ],\n \"hostIP\": \"172.18.0.3\",\n \"podIP\": \"172.18.0.3\",\n \"podIPs\": [\n {\n \"ip\": \"172.18.0.3\"\n }\n ],\n \"startTime\": \"2021-05-21T15:13:29Z\",\n 
\"containerStatuses\": [\n {\n \"name\": \"kube-controller-manager\",\n \"state\": {\n \"running\": {\n \"startedAt\": \"2021-05-21T15:13:07Z\"\n }\n },\n \"lastState\": {},\n \"ready\": true,\n \"restartCount\": 0,\n \"image\": \"k8s.gcr.io/kube-controller-manager:v1.19.11\",\n \"imageID\": \"sha256:6f55f627f24d9d0ff48c0303b410982cca9225d3f4beb0029f58e9f7206cf771\",\n \"containerID\": \"containerd://c7e5ee7b746f32b9d82f2665d7c049e6dde5e1a9075f9cb6394d68d42b608f6b\",\n \"started\": true\n }\n ],\n \"qosClass\": \"Burstable\"\n }\n },\n {\n \"metadata\": {\n \"name\": \"kube-multus-ds-f4mr9\",\n \"generateName\": \"kube-multus-ds-\",\n \"namespace\": \"kube-system\",\n \"selfLink\": \"/api/v1/namespaces/kube-system/pods/kube-multus-ds-f4mr9\",\n \"uid\": \"ef2e2296-8969-40dd-88ab-265b3a61afa8\",\n \"resourceVersion\": \"1600\",\n \"creationTimestamp\": \"2021-05-21T15:16:02Z\",\n \"labels\": {\n \"app\": \"multus\",\n \"controller-revision-hash\": \"97f447c9f\",\n \"name\": \"multus\",\n \"pod-template-generation\": \"1\",\n \"tier\": \"node\"\n },\n \"ownerReferences\": [\n {\n \"apiVersion\": \"apps/v1\",\n \"kind\": \"DaemonSet\",\n \"name\": \"kube-multus-ds\",\n \"uid\": \"928bc64f-c0c9-475a-b436-4ec77811dd11\",\n \"controller\": true,\n \"blockOwnerDeletion\": true\n }\n ],\n \"managedFields\": [\n {\n \"manager\": \"kube-controller-manager\",\n \"operation\": \"Update\",\n \"apiVersion\": \"v1\",\n \"time\": \"2021-05-21T15:16:02Z\",\n \"fieldsType\": \"FieldsV1\",\n \"fieldsV1\": {\n \"f:metadata\": {\n \"f:generateName\": {},\n \"f:labels\": {\n \".\": {},\n \"f:app\": {},\n \"f:controller-revision-hash\": {},\n \"f:name\": {},\n \"f:pod-template-generation\": {},\n \"f:tier\": {}\n },\n \"f:ownerReferences\": {\n \".\": {},\n \"k:{\\\"uid\\\":\\\"928bc64f-c0c9-475a-b436-4ec77811dd11\\\"}\": {\n \".\": {},\n \"f:apiVersion\": {},\n \"f:blockOwnerDeletion\": {},\n \"f:controller\": {},\n \"f:kind\": {},\n \"f:name\": {},\n \"f:uid\": {}\n }\n }\n },\n 
\"f:spec\": {\n \"f:affinity\": {\n \".\": {},\n \"f:nodeAffinity\": {\n \".\": {},\n \"f:requiredDuringSchedulingIgnoredDuringExecution\": {\n \".\": {},\n \"f:nodeSelectorTerms\": {}\n }\n }\n },\n \"f:containers\": {\n \"k:{\\\"name\\\":\\\"kube-multus\\\"}\": {\n \".\": {},\n \"f:args\": {},\n \"f:command\": {},\n \"f:image\": {},\n \"f:imagePullPolicy\": {},\n \"f:name\": {},\n \"f:resources\": {\n \".\": {},\n \"f:limits\": {\n \".\": {},\n \"f:cpu\": {},\n \"f:memory\": {}\n },\n \"f:requests\": {\n \".\": {},\n \"f:cpu\": {},\n \"f:memory\": {}\n }\n },\n \"f:securityContext\": {\n \".\": {},\n \"f:privileged\": {}\n },\n \"f:terminationMessagePath\": {},\n \"f:terminationMessagePolicy\": {},\n \"f:volumeMounts\": {\n \".\": {},\n \"k:{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\"}\": {\n \".\": {},\n \"f:mountPath\": {},\n \"f:name\": {}\n },\n \"k:{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\"}\": {\n \".\": {},\n \"f:mountPath\": {},\n \"f:name\": {}\n },\n \"k:{\\\"mountPath\\\":\\\"/tmp/multus-conf\\\"}\": {\n \".\": {},\n \"f:mountPath\": {},\n \"f:name\": {}\n }\n }\n }\n },\n \"f:dnsPolicy\": {},\n \"f:enableServiceLinks\": {},\n \"f:hostNetwork\": {},\n \"f:restartPolicy\": {},\n \"f:schedulerName\": {},\n \"f:securityContext\": {},\n \"f:serviceAccount\": {},\n \"f:serviceAccountName\": {},\n \"f:terminationGracePeriodSeconds\": {},\n \"f:tolerations\": {},\n \"f:volumes\": {\n \".\": {},\n \"k:{\\\"name\\\":\\\"cni\\\"}\": {\n \".\": {},\n \"f:hostPath\": {\n \".\": {},\n \"f:path\": {},\n \"f:type\": {}\n },\n \"f:name\": {}\n },\n \"k:{\\\"name\\\":\\\"cnibin\\\"}\": {\n \".\": {},\n \"f:hostPath\": {\n \".\": {},\n \"f:path\": {},\n \"f:type\": {}\n },\n \"f:name\": {}\n },\n \"k:{\\\"name\\\":\\\"multus-cfg\\\"}\": {\n \".\": {},\n \"f:configMap\": {\n \".\": {},\n \"f:defaultMode\": {},\n \"f:items\": {},\n \"f:name\": {}\n },\n \"f:name\": {}\n }\n }\n }\n }\n },\n {\n \"manager\": \"kubelet\",\n \"operation\": \"Update\",\n \"apiVersion\": 
\"v1\",\n \"time\": \"2021-05-21T15:16:39Z\",\n \"fieldsType\": \"FieldsV1\",\n \"fieldsV1\": {\n \"f:status\": {\n \"f:conditions\": {\n \"k:{\\\"type\\\":\\\"ContainersReady\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n },\n \"k:{\\\"type\\\":\\\"Initialized\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n },\n \"k:{\\\"type\\\":\\\"Ready\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n }\n },\n \"f:containerStatuses\": {},\n \"f:hostIP\": {},\n \"f:phase\": {},\n \"f:podIP\": {},\n \"f:podIPs\": {\n \".\": {},\n \"k:{\\\"ip\\\":\\\"172.18.0.2\\\"}\": {\n \".\": {},\n \"f:ip\": {}\n }\n },\n \"f:startTime\": {}\n }\n }\n }\n ]\n },\n \"spec\": {\n \"volumes\": [\n {\n \"name\": \"cni\",\n \"hostPath\": {\n \"path\": \"/etc/cni/net.d\",\n \"type\": \"\"\n }\n },\n {\n \"name\": \"cnibin\",\n \"hostPath\": {\n \"path\": \"/opt/cni/bin\",\n \"type\": \"\"\n }\n },\n {\n \"name\": \"multus-cfg\",\n \"configMap\": {\n \"name\": \"multus-cni-config\",\n \"items\": [\n {\n \"key\": \"cni-conf.json\",\n \"path\": \"70-multus.conf\"\n }\n ],\n \"defaultMode\": 420\n }\n },\n {\n \"name\": \"multus-token-w8qxg\",\n \"secret\": {\n \"secretName\": \"multus-token-w8qxg\",\n \"defaultMode\": 420\n }\n }\n ],\n \"containers\": [\n {\n \"name\": \"kube-multus\",\n \"image\": \"ghcr.io/k8snetworkplumbingwg/multus-cni:stable\",\n \"command\": [\n \"/entrypoint.sh\"\n ],\n \"args\": [\n \"--multus-conf-file=auto\",\n \"--cni-version=0.3.1\"\n ],\n \"resources\": {\n \"limits\": {\n \"cpu\": \"100m\",\n \"memory\": \"50Mi\"\n },\n \"requests\": {\n \"cpu\": \"100m\",\n \"memory\": \"50Mi\"\n }\n },\n \"volumeMounts\": [\n {\n \"name\": \"cni\",\n \"mountPath\": \"/host/etc/cni/net.d\"\n },\n {\n \"name\": \"cnibin\",\n \"mountPath\": \"/host/opt/cni/bin\"\n },\n {\n 
\"name\": \"multus-cfg\",\n \"mountPath\": \"/tmp/multus-conf\"\n },\n {\n \"name\": \"multus-token-w8qxg\",\n \"readOnly\": true,\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\"\n }\n ],\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"securityContext\": {\n \"privileged\": true\n }\n }\n ],\n \"restartPolicy\": \"Always\",\n \"terminationGracePeriodSeconds\": 10,\n \"dnsPolicy\": \"ClusterFirst\",\n \"serviceAccountName\": \"multus\",\n \"serviceAccount\": \"multus\",\n \"nodeName\": \"kali-worker\",\n \"hostNetwork\": true,\n \"securityContext\": {},\n \"affinity\": {\n \"nodeAffinity\": {\n \"requiredDuringSchedulingIgnoredDuringExecution\": {\n \"nodeSelectorTerms\": [\n {\n \"matchFields\": [\n {\n \"key\": \"metadata.name\",\n \"operator\": \"In\",\n \"values\": [\n \"kali-worker\"\n ]\n }\n ]\n }\n ]\n }\n }\n },\n \"schedulerName\": \"default-scheduler\",\n \"tolerations\": [\n {\n \"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n },\n {\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"effect\": \"NoExecute\"\n },\n {\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"effect\": \"NoExecute\"\n },\n {\n \"key\": \"node.kubernetes.io/disk-pressure\",\n \"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n },\n {\n \"key\": \"node.kubernetes.io/memory-pressure\",\n \"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n },\n {\n \"key\": \"node.kubernetes.io/pid-pressure\",\n \"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n },\n {\n \"key\": \"node.kubernetes.io/unschedulable\",\n \"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n },\n {\n \"key\": \"node.kubernetes.io/network-unavailable\",\n \"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n }\n ],\n \"priority\": 0,\n \"enableServiceLinks\": true,\n \"preemptionPolicy\": \"PreemptLowerPriority\"\n },\n 
\"status\": {\n \"phase\": \"Running\",\n \"conditions\": [\n {\n \"type\": \"Initialized\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-05-21T15:16:02Z\"\n },\n {\n \"type\": \"Ready\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-05-21T15:16:39Z\"\n },\n {\n \"type\": \"ContainersReady\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-05-21T15:16:39Z\"\n },\n {\n \"type\": \"PodScheduled\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-05-21T15:16:02Z\"\n }\n ],\n \"hostIP\": \"172.18.0.2\",\n \"podIP\": \"172.18.0.2\",\n \"podIPs\": [\n {\n \"ip\": \"172.18.0.2\"\n }\n ],\n \"startTime\": \"2021-05-21T15:16:02Z\",\n \"containerStatuses\": [\n {\n \"name\": \"kube-multus\",\n \"state\": {\n \"running\": {\n \"startedAt\": \"2021-05-21T15:16:39Z\"\n }\n },\n \"lastState\": {\n \"terminated\": {\n \"exitCode\": 1,\n \"reason\": \"Error\",\n \"startedAt\": \"2021-05-21T15:16:24Z\",\n \"finishedAt\": \"2021-05-21T15:16:26Z\",\n \"containerID\": \"containerd://ec29304bc6535ddf0c0749bf525c3b28e7d6996e837949e7f1d192eda815dc51\"\n }\n },\n \"ready\": true,\n \"restartCount\": 2,\n \"image\": \"ghcr.io/k8snetworkplumbingwg/multus-cni:stable\",\n \"imageID\": \"ghcr.io/k8snetworkplumbingwg/multus-cni@sha256:e72aa733faf24d1f62b69ded1126b9b8da0144d35c8a410c4fa0a860006f9eed\",\n \"containerID\": \"containerd://ebe998bf05c6cc75b8c98b69d28c1fc037137484e39b7f985b97239b3d0327b3\",\n \"started\": true\n }\n ],\n \"qosClass\": \"Guaranteed\"\n }\n },\n {\n \"metadata\": {\n \"name\": \"kube-multus-ds-xtw9p\",\n \"generateName\": \"kube-multus-ds-\",\n \"namespace\": \"kube-system\",\n \"selfLink\": \"/api/v1/namespaces/kube-system/pods/kube-multus-ds-xtw9p\",\n \"uid\": \"01d9ee39-7547-4d0f-a7ba-305a3725c1d9\",\n \"resourceVersion\": \"1614\",\n \"creationTimestamp\": \"2021-05-21T15:16:02Z\",\n \"labels\": {\n \"app\": 
\"multus\",\n \"controller-revision-hash\": \"97f447c9f\",\n \"name\": \"multus\",\n \"pod-template-generation\": \"1\",\n \"tier\": \"node\"\n },\n \"ownerReferences\": [\n {\n \"apiVersion\": \"apps/v1\",\n \"kind\": \"DaemonSet\",\n \"name\": \"kube-multus-ds\",\n \"uid\": \"928bc64f-c0c9-475a-b436-4ec77811dd11\",\n \"controller\": true,\n \"blockOwnerDeletion\": true\n }\n ],\n \"managedFields\": [\n {\n \"manager\": \"kube-controller-manager\",\n \"operation\": \"Update\",\n \"apiVersion\": \"v1\",\n \"time\": \"2021-05-21T15:16:02Z\",\n \"fieldsType\": \"FieldsV1\",\n \"fieldsV1\": {\n \"f:metadata\": {\n \"f:generateName\": {},\n \"f:labels\": {\n \".\": {},\n \"f:app\": {},\n \"f:controller-revision-hash\": {},\n \"f:name\": {},\n \"f:pod-template-generation\": {},\n \"f:tier\": {}\n },\n \"f:ownerReferences\": {\n \".\": {},\n \"k:{\\\"uid\\\":\\\"928bc64f-c0c9-475a-b436-4ec77811dd11\\\"}\": {\n \".\": {},\n \"f:apiVersion\": {},\n \"f:blockOwnerDeletion\": {},\n \"f:controller\": {},\n \"f:kind\": {},\n \"f:name\": {},\n \"f:uid\": {}\n }\n }\n },\n \"f:spec\": {\n \"f:affinity\": {\n \".\": {},\n \"f:nodeAffinity\": {\n \".\": {},\n \"f:requiredDuringSchedulingIgnoredDuringExecution\": {\n \".\": {},\n \"f:nodeSelectorTerms\": {}\n }\n }\n },\n \"f:containers\": {\n \"k:{\\\"name\\\":\\\"kube-multus\\\"}\": {\n \".\": {},\n \"f:args\": {},\n \"f:command\": {},\n \"f:image\": {},\n \"f:imagePullPolicy\": {},\n \"f:name\": {},\n \"f:resources\": {\n \".\": {},\n \"f:limits\": {\n \".\": {},\n \"f:cpu\": {},\n \"f:memory\": {}\n },\n \"f:requests\": {\n \".\": {},\n \"f:cpu\": {},\n \"f:memory\": {}\n }\n },\n \"f:securityContext\": {\n \".\": {},\n \"f:privileged\": {}\n },\n \"f:terminationMessagePath\": {},\n \"f:terminationMessagePolicy\": {},\n \"f:volumeMounts\": {\n \".\": {},\n \"k:{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\"}\": {\n \".\": {},\n \"f:mountPath\": {},\n \"f:name\": {}\n },\n \"k:{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\"}\": {\n 
\".\": {},\n \"f:mountPath\": {},\n \"f:name\": {}\n },\n \"k:{\\\"mountPath\\\":\\\"/tmp/multus-conf\\\"}\": {\n \".\": {},\n \"f:mountPath\": {},\n \"f:name\": {}\n }\n }\n }\n },\n \"f:dnsPolicy\": {},\n \"f:enableServiceLinks\": {},\n \"f:hostNetwork\": {},\n \"f:restartPolicy\": {},\n \"f:schedulerName\": {},\n \"f:securityContext\": {},\n \"f:serviceAccount\": {},\n \"f:serviceAccountName\": {},\n \"f:terminationGracePeriodSeconds\": {},\n \"f:tolerations\": {},\n \"f:volumes\": {\n \".\": {},\n \"k:{\\\"name\\\":\\\"cni\\\"}\": {\n \".\": {},\n \"f:hostPath\": {\n \".\": {},\n \"f:path\": {},\n \"f:type\": {}\n },\n \"f:name\": {}\n },\n \"k:{\\\"name\\\":\\\"cnibin\\\"}\": {\n \".\": {},\n \"f:hostPath\": {\n \".\": {},\n \"f:path\": {},\n \"f:type\": {}\n },\n \"f:name\": {}\n },\n \"k:{\\\"name\\\":\\\"multus-cfg\\\"}\": {\n \".\": {},\n \"f:configMap\": {\n \".\": {},\n \"f:defaultMode\": {},\n \"f:items\": {},\n \"f:name\": {}\n },\n \"f:name\": {}\n }\n }\n }\n }\n },\n {\n \"manager\": \"kubelet\",\n \"operation\": \"Update\",\n \"apiVersion\": \"v1\",\n \"time\": \"2021-05-21T15:16:41Z\",\n \"fieldsType\": \"FieldsV1\",\n \"fieldsV1\": {\n \"f:status\": {\n \"f:conditions\": {\n \"k:{\\\"type\\\":\\\"ContainersReady\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n },\n \"k:{\\\"type\\\":\\\"Initialized\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n },\n \"k:{\\\"type\\\":\\\"Ready\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n }\n },\n \"f:containerStatuses\": {},\n \"f:hostIP\": {},\n \"f:phase\": {},\n \"f:podIP\": {},\n \"f:podIPs\": {\n \".\": {},\n \"k:{\\\"ip\\\":\\\"172.18.0.3\\\"}\": {\n \".\": {},\n \"f:ip\": {}\n }\n },\n \"f:startTime\": {}\n }\n }\n }\n ]\n },\n \"spec\": {\n \"volumes\": [\n {\n \"name\": \"cni\",\n 
\"hostPath\": {\n \"path\": \"/etc/cni/net.d\",\n \"type\": \"\"\n }\n },\n {\n \"name\": \"cnibin\",\n \"hostPath\": {\n \"path\": \"/opt/cni/bin\",\n \"type\": \"\"\n }\n },\n {\n \"name\": \"multus-cfg\",\n \"configMap\": {\n \"name\": \"multus-cni-config\",\n \"items\": [\n {\n \"key\": \"cni-conf.json\",\n \"path\": \"70-multus.conf\"\n }\n ],\n \"defaultMode\": 420\n }\n },\n {\n \"name\": \"multus-token-w8qxg\",\n \"secret\": {\n \"secretName\": \"multus-token-w8qxg\",\n \"defaultMode\": 420\n }\n }\n ],\n \"containers\": [\n {\n \"name\": \"kube-multus\",\n \"image\": \"ghcr.io/k8snetworkplumbingwg/multus-cni:stable\",\n \"command\": [\n \"/entrypoint.sh\"\n ],\n \"args\": [\n \"--multus-conf-file=auto\",\n \"--cni-version=0.3.1\"\n ],\n \"resources\": {\n \"limits\": {\n \"cpu\": \"100m\",\n \"memory\": \"50Mi\"\n },\n \"requests\": {\n \"cpu\": \"100m\",\n \"memory\": \"50Mi\"\n }\n },\n \"volumeMounts\": [\n {\n \"name\": \"cni\",\n \"mountPath\": \"/host/etc/cni/net.d\"\n },\n {\n \"name\": \"cnibin\",\n \"mountPath\": \"/host/opt/cni/bin\"\n },\n {\n \"name\": \"multus-cfg\",\n \"mountPath\": \"/tmp/multus-conf\"\n },\n {\n \"name\": \"multus-token-w8qxg\",\n \"readOnly\": true,\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\"\n }\n ],\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"securityContext\": {\n \"privileged\": true\n }\n }\n ],\n \"restartPolicy\": \"Always\",\n \"terminationGracePeriodSeconds\": 10,\n \"dnsPolicy\": \"ClusterFirst\",\n \"serviceAccountName\": \"multus\",\n \"serviceAccount\": \"multus\",\n \"nodeName\": \"kali-control-plane\",\n \"hostNetwork\": true,\n \"securityContext\": {},\n \"affinity\": {\n \"nodeAffinity\": {\n \"requiredDuringSchedulingIgnoredDuringExecution\": {\n \"nodeSelectorTerms\": [\n {\n \"matchFields\": [\n {\n \"key\": \"metadata.name\",\n \"operator\": \"In\",\n \"values\": [\n 
\"kali-control-plane\"\n ]\n }\n ]\n }\n ]\n }\n }\n },\n \"schedulerName\": \"default-scheduler\",\n \"tolerations\": [\n {\n \"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n },\n {\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"effect\": \"NoExecute\"\n },\n {\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"effect\": \"NoExecute\"\n },\n {\n \"key\": \"node.kubernetes.io/disk-pressure\",\n \"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n },\n {\n \"key\": \"node.kubernetes.io/memory-pressure\",\n \"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n },\n {\n \"key\": \"node.kubernetes.io/pid-pressure\",\n \"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n },\n {\n \"key\": \"node.kubernetes.io/unschedulable\",\n \"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n },\n {\n \"key\": \"node.kubernetes.io/network-unavailable\",\n \"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n }\n ],\n \"priority\": 0,\n \"enableServiceLinks\": true,\n \"preemptionPolicy\": \"PreemptLowerPriority\"\n },\n \"status\": {\n \"phase\": \"Running\",\n \"conditions\": [\n {\n \"type\": \"Initialized\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-05-21T15:16:02Z\"\n },\n {\n \"type\": \"Ready\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-05-21T15:16:41Z\"\n },\n {\n \"type\": \"ContainersReady\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-05-21T15:16:41Z\"\n },\n {\n \"type\": \"PodScheduled\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-05-21T15:16:02Z\"\n }\n ],\n \"hostIP\": \"172.18.0.3\",\n \"podIP\": \"172.18.0.3\",\n \"podIPs\": [\n {\n \"ip\": \"172.18.0.3\"\n }\n ],\n \"startTime\": \"2021-05-21T15:16:02Z\",\n \"containerStatuses\": [\n {\n \"name\": \"kube-multus\",\n \"state\": {\n \"running\": {\n \"startedAt\": 
\"2021-05-21T15:16:41Z\"\n }\n },\n \"lastState\": {\n \"terminated\": {\n \"exitCode\": 1,\n \"reason\": \"Error\",\n \"startedAt\": \"2021-05-21T15:16:24Z\",\n \"finishedAt\": \"2021-05-21T15:16:26Z\",\n \"containerID\": \"containerd://6c9f26b1254924a7a3dbc6431dfa067c6e6791549e23b5453f7b9c2dfda0c7fa\"\n }\n },\n \"ready\": true,\n \"restartCount\": 2,\n \"image\": \"ghcr.io/k8snetworkplumbingwg/multus-cni:stable\",\n \"imageID\": \"ghcr.io/k8snetworkplumbingwg/multus-cni@sha256:e72aa733faf24d1f62b69ded1126b9b8da0144d35c8a410c4fa0a860006f9eed\",\n \"containerID\": \"containerd://59f63f0225b8f2773b6d25c502ebeed2719e046599c7a1392543e472c2d4b7f1\",\n \"started\": true\n }\n ],\n \"qosClass\": \"Guaranteed\"\n }\n },\n {\n \"metadata\": {\n \"name\": \"kube-multus-ds-zr9pd\",\n \"generateName\": \"kube-multus-ds-\",\n \"namespace\": \"kube-system\",\n \"selfLink\": \"/api/v1/namespaces/kube-system/pods/kube-multus-ds-zr9pd\",\n \"uid\": \"98da7398-3d75-4c0e-9b99-cae491d59e21\",\n \"resourceVersion\": \"1465\",\n \"creationTimestamp\": \"2021-05-21T15:16:02Z\",\n \"labels\": {\n \"app\": \"multus\",\n \"controller-revision-hash\": \"97f447c9f\",\n \"name\": \"multus\",\n \"pod-template-generation\": \"1\",\n \"tier\": \"node\"\n },\n \"ownerReferences\": [\n {\n \"apiVersion\": \"apps/v1\",\n \"kind\": \"DaemonSet\",\n \"name\": \"kube-multus-ds\",\n \"uid\": \"928bc64f-c0c9-475a-b436-4ec77811dd11\",\n \"controller\": true,\n \"blockOwnerDeletion\": true\n }\n ],\n \"managedFields\": [\n {\n \"manager\": \"kube-controller-manager\",\n \"operation\": \"Update\",\n \"apiVersion\": \"v1\",\n \"time\": \"2021-05-21T15:16:02Z\",\n \"fieldsType\": \"FieldsV1\",\n \"fieldsV1\": {\n \"f:metadata\": {\n \"f:generateName\": {},\n \"f:labels\": {\n \".\": {},\n \"f:app\": {},\n \"f:controller-revision-hash\": {},\n \"f:name\": {},\n \"f:pod-template-generation\": {},\n \"f:tier\": {}\n },\n \"f:ownerReferences\": {\n \".\": {},\n 
\"k:{\\\"uid\\\":\\\"928bc64f-c0c9-475a-b436-4ec77811dd11\\\"}\": {\n \".\": {},\n \"f:apiVersion\": {},\n \"f:blockOwnerDeletion\": {},\n \"f:controller\": {},\n \"f:kind\": {},\n \"f:name\": {},\n \"f:uid\": {}\n }\n }\n },\n \"f:spec\": {\n \"f:affinity\": {\n \".\": {},\n \"f:nodeAffinity\": {\n \".\": {},\n \"f:requiredDuringSchedulingIgnoredDuringExecution\": {\n \".\": {},\n \"f:nodeSelectorTerms\": {}\n }\n }\n },\n \"f:containers\": {\n \"k:{\\\"name\\\":\\\"kube-multus\\\"}\": {\n \".\": {},\n \"f:args\": {},\n \"f:command\": {},\n \"f:image\": {},\n \"f:imagePullPolicy\": {},\n \"f:name\": {},\n \"f:resources\": {\n \".\": {},\n \"f:limits\": {\n \".\": {},\n \"f:cpu\": {},\n \"f:memory\": {}\n },\n \"f:requests\": {\n \".\": {},\n \"f:cpu\": {},\n \"f:memory\": {}\n }\n },\n \"f:securityContext\": {\n \".\": {},\n \"f:privileged\": {}\n },\n \"f:terminationMessagePath\": {},\n \"f:terminationMessagePolicy\": {},\n \"f:volumeMounts\": {\n \".\": {},\n \"k:{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\"}\": {\n \".\": {},\n \"f:mountPath\": {},\n \"f:name\": {}\n },\n \"k:{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\"}\": {\n \".\": {},\n \"f:mountPath\": {},\n \"f:name\": {}\n },\n \"k:{\\\"mountPath\\\":\\\"/tmp/multus-conf\\\"}\": {\n \".\": {},\n \"f:mountPath\": {},\n \"f:name\": {}\n }\n }\n }\n },\n \"f:dnsPolicy\": {},\n \"f:enableServiceLinks\": {},\n \"f:hostNetwork\": {},\n \"f:restartPolicy\": {},\n \"f:schedulerName\": {},\n \"f:securityContext\": {},\n \"f:serviceAccount\": {},\n \"f:serviceAccountName\": {},\n \"f:terminationGracePeriodSeconds\": {},\n \"f:tolerations\": {},\n \"f:volumes\": {\n \".\": {},\n \"k:{\\\"name\\\":\\\"cni\\\"}\": {\n \".\": {},\n \"f:hostPath\": {\n \".\": {},\n \"f:path\": {},\n \"f:type\": {}\n },\n \"f:name\": {}\n },\n \"k:{\\\"name\\\":\\\"cnibin\\\"}\": {\n \".\": {},\n \"f:hostPath\": {\n \".\": {},\n \"f:path\": {},\n \"f:type\": {}\n },\n \"f:name\": {}\n },\n \"k:{\\\"name\\\":\\\"multus-cfg\\\"}\": {\n 
\".\": {},\n \"f:configMap\": {\n \".\": {},\n \"f:defaultMode\": {},\n \"f:items\": {},\n \"f:name\": {}\n },\n \"f:name\": {}\n }\n }\n }\n }\n },\n {\n \"manager\": \"kubelet\",\n \"operation\": \"Update\",\n \"apiVersion\": \"v1\",\n \"time\": \"2021-05-21T15:16:20Z\",\n \"fieldsType\": \"FieldsV1\",\n \"fieldsV1\": {\n \"f:status\": {\n \"f:conditions\": {\n \"k:{\\\"type\\\":\\\"ContainersReady\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n },\n \"k:{\\\"type\\\":\\\"Initialized\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n },\n \"k:{\\\"type\\\":\\\"Ready\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n }\n },\n \"f:containerStatuses\": {},\n \"f:hostIP\": {},\n \"f:phase\": {},\n \"f:podIP\": {},\n \"f:podIPs\": {\n \".\": {},\n \"k:{\\\"ip\\\":\\\"172.18.0.4\\\"}\": {\n \".\": {},\n \"f:ip\": {}\n }\n },\n \"f:startTime\": {}\n }\n }\n }\n ]\n },\n \"spec\": {\n \"volumes\": [\n {\n \"name\": \"cni\",\n \"hostPath\": {\n \"path\": \"/etc/cni/net.d\",\n \"type\": \"\"\n }\n },\n {\n \"name\": \"cnibin\",\n \"hostPath\": {\n \"path\": \"/opt/cni/bin\",\n \"type\": \"\"\n }\n },\n {\n \"name\": \"multus-cfg\",\n \"configMap\": {\n \"name\": \"multus-cni-config\",\n \"items\": [\n {\n \"key\": \"cni-conf.json\",\n \"path\": \"70-multus.conf\"\n }\n ],\n \"defaultMode\": 420\n }\n },\n {\n \"name\": \"multus-token-w8qxg\",\n \"secret\": {\n \"secretName\": \"multus-token-w8qxg\",\n \"defaultMode\": 420\n }\n }\n ],\n \"containers\": [\n {\n \"name\": \"kube-multus\",\n \"image\": \"ghcr.io/k8snetworkplumbingwg/multus-cni:stable\",\n \"command\": [\n \"/entrypoint.sh\"\n ],\n \"args\": [\n \"--multus-conf-file=auto\",\n \"--cni-version=0.3.1\"\n ],\n \"resources\": {\n \"limits\": {\n \"cpu\": \"100m\",\n \"memory\": \"50Mi\"\n },\n 
\"requests\": {\n \"cpu\": \"100m\",\n \"memory\": \"50Mi\"\n }\n },\n \"volumeMounts\": [\n {\n \"name\": \"cni\",\n \"mountPath\": \"/host/etc/cni/net.d\"\n },\n {\n \"name\": \"cnibin\",\n \"mountPath\": \"/host/opt/cni/bin\"\n },\n {\n \"name\": \"multus-cfg\",\n \"mountPath\": \"/tmp/multus-conf\"\n },\n {\n \"name\": \"multus-token-w8qxg\",\n \"readOnly\": true,\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\"\n }\n ],\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"securityContext\": {\n \"privileged\": true\n }\n }\n ],\n \"restartPolicy\": \"Always\",\n \"terminationGracePeriodSeconds\": 10,\n \"dnsPolicy\": \"ClusterFirst\",\n \"serviceAccountName\": \"multus\",\n \"serviceAccount\": \"multus\",\n \"nodeName\": \"kali-worker2\",\n \"hostNetwork\": true,\n \"securityContext\": {},\n \"affinity\": {\n \"nodeAffinity\": {\n \"requiredDuringSchedulingIgnoredDuringExecution\": {\n \"nodeSelectorTerms\": [\n {\n \"matchFields\": [\n {\n \"key\": \"metadata.name\",\n \"operator\": \"In\",\n \"values\": [\n \"kali-worker2\"\n ]\n }\n ]\n }\n ]\n }\n }\n },\n \"schedulerName\": \"default-scheduler\",\n \"tolerations\": [\n {\n \"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n },\n {\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"effect\": \"NoExecute\"\n },\n {\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"effect\": \"NoExecute\"\n },\n {\n \"key\": \"node.kubernetes.io/disk-pressure\",\n \"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n },\n {\n \"key\": \"node.kubernetes.io/memory-pressure\",\n \"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n },\n {\n \"key\": \"node.kubernetes.io/pid-pressure\",\n \"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n },\n {\n \"key\": \"node.kubernetes.io/unschedulable\",\n \"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n 
},\n {\n \"key\": \"node.kubernetes.io/network-unavailable\",\n \"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n }\n ],\n \"priority\": 0,\n \"enableServiceLinks\": true,\n \"preemptionPolicy\": \"PreemptLowerPriority\"\n },\n \"status\": {\n \"phase\": \"Running\",\n \"conditions\": [\n {\n \"type\": \"Initialized\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-05-21T15:16:02Z\"\n },\n {\n \"type\": \"Ready\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-05-21T15:16:20Z\"\n },\n {\n \"type\": \"ContainersReady\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-05-21T15:16:20Z\"\n },\n {\n \"type\": \"PodScheduled\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-05-21T15:16:02Z\"\n }\n ],\n \"hostIP\": \"172.18.0.4\",\n \"podIP\": \"172.18.0.4\",\n \"podIPs\": [\n {\n \"ip\": \"172.18.0.4\"\n }\n ],\n \"startTime\": \"2021-05-21T15:16:02Z\",\n \"containerStatuses\": [\n {\n \"name\": \"kube-multus\",\n \"state\": {\n \"running\": {\n \"startedAt\": \"2021-05-21T15:16:19Z\"\n }\n },\n \"lastState\": {},\n \"ready\": true,\n \"restartCount\": 0,\n \"image\": \"ghcr.io/k8snetworkplumbingwg/multus-cni:stable\",\n \"imageID\": \"ghcr.io/k8snetworkplumbingwg/multus-cni@sha256:e72aa733faf24d1f62b69ded1126b9b8da0144d35c8a410c4fa0a860006f9eed\",\n \"containerID\": \"containerd://dadf287d69e8c5418e37a57a4f2813a90a7ca4cfdf11f220c18c46091fb121c0\",\n \"started\": true\n }\n ],\n \"qosClass\": \"Guaranteed\"\n }\n },\n {\n \"metadata\": {\n \"name\": \"kube-proxy-87457\",\n \"generateName\": \"kube-proxy-\",\n \"namespace\": \"kube-system\",\n \"selfLink\": \"/api/v1/namespaces/kube-system/pods/kube-proxy-87457\",\n \"uid\": \"d6bb6d7c-1b24-4457-8261-5d532e00e88d\",\n \"resourceVersion\": \"612\",\n \"creationTimestamp\": \"2021-05-21T15:13:50Z\",\n \"labels\": {\n \"controller-revision-hash\": \"cc7bbc766\",\n 
\"k8s-app\": \"kube-proxy\",\n \"pod-template-generation\": \"1\"\n },\n \"ownerReferences\": [\n {\n \"apiVersion\": \"apps/v1\",\n \"kind\": \"DaemonSet\",\n \"name\": \"kube-proxy\",\n \"uid\": \"41b3104f-a576-4641-b321-1d0dfa73f9da\",\n \"controller\": true,\n \"blockOwnerDeletion\": true\n }\n ],\n \"managedFields\": [\n {\n \"manager\": \"kube-controller-manager\",\n \"operation\": \"Update\",\n \"apiVersion\": \"v1\",\n \"time\": \"2021-05-21T15:13:50Z\",\n \"fieldsType\": \"FieldsV1\",\n \"fieldsV1\": {\n \"f:metadata\": {\n \"f:generateName\": {},\n \"f:labels\": {\n \".\": {},\n \"f:controller-revision-hash\": {},\n \"f:k8s-app\": {},\n \"f:pod-template-generation\": {}\n },\n \"f:ownerReferences\": {\n \".\": {},\n \"k:{\\\"uid\\\":\\\"41b3104f-a576-4641-b321-1d0dfa73f9da\\\"}\": {\n \".\": {},\n \"f:apiVersion\": {},\n \"f:blockOwnerDeletion\": {},\n \"f:controller\": {},\n \"f:kind\": {},\n \"f:name\": {},\n \"f:uid\": {}\n }\n }\n },\n \"f:spec\": {\n \"f:affinity\": {\n \".\": {},\n \"f:nodeAffinity\": {\n \".\": {},\n \"f:requiredDuringSchedulingIgnoredDuringExecution\": {\n \".\": {},\n \"f:nodeSelectorTerms\": {}\n }\n }\n },\n \"f:containers\": {\n \"k:{\\\"name\\\":\\\"kube-proxy\\\"}\": {\n \".\": {},\n \"f:command\": {},\n \"f:env\": {\n \".\": {},\n \"k:{\\\"name\\\":\\\"NODE_NAME\\\"}\": {\n \".\": {},\n \"f:name\": {},\n \"f:valueFrom\": {\n \".\": {},\n \"f:fieldRef\": {\n \".\": {},\n \"f:apiVersion\": {},\n \"f:fieldPath\": {}\n }\n }\n }\n },\n \"f:image\": {},\n \"f:imagePullPolicy\": {},\n \"f:name\": {},\n \"f:resources\": {},\n \"f:securityContext\": {\n \".\": {},\n \"f:privileged\": {}\n },\n \"f:terminationMessagePath\": {},\n \"f:terminationMessagePolicy\": {},\n \"f:volumeMounts\": {\n \".\": {},\n \"k:{\\\"mountPath\\\":\\\"/lib/modules\\\"}\": {\n \".\": {},\n \"f:mountPath\": {},\n \"f:name\": {},\n \"f:readOnly\": {}\n },\n \"k:{\\\"mountPath\\\":\\\"/run/xtables.lock\\\"}\": {\n \".\": {},\n \"f:mountPath\": {},\n 
\"f:name\": {}\n },\n \"k:{\\\"mountPath\\\":\\\"/var/lib/kube-proxy\\\"}\": {\n \".\": {},\n \"f:mountPath\": {},\n \"f:name\": {}\n }\n }\n }\n },\n \"f:dnsPolicy\": {},\n \"f:enableServiceLinks\": {},\n \"f:hostNetwork\": {},\n \"f:nodeSelector\": {\n \".\": {},\n \"f:kubernetes.io/os\": {}\n },\n \"f:priorityClassName\": {},\n \"f:restartPolicy\": {},\n \"f:schedulerName\": {},\n \"f:securityContext\": {},\n \"f:serviceAccount\": {},\n \"f:serviceAccountName\": {},\n \"f:terminationGracePeriodSeconds\": {},\n \"f:tolerations\": {},\n \"f:volumes\": {\n \".\": {},\n \"k:{\\\"name\\\":\\\"kube-proxy\\\"}\": {\n \".\": {},\n \"f:configMap\": {\n \".\": {},\n \"f:defaultMode\": {},\n \"f:name\": {}\n },\n \"f:name\": {}\n },\n \"k:{\\\"name\\\":\\\"lib-modules\\\"}\": {\n \".\": {},\n \"f:hostPath\": {\n \".\": {},\n \"f:path\": {},\n \"f:type\": {}\n },\n \"f:name\": {}\n },\n \"k:{\\\"name\\\":\\\"xtables-lock\\\"}\": {\n \".\": {},\n \"f:hostPath\": {\n \".\": {},\n \"f:path\": {},\n \"f:type\": {}\n },\n \"f:name\": {}\n }\n }\n }\n }\n },\n {\n \"manager\": \"kubelet\",\n \"operation\": \"Update\",\n \"apiVersion\": \"v1\",\n \"time\": \"2021-05-21T15:13:53Z\",\n \"fieldsType\": \"FieldsV1\",\n \"fieldsV1\": {\n \"f:status\": {\n \"f:conditions\": {\n \"k:{\\\"type\\\":\\\"ContainersReady\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n },\n \"k:{\\\"type\\\":\\\"Initialized\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n },\n \"k:{\\\"type\\\":\\\"Ready\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n }\n },\n \"f:containerStatuses\": {},\n \"f:hostIP\": {},\n \"f:phase\": {},\n \"f:podIP\": {},\n \"f:podIPs\": {\n \".\": {},\n \"k:{\\\"ip\\\":\\\"172.18.0.4\\\"}\": {\n \".\": {},\n \"f:ip\": {}\n }\n },\n \"f:startTime\": {}\n }\n }\n }\n ]\n 
},\n \"spec\": {\n \"volumes\": [\n {\n \"name\": \"kube-proxy\",\n \"configMap\": {\n \"name\": \"kube-proxy\",\n \"defaultMode\": 420\n }\n },\n {\n \"name\": \"xtables-lock\",\n \"hostPath\": {\n \"path\": \"/run/xtables.lock\",\n \"type\": \"FileOrCreate\"\n }\n },\n {\n \"name\": \"lib-modules\",\n \"hostPath\": {\n \"path\": \"/lib/modules\",\n \"type\": \"\"\n }\n },\n {\n \"name\": \"kube-proxy-token-lnthf\",\n \"secret\": {\n \"secretName\": \"kube-proxy-token-lnthf\",\n \"defaultMode\": 420\n }\n }\n ],\n \"containers\": [\n {\n \"name\": \"kube-proxy\",\n \"image\": \"k8s.gcr.io/kube-proxy:v1.19.11\",\n \"command\": [\n \"/usr/local/bin/kube-proxy\",\n \"--config=/var/lib/kube-proxy/config.conf\",\n \"--hostname-override=$(NODE_NAME)\"\n ],\n \"env\": [\n {\n \"name\": \"NODE_NAME\",\n \"valueFrom\": {\n \"fieldRef\": {\n \"apiVersion\": \"v1\",\n \"fieldPath\": \"spec.nodeName\"\n }\n }\n }\n ],\n \"resources\": {},\n \"volumeMounts\": [\n {\n \"name\": \"kube-proxy\",\n \"mountPath\": \"/var/lib/kube-proxy\"\n },\n {\n \"name\": \"xtables-lock\",\n \"mountPath\": \"/run/xtables.lock\"\n },\n {\n \"name\": \"lib-modules\",\n \"readOnly\": true,\n \"mountPath\": \"/lib/modules\"\n },\n {\n \"name\": \"kube-proxy-token-lnthf\",\n \"readOnly\": true,\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\"\n }\n ],\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"securityContext\": {\n \"privileged\": true\n }\n }\n ],\n \"restartPolicy\": \"Always\",\n \"terminationGracePeriodSeconds\": 30,\n \"dnsPolicy\": \"ClusterFirst\",\n \"nodeSelector\": {\n \"kubernetes.io/os\": \"linux\"\n },\n \"serviceAccountName\": \"kube-proxy\",\n \"serviceAccount\": \"kube-proxy\",\n \"nodeName\": \"kali-worker2\",\n \"hostNetwork\": true,\n \"securityContext\": {},\n \"affinity\": {\n \"nodeAffinity\": {\n \"requiredDuringSchedulingIgnoredDuringExecution\": {\n 
\"nodeSelectorTerms\": [\n {\n \"matchFields\": [\n {\n \"key\": \"metadata.name\",\n \"operator\": \"In\",\n \"values\": [\n \"kali-worker2\"\n ]\n }\n ]\n }\n ]\n }\n }\n },\n \"schedulerName\": \"default-scheduler\",\n \"tolerations\": [\n {\n \"key\": \"CriticalAddonsOnly\",\n \"operator\": \"Exists\"\n },\n {\n \"operator\": \"Exists\"\n },\n {\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"effect\": \"NoExecute\"\n },\n {\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"effect\": \"NoExecute\"\n },\n {\n \"key\": \"node.kubernetes.io/disk-pressure\",\n \"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n },\n {\n \"key\": \"node.kubernetes.io/memory-pressure\",\n \"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n },\n {\n \"key\": \"node.kubernetes.io/pid-pressure\",\n \"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n },\n {\n \"key\": \"node.kubernetes.io/unschedulable\",\n \"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n },\n {\n \"key\": \"node.kubernetes.io/network-unavailable\",\n \"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n }\n ],\n \"priorityClassName\": \"system-node-critical\",\n \"priority\": 2000001000,\n \"enableServiceLinks\": true,\n \"preemptionPolicy\": \"PreemptLowerPriority\"\n },\n \"status\": {\n \"phase\": \"Running\",\n \"conditions\": [\n {\n \"type\": \"Initialized\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-05-21T15:13:50Z\"\n },\n {\n \"type\": \"Ready\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-05-21T15:13:53Z\"\n },\n {\n \"type\": \"ContainersReady\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-05-21T15:13:53Z\"\n },\n {\n \"type\": \"PodScheduled\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-05-21T15:13:50Z\"\n }\n ],\n \"hostIP\": \"172.18.0.4\",\n \"podIP\": 
\"172.18.0.4\",\n \"podIPs\": [\n {\n \"ip\": \"172.18.0.4\"\n }\n ],\n \"startTime\": \"2021-05-21T15:13:50Z\",\n \"containerStatuses\": [\n {\n \"name\": \"kube-proxy\",\n \"state\": {\n \"running\": {\n \"startedAt\": \"2021-05-21T15:13:52Z\"\n }\n },\n \"lastState\": {},\n \"ready\": true,\n \"restartCount\": 0,\n \"image\": \"k8s.gcr.io/kube-proxy:v1.19.11\",\n \"imageID\": \"sha256:6711ebcb77c822dd2b35b79d0d5c8b9d7db4913588136f179f9c1a6f310bcf00\",\n \"containerID\": \"containerd://81b21007dc3713fddf7c2b2ae0b75d5833e5df312ddee813dfab9c2e6ace85b7\",\n \"started\": true\n }\n ],\n \"qosClass\": \"BestEffort\"\n }\n },\n {\n \"metadata\": {\n \"name\": \"kube-proxy-c6n8g\",\n \"generateName\": \"kube-proxy-\",\n \"namespace\": \"kube-system\",\n \"selfLink\": \"/api/v1/namespaces/kube-system/pods/kube-proxy-c6n8g\",\n \"uid\": \"a7262e6d-b76b-4061-813b-b6ddd859b335\",\n \"resourceVersion\": \"468\",\n \"creationTimestamp\": \"2021-05-21T15:13:35Z\",\n \"labels\": {\n \"controller-revision-hash\": \"cc7bbc766\",\n \"k8s-app\": \"kube-proxy\",\n \"pod-template-generation\": \"1\"\n },\n \"ownerReferences\": [\n {\n \"apiVersion\": \"apps/v1\",\n \"kind\": \"DaemonSet\",\n \"name\": \"kube-proxy\",\n \"uid\": \"41b3104f-a576-4641-b321-1d0dfa73f9da\",\n \"controller\": true,\n \"blockOwnerDeletion\": true\n }\n ],\n \"managedFields\": [\n {\n \"manager\": \"kube-controller-manager\",\n \"operation\": \"Update\",\n \"apiVersion\": \"v1\",\n \"time\": \"2021-05-21T15:13:35Z\",\n \"fieldsType\": \"FieldsV1\",\n \"fieldsV1\": {\n \"f:metadata\": {\n \"f:generateName\": {},\n \"f:labels\": {\n \".\": {},\n \"f:controller-revision-hash\": {},\n \"f:k8s-app\": {},\n \"f:pod-template-generation\": {}\n },\n \"f:ownerReferences\": {\n \".\": {},\n \"k:{\\\"uid\\\":\\\"41b3104f-a576-4641-b321-1d0dfa73f9da\\\"}\": {\n \".\": {},\n \"f:apiVersion\": {},\n \"f:blockOwnerDeletion\": {},\n \"f:controller\": {},\n \"f:kind\": {},\n \"f:name\": {},\n \"f:uid\": {}\n }\n }\n },\n 
\"f:spec\": {\n \"f:affinity\": {\n \".\": {},\n \"f:nodeAffinity\": {\n \".\": {},\n \"f:requiredDuringSchedulingIgnoredDuringExecution\": {\n \".\": {},\n \"f:nodeSelectorTerms\": {}\n }\n }\n },\n \"f:containers\": {\n \"k:{\\\"name\\\":\\\"kube-proxy\\\"}\": {\n \".\": {},\n \"f:command\": {},\n \"f:env\": {\n \".\": {},\n \"k:{\\\"name\\\":\\\"NODE_NAME\\\"}\": {\n \".\": {},\n \"f:name\": {},\n \"f:valueFrom\": {\n \".\": {},\n \"f:fieldRef\": {\n \".\": {},\n \"f:apiVersion\": {},\n \"f:fieldPath\": {}\n }\n }\n }\n },\n \"f:image\": {},\n \"f:imagePullPolicy\": {},\n \"f:name\": {},\n \"f:resources\": {},\n \"f:securityContext\": {\n \".\": {},\n \"f:privileged\": {}\n },\n \"f:terminationMessagePath\": {},\n \"f:terminationMessagePolicy\": {},\n \"f:volumeMounts\": {\n \".\": {},\n \"k:{\\\"mountPath\\\":\\\"/lib/modules\\\"}\": {\n \".\": {},\n \"f:mountPath\": {},\n \"f:name\": {},\n \"f:readOnly\": {}\n },\n \"k:{\\\"mountPath\\\":\\\"/run/xtables.lock\\\"}\": {\n \".\": {},\n \"f:mountPath\": {},\n \"f:name\": {}\n },\n \"k:{\\\"mountPath\\\":\\\"/var/lib/kube-proxy\\\"}\": {\n \".\": {},\n \"f:mountPath\": {},\n \"f:name\": {}\n }\n }\n }\n },\n \"f:dnsPolicy\": {},\n \"f:enableServiceLinks\": {},\n \"f:hostNetwork\": {},\n \"f:nodeSelector\": {\n \".\": {},\n \"f:kubernetes.io/os\": {}\n },\n \"f:priorityClassName\": {},\n \"f:restartPolicy\": {},\n \"f:schedulerName\": {},\n \"f:securityContext\": {},\n \"f:serviceAccount\": {},\n \"f:serviceAccountName\": {},\n \"f:terminationGracePeriodSeconds\": {},\n \"f:tolerations\": {},\n \"f:volumes\": {\n \".\": {},\n \"k:{\\\"name\\\":\\\"kube-proxy\\\"}\": {\n \".\": {},\n \"f:configMap\": {\n \".\": {},\n \"f:defaultMode\": {},\n \"f:name\": {}\n },\n \"f:name\": {}\n },\n \"k:{\\\"name\\\":\\\"lib-modules\\\"}\": {\n \".\": {},\n \"f:hostPath\": {\n \".\": {},\n \"f:path\": {},\n \"f:type\": {}\n },\n \"f:name\": {}\n },\n \"k:{\\\"name\\\":\\\"xtables-lock\\\"}\": {\n \".\": {},\n \"f:hostPath\": {\n 
\".\": {},\n \"f:path\": {},\n \"f:type\": {}\n },\n \"f:name\": {}\n }\n }\n }\n }\n },\n {\n \"manager\": \"kubelet\",\n \"operation\": \"Update\",\n \"apiVersion\": \"v1\",\n \"time\": \"2021-05-21T15:13:37Z\",\n \"fieldsType\": \"FieldsV1\",\n \"fieldsV1\": {\n \"f:status\": {\n \"f:conditions\": {\n \"k:{\\\"type\\\":\\\"ContainersReady\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n },\n \"k:{\\\"type\\\":\\\"Initialized\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n },\n \"k:{\\\"type\\\":\\\"Ready\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n }\n },\n \"f:containerStatuses\": {},\n \"f:hostIP\": {},\n \"f:phase\": {},\n \"f:podIP\": {},\n \"f:podIPs\": {\n \".\": {},\n \"k:{\\\"ip\\\":\\\"172.18.0.3\\\"}\": {\n \".\": {},\n \"f:ip\": {}\n }\n },\n \"f:startTime\": {}\n }\n }\n }\n ]\n },\n \"spec\": {\n \"volumes\": [\n {\n \"name\": \"kube-proxy\",\n \"configMap\": {\n \"name\": \"kube-proxy\",\n \"defaultMode\": 420\n }\n },\n {\n \"name\": \"xtables-lock\",\n \"hostPath\": {\n \"path\": \"/run/xtables.lock\",\n \"type\": \"FileOrCreate\"\n }\n },\n {\n \"name\": \"lib-modules\",\n \"hostPath\": {\n \"path\": \"/lib/modules\",\n \"type\": \"\"\n }\n },\n {\n \"name\": \"kube-proxy-token-lnthf\",\n \"secret\": {\n \"secretName\": \"kube-proxy-token-lnthf\",\n \"defaultMode\": 420\n }\n }\n ],\n \"containers\": [\n {\n \"name\": \"kube-proxy\",\n \"image\": \"k8s.gcr.io/kube-proxy:v1.19.11\",\n \"command\": [\n \"/usr/local/bin/kube-proxy\",\n \"--config=/var/lib/kube-proxy/config.conf\",\n \"--hostname-override=$(NODE_NAME)\"\n ],\n \"env\": [\n {\n \"name\": \"NODE_NAME\",\n \"valueFrom\": {\n \"fieldRef\": {\n \"apiVersion\": \"v1\",\n \"fieldPath\": \"spec.nodeName\"\n }\n }\n }\n ],\n \"resources\": {},\n \"volumeMounts\": [\n {\n 
\"name\": \"kube-proxy\",\n \"mountPath\": \"/var/lib/kube-proxy\"\n },\n {\n \"name\": \"xtables-lock\",\n \"mountPath\": \"/run/xtables.lock\"\n },\n {\n \"name\": \"lib-modules\",\n \"readOnly\": true,\n \"mountPath\": \"/lib/modules\"\n },\n {\n \"name\": \"kube-proxy-token-lnthf\",\n \"readOnly\": true,\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\"\n }\n ],\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"securityContext\": {\n \"privileged\": true\n }\n }\n ],\n \"restartPolicy\": \"Always\",\n \"terminationGracePeriodSeconds\": 30,\n \"dnsPolicy\": \"ClusterFirst\",\n \"nodeSelector\": {\n \"kubernetes.io/os\": \"linux\"\n },\n \"serviceAccountName\": \"kube-proxy\",\n \"serviceAccount\": \"kube-proxy\",\n \"nodeName\": \"kali-control-plane\",\n \"hostNetwork\": true,\n \"securityContext\": {},\n \"affinity\": {\n \"nodeAffinity\": {\n \"requiredDuringSchedulingIgnoredDuringExecution\": {\n \"nodeSelectorTerms\": [\n {\n \"matchFields\": [\n {\n \"key\": \"metadata.name\",\n \"operator\": \"In\",\n \"values\": [\n \"kali-control-plane\"\n ]\n }\n ]\n }\n ]\n }\n }\n },\n \"schedulerName\": \"default-scheduler\",\n \"tolerations\": [\n {\n \"key\": \"CriticalAddonsOnly\",\n \"operator\": \"Exists\"\n },\n {\n \"operator\": \"Exists\"\n },\n {\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"effect\": \"NoExecute\"\n },\n {\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"effect\": \"NoExecute\"\n },\n {\n \"key\": \"node.kubernetes.io/disk-pressure\",\n \"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n },\n {\n \"key\": \"node.kubernetes.io/memory-pressure\",\n \"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n },\n {\n \"key\": \"node.kubernetes.io/pid-pressure\",\n \"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n },\n {\n \"key\": 
\"node.kubernetes.io/unschedulable\",\n \"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n },\n {\n \"key\": \"node.kubernetes.io/network-unavailable\",\n \"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n }\n ],\n \"priorityClassName\": \"system-node-critical\",\n \"priority\": 2000001000,\n \"enableServiceLinks\": true,\n \"preemptionPolicy\": \"PreemptLowerPriority\"\n },\n \"status\": {\n \"phase\": \"Running\",\n \"conditions\": [\n {\n \"type\": \"Initialized\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-05-21T15:13:35Z\"\n },\n {\n \"type\": \"Ready\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-05-21T15:13:37Z\"\n },\n {\n \"type\": \"ContainersReady\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-05-21T15:13:37Z\"\n },\n {\n \"type\": \"PodScheduled\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-05-21T15:13:35Z\"\n }\n ],\n \"hostIP\": \"172.18.0.3\",\n \"podIP\": \"172.18.0.3\",\n \"podIPs\": [\n {\n \"ip\": \"172.18.0.3\"\n }\n ],\n \"startTime\": \"2021-05-21T15:13:35Z\",\n \"containerStatuses\": [\n {\n \"name\": \"kube-proxy\",\n \"state\": {\n \"running\": {\n \"startedAt\": \"2021-05-21T15:13:37Z\"\n }\n },\n \"lastState\": {},\n \"ready\": true,\n \"restartCount\": 0,\n \"image\": \"k8s.gcr.io/kube-proxy:v1.19.11\",\n \"imageID\": \"sha256:6711ebcb77c822dd2b35b79d0d5c8b9d7db4913588136f179f9c1a6f310bcf00\",\n \"containerID\": \"containerd://d9db11ea8399e1db6790521a93f4564f42212824d392104a8571673d8d55d683\",\n \"started\": true\n }\n ],\n \"qosClass\": \"BestEffort\"\n }\n },\n {\n \"metadata\": {\n \"name\": \"kube-proxy-ggwmf\",\n \"generateName\": \"kube-proxy-\",\n \"namespace\": \"kube-system\",\n \"selfLink\": \"/api/v1/namespaces/kube-system/pods/kube-proxy-ggwmf\",\n \"uid\": \"22210ccc-84d9-4021-9377-8b8f6e1d12da\",\n \"resourceVersion\": \"604\",\n \"creationTimestamp\": 
\"2021-05-21T15:13:50Z\",\n \"labels\": {\n \"controller-revision-hash\": \"cc7bbc766\",\n \"k8s-app\": \"kube-proxy\",\n \"pod-template-generation\": \"1\"\n },\n \"ownerReferences\": [\n {\n \"apiVersion\": \"apps/v1\",\n \"kind\": \"DaemonSet\",\n \"name\": \"kube-proxy\",\n \"uid\": \"41b3104f-a576-4641-b321-1d0dfa73f9da\",\n \"controller\": true,\n \"blockOwnerDeletion\": true\n }\n ],\n \"managedFields\": [\n {\n \"manager\": \"kube-controller-manager\",\n \"operation\": \"Update\",\n \"apiVersion\": \"v1\",\n \"time\": \"2021-05-21T15:13:50Z\",\n \"fieldsType\": \"FieldsV1\",\n \"fieldsV1\": {\n \"f:metadata\": {\n \"f:generateName\": {},\n \"f:labels\": {\n \".\": {},\n \"f:controller-revision-hash\": {},\n \"f:k8s-app\": {},\n \"f:pod-template-generation\": {}\n },\n \"f:ownerReferences\": {\n \".\": {},\n \"k:{\\\"uid\\\":\\\"41b3104f-a576-4641-b321-1d0dfa73f9da\\\"}\": {\n \".\": {},\n \"f:apiVersion\": {},\n \"f:blockOwnerDeletion\": {},\n \"f:controller\": {},\n \"f:kind\": {},\n \"f:name\": {},\n \"f:uid\": {}\n }\n }\n },\n \"f:spec\": {\n \"f:affinity\": {\n \".\": {},\n \"f:nodeAffinity\": {\n \".\": {},\n \"f:requiredDuringSchedulingIgnoredDuringExecution\": {\n \".\": {},\n \"f:nodeSelectorTerms\": {}\n }\n }\n },\n \"f:containers\": {\n \"k:{\\\"name\\\":\\\"kube-proxy\\\"}\": {\n \".\": {},\n \"f:command\": {},\n \"f:env\": {\n \".\": {},\n \"k:{\\\"name\\\":\\\"NODE_NAME\\\"}\": {\n \".\": {},\n \"f:name\": {},\n \"f:valueFrom\": {\n \".\": {},\n \"f:fieldRef\": {\n \".\": {},\n \"f:apiVersion\": {},\n \"f:fieldPath\": {}\n }\n }\n }\n },\n \"f:image\": {},\n \"f:imagePullPolicy\": {},\n \"f:name\": {},\n \"f:resources\": {},\n \"f:securityContext\": {\n \".\": {},\n \"f:privileged\": {}\n },\n \"f:terminationMessagePath\": {},\n \"f:terminationMessagePolicy\": {},\n \"f:volumeMounts\": {\n \".\": {},\n \"k:{\\\"mountPath\\\":\\\"/lib/modules\\\"}\": {\n \".\": {},\n \"f:mountPath\": {},\n \"f:name\": {},\n \"f:readOnly\": {}\n },\n 
\"k:{\\\"mountPath\\\":\\\"/run/xtables.lock\\\"}\": {\n \".\": {},\n \"f:mountPath\": {},\n \"f:name\": {}\n },\n \"k:{\\\"mountPath\\\":\\\"/var/lib/kube-proxy\\\"}\": {\n \".\": {},\n \"f:mountPath\": {},\n \"f:name\": {}\n }\n }\n }\n },\n \"f:dnsPolicy\": {},\n \"f:enableServiceLinks\": {},\n \"f:hostNetwork\": {},\n \"f:nodeSelector\": {\n \".\": {},\n \"f:kubernetes.io/os\": {}\n },\n \"f:priorityClassName\": {},\n \"f:restartPolicy\": {},\n \"f:schedulerName\": {},\n \"f:securityContext\": {},\n \"f:serviceAccount\": {},\n \"f:serviceAccountName\": {},\n \"f:terminationGracePeriodSeconds\": {},\n \"f:tolerations\": {},\n \"f:volumes\": {\n \".\": {},\n \"k:{\\\"name\\\":\\\"kube-proxy\\\"}\": {\n \".\": {},\n \"f:configMap\": {\n \".\": {},\n \"f:defaultMode\": {},\n \"f:name\": {}\n },\n \"f:name\": {}\n },\n \"k:{\\\"name\\\":\\\"lib-modules\\\"}\": {\n \".\": {},\n \"f:hostPath\": {\n \".\": {},\n \"f:path\": {},\n \"f:type\": {}\n },\n \"f:name\": {}\n },\n \"k:{\\\"name\\\":\\\"xtables-lock\\\"}\": {\n \".\": {},\n \"f:hostPath\": {\n \".\": {},\n \"f:path\": {},\n \"f:type\": {}\n },\n \"f:name\": {}\n }\n }\n }\n }\n },\n {\n \"manager\": \"kubelet\",\n \"operation\": \"Update\",\n \"apiVersion\": \"v1\",\n \"time\": \"2021-05-21T15:13:53Z\",\n \"fieldsType\": \"FieldsV1\",\n \"fieldsV1\": {\n \"f:status\": {\n \"f:conditions\": {\n \"k:{\\\"type\\\":\\\"ContainersReady\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n },\n \"k:{\\\"type\\\":\\\"Initialized\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n },\n \"k:{\\\"type\\\":\\\"Ready\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n }\n },\n \"f:containerStatuses\": {},\n \"f:hostIP\": {},\n \"f:phase\": {},\n \"f:podIP\": {},\n \"f:podIPs\": {\n \".\": {},\n 
\"k:{\\\"ip\\\":\\\"172.18.0.2\\\"}\": {\n \".\": {},\n \"f:ip\": {}\n }\n },\n \"f:startTime\": {}\n }\n }\n }\n ]\n },\n \"spec\": {\n \"volumes\": [\n {\n \"name\": \"kube-proxy\",\n \"configMap\": {\n \"name\": \"kube-proxy\",\n \"defaultMode\": 420\n }\n },\n {\n \"name\": \"xtables-lock\",\n \"hostPath\": {\n \"path\": \"/run/xtables.lock\",\n \"type\": \"FileOrCreate\"\n }\n },\n {\n \"name\": \"lib-modules\",\n \"hostPath\": {\n \"path\": \"/lib/modules\",\n \"type\": \"\"\n }\n },\n {\n \"name\": \"kube-proxy-token-lnthf\",\n \"secret\": {\n \"secretName\": \"kube-proxy-token-lnthf\",\n \"defaultMode\": 420\n }\n }\n ],\n \"containers\": [\n {\n \"name\": \"kube-proxy\",\n \"image\": \"k8s.gcr.io/kube-proxy:v1.19.11\",\n \"command\": [\n \"/usr/local/bin/kube-proxy\",\n \"--config=/var/lib/kube-proxy/config.conf\",\n \"--hostname-override=$(NODE_NAME)\"\n ],\n \"env\": [\n {\n \"name\": \"NODE_NAME\",\n \"valueFrom\": {\n \"fieldRef\": {\n \"apiVersion\": \"v1\",\n \"fieldPath\": \"spec.nodeName\"\n }\n }\n }\n ],\n \"resources\": {},\n \"volumeMounts\": [\n {\n \"name\": \"kube-proxy\",\n \"mountPath\": \"/var/lib/kube-proxy\"\n },\n {\n \"name\": \"xtables-lock\",\n \"mountPath\": \"/run/xtables.lock\"\n },\n {\n \"name\": \"lib-modules\",\n \"readOnly\": true,\n \"mountPath\": \"/lib/modules\"\n },\n {\n \"name\": \"kube-proxy-token-lnthf\",\n \"readOnly\": true,\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\"\n }\n ],\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"securityContext\": {\n \"privileged\": true\n }\n }\n ],\n \"restartPolicy\": \"Always\",\n \"terminationGracePeriodSeconds\": 30,\n \"dnsPolicy\": \"ClusterFirst\",\n \"nodeSelector\": {\n \"kubernetes.io/os\": \"linux\"\n },\n \"serviceAccountName\": \"kube-proxy\",\n \"serviceAccount\": \"kube-proxy\",\n \"nodeName\": \"kali-worker\",\n \"hostNetwork\": true,\n 
\"securityContext\": {},\n \"affinity\": {\n \"nodeAffinity\": {\n \"requiredDuringSchedulingIgnoredDuringExecution\": {\n \"nodeSelectorTerms\": [\n {\n \"matchFields\": [\n {\n \"key\": \"metadata.name\",\n \"operator\": \"In\",\n \"values\": [\n \"kali-worker\"\n ]\n }\n ]\n }\n ]\n }\n }\n },\n \"schedulerName\": \"default-scheduler\",\n \"tolerations\": [\n {\n \"key\": \"CriticalAddonsOnly\",\n \"operator\": \"Exists\"\n },\n {\n \"operator\": \"Exists\"\n },\n {\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"effect\": \"NoExecute\"\n },\n {\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"effect\": \"NoExecute\"\n },\n {\n \"key\": \"node.kubernetes.io/disk-pressure\",\n \"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n },\n {\n \"key\": \"node.kubernetes.io/memory-pressure\",\n \"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n },\n {\n \"key\": \"node.kubernetes.io/pid-pressure\",\n \"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n },\n {\n \"key\": \"node.kubernetes.io/unschedulable\",\n \"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n },\n {\n \"key\": \"node.kubernetes.io/network-unavailable\",\n \"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n }\n ],\n \"priorityClassName\": \"system-node-critical\",\n \"priority\": 2000001000,\n \"enableServiceLinks\": true,\n \"preemptionPolicy\": \"PreemptLowerPriority\"\n },\n \"status\": {\n \"phase\": \"Running\",\n \"conditions\": [\n {\n \"type\": \"Initialized\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-05-21T15:13:50Z\"\n },\n {\n \"type\": \"Ready\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-05-21T15:13:53Z\"\n },\n {\n \"type\": \"ContainersReady\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-05-21T15:13:53Z\"\n },\n {\n \"type\": \"PodScheduled\",\n \"status\": \"True\",\n 
\"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-05-21T15:13:50Z\"\n }\n ],\n \"hostIP\": \"172.18.0.2\",\n \"podIP\": \"172.18.0.2\",\n \"podIPs\": [\n {\n \"ip\": \"172.18.0.2\"\n }\n ],\n \"startTime\": \"2021-05-21T15:13:50Z\",\n \"containerStatuses\": [\n {\n \"name\": \"kube-proxy\",\n \"state\": {\n \"running\": {\n \"startedAt\": \"2021-05-21T15:13:52Z\"\n }\n },\n \"lastState\": {},\n \"ready\": true,\n \"restartCount\": 0,\n \"image\": \"k8s.gcr.io/kube-proxy:v1.19.11\",\n \"imageID\": \"sha256:6711ebcb77c822dd2b35b79d0d5c8b9d7db4913588136f179f9c1a6f310bcf00\",\n \"containerID\": \"containerd://1518491fa3ff3f82b8c4d6db5e7845a4c811c6da997301b79f9c156af9f827ff\",\n \"started\": true\n }\n ],\n \"qosClass\": \"BestEffort\"\n }\n },\n {\n \"metadata\": {\n \"name\": \"kube-scheduler-kali-control-plane\",\n \"namespace\": \"kube-system\",\n \"selfLink\": \"/api/v1/namespaces/kube-system/pods/kube-scheduler-kali-control-plane\",\n \"uid\": \"f1fca316-d25f-4380-9d7f-87368e4e855b\",\n \"resourceVersion\": \"802\",\n \"creationTimestamp\": \"2021-05-21T15:13:29Z\",\n \"labels\": {\n \"component\": \"kube-scheduler\",\n \"tier\": \"control-plane\"\n },\n \"annotations\": {\n \"kubernetes.io/config.hash\": \"6d56e89701adc313ecbc93cdff46d6e6\",\n \"kubernetes.io/config.mirror\": \"6d56e89701adc313ecbc93cdff46d6e6\",\n \"kubernetes.io/config.seen\": \"2021-05-21T15:13:22.926435590Z\",\n \"kubernetes.io/config.source\": \"file\"\n },\n \"ownerReferences\": [\n {\n \"apiVersion\": \"v1\",\n \"kind\": \"Node\",\n \"name\": \"kali-control-plane\",\n \"uid\": \"3bc732d5-d94d-4ab3-a172-a75c133ce66c\",\n \"controller\": true\n }\n ],\n \"managedFields\": [\n {\n \"manager\": \"kubelet\",\n \"operation\": \"Update\",\n \"apiVersion\": \"v1\",\n \"time\": \"2021-05-21T15:14:37Z\",\n \"fieldsType\": \"FieldsV1\",\n \"fieldsV1\": {\n \"f:metadata\": {\n \"f:annotations\": {\n \".\": {},\n \"f:kubernetes.io/config.hash\": {},\n \"f:kubernetes.io/config.mirror\": {},\n 
\"f:kubernetes.io/config.seen\": {},\n \"f:kubernetes.io/config.source\": {}\n },\n \"f:labels\": {\n \".\": {},\n \"f:component\": {},\n \"f:tier\": {}\n },\n \"f:ownerReferences\": {\n \".\": {},\n \"k:{\\\"uid\\\":\\\"3bc732d5-d94d-4ab3-a172-a75c133ce66c\\\"}\": {\n \".\": {},\n \"f:apiVersion\": {},\n \"f:controller\": {},\n \"f:kind\": {},\n \"f:name\": {},\n \"f:uid\": {}\n }\n }\n },\n \"f:spec\": {\n \"f:containers\": {\n \"k:{\\\"name\\\":\\\"kube-scheduler\\\"}\": {\n \".\": {},\n \"f:command\": {},\n \"f:image\": {},\n \"f:imagePullPolicy\": {},\n \"f:livenessProbe\": {\n \".\": {},\n \"f:failureThreshold\": {},\n \"f:httpGet\": {\n \".\": {},\n \"f:host\": {},\n \"f:path\": {},\n \"f:port\": {},\n \"f:scheme\": {}\n },\n \"f:initialDelaySeconds\": {},\n \"f:periodSeconds\": {},\n \"f:successThreshold\": {},\n \"f:timeoutSeconds\": {}\n },\n \"f:name\": {},\n \"f:resources\": {\n \".\": {},\n \"f:requests\": {\n \".\": {},\n \"f:cpu\": {}\n }\n },\n \"f:startupProbe\": {\n \".\": {},\n \"f:failureThreshold\": {},\n \"f:httpGet\": {\n \".\": {},\n \"f:host\": {},\n \"f:path\": {},\n \"f:port\": {},\n \"f:scheme\": {}\n },\n \"f:initialDelaySeconds\": {},\n \"f:periodSeconds\": {},\n \"f:successThreshold\": {},\n \"f:timeoutSeconds\": {}\n },\n \"f:terminationMessagePath\": {},\n \"f:terminationMessagePolicy\": {},\n \"f:volumeMounts\": {\n \".\": {},\n \"k:{\\\"mountPath\\\":\\\"/etc/kubernetes/scheduler.conf\\\"}\": {\n \".\": {},\n \"f:mountPath\": {},\n \"f:name\": {},\n \"f:readOnly\": {}\n }\n }\n }\n },\n \"f:dnsPolicy\": {},\n \"f:enableServiceLinks\": {},\n \"f:hostNetwork\": {},\n \"f:nodeName\": {},\n \"f:priorityClassName\": {},\n \"f:restartPolicy\": {},\n \"f:schedulerName\": {},\n \"f:securityContext\": {},\n \"f:terminationGracePeriodSeconds\": {},\n \"f:tolerations\": {},\n \"f:volumes\": {\n \".\": {},\n \"k:{\\\"name\\\":\\\"kubeconfig\\\"}\": {\n \".\": {},\n \"f:hostPath\": {\n \".\": {},\n \"f:path\": {},\n \"f:type\": {}\n },\n 
\"f:name\": {}\n }\n }\n },\n \"f:status\": {\n \"f:conditions\": {\n \".\": {},\n \"k:{\\\"type\\\":\\\"ContainersReady\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n },\n \"k:{\\\"type\\\":\\\"Initialized\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n },\n \"k:{\\\"type\\\":\\\"PodScheduled\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n },\n \"k:{\\\"type\\\":\\\"Ready\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n }\n },\n \"f:containerStatuses\": {},\n \"f:hostIP\": {},\n \"f:phase\": {},\n \"f:podIP\": {},\n \"f:podIPs\": {\n \".\": {},\n \"k:{\\\"ip\\\":\\\"172.18.0.3\\\"}\": {\n \".\": {},\n \"f:ip\": {}\n }\n },\n \"f:startTime\": {}\n }\n }\n }\n ]\n },\n \"spec\": {\n \"volumes\": [\n {\n \"name\": \"kubeconfig\",\n \"hostPath\": {\n \"path\": \"/etc/kubernetes/scheduler.conf\",\n \"type\": \"FileOrCreate\"\n }\n }\n ],\n \"containers\": [\n {\n \"name\": \"kube-scheduler\",\n \"image\": \"k8s.gcr.io/kube-scheduler:v1.19.11\",\n \"command\": [\n \"kube-scheduler\",\n \"--authentication-kubeconfig=/etc/kubernetes/scheduler.conf\",\n \"--authorization-kubeconfig=/etc/kubernetes/scheduler.conf\",\n \"--bind-address=127.0.0.1\",\n \"--kubeconfig=/etc/kubernetes/scheduler.conf\",\n \"--leader-elect=true\",\n \"--port=0\"\n ],\n \"resources\": {\n \"requests\": {\n \"cpu\": \"100m\"\n }\n },\n \"volumeMounts\": [\n {\n \"name\": \"kubeconfig\",\n \"readOnly\": true,\n \"mountPath\": \"/etc/kubernetes/scheduler.conf\"\n }\n ],\n \"livenessProbe\": {\n \"httpGet\": {\n \"path\": \"/healthz\",\n \"port\": 10259,\n \"host\": \"127.0.0.1\",\n \"scheme\": \"HTTPS\"\n },\n \"initialDelaySeconds\": 10,\n \"timeoutSeconds\": 15,\n \"periodSeconds\": 10,\n \"successThreshold\": 
1,\n \"failureThreshold\": 8\n },\n \"startupProbe\": {\n \"httpGet\": {\n \"path\": \"/healthz\",\n \"port\": 10259,\n \"host\": \"127.0.0.1\",\n \"scheme\": \"HTTPS\"\n },\n \"initialDelaySeconds\": 10,\n \"timeoutSeconds\": 15,\n \"periodSeconds\": 10,\n \"successThreshold\": 1,\n \"failureThreshold\": 24\n },\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"imagePullPolicy\": \"IfNotPresent\"\n }\n ],\n \"restartPolicy\": \"Always\",\n \"terminationGracePeriodSeconds\": 30,\n \"dnsPolicy\": \"ClusterFirst\",\n \"nodeName\": \"kali-control-plane\",\n \"hostNetwork\": true,\n \"securityContext\": {},\n \"schedulerName\": \"default-scheduler\",\n \"tolerations\": [\n {\n \"operator\": \"Exists\",\n \"effect\": \"NoExecute\"\n }\n ],\n \"priorityClassName\": \"system-node-critical\",\n \"priority\": 2000001000,\n \"enableServiceLinks\": true,\n \"preemptionPolicy\": \"PreemptLowerPriority\"\n },\n \"status\": {\n \"phase\": \"Running\",\n \"conditions\": [\n {\n \"type\": \"Initialized\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-05-21T15:13:29Z\"\n },\n {\n \"type\": \"Ready\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-05-21T15:14:37Z\"\n },\n {\n \"type\": \"ContainersReady\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-05-21T15:14:37Z\"\n },\n {\n \"type\": \"PodScheduled\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-05-21T15:13:29Z\"\n }\n ],\n \"hostIP\": \"172.18.0.3\",\n \"podIP\": \"172.18.0.3\",\n \"podIPs\": [\n {\n \"ip\": \"172.18.0.3\"\n }\n ],\n \"startTime\": \"2021-05-21T15:13:29Z\",\n \"containerStatuses\": [\n {\n \"name\": \"kube-scheduler\",\n \"state\": {\n \"running\": {\n \"startedAt\": \"2021-05-21T15:13:07Z\"\n }\n },\n \"lastState\": {},\n \"ready\": true,\n \"restartCount\": 0,\n \"image\": 
\"k8s.gcr.io/kube-scheduler:v1.19.11\",\n \"imageID\": \"sha256:c57bb60cf72fe3dea39515815356e7948f52d5af8d772f51405052586a6f15b4\",\n \"containerID\": \"containerd://ff0a678b8bea1e6258820a2f9c91f0df0819eb95da43d20fd00a7c9c60950cf1\",\n \"started\": true\n }\n ],\n \"qosClass\": \"Burstable\"\n }\n },\n {\n \"metadata\": {\n \"name\": \"tune-sysctls-8m4jc\",\n \"generateName\": \"tune-sysctls-\",\n \"namespace\": \"kube-system\",\n \"selfLink\": \"/api/v1/namespaces/kube-system/pods/tune-sysctls-8m4jc\",\n \"uid\": \"3f242067-7b12-4f0d-8f44-eb99c4b1bdfd\",\n \"resourceVersion\": \"1355\",\n \"creationTimestamp\": \"2021-05-21T15:16:01Z\",\n \"labels\": {\n \"controller-revision-hash\": \"7b545968fb\",\n \"name\": \"tune-sysctls\",\n \"pod-template-generation\": \"1\"\n },\n \"ownerReferences\": [\n {\n \"apiVersion\": \"apps/v1\",\n \"kind\": \"DaemonSet\",\n \"name\": \"tune-sysctls\",\n \"uid\": \"6a0ccf82-b00e-4003-bf5f-ef1ddd0bf984\",\n \"controller\": true,\n \"blockOwnerDeletion\": true\n }\n ],\n \"managedFields\": [\n {\n \"manager\": \"kube-controller-manager\",\n \"operation\": \"Update\",\n \"apiVersion\": \"v1\",\n \"time\": \"2021-05-21T15:16:01Z\",\n \"fieldsType\": \"FieldsV1\",\n \"fieldsV1\": {\n \"f:metadata\": {\n \"f:generateName\": {},\n \"f:labels\": {\n \".\": {},\n \"f:controller-revision-hash\": {},\n \"f:name\": {},\n \"f:pod-template-generation\": {}\n },\n \"f:ownerReferences\": {\n \".\": {},\n \"k:{\\\"uid\\\":\\\"6a0ccf82-b00e-4003-bf5f-ef1ddd0bf984\\\"}\": {\n \".\": {},\n \"f:apiVersion\": {},\n \"f:blockOwnerDeletion\": {},\n \"f:controller\": {},\n \"f:kind\": {},\n \"f:name\": {},\n \"f:uid\": {}\n }\n }\n },\n \"f:spec\": {\n \"f:affinity\": {\n \".\": {},\n \"f:nodeAffinity\": {\n \".\": {},\n \"f:requiredDuringSchedulingIgnoredDuringExecution\": {\n \".\": {},\n \"f:nodeSelectorTerms\": {}\n }\n }\n },\n \"f:containers\": {\n \"k:{\\\"name\\\":\\\"setsysctls\\\"}\": {\n \".\": {},\n \"f:command\": {},\n \"f:image\": {},\n 
\"f:imagePullPolicy\": {},\n \"f:name\": {},\n \"f:resources\": {},\n \"f:securityContext\": {\n \".\": {},\n \"f:privileged\": {}\n },\n \"f:terminationMessagePath\": {},\n \"f:terminationMessagePolicy\": {},\n \"f:volumeMounts\": {\n \".\": {},\n \"k:{\\\"mountPath\\\":\\\"/sys\\\"}\": {\n \".\": {},\n \"f:mountPath\": {},\n \"f:name\": {}\n }\n }\n }\n },\n \"f:dnsPolicy\": {},\n \"f:enableServiceLinks\": {},\n \"f:hostIPC\": {},\n \"f:hostNetwork\": {},\n \"f:hostPID\": {},\n \"f:restartPolicy\": {},\n \"f:schedulerName\": {},\n \"f:securityContext\": {},\n \"f:terminationGracePeriodSeconds\": {},\n \"f:tolerations\": {},\n \"f:volumes\": {\n \".\": {},\n \"k:{\\\"name\\\":\\\"sys\\\"}\": {\n \".\": {},\n \"f:hostPath\": {\n \".\": {},\n \"f:path\": {},\n \"f:type\": {}\n },\n \"f:name\": {}\n }\n }\n }\n }\n },\n {\n \"manager\": \"kubelet\",\n \"operation\": \"Update\",\n \"apiVersion\": \"v1\",\n \"time\": \"2021-05-21T15:16:05Z\",\n \"fieldsType\": \"FieldsV1\",\n \"fieldsV1\": {\n \"f:status\": {\n \"f:conditions\": {\n \"k:{\\\"type\\\":\\\"ContainersReady\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n },\n \"k:{\\\"type\\\":\\\"Initialized\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n },\n \"k:{\\\"type\\\":\\\"Ready\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n }\n },\n \"f:containerStatuses\": {},\n \"f:hostIP\": {},\n \"f:phase\": {},\n \"f:podIP\": {},\n \"f:podIPs\": {\n \".\": {},\n \"k:{\\\"ip\\\":\\\"172.18.0.2\\\"}\": {\n \".\": {},\n \"f:ip\": {}\n }\n },\n \"f:startTime\": {}\n }\n }\n }\n ]\n },\n \"spec\": {\n \"volumes\": [\n {\n \"name\": \"sys\",\n \"hostPath\": {\n \"path\": \"/sys\",\n \"type\": \"\"\n }\n },\n {\n \"name\": \"default-token-96nkg\",\n \"secret\": {\n \"secretName\": \"default-token-96nkg\",\n 
\"defaultMode\": 420\n }\n }\n ],\n \"containers\": [\n {\n \"name\": \"setsysctls\",\n \"image\": \"alpine:3.6\",\n \"command\": [\n \"sh\",\n \"-c\",\n \"while true; do\\n sysctl -w fs.inotify.max_user_watches=524288\\n sleep 10\\ndone\\n\"\n ],\n \"resources\": {},\n \"volumeMounts\": [\n {\n \"name\": \"sys\",\n \"mountPath\": \"/sys\"\n },\n {\n \"name\": \"default-token-96nkg\",\n \"readOnly\": true,\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\"\n }\n ],\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"securityContext\": {\n \"privileged\": true\n }\n }\n ],\n \"restartPolicy\": \"Always\",\n \"terminationGracePeriodSeconds\": 30,\n \"dnsPolicy\": \"ClusterFirst\",\n \"serviceAccountName\": \"default\",\n \"serviceAccount\": \"default\",\n \"nodeName\": \"kali-worker\",\n \"hostNetwork\": true,\n \"hostPID\": true,\n \"hostIPC\": true,\n \"securityContext\": {},\n \"affinity\": {\n \"nodeAffinity\": {\n \"requiredDuringSchedulingIgnoredDuringExecution\": {\n \"nodeSelectorTerms\": [\n {\n \"matchFields\": [\n {\n \"key\": \"metadata.name\",\n \"operator\": \"In\",\n \"values\": [\n \"kali-worker\"\n ]\n }\n ]\n }\n ]\n }\n }\n },\n \"schedulerName\": \"default-scheduler\",\n \"tolerations\": [\n {\n \"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n },\n {\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"effect\": \"NoExecute\"\n },\n {\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"effect\": \"NoExecute\"\n },\n {\n \"key\": \"node.kubernetes.io/disk-pressure\",\n \"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n },\n {\n \"key\": \"node.kubernetes.io/memory-pressure\",\n \"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n },\n {\n \"key\": \"node.kubernetes.io/pid-pressure\",\n \"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n },\n {\n \"key\": 
\"node.kubernetes.io/unschedulable\",\n \"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n },\n {\n \"key\": \"node.kubernetes.io/network-unavailable\",\n \"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n }\n ],\n \"priority\": 0,\n \"enableServiceLinks\": true,\n \"preemptionPolicy\": \"PreemptLowerPriority\"\n },\n \"status\": {\n \"phase\": \"Running\",\n \"conditions\": [\n {\n \"type\": \"Initialized\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-05-21T15:16:01Z\"\n },\n {\n \"type\": \"Ready\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-05-21T15:16:05Z\"\n },\n {\n \"type\": \"ContainersReady\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-05-21T15:16:05Z\"\n },\n {\n \"type\": \"PodScheduled\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-05-21T15:16:01Z\"\n }\n ],\n \"hostIP\": \"172.18.0.2\",\n \"podIP\": \"172.18.0.2\",\n \"podIPs\": [\n {\n \"ip\": \"172.18.0.2\"\n }\n ],\n \"startTime\": \"2021-05-21T15:16:01Z\",\n \"containerStatuses\": [\n {\n \"name\": \"setsysctls\",\n \"state\": {\n \"running\": {\n \"startedAt\": \"2021-05-21T15:16:05Z\"\n }\n },\n \"lastState\": {},\n \"ready\": true,\n \"restartCount\": 0,\n \"image\": \"docker.io/library/alpine:3.6\",\n \"imageID\": \"docker.io/library/alpine@sha256:66790a2b79e1ea3e1dabac43990c54aca5d1ddf268d9a5a0285e4167c8b24475\",\n \"containerID\": \"containerd://d7a2f54f0abd21c35e6ca30056436aca7e8978aa9114a3aa382ddd7178cbc216\",\n \"started\": true\n }\n ],\n \"qosClass\": \"BestEffort\"\n }\n },\n {\n \"metadata\": {\n \"name\": \"tune-sysctls-m54ts\",\n \"generateName\": \"tune-sysctls-\",\n \"namespace\": \"kube-system\",\n \"selfLink\": \"/api/v1/namespaces/kube-system/pods/tune-sysctls-m54ts\",\n \"uid\": \"c90ae66d-2825-49d1-9b8f-4109222f167d\",\n \"resourceVersion\": \"1303\",\n \"creationTimestamp\": \"2021-05-21T15:16:01Z\",\n 
\"labels\": {\n \"controller-revision-hash\": \"7b545968fb\",\n \"name\": \"tune-sysctls\",\n \"pod-template-generation\": \"1\"\n },\n \"ownerReferences\": [\n {\n \"apiVersion\": \"apps/v1\",\n \"kind\": \"DaemonSet\",\n \"name\": \"tune-sysctls\",\n \"uid\": \"6a0ccf82-b00e-4003-bf5f-ef1ddd0bf984\",\n \"controller\": true,\n \"blockOwnerDeletion\": true\n }\n ],\n \"managedFields\": [\n {\n \"manager\": \"kube-controller-manager\",\n \"operation\": \"Update\",\n \"apiVersion\": \"v1\",\n \"time\": \"2021-05-21T15:16:01Z\",\n \"fieldsType\": \"FieldsV1\",\n \"fieldsV1\": {\n \"f:metadata\": {\n \"f:generateName\": {},\n \"f:labels\": {\n \".\": {},\n \"f:controller-revision-hash\": {},\n \"f:name\": {},\n \"f:pod-template-generation\": {}\n },\n \"f:ownerReferences\": {\n \".\": {},\n \"k:{\\\"uid\\\":\\\"6a0ccf82-b00e-4003-bf5f-ef1ddd0bf984\\\"}\": {\n \".\": {},\n \"f:apiVersion\": {},\n \"f:blockOwnerDeletion\": {},\n \"f:controller\": {},\n \"f:kind\": {},\n \"f:name\": {},\n \"f:uid\": {}\n }\n }\n },\n \"f:spec\": {\n \"f:affinity\": {\n \".\": {},\n \"f:nodeAffinity\": {\n \".\": {},\n \"f:requiredDuringSchedulingIgnoredDuringExecution\": {\n \".\": {},\n \"f:nodeSelectorTerms\": {}\n }\n }\n },\n \"f:containers\": {\n \"k:{\\\"name\\\":\\\"setsysctls\\\"}\": {\n \".\": {},\n \"f:command\": {},\n \"f:image\": {},\n \"f:imagePullPolicy\": {},\n \"f:name\": {},\n \"f:resources\": {},\n \"f:securityContext\": {\n \".\": {},\n \"f:privileged\": {}\n },\n \"f:terminationMessagePath\": {},\n \"f:terminationMessagePolicy\": {},\n \"f:volumeMounts\": {\n \".\": {},\n \"k:{\\\"mountPath\\\":\\\"/sys\\\"}\": {\n \".\": {},\n \"f:mountPath\": {},\n \"f:name\": {}\n }\n }\n }\n },\n \"f:dnsPolicy\": {},\n \"f:enableServiceLinks\": {},\n \"f:hostIPC\": {},\n \"f:hostNetwork\": {},\n \"f:hostPID\": {},\n \"f:restartPolicy\": {},\n \"f:schedulerName\": {},\n \"f:securityContext\": {},\n \"f:terminationGracePeriodSeconds\": {},\n \"f:tolerations\": {},\n \"f:volumes\": 
{\n \".\": {},\n \"k:{\\\"name\\\":\\\"sys\\\"}\": {\n \".\": {},\n \"f:hostPath\": {\n \".\": {},\n \"f:path\": {},\n \"f:type\": {}\n },\n \"f:name\": {}\n }\n }\n }\n }\n },\n {\n \"manager\": \"kubelet\",\n \"operation\": \"Update\",\n \"apiVersion\": \"v1\",\n \"time\": \"2021-05-21T15:16:05Z\",\n \"fieldsType\": \"FieldsV1\",\n \"fieldsV1\": {\n \"f:status\": {\n \"f:conditions\": {\n \"k:{\\\"type\\\":\\\"ContainersReady\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n },\n \"k:{\\\"type\\\":\\\"Initialized\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n },\n \"k:{\\\"type\\\":\\\"Ready\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n }\n },\n \"f:containerStatuses\": {},\n \"f:hostIP\": {},\n \"f:phase\": {},\n \"f:podIP\": {},\n \"f:podIPs\": {\n \".\": {},\n \"k:{\\\"ip\\\":\\\"172.18.0.4\\\"}\": {\n \".\": {},\n \"f:ip\": {}\n }\n },\n \"f:startTime\": {}\n }\n }\n }\n ]\n },\n \"spec\": {\n \"volumes\": [\n {\n \"name\": \"sys\",\n \"hostPath\": {\n \"path\": \"/sys\",\n \"type\": \"\"\n }\n },\n {\n \"name\": \"default-token-96nkg\",\n \"secret\": {\n \"secretName\": \"default-token-96nkg\",\n \"defaultMode\": 420\n }\n }\n ],\n \"containers\": [\n {\n \"name\": \"setsysctls\",\n \"image\": \"alpine:3.6\",\n \"command\": [\n \"sh\",\n \"-c\",\n \"while true; do\\n sysctl -w fs.inotify.max_user_watches=524288\\n sleep 10\\ndone\\n\"\n ],\n \"resources\": {},\n \"volumeMounts\": [\n {\n \"name\": \"sys\",\n \"mountPath\": \"/sys\"\n },\n {\n \"name\": \"default-token-96nkg\",\n \"readOnly\": true,\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\"\n }\n ],\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"securityContext\": {\n 
\"privileged\": true\n }\n }\n ],\n \"restartPolicy\": \"Always\",\n \"terminationGracePeriodSeconds\": 30,\n \"dnsPolicy\": \"ClusterFirst\",\n \"serviceAccountName\": \"default\",\n \"serviceAccount\": \"default\",\n \"nodeName\": \"kali-worker2\",\n \"hostNetwork\": true,\n \"hostPID\": true,\n \"hostIPC\": true,\n \"securityContext\": {},\n \"affinity\": {\n \"nodeAffinity\": {\n \"requiredDuringSchedulingIgnoredDuringExecution\": {\n \"nodeSelectorTerms\": [\n {\n \"matchFields\": [\n {\n \"key\": \"metadata.name\",\n \"operator\": \"In\",\n \"values\": [\n \"kali-worker2\"\n ]\n }\n ]\n }\n ]\n }\n }\n },\n \"schedulerName\": \"default-scheduler\",\n \"tolerations\": [\n {\n \"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n },\n {\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"effect\": \"NoExecute\"\n },\n {\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"effect\": \"NoExecute\"\n },\n {\n \"key\": \"node.kubernetes.io/disk-pressure\",\n \"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n },\n {\n \"key\": \"node.kubernetes.io/memory-pressure\",\n \"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n },\n {\n \"key\": \"node.kubernetes.io/pid-pressure\",\n \"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n },\n {\n \"key\": \"node.kubernetes.io/unschedulable\",\n \"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n },\n {\n \"key\": \"node.kubernetes.io/network-unavailable\",\n \"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n }\n ],\n \"priority\": 0,\n \"enableServiceLinks\": true,\n \"preemptionPolicy\": \"PreemptLowerPriority\"\n },\n \"status\": {\n \"phase\": \"Running\",\n \"conditions\": [\n {\n \"type\": \"Initialized\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-05-21T15:16:01Z\"\n },\n {\n \"type\": \"Ready\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-05-21T15:16:05Z\"\n 
},\n {\n \"type\": \"ContainersReady\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-05-21T15:16:05Z\"\n },\n {\n \"type\": \"PodScheduled\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-05-21T15:16:01Z\"\n }\n ],\n \"hostIP\": \"172.18.0.4\",\n \"podIP\": \"172.18.0.4\",\n \"podIPs\": [\n {\n \"ip\": \"172.18.0.4\"\n }\n ],\n \"startTime\": \"2021-05-21T15:16:01Z\",\n \"containerStatuses\": [\n {\n \"name\": \"setsysctls\",\n \"state\": {\n \"running\": {\n \"startedAt\": \"2021-05-21T15:16:05Z\"\n }\n },\n \"lastState\": {},\n \"ready\": true,\n \"restartCount\": 0,\n \"image\": \"docker.io/library/alpine:3.6\",\n \"imageID\": \"docker.io/library/alpine@sha256:66790a2b79e1ea3e1dabac43990c54aca5d1ddf268d9a5a0285e4167c8b24475\",\n \"containerID\": \"containerd://d4e3cf1cf50b3e16fe0c43703cda784d5b13597e4dbe0f3bffe404e14ff8dd11\",\n \"started\": true\n }\n ],\n \"qosClass\": \"BestEffort\"\n }\n },\n {\n \"metadata\": {\n \"name\": \"tune-sysctls-zzq45\",\n \"generateName\": \"tune-sysctls-\",\n \"namespace\": \"kube-system\",\n \"selfLink\": \"/api/v1/namespaces/kube-system/pods/tune-sysctls-zzq45\",\n \"uid\": \"a0031878-86e2-4721-b5ea-8c70c9fa7a89\",\n \"resourceVersion\": \"1381\",\n \"creationTimestamp\": \"2021-05-21T15:16:01Z\",\n \"labels\": {\n \"controller-revision-hash\": \"7b545968fb\",\n \"name\": \"tune-sysctls\",\n \"pod-template-generation\": \"1\"\n },\n \"ownerReferences\": [\n {\n \"apiVersion\": \"apps/v1\",\n \"kind\": \"DaemonSet\",\n \"name\": \"tune-sysctls\",\n \"uid\": \"6a0ccf82-b00e-4003-bf5f-ef1ddd0bf984\",\n \"controller\": true,\n \"blockOwnerDeletion\": true\n }\n ],\n \"managedFields\": [\n {\n \"manager\": \"kube-controller-manager\",\n \"operation\": \"Update\",\n \"apiVersion\": \"v1\",\n \"time\": \"2021-05-21T15:16:01Z\",\n \"fieldsType\": \"FieldsV1\",\n \"fieldsV1\": {\n \"f:metadata\": {\n \"f:generateName\": {},\n \"f:labels\": {\n \".\": {},\n 
\"f:controller-revision-hash\": {},\n \"f:name\": {},\n \"f:pod-template-generation\": {}\n },\n \"f:ownerReferences\": {\n \".\": {},\n \"k:{\\\"uid\\\":\\\"6a0ccf82-b00e-4003-bf5f-ef1ddd0bf984\\\"}\": {\n \".\": {},\n \"f:apiVersion\": {},\n \"f:blockOwnerDeletion\": {},\n \"f:controller\": {},\n \"f:kind\": {},\n \"f:name\": {},\n \"f:uid\": {}\n }\n }\n },\n \"f:spec\": {\n \"f:affinity\": {\n \".\": {},\n \"f:nodeAffinity\": {\n \".\": {},\n \"f:requiredDuringSchedulingIgnoredDuringExecution\": {\n \".\": {},\n \"f:nodeSelectorTerms\": {}\n }\n }\n },\n \"f:containers\": {\n \"k:{\\\"name\\\":\\\"setsysctls\\\"}\": {\n \".\": {},\n \"f:command\": {},\n \"f:image\": {},\n \"f:imagePullPolicy\": {},\n \"f:name\": {},\n \"f:resources\": {},\n \"f:securityContext\": {\n \".\": {},\n \"f:privileged\": {}\n },\n \"f:terminationMessagePath\": {},\n \"f:terminationMessagePolicy\": {},\n \"f:volumeMounts\": {\n \".\": {},\n \"k:{\\\"mountPath\\\":\\\"/sys\\\"}\": {\n \".\": {},\n \"f:mountPath\": {},\n \"f:name\": {}\n }\n }\n }\n },\n \"f:dnsPolicy\": {},\n \"f:enableServiceLinks\": {},\n \"f:hostIPC\": {},\n \"f:hostNetwork\": {},\n \"f:hostPID\": {},\n \"f:restartPolicy\": {},\n \"f:schedulerName\": {},\n \"f:securityContext\": {},\n \"f:terminationGracePeriodSeconds\": {},\n \"f:tolerations\": {},\n \"f:volumes\": {\n \".\": {},\n \"k:{\\\"name\\\":\\\"sys\\\"}\": {\n \".\": {},\n \"f:hostPath\": {\n \".\": {},\n \"f:path\": {},\n \"f:type\": {}\n },\n \"f:name\": {}\n }\n }\n }\n }\n },\n {\n \"manager\": \"kubelet\",\n \"operation\": \"Update\",\n \"apiVersion\": \"v1\",\n \"time\": \"2021-05-21T15:16:06Z\",\n \"fieldsType\": \"FieldsV1\",\n \"fieldsV1\": {\n \"f:status\": {\n \"f:conditions\": {\n \"k:{\\\"type\\\":\\\"ContainersReady\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n },\n \"k:{\\\"type\\\":\\\"Initialized\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n 
\"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n },\n \"k:{\\\"type\\\":\\\"Ready\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n }\n },\n \"f:containerStatuses\": {},\n \"f:hostIP\": {},\n \"f:phase\": {},\n \"f:podIP\": {},\n \"f:podIPs\": {\n \".\": {},\n \"k:{\\\"ip\\\":\\\"172.18.0.3\\\"}\": {\n \".\": {},\n \"f:ip\": {}\n }\n },\n \"f:startTime\": {}\n }\n }\n }\n ]\n },\n \"spec\": {\n \"volumes\": [\n {\n \"name\": \"sys\",\n \"hostPath\": {\n \"path\": \"/sys\",\n \"type\": \"\"\n }\n },\n {\n \"name\": \"default-token-96nkg\",\n \"secret\": {\n \"secretName\": \"default-token-96nkg\",\n \"defaultMode\": 420\n }\n }\n ],\n \"containers\": [\n {\n \"name\": \"setsysctls\",\n \"image\": \"alpine:3.6\",\n \"command\": [\n \"sh\",\n \"-c\",\n \"while true; do\\n sysctl -w fs.inotify.max_user_watches=524288\\n sleep 10\\ndone\\n\"\n ],\n \"resources\": {},\n \"volumeMounts\": [\n {\n \"name\": \"sys\",\n \"mountPath\": \"/sys\"\n },\n {\n \"name\": \"default-token-96nkg\",\n \"readOnly\": true,\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\"\n }\n ],\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"securityContext\": {\n \"privileged\": true\n }\n }\n ],\n \"restartPolicy\": \"Always\",\n \"terminationGracePeriodSeconds\": 30,\n \"dnsPolicy\": \"ClusterFirst\",\n \"serviceAccountName\": \"default\",\n \"serviceAccount\": \"default\",\n \"nodeName\": \"kali-control-plane\",\n \"hostNetwork\": true,\n \"hostPID\": true,\n \"hostIPC\": true,\n \"securityContext\": {},\n \"affinity\": {\n \"nodeAffinity\": {\n \"requiredDuringSchedulingIgnoredDuringExecution\": {\n \"nodeSelectorTerms\": [\n {\n \"matchFields\": [\n {\n \"key\": \"metadata.name\",\n \"operator\": \"In\",\n \"values\": [\n \"kali-control-plane\"\n ]\n }\n ]\n }\n ]\n }\n }\n },\n 
\"schedulerName\": \"default-scheduler\",\n \"tolerations\": [\n {\n \"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n },\n {\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"effect\": \"NoExecute\"\n },\n {\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"effect\": \"NoExecute\"\n },\n {\n \"key\": \"node.kubernetes.io/disk-pressure\",\n \"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n },\n {\n \"key\": \"node.kubernetes.io/memory-pressure\",\n \"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n },\n {\n \"key\": \"node.kubernetes.io/pid-pressure\",\n \"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n },\n {\n \"key\": \"node.kubernetes.io/unschedulable\",\n \"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n },\n {\n \"key\": \"node.kubernetes.io/network-unavailable\",\n \"operator\": \"Exists\",\n \"effect\": \"NoSchedule\"\n }\n ],\n \"priority\": 0,\n \"enableServiceLinks\": true,\n \"preemptionPolicy\": \"PreemptLowerPriority\"\n },\n \"status\": {\n \"phase\": \"Running\",\n \"conditions\": [\n {\n \"type\": \"Initialized\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-05-21T15:16:01Z\"\n },\n {\n \"type\": \"Ready\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-05-21T15:16:06Z\"\n },\n {\n \"type\": \"ContainersReady\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-05-21T15:16:06Z\"\n },\n {\n \"type\": \"PodScheduled\",\n \"status\": \"True\",\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-05-21T15:16:01Z\"\n }\n ],\n \"hostIP\": \"172.18.0.3\",\n \"podIP\": \"172.18.0.3\",\n \"podIPs\": [\n {\n \"ip\": \"172.18.0.3\"\n }\n ],\n \"startTime\": \"2021-05-21T15:16:01Z\",\n \"containerStatuses\": [\n {\n \"name\": \"setsysctls\",\n \"state\": {\n \"running\": {\n \"startedAt\": \"2021-05-21T15:16:05Z\"\n }\n },\n \"lastState\": {},\n \"ready\": 
true,\n \"restartCount\": 0,\n \"image\": \"docker.io/library/alpine:3.6\",\n \"imageID\": \"docker.io/library/alpine@sha256:66790a2b79e1ea3e1dabac43990c54aca5d1ddf268d9a5a0285e4167c8b24475\",\n \"containerID\": \"containerd://38ab028c0ea23f852040aca224e635e707bc24eeb68909eaa2fa52b191f622c0\",\n \"started\": true\n }\n ],\n \"qosClass\": \"BestEffort\"\n }\n }\n ]\n}\n==== START logs for container coredns of pod kube-system/coredns-f9fd979d6-mpnsm ====\n.:53\n[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7\nCoreDNS-1.7.0\nlinux/amd64, go1.14.4, f59c03d\n==== END logs for container coredns of pod kube-system/coredns-f9fd979d6-mpnsm ====\n==== START logs for container coredns of pod kube-system/coredns-f9fd979d6-nfqfd ====\n.:53\n[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7\nCoreDNS-1.7.0\nlinux/amd64, go1.14.4, f59c03d\n==== END logs for container coredns of pod kube-system/coredns-f9fd979d6-nfqfd ====\n==== START logs for container loopdev of pod kube-system/create-loop-devs-26xt8 ====\n==== END logs for container loopdev of pod kube-system/create-loop-devs-26xt8 ====\n==== START logs for container loopdev of pod kube-system/create-loop-devs-8l686 ====\n==== END logs for container loopdev of pod kube-system/create-loop-devs-8l686 ====\n==== START logs for container loopdev of pod kube-system/create-loop-devs-cwbn4 ====\n==== END logs for container loopdev of pod kube-system/create-loop-devs-cwbn4 ====\n==== START logs for container etcd of pod kube-system/etcd-kali-control-plane ====\n[WARNING] Deprecated '--logger=capnslog' flag is set; use '--logger=zap' flag instead\n2021-05-21 15:13:09.539528 I | etcdmain: etcd Version: 3.4.13\n2021-05-21 15:13:09.539570 I | etcdmain: Git SHA: ae9734ed2\n2021-05-21 15:13:09.539575 I | etcdmain: Go Version: go1.12.17\n2021-05-21 15:13:09.539579 I | etcdmain: Go OS/Arch: linux/amd64\n2021-05-21 15:13:09.539600 I | etcdmain: setting maximum number of CPUs to 
88, total number of available CPUs is 88\n[WARNING] Deprecated '--logger=capnslog' flag is set; use '--logger=zap' flag instead\n2021-05-21 15:13:09.539718 I | embed: peerTLS: cert = /etc/kubernetes/pki/etcd/peer.crt, key = /etc/kubernetes/pki/etcd/peer.key, trusted-ca = /etc/kubernetes/pki/etcd/ca.crt, client-cert-auth = true, crl-file = \n2021-05-21 15:13:09.540727 I | embed: name = kali-control-plane\n2021-05-21 15:13:09.540744 I | embed: data dir = /var/lib/etcd\n2021-05-21 15:13:09.540749 I | embed: member dir = /var/lib/etcd/member\n2021-05-21 15:13:09.540753 I | embed: heartbeat = 100ms\n2021-05-21 15:13:09.540757 I | embed: election = 1000ms\n2021-05-21 15:13:09.540761 I | embed: snapshot count = 10000\n2021-05-21 15:13:09.540769 I | embed: advertise client URLs = https://172.18.0.3:2379\n2021-05-21 15:13:09.550776 I | etcdserver: starting member 23da9c3f2594532a in cluster d4a51ce2d5480c89\nraft2021/05/21 15:13:09 INFO: 23da9c3f2594532a switched to configuration voters=()\nraft2021/05/21 15:13:09 INFO: 23da9c3f2594532a became follower at term 0\nraft2021/05/21 15:13:09 INFO: newRaft 23da9c3f2594532a [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]\nraft2021/05/21 15:13:09 INFO: 23da9c3f2594532a became follower at term 1\nraft2021/05/21 15:13:09 INFO: 23da9c3f2594532a switched to configuration voters=(2583549131277751082)\n2021-05-21 15:13:09.552710 W | auth: simple token is not cryptographically signed\n2021-05-21 15:13:09.556987 I | etcdserver: starting server... 
[version: 3.4.13, cluster version: to_be_decided]\n2021-05-21 15:13:09.557341 I | etcdserver: 23da9c3f2594532a as single-node; fast-forwarding 9 ticks (election ticks 10)\nraft2021/05/21 15:13:09 INFO: 23da9c3f2594532a switched to configuration voters=(2583549131277751082)\n2021-05-21 15:13:09.558222 I | etcdserver/membership: added member 23da9c3f2594532a [https://172.18.0.3:2380] to cluster d4a51ce2d5480c89\n2021-05-21 15:13:09.560013 I | embed: ClientTLS: cert = /etc/kubernetes/pki/etcd/server.crt, key = /etc/kubernetes/pki/etcd/server.key, trusted-ca = /etc/kubernetes/pki/etcd/ca.crt, client-cert-auth = true, crl-file = \n2021-05-21 15:13:09.560129 I | embed: listening for peers on 172.18.0.3:2380\n2021-05-21 15:13:09.560235 I | embed: listening for metrics on http://127.0.0.1:2381\nraft2021/05/21 15:13:09 INFO: 23da9c3f2594532a is starting a new election at term 1\nraft2021/05/21 15:13:09 INFO: 23da9c3f2594532a became candidate at term 2\nraft2021/05/21 15:13:09 INFO: 23da9c3f2594532a received MsgVoteResp from 23da9c3f2594532a at term 2\nraft2021/05/21 15:13:09 INFO: 23da9c3f2594532a became leader at term 2\nraft2021/05/21 15:13:09 INFO: raft.node: 23da9c3f2594532a elected leader 23da9c3f2594532a at term 2\n2021-05-21 15:13:09.752536 I | etcdserver: setting up the initial cluster version to 3.4\n2021-05-21 15:13:09.752567 I | etcdserver: published {Name:kali-control-plane ClientURLs:[https://172.18.0.3:2379]} to cluster d4a51ce2d5480c89\n2021-05-21 15:13:09.752593 I | embed: ready to serve client requests\n2021-05-21 15:13:09.752795 I | embed: ready to serve client requests\n2021-05-21 15:13:09.752865 N | etcdserver/membership: set the initial cluster version to 3.4\n2021-05-21 15:13:09.752967 I | etcdserver/api: enabled capabilities for version 3.4\n2021-05-21 15:13:09.755702 I | embed: serving client requests on 172.18.0.3:2379\n2021-05-21 15:13:09.755770 I | embed: serving client requests on 127.0.0.1:2379\n2021-05-21 15:13:31.355210 I | 
etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-21 15:13:36.617894 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-21 15:13:46.617891 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-21 15:13:56.617943 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-21 15:14:06.617858 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-21 15:14:16.617786 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-21 15:14:26.617786 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-21 15:14:36.617800 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-21 15:14:46.617771 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-21 15:14:56.618028 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-21 15:14:58.187998 W | etcdserver: read-only range request \"key:\\\"/registry/leases/kube-system/kube-controller-manager\\\" \" with result \"range_response_count:1 size:517\" took too long (125.70903ms) to execute\n2021-05-21 15:15:06.617787 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-21 15:15:16.617827 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-21 15:15:26.617757 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-21 15:15:36.617638 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-21 15:15:46.617895 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-21 15:15:56.617990 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-21 15:16:06.617747 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-21 15:16:16.617793 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-21 15:16:26.617734 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-21 15:16:36.617722 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-21 15:16:46.618112 I | etcdserver/api/etcdhttp: /health OK (status code 
200)
2021-05-21 15:16:56.617717 I | etcdserver/api/etcdhttp: /health OK (status code 200)
[... identical "/health OK (status code 200)" entries, logged every 10s through 16:31:16, omitted ...]
2021-05-21 15:23:10.966996 I | mvcc: store.index: compact 1970
2021-05-21 15:23:10.990696 I | mvcc: finished scheduled compaction at 1970 (took 22.550184ms)
2021-05-21 15:28:10.971011 I | mvcc: store.index: compact 3755
2021-05-21 15:28:11.001663 I | mvcc: finished scheduled compaction at 3755 (took 29.145438ms)
2021-05-21 15:33:10.974866 I | mvcc: store.index: compact 5401
2021-05-21 15:33:11.004475 I | mvcc: finished scheduled compaction at 5401 (took 28.130527ms)
2021-05-21 15:38:10.978553 I | mvcc: store.index: compact 7466
2021-05-21 15:38:11.008598 I | mvcc: finished scheduled compaction at 7466 (took 28.643115ms)
2021-05-21 15:42:36.618604 I | etcdserver: start to snapshot (applied: 10001, lastsnap: 0)
2021-05-21 15:42:36.621197 I | etcdserver: saved snapshot at index 10001
2021-05-21 15:42:36.621628 I | etcdserver: compacted raft log at 5001
2021-05-21 15:43:10.982780 I | mvcc: store.index: compact 8541
2021-05-21 15:43:10.999212 I | mvcc: finished scheduled compaction at 8541 (took 15.403139ms)
2021-05-21 15:48:10.986356 I | mvcc: store.index: compact 9559
2021-05-21 15:48:11.001749 I | mvcc: finished scheduled compaction at 9559 (took 14.641628ms)
2021-05-21 15:53:10.990892 I | mvcc: store.index: compact 10583
2021-05-21 15:53:11.006097 I | mvcc: finished scheduled compaction at 10583 (took 14.42027ms)
2021-05-21 15:58:10.995308 I | mvcc: store.index: compact 11605
2021-05-21 15:58:11.011003 I | mvcc: finished scheduled compaction at 11605 (took 14.627171ms)
2021-05-21 16:00:54.834326 I | etcdserver: start to snapshot (applied: 20002, lastsnap: 10001)
2021-05-21 16:00:54.836531 I | etcdserver: saved snapshot at index 20002
2021-05-21 16:00:54.836861 I | etcdserver: compacted raft log at 15002
2021-05-21 16:03:10.999619 I | mvcc: store.index: compact 13242
2021-05-21 16:03:11.032066 I | mvcc: finished scheduled compaction at 13242 (took 28.428846ms)
2021-05-21 16:04:05.579641 W | etcdserver/api/v3rpc: failed to send watch control response to gRPC stream ("rpc error: code = Unavailable desc = transport is closing")
2021-05-21 16:05:03.696610 I | etcdserver: start to snapshot (applied: 30003, lastsnap: 20002)
2021-05-21 16:05:03.699487 I | etcdserver: saved snapshot at index 30003
2021-05-21 16:05:03.700046 I | etcdserver: compacted raft log at 25003
2021-05-21 16:08:11.003284 I | mvcc: store.index: compact 25013
2021-05-21 16:08:11.188297 I | mvcc: finished scheduled compaction at 25013 (took 176.301979ms)
2021-05-21 16:13:11.007261 I | mvcc: store.index: compact 30567
2021-05-21 16:13:11.096627 I | mvcc: finished scheduled compaction at 30567 (took 85.486903ms)
2021-05-21 16:18:11.010823 I | mvcc: store.index: compact 33062
2021-05-21 16:18:11.055271 I | mvcc: finished scheduled compaction at 33062 (took 42.710099ms)
2021-05-21 16:23:11.015153 I | mvcc: store.index: compact 34239
2021-05-21 16:23:11.047501 I | mvcc: finished scheduled compaction at 34239 (took 30.237838ms)
2021-05-21 16:28:11.018912 I | mvcc: store.index: compact 37395
2021-05-21 16:28:11.066752 I | mvcc: finished scheduled compaction at 37395 (took 45.741941ms)
2021-05-21 16:28:29.226731 I | etcdserver: start to snapshot (applied: 40004, lastsnap: 30003)
2021-05-21 16:28:29.228903 I | etcdserver: saved snapshot at index 40004
2021-05-21 16:28:29.229493 I | etcdserver: compacted raft log at 35004
2021-05-21 16:31:16.617617 I | etcdserver/api/etcdhttp: /health OK (status code 
200)\n2021-05-21 16:31:26.617716 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-21 16:31:36.617733 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-21 16:31:46.617796 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-21 16:31:56.617859 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-21 16:32:06.617673 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-21 16:32:16.618281 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-21 16:32:26.617674 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-21 16:32:36.617850 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-21 16:32:46.617747 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-21 16:32:56.617876 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-21 16:33:06.617934 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-21 16:33:11.022716 I | mvcc: store.index: compact 38415\n2021-05-21 16:33:11.041076 I | mvcc: finished scheduled compaction at 38415 (took 15.457102ms)\n2021-05-21 16:33:16.617884 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-21 16:33:26.617757 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-21 16:33:36.618113 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-21 16:33:46.617622 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-21 16:33:56.617761 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-21 16:34:06.617972 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-21 16:34:16.617715 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-21 16:34:26.617886 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-21 16:34:36.618159 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-21 16:34:46.617771 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-21 16:34:56.617876 
I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-21 16:35:06.617769 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-21 16:35:16.617680 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-21 16:35:26.617623 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-21 16:35:36.617786 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-21 16:35:46.617970 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-21 16:35:56.617768 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-21 16:36:06.617799 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-21 16:36:16.617734 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-21 16:36:26.617852 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-21 16:36:36.617673 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-21 16:36:46.617828 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-21 16:36:56.617701 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-21 16:37:06.617821 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-21 16:37:16.617786 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-21 16:37:26.618163 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-21 16:37:36.617735 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-21 16:37:46.617866 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-21 16:37:56.617682 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-21 16:38:06.618004 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-21 16:38:11.026583 I | mvcc: store.index: compact 46838\n2021-05-21 16:38:11.167507 I | mvcc: finished scheduled compaction at 46838 (took 136.207019ms)\n2021-05-21 16:38:16.617768 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-21 16:38:24.508294 I | etcdserver: start to 
snapshot (applied: 50005, lastsnap: 40004)\n2021-05-21 16:38:24.510779 I | etcdserver: saved snapshot at index 50005\n2021-05-21 16:38:24.511443 I | etcdserver: compacted raft log at 45005\n2021-05-21 16:38:26.617787 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-21 16:38:36.617754 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-21 16:38:46.617861 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-05-21 16:38:56.617755 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n==== END logs for container etcd of pod kube-system/etcd-kali-control-plane ====\n==== START logs for container kindnet-cni of pod kube-system/kindnet-7b2zs ====\nI0521 15:13:38.821462 1 main.go:316] probe TCP address kali-control-plane:6443\nI0521 15:13:38.824389 1 main.go:102] connected to apiserver: https://kali-control-plane:6443\nI0521 15:13:38.824407 1 main.go:107] hostIP = 172.18.0.3\npodIP = 172.18.0.3\nI0521 15:13:38.824617 1 main.go:116] setting mtu 1500 for CNI \nI0521 15:13:38.824637 1 main.go:146] kindnetd IP family: \"ipv4\"\nI0521 15:13:38.824654 1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]\nI0521 15:13:39.521314 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:13:39.521373 1 main.go:227] handling current node\nI0521 15:13:49.627498 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:13:49.627555 1 main.go:227] handling current node\nI0521 15:13:59.634354 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:13:59.634399 1 main.go:227] handling current node\nI0521 15:13:59.634424 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:13:59.634436 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 15:13:59.634773 1 routes.go:46] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: Gw: 172.18.0.2 Flags: [] Table: 0} \nI0521 15:13:59.634950 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:13:59.634973 1 main.go:250] Node kali-worker2 
has CIDR [10.244.2.0/24] \nI0521 15:13:59.635086 1 routes.go:46] Adding route {Ifindex: 0 Dst: 10.244.2.0/24 Src: Gw: 172.18.0.4 Flags: [] Table: 0} \nI0521 15:14:09.641628 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:14:09.641684 1 main.go:227] handling current node\nI0521 15:14:09.641709 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:14:09.641722 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 15:14:09.642017 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:14:09.642050 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 15:14:19.648601 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:14:19.648658 1 main.go:227] handling current node\nI0521 15:14:19.648684 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:14:19.648698 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 15:14:19.648893 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:14:19.648919 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 15:14:29.655567 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:14:29.655622 1 main.go:227] handling current node\nI0521 15:14:29.655647 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:14:29.655661 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 15:14:29.655886 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:14:29.655914 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 15:14:39.662122 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:14:39.662179 1 main.go:227] handling current node\nI0521 15:14:39.662203 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:14:39.662217 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 15:14:39.662417 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:14:39.662445 1 main.go:250] Node kali-worker2 has CIDR 
[10.244.2.0/24] \nI0521 15:14:49.668489 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:14:49.668534 1 main.go:227] handling current node\nI0521 15:14:49.668557 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:14:49.668571 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 15:14:49.668805 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:14:49.668827 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 15:14:59.675086 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:14:59.675145 1 main.go:227] handling current node\nI0521 15:14:59.675171 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:14:59.675183 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 15:14:59.675396 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:14:59.675422 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 15:15:09.681193 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:15:09.681239 1 main.go:227] handling current node\nI0521 15:15:09.681261 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:15:09.681274 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 15:15:09.681438 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:15:09.681460 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 15:15:19.687496 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:15:19.687548 1 main.go:227] handling current node\nI0521 15:15:19.687573 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:15:19.687586 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 15:15:19.688574 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:15:19.688672 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 15:15:29.726572 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:15:29.726640 1 main.go:227] 
handling current node\nI0521 15:15:29.726665 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:15:29.726678 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 15:15:29.726928 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:15:29.727022 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 15:15:39.733153 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:15:39.733211 1 main.go:227] handling current node\nI0521 15:15:39.733238 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:15:39.733251 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 15:15:39.733506 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:15:39.733530 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 15:15:49.740237 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:15:49.740302 1 main.go:227] handling current node\nI0521 15:15:49.740331 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:15:49.740345 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 15:15:49.740619 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:15:49.740653 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 15:15:59.747582 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:15:59.747659 1 main.go:227] handling current node\nI0521 15:15:59.747683 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:15:59.747696 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 15:15:59.747980 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:15:59.748007 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 15:16:09.755179 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:16:09.755244 1 main.go:227] handling current node\nI0521 15:16:09.755276 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:16:09.755291 1 
main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 15:16:09.755488 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:16:09.755521 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 15:16:19.761410 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:16:19.761455 1 main.go:227] handling current node\nI0521 15:16:19.761479 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:16:19.761492 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 15:16:19.761719 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:16:19.761739 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 15:16:29.768514 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:16:29.768572 1 main.go:227] handling current node\nI0521 15:16:29.768597 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:16:29.768611 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 15:16:29.768888 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:16:29.768916 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 15:16:39.775336 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:16:39.775394 1 main.go:227] handling current node\nI0521 15:16:39.775420 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:16:39.775434 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 15:16:39.775646 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:16:39.775672 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 15:16:49.820588 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:16:49.820669 1 main.go:227] handling current node\nI0521 15:16:49.820697 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:16:49.820726 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 15:16:49.821085 1 main.go:223] Handling node with IPs: 
map[172.18.0.4:{}]\nI0521 15:16:49.821118 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 15:16:59.827961 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:16:59.828024 1 main.go:227] handling current node\nI0521 15:16:59.828048 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:16:59.828062 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 15:16:59.828269 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:16:59.828296 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 15:17:09.834557 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:17:09.834614 1 main.go:227] handling current node\nI0521 15:17:09.834640 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:17:09.834653 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 15:17:09.834961 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:17:09.834995 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 15:17:19.841564 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:17:19.841623 1 main.go:227] handling current node\nI0521 15:17:19.841649 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:17:19.841662 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 15:17:19.841969 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:17:19.842003 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 15:17:29.851708 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:17:29.851765 1 main.go:227] handling current node\nI0521 15:17:29.851790 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:17:29.851804 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 15:17:29.852024 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:17:29.852053 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 15:17:39.858397 1 
main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:17:39.858447 1 main.go:227] handling current node\nI0521 15:17:39.858472 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:17:39.858486 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 15:17:39.858711 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:17:39.858734 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 15:17:49.869165 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:17:49.869215 1 main.go:227] handling current node\nI0521 15:17:49.869240 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:17:49.869253 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 15:17:49.869501 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:17:49.869524 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 15:17:59.875932 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:17:59.875993 1 main.go:227] handling current node\nI0521 15:17:59.876018 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:17:59.876032 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 15:17:59.876255 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:17:59.876284 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 15:18:09.882660 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:18:09.882708 1 main.go:227] handling current node\nI0521 15:18:09.882737 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:18:09.882750 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 15:18:09.882953 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:18:09.882976 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 15:18:19.888548 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:18:19.888599 1 main.go:227] handling current node\nI0521 
15:18:19.888621 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:18:19.888633 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 15:18:19.888813 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:18:19.888831 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 15:18:29.894711 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:18:29.894751 1 main.go:227] handling current node\nI0521 15:18:29.894773 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:18:29.894786 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 15:18:29.894995 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:18:29.895025 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 15:18:39.903637 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:18:39.903737 1 main.go:227] handling current node\nI0521 15:18:39.903771 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:18:39.903787 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 15:18:39.904231 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:18:39.904265 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 15:18:49.911932 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:18:49.911996 1 main.go:227] handling current node\nI0521 15:18:49.912022 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:18:49.912036 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 15:18:49.912280 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:18:49.912310 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 15:18:59.919158 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:18:59.919208 1 main.go:227] handling current node\nI0521 15:18:59.919233 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:18:59.919247 1 main.go:250] Node kali-worker has 
CIDR [10.244.1.0/24] \nI0521 15:18:59.919482 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:18:59.919506 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 15:19:09.926440 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:19:09.926497 1 main.go:227] handling current node\nI0521 15:19:09.926525 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:19:09.926538 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 15:19:09.926764 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:19:09.926788 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 15:19:19.933500 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:19:19.933548 1 main.go:227] handling current node\nI0521 15:19:19.933575 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:19:19.933588 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 15:19:19.933875 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:19:19.933902 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 15:19:29.940360 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:19:29.940407 1 main.go:227] handling current node\nI0521 15:19:29.940432 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:19:29.940446 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 15:19:29.940671 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:19:29.940692 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 15:19:39.947986 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:19:39.948041 1 main.go:227] handling current node\nI0521 15:19:39.948067 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:19:39.948080 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 15:19:39.948333 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:19:39.948357 1 
main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 15:19:49.955352 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:19:49.955404 1 main.go:227] handling current node\nI0521 15:19:49.955429 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:19:49.955442 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 15:19:49.955669 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:19:49.955692 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 15:19:59.961515 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:19:59.961565 1 main.go:227] handling current node\nI0521 15:19:59.961591 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:19:59.961605 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 15:19:59.961878 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:19:59.961904 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 15:20:09.968600 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:20:09.968649 1 main.go:227] handling current node\nI0521 15:20:09.968675 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:20:09.968708 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 15:20:09.969346 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:20:09.969407 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 15:20:19.976345 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:20:19.976457 1 main.go:227] handling current node\nI0521 15:20:19.976485 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:20:19.976500 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 15:20:19.976726 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:20:19.976755 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 15:20:29.982851 1 main.go:223] Handling node with IPs: 
map[172.18.0.3:{}]\nI0521 15:20:29.982909 1 main.go:227] handling current node\nI0521 15:20:29.982940 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:20:29.982955 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 15:20:29.983159 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:20:29.983177 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 15:20:39.989463 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:20:39.989520 1 main.go:227] handling current node\nI0521 15:20:39.989546 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:20:39.989560 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 15:20:39.989791 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:20:39.989867 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 15:20:49.996066 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:20:49.996115 1 main.go:227] handling current node\nI0521 15:20:49.996140 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:20:49.996154 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 15:20:49.996382 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:20:49.996405 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 15:21:00.002619 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:21:00.002667 1 main.go:227] handling current node\nI0521 15:21:00.002692 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:21:00.002708 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 15:21:00.002973 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:21:00.003000 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 15:21:10.009245 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:21:10.009304 1 main.go:227] handling current node\nI0521 15:21:10.009330 1 main.go:223] Handling node with 
IPs: map[172.18.0.2:{}]
I0521 15:21:10.009344 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24]
I0521 15:21:10.009551 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]
I0521 15:21:10.009577 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24]
I0521 15:21:20.015797 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]
I0521 15:21:20.015852 1 main.go:227] handling current node
I0521 15:21:20.015876 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]
I0521 15:21:20.015890 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24]
I0521 15:21:20.016138 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]
I0521 15:21:20.016168 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24]
[... the same 10-second kindnet polling cycle — handling current node (172.18.0.3), then kali-worker (10.244.1.0/24) and kali-worker2 (10.244.2.0/24) — repeats unchanged from 15:21:30 through 15:36:00 ...]
I0521 15:36:10.703348 1 main.go:223] Handling node with IPs: 
map[172.18.0.3:{}]\nI0521 15:36:10.703399 1 main.go:227] handling current node\nI0521 15:36:10.703426 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:36:10.703441 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 15:36:10.703662 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:36:10.703685 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 15:36:20.710219 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:36:20.710268 1 main.go:227] handling current node\nI0521 15:36:20.710293 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:36:20.710306 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 15:36:20.710517 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:36:20.710540 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 15:36:30.717155 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:36:30.717206 1 main.go:227] handling current node\nI0521 15:36:30.717232 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:36:30.717245 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 15:36:30.717460 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:36:30.717484 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 15:36:40.819845 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:36:40.819922 1 main.go:227] handling current node\nI0521 15:36:40.819952 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:36:40.819965 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 15:36:40.820211 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:36:40.820234 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 15:36:50.828176 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:36:50.828230 1 main.go:227] handling current node\nI0521 15:36:50.828256 1 main.go:223] Handling node with 
IPs: map[172.18.0.2:{}]\nI0521 15:36:50.828269 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 15:36:50.828499 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:36:50.828522 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 15:37:00.835691 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:37:00.835741 1 main.go:227] handling current node\nI0521 15:37:00.835766 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:37:00.835780 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 15:37:00.836014 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:37:00.836036 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 15:37:10.843002 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:37:10.843065 1 main.go:227] handling current node\nI0521 15:37:10.843095 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:37:10.843109 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 15:37:10.843368 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:37:10.843395 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 15:37:20.850370 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:37:20.850424 1 main.go:227] handling current node\nI0521 15:37:20.850452 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:37:20.850465 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 15:37:20.850694 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:37:20.850717 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 15:37:30.857187 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:37:30.857238 1 main.go:227] handling current node\nI0521 15:37:30.857264 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:37:30.857277 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 15:37:30.857513 1 
main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:37:30.857539 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 15:37:40.864608 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:37:40.864667 1 main.go:227] handling current node\nI0521 15:37:40.864693 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:37:40.864707 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 15:37:40.864966 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:37:40.864993 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 15:37:50.872131 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:37:50.872187 1 main.go:227] handling current node\nI0521 15:37:50.872216 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:37:50.872229 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 15:37:50.872459 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:37:50.872482 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 15:38:00.879338 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:38:00.879389 1 main.go:227] handling current node\nI0521 15:38:00.879415 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:38:00.879434 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 15:38:00.879680 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:38:00.879705 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 15:38:10.886773 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:38:10.886836 1 main.go:227] handling current node\nI0521 15:38:10.886864 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:38:10.886881 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 15:38:10.887128 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:38:10.887156 1 main.go:250] Node kali-worker2 has CIDR 
[10.244.2.0/24] \nI0521 15:38:20.901120 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:38:20.901191 1 main.go:227] handling current node\nI0521 15:38:20.901232 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:38:20.901250 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 15:38:20.901556 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:38:20.901583 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 15:38:30.908233 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:38:30.908288 1 main.go:227] handling current node\nI0521 15:38:30.908313 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:38:30.908327 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 15:38:30.908550 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:38:30.908573 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 15:38:40.915737 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:38:40.915798 1 main.go:227] handling current node\nI0521 15:38:40.915825 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:38:40.915839 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 15:38:40.916058 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:38:40.916081 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 15:38:50.923159 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:38:50.923208 1 main.go:227] handling current node\nI0521 15:38:50.923233 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:38:50.923246 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 15:38:50.923465 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:38:50.923488 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 15:39:00.930482 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:39:00.930530 1 main.go:227] 
handling current node\nI0521 15:39:00.930555 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:39:00.930569 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 15:39:00.930795 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:39:00.930818 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 15:39:10.937525 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:39:10.937571 1 main.go:227] handling current node\nI0521 15:39:10.937596 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:39:10.937609 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 15:39:10.937877 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:39:10.937903 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 15:39:20.944812 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:39:20.944862 1 main.go:227] handling current node\nI0521 15:39:20.944888 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:39:20.944901 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 15:39:20.945124 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:39:20.945148 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 15:39:30.951835 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:39:30.951883 1 main.go:227] handling current node\nI0521 15:39:30.951908 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:39:30.951921 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 15:39:30.952155 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:39:30.952191 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 15:39:40.958992 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:39:40.959042 1 main.go:227] handling current node\nI0521 15:39:40.959067 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:39:40.959080 1 
main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 15:39:40.959345 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:39:40.959369 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 15:39:50.965959 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:39:50.966011 1 main.go:227] handling current node\nI0521 15:39:50.966036 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:39:50.966050 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 15:39:50.966269 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:39:50.966313 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 15:40:00.974668 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:40:00.974747 1 main.go:227] handling current node\nI0521 15:40:00.974779 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:40:00.974821 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 15:40:00.975127 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:40:00.975153 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 15:40:10.981336 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:40:10.981399 1 main.go:227] handling current node\nI0521 15:40:10.981425 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:40:10.981439 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 15:40:10.981667 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:40:10.981693 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 15:40:20.988550 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:40:20.988606 1 main.go:227] handling current node\nI0521 15:40:20.988632 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:40:20.988646 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 15:40:20.988869 1 main.go:223] Handling node with IPs: 
map[172.18.0.4:{}]\nI0521 15:40:20.988896 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 15:40:30.995595 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:40:30.995653 1 main.go:227] handling current node\nI0521 15:40:30.995679 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:40:30.995693 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 15:40:30.995938 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:40:30.995960 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 15:40:41.002552 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:40:41.002603 1 main.go:227] handling current node\nI0521 15:40:41.002629 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:40:41.002643 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 15:40:41.002915 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:40:41.002942 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 15:40:51.009453 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:40:51.009501 1 main.go:227] handling current node\nI0521 15:40:51.009527 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:40:51.009540 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 15:40:51.009773 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:40:51.009795 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 15:41:01.016117 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:41:01.016179 1 main.go:227] handling current node\nI0521 15:41:01.016220 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:41:01.016242 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 15:41:01.016541 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:41:01.016572 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 15:41:11.023203 1 
main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:41:11.023260 1 main.go:227] handling current node\nI0521 15:41:11.023286 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:41:11.023300 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 15:41:11.023594 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:41:11.023624 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 15:41:21.030426 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:41:21.030485 1 main.go:227] handling current node\nI0521 15:41:21.030511 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:41:21.030525 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 15:41:21.030763 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:41:21.030791 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 15:41:31.036941 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:41:31.037008 1 main.go:227] handling current node\nI0521 15:41:31.037035 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:41:31.037050 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 15:41:31.037280 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:41:31.037307 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 15:41:41.043681 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:41:41.043732 1 main.go:227] handling current node\nI0521 15:41:41.043759 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:41:41.043772 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 15:41:41.044008 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:41:41.044031 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 15:41:51.051606 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:41:51.051683 1 main.go:227] handling current node\nI0521 
15:41:51.051715 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:41:51.051737 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 15:41:51.052741 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:41:51.052834 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 15:42:01.059766 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:42:01.059834 1 main.go:227] handling current node\nI0521 15:42:01.059878 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:42:01.059900 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 15:42:01.060162 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:42:01.060194 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 15:42:11.066876 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:42:11.066925 1 main.go:227] handling current node\nI0521 15:42:11.066951 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:42:11.066965 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 15:42:11.067185 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:42:11.067208 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 15:42:21.073956 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:42:21.074007 1 main.go:227] handling current node\nI0521 15:42:21.074032 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:42:21.074045 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 15:42:21.074266 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:42:21.074290 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 15:42:31.081580 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:42:31.081630 1 main.go:227] handling current node\nI0521 15:42:31.081655 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:42:31.081669 1 main.go:250] Node kali-worker has 
CIDR [10.244.1.0/24] \nI0521 15:42:31.081934 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:42:31.082204 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 15:42:41.089276 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:42:41.089337 1 main.go:227] handling current node\nI0521 15:42:41.089364 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:42:41.089378 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 15:42:41.089606 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:42:41.089636 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 15:42:51.096462 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:42:51.096528 1 main.go:227] handling current node\nI0521 15:42:51.096559 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:42:51.096574 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 15:42:51.096804 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:42:51.096834 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 15:43:01.103117 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:43:01.103177 1 main.go:227] handling current node\nI0521 15:43:01.103204 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:43:01.103220 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 15:43:01.103502 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:43:01.103534 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 15:43:11.110345 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:43:11.110398 1 main.go:227] handling current node\nI0521 15:43:11.110423 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:43:11.110436 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 15:43:11.110665 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:43:11.110688 1 
main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 15:43:21.117600 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:43:21.117665 1 main.go:227] handling current node\nI0521 15:43:21.117696 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:43:21.117710 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 15:43:21.117989 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:43:21.118019 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 15:43:31.125288 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:43:31.125341 1 main.go:227] handling current node\nI0521 15:43:31.125370 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:43:31.125386 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 15:43:31.125616 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:43:31.125640 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 15:43:41.220310 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:43:41.220379 1 main.go:227] handling current node\nI0521 15:43:41.220408 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:43:41.220425 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 15:43:41.220693 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:43:41.220720 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 15:43:51.227254 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:43:51.227310 1 main.go:227] handling current node\nI0521 15:43:51.227336 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:43:51.227348 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 15:43:51.227596 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:43:51.227621 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 15:44:01.233986 1 main.go:223] Handling node with IPs: 
map[172.18.0.3:{}]\nI0521 15:44:01.234037 1 main.go:227] handling current node\nI0521 15:44:01.234066 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:44:01.234079 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 15:44:01.234281 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:44:01.234304 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 15:44:11.240944 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:44:11.241022 1 main.go:227] handling current node\nI0521 15:44:11.241062 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:44:11.241086 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 15:44:11.241340 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:44:11.241373 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 15:44:21.248077 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:44:21.248131 1 main.go:227] handling current node\nI0521 15:44:21.248159 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:44:21.248172 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 15:44:21.248400 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:44:21.248424 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 15:44:31.255221 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:44:31.255291 1 main.go:227] handling current node\nI0521 15:44:31.255333 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:44:31.255353 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 15:44:31.255619 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:44:31.255651 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 15:44:41.261967 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:44:41.262021 1 main.go:227] handling current node\nI0521 15:44:41.262045 1 main.go:223] Handling node with 
IPs: map[172.18.0.2:{}]\nI0521 15:44:41.262059 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 15:44:41.262312 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:44:41.262332 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 15:44:51.268933 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:44:51.268987 1 main.go:227] handling current node\nI0521 15:44:51.269027 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:44:51.269040 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 15:44:51.269266 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:44:51.269289 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 15:45:01.276319 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:45:01.276385 1 main.go:227] handling current node\nI0521 15:45:01.276414 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:45:01.276433 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 15:45:01.276674 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:45:01.276708 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 15:45:11.283024 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:45:11.283074 1 main.go:227] handling current node\nI0521 15:45:11.283100 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:45:11.283113 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 15:45:11.283351 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:45:11.283376 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 15:45:21.321590 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:45:21.321667 1 main.go:227] handling current node\nI0521 15:45:21.321697 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:45:21.321712 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 15:45:21.419456 1 
main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:45:21.419519 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 15:45:31.426348 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:45:31.426404 1 main.go:227] handling current node\nI0521 15:45:31.426430 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:45:31.426444 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 15:45:31.426677 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:45:31.426700 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 15:45:41.433889 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:45:41.433938 1 main.go:227] handling current node\nI0521 15:45:41.433963 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:45:41.433976 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 15:45:41.434199 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:45:41.434222 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 15:45:51.440713 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:45:51.440761 1 main.go:227] handling current node\nI0521 15:45:51.440786 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:45:51.440799 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 15:45:51.441034 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:45:51.441057 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 15:46:01.447721 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:46:01.447770 1 main.go:227] handling current node\nI0521 15:46:01.447796 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:46:01.447810 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 15:46:01.448029 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:46:01.448051 1 main.go:250] Node kali-worker2 has CIDR 
[10.244.2.0/24] 
I0521 15:46:11.455000 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]
I0521 15:46:11.455049 1 main.go:227] handling current node
I0521 15:46:11.455075 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]
I0521 15:46:11.455088 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24]
I0521 15:46:11.455307 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]
I0521 15:46:11.455328 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24]
I0521 16:00:22.579602 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]
I0521 16:00:22.579658 1 main.go:227] handling current node
I0521 16:00:22.579681 1 main.go:223] Handling node with 
IPs: map[172.18.0.2:{}]\nI0521 16:00:22.579693 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 16:00:22.579876 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:00:22.579895 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 16:00:32.585146 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:00:32.585185 1 main.go:227] handling current node\nI0521 16:00:32.585206 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:00:32.585218 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 16:00:32.585393 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:00:32.585410 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 16:00:42.592334 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:00:42.592393 1 main.go:227] handling current node\nI0521 16:00:42.592418 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:00:42.592432 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 16:00:43.021038 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:00:43.021092 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 16:00:53.036698 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:00:53.036754 1 main.go:227] handling current node\nI0521 16:00:53.036780 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:00:53.036796 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 16:00:53.037033 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:00:53.037065 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 16:01:03.043837 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:01:03.043887 1 main.go:227] handling current node\nI0521 16:01:03.043916 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:01:03.043929 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 16:01:03.044141 1 
main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:01:03.044163 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 16:01:13.050226 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:01:13.050276 1 main.go:227] handling current node\nI0521 16:01:13.050299 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:01:13.050311 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 16:01:13.050516 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:01:13.050539 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 16:01:23.057680 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:01:23.057754 1 main.go:227] handling current node\nI0521 16:01:23.057780 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:01:23.057795 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 16:01:23.119766 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:01:23.119809 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 16:01:33.125962 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:01:33.126032 1 main.go:227] handling current node\nI0521 16:01:33.126058 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:01:33.126073 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 16:01:33.126267 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:01:33.126287 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 16:01:43.132728 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:01:43.132776 1 main.go:227] handling current node\nI0521 16:01:43.132813 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:01:43.132826 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 16:01:43.133060 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:01:43.133082 1 main.go:250] Node kali-worker2 has CIDR 
[10.244.2.0/24] \nI0521 16:01:53.140001 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:01:53.140067 1 main.go:227] handling current node\nI0521 16:01:53.140100 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:01:53.140130 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 16:01:53.140366 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:01:53.140399 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 16:02:03.147244 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:02:03.147299 1 main.go:227] handling current node\nI0521 16:02:03.147323 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:02:03.147338 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 16:02:03.147528 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:02:03.147554 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 16:02:13.152394 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:02:13.152436 1 main.go:227] handling current node\nI0521 16:02:13.152458 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:02:13.152470 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 16:02:13.152642 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:02:13.152658 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 16:02:23.160032 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:02:23.160083 1 main.go:227] handling current node\nI0521 16:02:23.160111 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:02:23.160127 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 16:02:23.403797 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:02:23.403874 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 16:02:33.428626 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:02:33.428681 1 main.go:227] 
handling current node\nI0521 16:02:33.428708 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:02:33.428722 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 16:02:33.428952 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:02:33.428991 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 16:02:43.435553 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:02:43.435600 1 main.go:227] handling current node\nI0521 16:02:43.435627 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:02:43.435640 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 16:02:43.435842 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:02:43.435862 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 16:02:53.441018 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:02:53.441058 1 main.go:227] handling current node\nI0521 16:02:53.441091 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:02:53.441103 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 16:02:53.441286 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:02:53.441314 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 16:03:03.447140 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:03:03.447177 1 main.go:227] handling current node\nI0521 16:03:03.447193 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:03:03.447201 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 16:03:03.447374 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:03:03.447389 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 16:03:13.454058 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:03:13.454104 1 main.go:227] handling current node\nI0521 16:03:13.454125 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:03:13.454144 1 
main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 16:03:13.454542 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:03:13.454566 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 16:03:23.461408 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:03:23.461463 1 main.go:227] handling current node\nI0521 16:03:23.461488 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:03:23.461501 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 16:03:23.461745 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:03:23.461774 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 16:03:33.469342 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:03:33.469393 1 main.go:227] handling current node\nI0521 16:03:33.469421 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:03:33.469435 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 16:03:33.469697 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:03:33.469718 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 16:03:43.475817 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:03:43.475872 1 main.go:227] handling current node\nI0521 16:03:43.475900 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:03:43.475913 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 16:03:43.476131 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:03:43.476157 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 16:03:53.481795 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:03:53.481895 1 main.go:227] handling current node\nI0521 16:03:53.481922 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:03:53.481936 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 16:03:53.482174 1 main.go:223] Handling node with IPs: 
map[172.18.0.4:{}]\nI0521 16:03:53.482195 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 16:04:03.489399 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:04:03.489455 1 main.go:227] handling current node\nI0521 16:04:03.489498 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:04:03.489511 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 16:04:03.489713 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:04:03.489737 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 16:04:13.496011 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:04:13.496067 1 main.go:227] handling current node\nI0521 16:04:13.496092 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:04:13.496105 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 16:04:13.496300 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:04:13.496317 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 16:04:23.502732 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:04:23.502789 1 main.go:227] handling current node\nI0521 16:04:23.502814 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:04:23.502828 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 16:04:23.503029 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:04:23.503056 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 16:04:33.510488 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:04:33.510582 1 main.go:227] handling current node\nI0521 16:04:33.510612 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:04:33.510630 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 16:04:33.510885 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:04:33.510911 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 16:04:43.517208 1 
main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:04:43.517257 1 main.go:227] handling current node\nI0521 16:04:43.517280 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:04:43.517294 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 16:04:43.517484 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:04:43.517505 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 16:04:53.524452 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:04:53.524510 1 main.go:227] handling current node\nI0521 16:04:53.524534 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:04:53.524546 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 16:04:53.524758 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:04:53.524790 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 16:05:03.531430 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:05:03.531488 1 main.go:227] handling current node\nI0521 16:05:03.531512 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:05:03.531527 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 16:05:03.531720 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:05:03.531744 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 16:05:13.539812 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:05:13.539859 1 main.go:227] handling current node\nI0521 16:05:13.539884 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:05:13.539898 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 16:05:13.540114 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:05:13.540136 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 16:05:23.545896 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:05:23.545934 1 main.go:227] handling current node\nI0521 
16:05:23.545952 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:05:23.545961 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 16:05:23.546118 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:05:23.546134 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 16:05:33.553412 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:05:33.553458 1 main.go:227] handling current node\nI0521 16:05:33.553483 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:05:33.553495 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 16:05:33.553732 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:05:33.553760 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 16:05:43.560369 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:05:43.560421 1 main.go:227] handling current node\nI0521 16:05:43.560446 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:05:43.560460 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 16:05:43.560710 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:05:43.560737 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 16:05:53.567877 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:05:53.567936 1 main.go:227] handling current node\nI0521 16:05:53.567962 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:05:53.567976 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 16:05:53.568188 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:05:53.568215 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 16:06:03.575219 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:06:03.575278 1 main.go:227] handling current node\nI0521 16:06:03.575305 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:06:03.575318 1 main.go:250] Node kali-worker has 
CIDR [10.244.1.0/24] \nI0521 16:06:03.575548 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:06:03.575584 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 16:06:13.583853 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:06:13.583923 1 main.go:227] handling current node\nI0521 16:06:13.583951 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:06:13.583981 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 16:06:13.584275 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:06:13.584320 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 16:06:23.595499 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:06:23.595545 1 main.go:227] handling current node\nI0521 16:06:23.595571 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:06:23.595585 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 16:06:23.595792 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:06:23.595813 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 16:06:33.602413 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:06:33.602473 1 main.go:227] handling current node\nI0521 16:06:33.602500 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:06:33.602513 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 16:06:33.602732 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:06:33.602755 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 16:06:43.609420 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:06:43.609477 1 main.go:227] handling current node\nI0521 16:06:43.609502 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:06:43.609516 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 16:06:43.609724 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:06:43.609750 1 
main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 16:06:53.615887 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:06:53.615950 1 main.go:227] handling current node\nI0521 16:06:53.615977 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:06:53.615990 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 16:06:53.616211 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:06:53.616240 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 16:07:03.622749 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:07:03.622800 1 main.go:227] handling current node\nI0521 16:07:03.622826 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:07:03.622840 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 16:07:03.623057 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:07:03.623080 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 16:07:13.629748 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:07:13.629846 1 main.go:227] handling current node\nI0521 16:07:13.629875 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:07:13.629890 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 16:07:13.630119 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:07:13.630147 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 16:07:23.636730 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:07:23.636794 1 main.go:227] handling current node\nI0521 16:07:23.636820 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:07:23.636834 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 16:07:23.637050 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:07:23.637078 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 16:07:33.643292 1 main.go:223] Handling node with IPs: 
map[172.18.0.3:{}]\nI0521 16:07:33.643352 1 main.go:227] handling current node\nI0521 16:07:33.643377 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:07:33.643390 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 16:07:33.643620 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:07:33.643648 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 16:07:43.650234 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:07:43.650295 1 main.go:227] handling current node\nI0521 16:07:43.650321 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:07:43.650335 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 16:07:43.650561 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:07:43.650590 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 16:07:53.658436 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:07:53.658539 1 main.go:227] handling current node\nI0521 16:07:53.659018 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:07:53.719330 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 16:07:53.719667 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:07:53.719703 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 16:08:03.727004 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:08:03.727061 1 main.go:227] handling current node\nI0521 16:08:03.727086 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:08:03.727102 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 16:08:03.727318 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:08:03.727344 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 16:08:13.734920 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:08:13.734969 1 main.go:227] handling current node\nI0521 16:08:13.734994 1 main.go:223] Handling node with 
IPs: map[172.18.0.2:{}]\nI0521 16:08:13.735006 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 16:08:13.735270 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:08:13.735292 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 16:08:23.742517 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:08:23.742578 1 main.go:227] handling current node\nI0521 16:08:23.742605 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:08:23.742618 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 16:08:23.742821 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:08:23.742846 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 16:08:33.749703 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:08:33.749756 1 main.go:227] handling current node\nI0521 16:08:33.749788 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:08:33.749840 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 16:08:33.750070 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:08:33.750094 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 16:08:43.757553 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:08:43.757627 1 main.go:227] handling current node\nI0521 16:08:43.757654 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:08:43.757669 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 16:08:43.757914 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:08:43.757943 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 16:08:53.764616 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:08:53.764671 1 main.go:227] handling current node\nI0521 16:08:53.764698 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:08:53.764711 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 16:08:53.764949 1 
main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:08:53.764974 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 16:09:03.771490 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:09:03.771544 1 main.go:227] handling current node\nI0521 16:09:03.771573 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:09:03.771587 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 16:09:03.771806 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:09:03.771835 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 16:09:13.779162 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:09:13.779218 1 main.go:227] handling current node\nI0521 16:09:13.779248 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:09:13.779262 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 16:09:13.779482 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:09:13.779506 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 16:09:23.786775 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:09:23.786888 1 main.go:227] handling current node\nI0521 16:09:23.786923 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:09:23.786942 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 16:09:23.787278 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:09:23.787325 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 16:09:33.795395 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:09:33.795453 1 main.go:227] handling current node\nI0521 16:09:33.795480 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:09:33.795494 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 16:09:33.795766 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:09:33.795795 1 main.go:250] Node kali-worker2 has CIDR 
[10.244.2.0/24] 
I0521 16:09:43.802836 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]
I0521 16:09:43.802896 1 main.go:227] handling current node
I0521 16:09:43.802922 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]
I0521 16:09:43.802936 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] 
I0521 16:09:43.803154 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]
I0521 16:09:43.803182 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] 
... (identical six-line sync cycle repeated every ~10s from 16:09:53 through 16:23:44; node IPs and CIDRs unchanged throughout) ...
I0521 16:23:54.632443 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]
I0521 16:23:54.632495 1 main.go:227] handling current node
I0521 16:23:54.632522 1 main.go:223] Handling node with 
IPs: map[172.18.0.2:{}]\nI0521 16:23:54.632535 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 16:23:54.632775 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:23:54.632799 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 16:24:04.640153 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:24:04.640211 1 main.go:227] handling current node\nI0521 16:24:04.640238 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:24:04.640252 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 16:24:04.640486 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:24:04.640516 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 16:24:14.649427 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:24:14.649508 1 main.go:227] handling current node\nI0521 16:24:14.649536 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:24:14.649552 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 16:24:14.719316 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:24:14.719356 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 16:24:24.726362 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:24:24.726418 1 main.go:227] handling current node\nI0521 16:24:24.726444 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:24:24.726457 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 16:24:24.726682 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:24:24.726705 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 16:24:34.734031 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:24:34.734080 1 main.go:227] handling current node\nI0521 16:24:34.734107 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:24:34.734120 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 16:24:34.734348 1 
main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:24:34.734370 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 16:24:44.741525 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:24:44.741575 1 main.go:227] handling current node\nI0521 16:24:44.741601 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:24:44.741615 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 16:24:44.741888 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:24:44.741913 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 16:24:54.750099 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:24:54.750150 1 main.go:227] handling current node\nI0521 16:24:54.750176 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:24:54.750190 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 16:24:54.750417 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:24:54.750443 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 16:25:04.758454 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:25:04.758504 1 main.go:227] handling current node\nI0521 16:25:04.758529 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:25:04.758543 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 16:25:04.758773 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:25:04.758795 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 16:25:14.766733 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:25:14.766791 1 main.go:227] handling current node\nI0521 16:25:14.766816 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:25:14.766830 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 16:25:14.767057 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:25:14.767080 1 main.go:250] Node kali-worker2 has CIDR 
[10.244.2.0/24] \nI0521 16:25:24.773986 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:25:24.774036 1 main.go:227] handling current node\nI0521 16:25:24.774062 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:25:24.774077 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 16:25:24.774295 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:25:24.774318 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 16:25:34.782121 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:25:34.782170 1 main.go:227] handling current node\nI0521 16:25:34.782196 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:25:34.782210 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 16:25:34.782448 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:25:34.782472 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 16:25:44.790456 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:25:44.790512 1 main.go:227] handling current node\nI0521 16:25:44.790538 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:25:44.790552 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 16:25:44.790791 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:25:44.790819 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 16:25:54.798664 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:25:54.798722 1 main.go:227] handling current node\nI0521 16:25:54.798749 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:25:54.798763 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 16:25:54.799014 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:25:54.799041 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 16:26:04.808781 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:26:04.808849 1 main.go:227] 
handling current node\nI0521 16:26:04.808877 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:26:04.808893 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 16:26:04.809192 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:26:04.809235 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 16:26:14.817597 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:26:14.817647 1 main.go:227] handling current node\nI0521 16:26:14.817672 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:26:14.817686 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 16:26:14.817940 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:26:14.817967 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 16:26:24.825398 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:26:24.825469 1 main.go:227] handling current node\nI0521 16:26:24.825496 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:26:24.825510 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 16:26:24.825740 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:26:24.825769 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 16:26:34.833492 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:26:34.833548 1 main.go:227] handling current node\nI0521 16:26:34.833574 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:26:34.833588 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 16:26:34.833881 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:26:34.833923 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 16:26:44.841173 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:26:44.841222 1 main.go:227] handling current node\nI0521 16:26:44.841248 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:26:44.841267 1 
main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 16:26:44.841498 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:26:44.841522 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 16:26:54.848581 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:26:54.848631 1 main.go:227] handling current node\nI0521 16:26:54.848658 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:26:54.848672 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 16:26:54.848903 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:26:54.848928 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 16:27:04.855928 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:27:04.855976 1 main.go:227] handling current node\nI0521 16:27:04.856003 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:27:04.856015 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 16:27:04.856237 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:27:04.856262 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 16:27:14.863651 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:27:14.863701 1 main.go:227] handling current node\nI0521 16:27:14.863727 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:27:14.863740 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 16:27:14.863977 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:27:14.864002 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 16:27:24.870614 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:27:24.870674 1 main.go:227] handling current node\nI0521 16:27:24.870712 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:27:24.870733 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 16:27:24.870993 1 main.go:223] Handling node with IPs: 
map[172.18.0.4:{}]\nI0521 16:27:24.871024 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 16:27:34.877889 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:27:34.877947 1 main.go:227] handling current node\nI0521 16:27:34.877974 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:27:34.877988 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 16:27:34.878214 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:27:34.878241 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 16:27:44.885465 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:27:44.885526 1 main.go:227] handling current node\nI0521 16:27:44.885552 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:27:44.885566 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 16:27:44.885844 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:27:44.885873 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 16:27:54.920507 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:27:54.920583 1 main.go:227] handling current node\nI0521 16:27:54.920611 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:27:54.920624 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 16:27:54.920886 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:27:54.920912 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 16:28:04.927809 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:28:04.927880 1 main.go:227] handling current node\nI0521 16:28:04.927907 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:28:04.927922 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 16:28:04.928153 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:28:04.928180 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 16:28:14.934846 1 
main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:28:14.934903 1 main.go:227] handling current node\nI0521 16:28:14.934929 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:28:14.934942 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 16:28:14.935161 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:28:14.935188 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 16:28:24.942510 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:28:24.942559 1 main.go:227] handling current node\nI0521 16:28:24.942588 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:28:24.942602 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 16:28:24.942822 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:28:24.942845 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 16:28:34.948248 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:28:34.948293 1 main.go:227] handling current node\nI0521 16:28:34.948315 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:28:34.948326 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 16:28:34.948537 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:28:34.948557 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 16:28:44.955426 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:28:44.955487 1 main.go:227] handling current node\nI0521 16:28:44.955513 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:28:44.955527 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 16:28:44.955757 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:28:44.955784 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 16:28:54.962674 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:28:54.962727 1 main.go:227] handling current node\nI0521 
16:28:54.962756 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:28:54.962769 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 16:28:54.962996 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:28:54.963020 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 16:29:04.977562 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:29:04.977633 1 main.go:227] handling current node\nI0521 16:29:04.977664 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:29:04.977679 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 16:29:04.977997 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:29:04.978037 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 16:29:14.984366 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:29:14.984421 1 main.go:227] handling current node\nI0521 16:29:14.984447 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:29:14.984460 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 16:29:14.984683 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:29:14.984711 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 16:29:24.994937 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:29:24.994988 1 main.go:227] handling current node\nI0521 16:29:24.995013 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:29:24.995027 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 16:29:24.995731 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:29:24.995776 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 16:29:35.002884 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:29:35.002943 1 main.go:227] handling current node\nI0521 16:29:35.002969 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:29:35.002982 1 main.go:250] Node kali-worker has 
CIDR [10.244.1.0/24] \nI0521 16:29:35.003200 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:29:35.003227 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 16:29:45.007562 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:29:45.007603 1 main.go:227] handling current node\nI0521 16:29:45.007627 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:29:45.007639 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 16:29:45.007838 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:29:45.007856 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 16:29:55.015918 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:29:55.015973 1 main.go:227] handling current node\nI0521 16:29:55.016002 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:29:55.016018 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 16:29:55.016253 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:29:55.016276 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 16:30:05.022485 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:30:05.022532 1 main.go:227] handling current node\nI0521 16:30:05.022551 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:30:05.022561 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 16:30:05.022737 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:30:05.022755 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 16:30:15.029545 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:30:15.029599 1 main.go:227] handling current node\nI0521 16:30:15.029629 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:30:15.029643 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 16:30:15.029918 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:30:15.029944 1 
main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 16:30:25.036560 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:30:25.036609 1 main.go:227] handling current node\nI0521 16:30:25.036636 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:30:25.036651 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 16:30:25.036867 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:30:25.036889 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 16:30:35.043697 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:30:35.043757 1 main.go:227] handling current node\nI0521 16:30:35.043783 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:30:35.043797 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 16:30:35.044043 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:30:35.044066 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 16:30:45.051449 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:30:45.051513 1 main.go:227] handling current node\nI0521 16:30:45.051542 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:30:45.051557 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 16:30:45.051785 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:30:45.051813 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 16:30:55.060168 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:30:55.060233 1 main.go:227] handling current node\nI0521 16:30:55.060266 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:30:55.060277 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 16:30:55.060506 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:30:55.060524 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 16:31:05.067027 1 main.go:223] Handling node with IPs: 
map[172.18.0.3:{}]\nI0521 16:31:05.067077 1 main.go:227] handling current node\nI0521 16:31:05.067103 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:31:05.067115 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 16:31:05.067330 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:31:05.067353 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 16:31:15.074436 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:31:15.074488 1 main.go:227] handling current node\nI0521 16:31:15.074517 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:31:15.074531 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 16:31:15.074744 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:31:15.074767 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 16:31:25.081551 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:31:25.081612 1 main.go:227] handling current node\nI0521 16:31:25.081637 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:31:25.081651 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 16:31:25.081916 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:31:25.081941 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 16:31:35.088494 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:31:35.088557 1 main.go:227] handling current node\nI0521 16:31:35.088588 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:31:35.088602 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 16:31:35.088836 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:31:35.088865 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 16:31:45.096901 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:31:45.096952 1 main.go:227] handling current node\nI0521 16:31:45.096981 1 main.go:223] Handling node with 
IPs: map[172.18.0.2:{}]\nI0521 16:31:45.096995 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 16:31:45.097230 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:31:45.097252 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 16:31:55.104827 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:31:55.104888 1 main.go:227] handling current node\nI0521 16:31:55.104916 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:31:55.104930 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 16:31:55.105155 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:31:55.105182 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 16:32:05.110714 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:32:05.110756 1 main.go:227] handling current node\nI0521 16:32:05.110779 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:32:05.110790 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 16:32:05.110989 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:32:05.111007 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 16:32:15.116440 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:32:15.116488 1 main.go:227] handling current node\nI0521 16:32:15.116514 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:32:15.116528 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 16:32:15.116766 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:32:15.116800 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 16:32:25.122294 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:32:25.122332 1 main.go:227] handling current node\nI0521 16:32:25.122356 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:32:25.122365 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 16:32:25.122525 1 
main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:32:25.122540 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 16:32:35.129064 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:32:35.129111 1 main.go:227] handling current node\nI0521 16:32:35.129136 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:32:35.129149 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 16:32:35.129456 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:32:35.129498 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 16:32:45.220678 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:32:45.220747 1 main.go:227] handling current node\nI0521 16:32:45.220770 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:32:45.220780 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 16:32:45.312003 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:32:45.312057 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 16:32:55.373748 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:32:55.373796 1 main.go:227] handling current node\nI0521 16:32:55.419149 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:32:55.419186 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 16:32:55.419393 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:32:55.419415 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 16:33:05.425580 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:33:05.425626 1 main.go:227] handling current node\nI0521 16:33:05.425649 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:33:05.425661 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 16:33:05.425927 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:33:05.425951 1 main.go:250] Node kali-worker2 has CIDR 
[10.244.2.0/24] \nI0521 16:33:15.432433 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:33:15.432492 1 main.go:227] handling current node\nI0521 16:33:15.432517 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:33:15.432532 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 16:33:15.432769 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:33:15.432792 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 16:33:25.439617 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:33:25.439666 1 main.go:227] handling current node\nI0521 16:33:25.439692 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:33:25.439706 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 16:33:25.439920 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:33:25.439942 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 16:33:35.447168 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:33:35.447227 1 main.go:227] handling current node\nI0521 16:33:35.447252 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:33:35.447266 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 16:33:35.447494 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:33:35.447530 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 16:33:45.454388 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:33:45.454447 1 main.go:227] handling current node\nI0521 16:33:45.454472 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:33:45.454487 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 16:33:45.454705 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:33:45.454735 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 16:33:55.461100 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:33:55.461149 1 main.go:227] 
handling current node
I0521 16:33:55.461176 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]
I0521 16:33:55.461190 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] 
I0521 16:33:55.461415 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]
I0521 16:33:55.461438 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] 
[... identical per-node reconciliation cycles, logged every ~10s from 16:34:05 to 16:38:46, omitted ...]
I0521 16:38:56.266895 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]
I0521 16:38:56.266974 1 main.go:227] handling current node
I0521 16:38:56.266999 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]
I0521 16:38:56.267014 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] 
I0521 16:38:56.267299 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]
I0521 16:38:56.267329 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] 
==== END logs for container kindnet-cni of pod kube-system/kindnet-7b2zs ====
==== START logs for container kindnet-cni of pod kube-system/kindnet-n7f64 ====
I0521 15:13:54.724865 1 main.go:316] probe TCP address kali-control-plane:6443
I0521 15:13:55.021386 1 main.go:102] connected to apiserver: https://kali-control-plane:6443
I0521 15:13:55.021427 1 main.go:107] hostIP = 172.18.0.4
podIP = 172.18.0.4
I0521 15:13:55.021725 1 main.go:116] setting mtu 1500 for CNI 
I0521 15:13:55.021754 1 main.go:146] kindnetd IP family: "ipv4"
I0521 15:13:55.021775 1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
I0521 15:13:56.020085 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]
I0521 15:13:56.020161 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] 
I0521 15:13:56.020432 1 routes.go:46] Adding route {Ifindex: 0 Dst: 10.244.0.0/24 Src: Gw: 172.18.0.3 Flags: [] Table: 0} 
I0521 15:13:56.020602 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]
I0521 15:13:56.020631 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] 
I0521 15:13:56.020729 1 routes.go:46] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: Gw: 172.18.0.2 Flags: [] Table: 0} 
I0521 15:13:56.020861 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]
I0521 15:13:56.020891 1 main.go:227] handling current node
[... identical per-node reconciliation cycles, logged every ~10s from 15:14:06 to 15:23:06, omitted ...]
I0521 15:23:16.483698 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]
I0521 15:23:16.483746 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] 
I0521 15:23:16.483979 1 main.go:223] Handling node with IPs: 
map[172.18.0.2:{}]\nI0521 15:23:16.484003 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 15:23:16.484136 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:23:16.484161 1 main.go:227] handling current node\nI0521 15:23:26.490363 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:23:26.490415 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 15:23:26.490653 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:23:26.490677 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 15:23:26.490808 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:23:26.490832 1 main.go:227] handling current node\nI0521 15:23:36.496179 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:23:36.496237 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 15:23:36.496472 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:23:36.496502 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 15:23:36.496646 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:23:36.496676 1 main.go:227] handling current node\nI0521 15:23:46.502557 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:23:46.502603 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 15:23:46.502885 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:23:46.502910 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 15:23:46.503043 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:23:46.503067 1 main.go:227] handling current node\nI0521 15:23:56.509395 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:23:56.509443 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 15:23:56.509707 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:23:56.509731 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 
15:23:56.509894 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:23:56.509922 1 main.go:227] handling current node\nI0521 15:24:06.516251 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:24:06.516325 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 15:24:06.516678 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:24:06.516714 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 15:24:06.519207 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:24:06.619303 1 main.go:227] handling current node\nI0521 15:24:16.624860 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:24:16.624916 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 15:24:16.625196 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:24:16.625237 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 15:24:16.625413 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:24:16.625444 1 main.go:227] handling current node\nI0521 15:24:26.631208 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:24:26.631260 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 15:24:26.631484 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:24:26.631513 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 15:24:26.631647 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:24:26.631674 1 main.go:227] handling current node\nI0521 15:24:36.638924 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:24:36.639010 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 15:24:36.639314 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:24:36.639355 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 15:24:36.639540 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:24:36.639583 1 main.go:227] 
handling current node\nI0521 15:24:46.646008 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:24:46.646062 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 15:24:46.646325 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:24:46.646357 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 15:24:46.646506 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:24:46.646542 1 main.go:227] handling current node\nI0521 15:24:56.652724 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:24:56.652766 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 15:24:56.652985 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:24:56.653009 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 15:24:56.653142 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:24:56.653166 1 main.go:227] handling current node\nI0521 15:25:06.659352 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:25:06.659395 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 15:25:06.659630 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:25:06.659655 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 15:25:06.659800 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:25:06.659824 1 main.go:227] handling current node\nI0521 15:25:16.666184 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:25:16.666229 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 15:25:16.666446 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:25:16.666472 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 15:25:16.666615 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:25:16.666639 1 main.go:227] handling current node\nI0521 15:25:26.672597 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 
15:25:26.672664 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 15:25:26.672913 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:25:26.672945 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 15:25:26.673084 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:25:26.673114 1 main.go:227] handling current node\nI0521 15:25:36.679505 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:25:36.679551 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 15:25:36.679805 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:25:36.679840 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 15:25:36.680017 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:25:36.680043 1 main.go:227] handling current node\nI0521 15:25:46.686286 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:25:46.686334 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 15:25:46.686601 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:25:46.686624 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 15:25:46.686743 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:25:46.686767 1 main.go:227] handling current node\nI0521 15:25:56.694038 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:25:56.694119 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 15:25:56.694398 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:25:56.694436 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 15:25:56.694582 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:25:56.694614 1 main.go:227] handling current node\nI0521 15:26:06.700871 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:26:06.700933 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 15:26:06.701171 1 
main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:26:06.701200 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 15:26:06.701396 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:26:06.701426 1 main.go:227] handling current node\nI0521 15:26:16.707005 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:26:16.707054 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 15:26:16.707282 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:26:16.707305 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 15:26:16.707444 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:26:16.707469 1 main.go:227] handling current node\nI0521 15:26:26.714083 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:26:26.714136 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 15:26:26.714365 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:26:26.714390 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 15:26:26.714526 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:26:26.714548 1 main.go:227] handling current node\nI0521 15:26:36.720233 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:26:36.720287 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 15:26:36.720546 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:26:36.720574 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 15:26:36.720694 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:26:36.720722 1 main.go:227] handling current node\nI0521 15:26:46.726692 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:26:46.726745 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 15:26:46.726982 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:26:46.727005 1 main.go:250] Node kali-worker 
has CIDR [10.244.1.0/24] \nI0521 15:26:46.727141 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:26:46.727165 1 main.go:227] handling current node\nI0521 15:26:56.732730 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:26:56.732778 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 15:26:56.732991 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:26:56.733015 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 15:26:56.733137 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:26:56.733161 1 main.go:227] handling current node\nI0521 15:27:06.739532 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:27:06.739593 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 15:27:06.775355 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:27:06.775399 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 15:27:06.775686 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:27:06.775718 1 main.go:227] handling current node\nI0521 15:27:16.781679 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:27:16.781732 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 15:27:16.781985 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:27:16.782016 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 15:27:16.782154 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:27:16.782181 1 main.go:227] handling current node\nI0521 15:27:26.820323 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:27:26.820412 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 15:27:26.820768 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:27:26.820810 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 15:27:26.821002 1 main.go:223] Handling node with IPs: 
map[172.18.0.4:{}]\nI0521 15:27:26.821044 1 main.go:227] handling current node\nI0521 15:27:36.827756 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:27:36.827818 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 15:27:36.828041 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:27:36.828070 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 15:27:36.828205 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:27:36.828233 1 main.go:227] handling current node\nI0521 15:27:46.834540 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:27:46.834602 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 15:27:46.834837 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:27:46.834867 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 15:27:46.835005 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:27:46.835034 1 main.go:227] handling current node\nI0521 15:27:56.840295 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:27:56.840345 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 15:27:56.840592 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:27:56.840627 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 15:27:56.840790 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:27:56.840816 1 main.go:227] handling current node\nI0521 15:28:06.846873 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:28:06.846923 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 15:28:06.847184 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:28:06.847209 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 15:28:06.847345 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:28:06.847370 1 main.go:227] handling current node\nI0521 15:28:16.853712 1 
main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:28:16.853786 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 15:28:16.854043 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:28:16.854073 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 15:28:16.854220 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:28:16.854254 1 main.go:227] handling current node\nI0521 15:28:26.860331 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:28:26.860390 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 15:28:26.860634 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:28:26.860664 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 15:28:26.860808 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:28:26.860838 1 main.go:227] handling current node\nI0521 15:28:36.866892 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:28:36.866962 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 15:28:36.867203 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:28:36.867233 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 15:28:36.867372 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:28:36.867401 1 main.go:227] handling current node\nI0521 15:28:46.873434 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:28:46.873485 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 15:28:46.873715 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:28:46.873739 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 15:28:46.873923 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:28:46.873951 1 main.go:227] handling current node\nI0521 15:28:56.880346 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:28:56.880426 1 main.go:250] Node 
kali-control-plane has CIDR [10.244.0.0/24] \nI0521 15:28:56.880657 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:28:56.880688 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 15:28:56.880819 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:28:56.880849 1 main.go:227] handling current node\nI0521 15:29:06.886895 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:29:06.886947 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 15:29:06.887180 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:29:06.887204 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 15:29:06.887345 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:29:06.887370 1 main.go:227] handling current node\nI0521 15:29:16.896148 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:29:16.896234 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 15:29:16.896514 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:29:16.896546 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 15:29:16.896704 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:29:16.896747 1 main.go:227] handling current node\nI0521 15:29:26.902823 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:29:26.902880 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 15:29:26.903152 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:29:26.903200 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 15:29:26.903328 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:29:26.903353 1 main.go:227] handling current node\nI0521 15:29:36.909527 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:29:36.909575 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 15:29:36.909796 1 main.go:223] Handling node with IPs: 
map[172.18.0.2:{}]\nI0521 15:29:36.909858 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 15:29:36.909989 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:29:36.910014 1 main.go:227] handling current node\nI0521 15:29:46.916091 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:29:46.916147 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 15:29:46.916370 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:29:46.916398 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 15:29:46.916525 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:29:46.916555 1 main.go:227] handling current node\nI0521 15:29:56.922264 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:29:56.922323 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 15:29:56.922554 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:29:56.922583 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 15:29:56.922719 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:29:56.922747 1 main.go:227] handling current node\nI0521 15:30:06.928457 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:30:06.928507 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 15:30:06.928719 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:30:06.928742 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 15:30:06.928867 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:30:06.928891 1 main.go:227] handling current node\nI0521 15:30:16.934810 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:30:16.934860 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 15:30:16.935084 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:30:16.935109 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 
15:30:16.935235 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:30:16.935259 1 main.go:227] handling current node\nI0521 15:30:26.942005 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:30:26.942075 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 15:30:26.942312 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:30:26.942341 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 15:30:26.942469 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:30:26.942498 1 main.go:227] handling current node\nI0521 15:30:36.948089 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:30:36.948146 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 15:30:36.948403 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:30:36.948429 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 15:30:36.948563 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:30:36.948589 1 main.go:227] handling current node\nI0521 15:30:46.954343 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:30:46.954393 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 15:30:46.954602 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:30:46.954634 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 15:30:46.954785 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:30:46.954819 1 main.go:227] handling current node\nI0521 15:30:56.961094 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:30:56.961144 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 15:30:56.961392 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:30:56.961450 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 15:30:56.961592 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:30:56.961617 1 main.go:227] 
handling current node\nI0521 15:31:06.967547 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:31:06.967596 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 15:31:06.967821 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:31:06.967856 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 15:31:06.968711 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:31:06.968780 1 main.go:227] handling current node\nI0521 15:31:16.974490 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:31:16.974542 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 15:31:16.974782 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:31:16.974807 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 15:31:16.974921 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:31:16.974945 1 main.go:227] handling current node\nI0521 15:31:26.981059 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:31:26.981107 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 15:31:26.981334 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:31:26.981358 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 15:31:26.981483 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:31:26.981506 1 main.go:227] handling current node\nI0521 15:31:36.987914 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:31:36.987963 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 15:31:36.988201 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:31:36.988234 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 15:31:36.988373 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:31:36.988397 1 main.go:227] handling current node\nI0521 15:31:46.994778 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 
15:31:46.994831 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 15:31:46.995083 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:31:46.995111 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 15:31:46.995247 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:31:46.995272 1 main.go:227] handling current node\nI0521 15:31:57.001500 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:31:57.001552 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 15:31:57.001765 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:31:57.001786 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 15:31:57.001970 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:31:57.001995 1 main.go:227] handling current node\nI0521 15:32:07.007756 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:32:07.007813 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 15:32:07.008029 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:32:07.008059 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 15:32:07.008187 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:32:07.008217 1 main.go:227] handling current node\nI0521 15:32:17.014201 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:32:17.014274 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 15:32:17.014619 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:32:17.014669 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 15:32:17.014866 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:32:17.014911 1 main.go:227] handling current node\nI0521 15:32:27.020701 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:32:27.020754 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 15:32:27.021023 1 
main.go:223] Handling node with IPs: map[172.18.0.2:{}]
I0521 15:32:27.021061 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24]
I0521 15:32:27.021226 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]
I0521 15:32:27.021256 1 main.go:227] handling current node
I0521 15:32:37.027661 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]
I0521 15:32:37.027717 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24]
I0521 15:32:37.027947 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]
I0521 15:32:37.027977 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24]
I0521 15:32:37.028134 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]
I0521 15:32:37.028163 1 main.go:227] handling current node
...
I0521 15:47:07.934690 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]
I0521 15:47:07.934741 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24]
I0521 15:47:07.934991 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]
I0521 15:47:07.935015 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24]
I0521 15:47:07.935140 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]
I0521 15:47:07.935165 1 main.go:227] handling current node
I0521 15:47:17.942621 1 
main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:47:17.942682 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 15:47:17.942934 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:47:17.942967 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 15:47:17.943099 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:47:17.943123 1 main.go:227] handling current node\nI0521 15:47:27.949571 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:47:27.949620 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 15:47:27.949872 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:47:27.949903 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 15:47:27.950033 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:47:27.950058 1 main.go:227] handling current node\nI0521 15:47:37.956346 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:47:37.956415 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 15:47:37.956649 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:47:37.956673 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 15:47:37.956791 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:47:37.956814 1 main.go:227] handling current node\nI0521 15:47:47.963222 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:47:47.963269 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 15:47:47.963493 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:47:47.963517 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 15:47:47.963639 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:47:47.963663 1 main.go:227] handling current node\nI0521 15:47:57.970635 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:47:57.970697 1 main.go:250] Node 
kali-control-plane has CIDR [10.244.0.0/24] \nI0521 15:47:57.970919 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:47:57.970949 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 15:47:57.971096 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:47:57.971129 1 main.go:227] handling current node\nI0521 15:48:07.977451 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:48:07.977507 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 15:48:07.977729 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:48:07.977758 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 15:48:07.977943 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:48:07.977980 1 main.go:227] handling current node\nI0521 15:48:17.983606 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:48:17.983660 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 15:48:17.983867 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:48:17.983894 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 15:48:17.984030 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:48:17.984058 1 main.go:227] handling current node\nI0521 15:48:27.990079 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:48:27.990141 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 15:48:27.990388 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:48:27.990420 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 15:48:27.990589 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:48:27.990620 1 main.go:227] handling current node\nI0521 15:48:38.120560 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:48:38.120639 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 15:48:38.120902 1 main.go:223] Handling node with IPs: 
map[172.18.0.2:{}]\nI0521 15:48:38.120932 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 15:48:38.121068 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:48:38.121096 1 main.go:227] handling current node\nI0521 15:48:48.126911 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:48:48.126966 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 15:48:48.127189 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:48:48.127213 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 15:48:48.127333 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:48:48.127357 1 main.go:227] handling current node\nI0521 15:48:58.133674 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:48:58.133730 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 15:48:58.134001 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:48:58.134032 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 15:48:58.134167 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:48:58.134195 1 main.go:227] handling current node\nI0521 15:49:08.140125 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:49:08.140179 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 15:49:08.140411 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:49:08.140439 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 15:49:08.140575 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:49:08.140855 1 main.go:227] handling current node\nI0521 15:49:18.146968 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:49:18.147024 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 15:49:18.147247 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:49:18.147276 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 
15:49:18.147417 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:49:18.147445 1 main.go:227] handling current node\nI0521 15:49:28.153787 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:49:28.153899 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 15:49:28.154170 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:49:28.154205 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 15:49:28.154370 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:49:28.154399 1 main.go:227] handling current node\nI0521 15:49:38.160216 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:49:38.160269 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 15:49:38.160496 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:49:38.160524 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 15:49:38.160650 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:49:38.160677 1 main.go:227] handling current node\nI0521 15:49:48.167141 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:49:48.167197 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 15:49:48.167414 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:49:48.167444 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 15:49:48.167578 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:49:48.167607 1 main.go:227] handling current node\nI0521 15:49:58.173966 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:49:58.174027 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 15:49:58.174246 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:49:58.174473 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 15:49:58.174606 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:49:58.174636 1 main.go:227] 
handling current node\nI0521 15:50:08.181010 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:50:08.181066 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 15:50:08.181298 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:50:08.181327 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 15:50:08.181462 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:50:08.181494 1 main.go:227] handling current node\nI0521 15:50:18.188395 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:50:18.188597 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 15:50:18.189445 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:50:18.189517 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 15:50:18.189714 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:50:18.189743 1 main.go:227] handling current node\nI0521 15:50:28.196158 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:50:28.196205 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 15:50:28.196427 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:50:28.196450 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 15:50:28.196590 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:50:28.196616 1 main.go:227] handling current node\nI0521 15:50:38.203400 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:50:38.203454 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 15:50:38.203678 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:50:38.203701 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 15:50:38.203876 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:50:38.203901 1 main.go:227] handling current node\nI0521 15:50:48.210810 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 
15:50:48.210867 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 15:50:48.211091 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:50:48.211118 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 15:50:48.211249 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:50:48.211274 1 main.go:227] handling current node\nI0521 15:50:58.217726 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:50:58.217785 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 15:50:58.218054 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:50:58.218085 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 15:50:58.218222 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:50:58.218252 1 main.go:227] handling current node\nI0521 15:51:08.224576 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:51:08.224625 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 15:51:08.224841 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:51:08.224865 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 15:51:08.224992 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:51:08.225311 1 main.go:227] handling current node\nI0521 15:51:18.231817 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:51:18.231869 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 15:51:18.232103 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:51:18.232126 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 15:51:18.232285 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:51:18.232311 1 main.go:227] handling current node\nI0521 15:51:28.238779 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:51:28.238849 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 15:51:28.239075 1 
main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:51:28.239103 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 15:51:28.239233 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:51:28.239260 1 main.go:227] handling current node\nI0521 15:51:38.245793 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:51:38.245885 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 15:51:38.246117 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:51:38.246141 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 15:51:38.246259 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:51:38.246284 1 main.go:227] handling current node\nI0521 15:51:48.420634 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:51:48.420701 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 15:51:48.420972 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:51:48.420997 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 15:51:48.421119 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:51:48.421143 1 main.go:227] handling current node\nI0521 15:51:58.427305 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:51:58.427358 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 15:51:58.427580 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:51:58.428056 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 15:51:58.428223 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:51:58.428264 1 main.go:227] handling current node\nI0521 15:52:08.434925 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:52:08.434977 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 15:52:08.435221 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:52:08.435247 1 main.go:250] Node kali-worker 
has CIDR [10.244.1.0/24] \nI0521 15:52:08.435383 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:52:08.435406 1 main.go:227] handling current node\nI0521 15:52:18.441994 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:52:18.442064 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 15:52:18.442283 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:52:18.442312 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 15:52:18.442467 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:52:18.442496 1 main.go:227] handling current node\nI0521 15:52:28.449344 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:52:28.449397 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 15:52:28.449627 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:52:28.449650 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 15:52:28.449772 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:52:28.449799 1 main.go:227] handling current node\nI0521 15:52:38.456442 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:52:38.456492 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 15:52:38.456704 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:52:38.456728 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 15:52:38.456856 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:52:38.456880 1 main.go:227] handling current node\nI0521 15:52:48.463264 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:52:48.463319 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 15:52:48.463545 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:52:48.463572 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 15:52:48.463721 1 main.go:223] Handling node with IPs: 
map[172.18.0.4:{}]\nI0521 15:52:48.463748 1 main.go:227] handling current node\nI0521 15:52:58.470020 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:52:58.470081 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 15:52:58.470306 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:52:58.470334 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 15:52:58.470465 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:52:58.470492 1 main.go:227] handling current node\nI0521 15:53:08.476670 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:53:08.476743 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 15:53:08.476962 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:53:08.476991 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 15:53:08.477119 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:53:08.477149 1 main.go:227] handling current node\nI0521 15:53:18.483058 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:53:18.483109 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 15:53:18.483338 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:53:18.483362 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 15:53:18.483493 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:53:18.483517 1 main.go:227] handling current node\nI0521 15:53:28.521331 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:53:28.521407 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 15:53:28.521679 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:53:28.521707 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 15:53:28.619339 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:53:28.619383 1 main.go:227] handling current node\nI0521 15:53:38.625043 1 
main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:53:38.625102 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 15:53:38.625319 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:53:38.625347 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 15:53:38.625477 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:53:38.625507 1 main.go:227] handling current node\nI0521 15:53:48.632026 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:53:48.632081 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 15:53:48.632319 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:53:48.632344 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 15:53:48.632474 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:53:48.632499 1 main.go:227] handling current node\nI0521 15:53:58.637986 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:53:58.638042 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 15:53:58.638256 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:53:58.638283 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 15:53:58.638411 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:53:58.638441 1 main.go:227] handling current node\nI0521 15:54:08.644430 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:54:08.644487 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 15:54:08.644706 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:54:08.644734 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 15:54:08.644865 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:54:08.644893 1 main.go:227] handling current node\nI0521 15:54:18.651368 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:54:18.651423 1 main.go:250] Node 
kali-control-plane has CIDR [10.244.0.0/24] \nI0521 15:54:18.651637 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:54:18.651666 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 15:54:18.651800 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:54:18.651828 1 main.go:227] handling current node\nI0521 15:54:28.658305 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:54:28.658370 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 15:54:28.658602 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:54:28.658631 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 15:54:28.658760 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:54:28.658787 1 main.go:227] handling current node\nI0521 15:54:38.665104 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:54:38.665167 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 15:54:38.665411 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:54:38.665445 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 15:54:38.665609 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:54:38.665640 1 main.go:227] handling current node\nI0521 15:54:48.671775 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:54:48.671830 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 15:54:48.672069 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:54:48.672097 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 15:54:48.672234 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:54:48.672262 1 main.go:227] handling current node\nI0521 15:54:58.678826 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:54:58.678880 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 15:54:58.679112 1 main.go:223] Handling node with IPs: 
map[172.18.0.2:{}]\nI0521 15:54:58.679141 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 15:54:58.679299 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:54:58.679330 1 main.go:227] handling current node\nI0521 15:55:08.685402 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:55:08.685458 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 15:55:08.685739 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:55:08.685768 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 15:55:08.685922 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:55:08.685953 1 main.go:227] handling current node\nI0521 15:55:18.691804 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:55:18.691863 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 15:55:18.692123 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:55:18.692152 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 15:55:18.692281 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:55:18.692311 1 main.go:227] handling current node\nI0521 15:55:28.698603 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:55:28.698658 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 15:55:28.698882 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:55:28.698910 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 15:55:28.699043 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:55:28.699072 1 main.go:227] handling current node\nI0521 15:55:38.705294 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:55:38.705350 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 15:55:38.705593 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:55:38.705623 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 
15:55:38.705757 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:55:38.705789 1 main.go:227] handling current node\nI0521 15:55:48.711992 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:55:48.712046 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 15:55:48.712280 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:55:48.712310 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 15:55:48.712444 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:55:48.712473 1 main.go:227] handling current node\nI0521 15:55:58.719905 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:55:58.719963 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 15:55:58.720195 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:55:58.720225 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 15:55:58.720360 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:55:58.720389 1 main.go:227] handling current node\nI0521 15:56:08.728088 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:56:08.728142 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 15:56:08.728365 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:56:08.728393 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 15:56:08.728538 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:56:08.728567 1 main.go:227] handling current node\nI0521 15:56:18.734550 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:56:18.734605 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 15:56:18.734817 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:56:18.734846 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 15:56:18.734985 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:56:18.735013 1 main.go:227] 
handling current node
I0521 15:56:28.742010 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]
I0521 15:56:28.742062 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24]
I0521 15:56:28.742284 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]
I0521 15:56:28.742311 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24]
I0521 15:56:28.742458 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]
I0521 15:56:28.742486 1 main.go:227] handling current node
[... identical kindnet node-polling cycles (same three nodes, same CIDRs) repeated every 10 seconds from 15:56:38 through 16:11:00, elided ...]
I0521 16:11:10.168318 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]
I0521 16:11:10.168377 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24]
I0521 16:11:10.168593 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]
I0521 16:11:10.168620 1 main.go:250] Node kali-worker 
has CIDR [10.244.1.0/24] \nI0521 16:11:10.168746 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:11:10.168774 1 main.go:227] handling current node\nI0521 16:11:20.175508 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:11:20.175557 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 16:11:20.175817 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:11:20.175844 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 16:11:20.175979 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:11:20.176003 1 main.go:227] handling current node\nI0521 16:11:30.319846 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:11:30.319915 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 16:11:30.320263 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:11:30.320295 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 16:11:30.320442 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:11:30.320473 1 main.go:227] handling current node\nI0521 16:11:40.327849 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:11:40.327914 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 16:11:40.328170 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:11:40.328197 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 16:11:40.328320 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:11:40.328346 1 main.go:227] handling current node\nI0521 16:11:50.336034 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:11:50.336105 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 16:11:50.336334 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:11:50.336362 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 16:11:50.336481 1 main.go:223] Handling node with IPs: 
map[172.18.0.4:{}]\nI0521 16:11:50.336508 1 main.go:227] handling current node\nI0521 16:12:00.344774 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:12:00.344825 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 16:12:00.345072 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:12:00.345098 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 16:12:00.345226 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:12:00.345251 1 main.go:227] handling current node\nI0521 16:12:10.352250 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:12:10.352301 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 16:12:10.352554 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:12:10.352579 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 16:12:10.352705 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:12:10.352760 1 main.go:227] handling current node\nI0521 16:12:20.359232 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:12:20.359282 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 16:12:20.359492 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:12:20.359512 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 16:12:20.359639 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:12:20.359663 1 main.go:227] handling current node\nI0521 16:12:30.366258 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:12:30.366313 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 16:12:30.366564 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:12:30.366589 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 16:12:30.366711 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:12:30.366733 1 main.go:227] handling current node\nI0521 16:12:40.372510 1 
main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:12:40.372558 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 16:12:40.372776 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:12:40.372800 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 16:12:40.373467 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:12:40.373569 1 main.go:227] handling current node\nI0521 16:12:50.380697 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:12:50.380745 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 16:12:50.380977 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:12:50.381001 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 16:12:50.381122 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:12:50.381145 1 main.go:227] handling current node\nI0521 16:13:00.388659 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:13:00.388716 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 16:13:00.388935 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:13:00.388959 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 16:13:00.389102 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:13:00.389128 1 main.go:227] handling current node\nI0521 16:13:10.396082 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:13:10.396140 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 16:13:10.396375 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:13:10.396406 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 16:13:10.396532 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:13:10.396562 1 main.go:227] handling current node\nI0521 16:13:20.404163 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:13:20.404217 1 main.go:250] Node 
kali-control-plane has CIDR [10.244.0.0/24] \nI0521 16:13:20.404460 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:13:20.404489 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 16:13:20.404626 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:13:20.404657 1 main.go:227] handling current node\nI0521 16:13:30.412506 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:13:30.412571 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 16:13:30.412823 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:13:30.412857 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 16:13:30.412986 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:13:30.413016 1 main.go:227] handling current node\nI0521 16:13:40.420502 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:13:40.420575 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 16:13:40.420856 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:13:40.420893 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 16:13:40.421070 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:13:40.421101 1 main.go:227] handling current node\nI0521 16:13:50.427824 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:13:50.427873 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 16:13:50.428084 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:13:50.428106 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 16:13:50.428225 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:13:50.428247 1 main.go:227] handling current node\nI0521 16:14:00.435189 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:14:00.435246 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 16:14:00.435486 1 main.go:223] Handling node with IPs: 
map[172.18.0.2:{}]\nI0521 16:14:00.435515 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 16:14:00.435640 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:14:00.435669 1 main.go:227] handling current node\nI0521 16:14:10.442715 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:14:10.442770 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 16:14:10.443009 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:14:10.443038 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 16:14:10.443163 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:14:10.443192 1 main.go:227] handling current node\nI0521 16:14:20.450858 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:14:20.450913 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 16:14:20.451138 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:14:20.451166 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 16:14:20.451297 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:14:20.451325 1 main.go:227] handling current node\nI0521 16:14:30.460288 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:14:30.460374 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 16:14:30.460695 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:14:30.460728 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 16:14:30.460873 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:14:30.460904 1 main.go:227] handling current node\nI0521 16:14:40.468666 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:14:40.468733 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 16:14:40.468956 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:14:40.468985 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 
16:14:40.469105 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:14:40.469132 1 main.go:227] handling current node\nI0521 16:14:50.476455 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:14:50.476507 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 16:14:50.476731 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:14:50.476754 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 16:14:50.476884 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:14:50.476909 1 main.go:227] handling current node\nI0521 16:15:00.484066 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:15:00.484110 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 16:15:00.484329 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:15:00.484352 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 16:15:00.484472 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:15:00.484497 1 main.go:227] handling current node\nI0521 16:15:10.491341 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:15:10.491399 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 16:15:10.491636 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:15:10.491664 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 16:15:10.491786 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:15:10.491817 1 main.go:227] handling current node\nI0521 16:15:20.498673 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:15:20.498723 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 16:15:20.498959 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:15:20.498983 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 16:15:20.499108 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:15:20.499133 1 main.go:227] 
handling current node\nI0521 16:15:30.506021 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:15:30.506075 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 16:15:30.506314 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:15:30.506339 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 16:15:30.506483 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:15:30.506508 1 main.go:227] handling current node\nI0521 16:15:40.513433 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:15:40.513487 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 16:15:40.513722 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:15:40.513751 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 16:15:40.513916 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:15:40.513949 1 main.go:227] handling current node\nI0521 16:15:50.520662 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:15:50.520717 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 16:15:50.520939 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:15:50.520972 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 16:15:50.521092 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:15:50.521121 1 main.go:227] handling current node\nI0521 16:16:00.527743 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:16:00.527808 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 16:16:00.528035 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:16:00.528066 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 16:16:00.528199 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:16:00.528229 1 main.go:227] handling current node\nI0521 16:16:10.534724 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 
16:16:10.534778 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 16:16:10.535005 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:16:10.535040 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 16:16:10.535167 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:16:10.535195 1 main.go:227] handling current node\nI0521 16:16:20.543181 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:16:20.543261 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 16:16:20.543480 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:16:20.543501 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 16:16:20.543624 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:16:20.543643 1 main.go:227] handling current node\nI0521 16:16:30.550183 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:16:30.550249 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 16:16:30.550480 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:16:30.550506 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 16:16:30.550632 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:16:30.550658 1 main.go:227] handling current node\nI0521 16:16:40.556385 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:16:40.556432 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 16:16:40.556631 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:16:40.556655 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 16:16:40.556753 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:16:40.556776 1 main.go:227] handling current node\nI0521 16:16:50.564058 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:16:50.564108 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 16:16:50.564341 1 
main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:16:50.564366 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 16:16:50.564501 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:16:50.564527 1 main.go:227] handling current node\nI0521 16:17:00.570736 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:17:00.570789 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 16:17:00.571014 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:17:00.571037 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 16:17:00.571161 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:17:00.571185 1 main.go:227] handling current node\nI0521 16:17:10.577401 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:17:10.577460 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 16:17:10.577699 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:17:10.577729 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 16:17:10.577889 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:17:10.577923 1 main.go:227] handling current node\nI0521 16:17:20.586194 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:17:20.586250 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 16:17:20.586475 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:17:20.586500 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 16:17:20.586638 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:17:20.586663 1 main.go:227] handling current node\nI0521 16:17:30.593053 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:17:30.593109 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 16:17:30.593335 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:17:30.593364 1 main.go:250] Node kali-worker 
has CIDR [10.244.1.0/24] \nI0521 16:17:30.593502 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:17:30.593535 1 main.go:227] handling current node\nI0521 16:17:40.599885 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:17:40.599938 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 16:17:40.600168 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:17:40.600196 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 16:17:40.600323 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:17:40.600351 1 main.go:227] handling current node\nI0521 16:17:50.607144 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:17:50.607205 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 16:17:50.607510 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:17:50.607544 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 16:17:50.607706 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:17:50.607739 1 main.go:227] handling current node\nI0521 16:18:00.614381 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:18:00.614429 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 16:18:00.614656 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:18:00.614680 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 16:18:00.614813 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:18:00.614838 1 main.go:227] handling current node\nI0521 16:18:10.621976 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:18:10.622031 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 16:18:10.622314 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:18:10.622339 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 16:18:10.622463 1 main.go:223] Handling node with IPs: 
map[172.18.0.4:{}]\nI0521 16:18:10.622487 1 main.go:227] handling current node\nI0521 16:18:20.628583 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:18:20.628653 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 16:18:20.628881 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:18:20.628910 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 16:18:20.629028 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:18:20.629058 1 main.go:227] handling current node\nI0521 16:18:30.636178 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:18:30.636240 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 16:18:30.636463 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:18:30.636491 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 16:18:30.636614 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:18:30.636642 1 main.go:227] handling current node\nI0521 16:18:40.643683 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:18:40.643745 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 16:18:40.643977 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:18:40.644002 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 16:18:40.644125 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:18:40.644149 1 main.go:227] handling current node\nI0521 16:18:50.650783 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:18:50.650832 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 16:18:50.651057 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:18:50.651081 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 16:18:50.651205 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:18:50.651230 1 main.go:227] handling current node\nI0521 16:19:00.658353 1 
main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:19:00.658422 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 16:19:00.658657 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:19:00.658682 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 16:19:00.658805 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:19:00.658828 1 main.go:227] handling current node\nI0521 16:19:10.663544 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:19:10.663583 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 16:19:10.663735 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:19:10.663755 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 16:19:10.663861 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:19:10.663900 1 main.go:227] handling current node\nI0521 16:19:20.670034 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:19:20.670078 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 16:19:20.670354 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:19:20.670374 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 16:19:20.671082 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:19:20.671160 1 main.go:227] handling current node\nI0521 16:19:30.676524 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:19:30.676588 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 16:19:30.676818 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:19:30.676842 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 16:19:30.676963 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:19:30.676986 1 main.go:227] handling current node\nI0521 16:19:40.683272 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:19:40.683321 1 main.go:250] Node 
kali-control-plane has CIDR [10.244.0.0/24] \nI0521 16:19:40.683549 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:19:40.683572 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 16:19:40.683701 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:19:40.683726 1 main.go:227] handling current node\nI0521 16:19:50.691333 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:19:50.691391 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 16:19:50.691627 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:19:50.691651 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 16:19:50.691774 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:19:50.691797 1 main.go:227] handling current node\nI0521 16:20:00.697977 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:20:00.698049 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 16:20:00.698336 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:20:00.698370 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 16:20:00.698530 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:20:00.698562 1 main.go:227] handling current node\nI0521 16:20:10.705617 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:20:10.705666 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 16:20:10.705944 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:20:10.705979 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 16:20:10.706124 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:20:10.706151 1 main.go:227] handling current node\nI0521 16:20:20.712983 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:20:20.713035 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 16:20:20.713275 1 main.go:223] Handling node with IPs: 
map[172.18.0.2:{}]\nI0521 16:20:20.713303 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 16:20:20.713438 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:20:20.713465 1 main.go:227] handling current node\nI0521 16:20:30.720171 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:20:30.720233 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 16:20:30.720460 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:20:30.720493 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 16:20:30.720634 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:20:30.720663 1 main.go:227] handling current node\nI0521 16:20:40.727301 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:20:40.727349 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 16:20:40.727593 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:20:40.727618 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 16:20:40.727745 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:20:40.727769 1 main.go:227] handling current node\nI0521 16:20:50.734206 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:20:50.734255 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 16:20:50.734483 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:20:50.734507 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 16:20:50.734641 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:20:50.734665 1 main.go:227] handling current node\nI0521 16:21:00.740666 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:21:00.740714 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 16:21:00.740949 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:21:00.740975 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 
16:21:00.741133 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]
I0521 16:21:00.741159 1 main.go:227] handling current node
I0521 16:21:10.749548 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]
I0521 16:21:10.749600 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24]
I0521 16:21:10.749910 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]
I0521 16:21:10.749936 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24]
I0521 16:21:10.750060 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]
I0521 16:21:10.750086 1 main.go:227] handling current node
...
I0521 16:34:22.097933 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]
I0521 16:34:22.097987 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24]
I0521 16:34:22.098222 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]
I0521 16:34:22.098250 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24]
I0521 16:34:22.098406 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]
I0521 16:34:22.098434 1 main.go:227] 
handling current node\nI0521 16:34:32.105114 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:34:32.105171 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 16:34:32.105416 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:34:32.105445 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 16:34:32.105578 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:34:32.105607 1 main.go:227] handling current node\nI0521 16:34:42.112802 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:34:42.112876 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 16:34:42.113128 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:34:42.113159 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 16:34:42.113291 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:34:42.113323 1 main.go:227] handling current node\nI0521 16:34:52.120331 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:34:52.120379 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 16:34:52.120628 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:34:52.120654 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 16:34:52.120788 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:34:52.120812 1 main.go:227] handling current node\nI0521 16:35:02.126614 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:35:02.126655 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 16:35:02.126847 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:35:02.126865 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 16:35:02.126960 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:35:02.126978 1 main.go:227] handling current node\nI0521 16:35:12.133315 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 
16:35:12.133364 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 16:35:12.133593 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:35:12.133616 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 16:35:12.133758 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:35:12.133783 1 main.go:227] handling current node\nI0521 16:35:22.139192 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:35:22.139242 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 16:35:22.139513 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:35:22.139537 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 16:35:22.139662 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:35:22.139685 1 main.go:227] handling current node\nI0521 16:35:32.148176 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:35:32.148252 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 16:35:32.148520 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:35:32.148547 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 16:35:32.148713 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:35:32.148741 1 main.go:227] handling current node\nI0521 16:35:42.156260 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:35:42.156330 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 16:35:42.156548 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:35:42.156576 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 16:35:42.156699 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:35:42.156727 1 main.go:227] handling current node\nI0521 16:35:52.164375 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:35:52.164424 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 16:35:52.164660 1 
main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:35:52.164682 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 16:35:52.164806 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:35:52.164829 1 main.go:227] handling current node\nI0521 16:36:02.171394 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:36:02.171445 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 16:36:02.171665 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:36:02.171689 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 16:36:02.171820 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:36:02.171844 1 main.go:227] handling current node\nI0521 16:36:12.178494 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:36:12.178553 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 16:36:12.178785 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:36:12.178813 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 16:36:12.178940 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:36:12.178969 1 main.go:227] handling current node\nI0521 16:36:22.185385 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:36:22.185443 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 16:36:22.185671 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:36:22.185700 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 16:36:22.185881 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:36:22.185922 1 main.go:227] handling current node\nI0521 16:36:32.193267 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:36:32.193369 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 16:36:32.193756 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:36:32.193791 1 main.go:250] Node kali-worker 
has CIDR [10.244.1.0/24] \nI0521 16:36:32.193967 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:36:32.193995 1 main.go:227] handling current node\nI0521 16:36:42.201108 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:36:42.201159 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 16:36:42.201401 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:36:42.201426 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 16:36:42.201555 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:36:42.201580 1 main.go:227] handling current node\nI0521 16:36:52.208198 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:36:52.208255 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 16:36:52.208470 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:36:52.208498 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 16:36:52.208629 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:36:52.208657 1 main.go:227] handling current node\nI0521 16:37:02.214967 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:37:02.215017 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 16:37:02.215216 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:37:02.215237 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 16:37:02.215359 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:37:02.215382 1 main.go:227] handling current node\nI0521 16:37:12.225388 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:37:12.225447 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 16:37:12.225748 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:37:12.225778 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 16:37:12.225936 1 main.go:223] Handling node with IPs: 
map[172.18.0.4:{}]\nI0521 16:37:12.225971 1 main.go:227] handling current node\nI0521 16:37:22.232297 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:37:22.232348 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 16:37:22.232585 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:37:22.232859 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 16:37:22.232987 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:37:22.233017 1 main.go:227] handling current node\nI0521 16:37:32.239625 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:37:32.239680 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 16:37:32.239914 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:37:32.239939 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 16:37:32.240064 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:37:32.240087 1 main.go:227] handling current node\nI0521 16:37:42.246581 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:37:42.246641 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 16:37:42.246900 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:37:42.246941 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 16:37:42.247081 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:37:42.247112 1 main.go:227] handling current node\nI0521 16:37:52.254402 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:37:52.254458 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 16:37:52.254711 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:37:52.254732 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 16:37:52.254855 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:37:52.254884 1 main.go:227] handling current node\nI0521 16:38:02.261379 1 
main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:38:02.261427 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 16:38:02.261656 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:38:02.261683 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 16:38:02.261866 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:38:02.261899 1 main.go:227] handling current node\nI0521 16:38:12.321084 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:38:12.321145 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 16:38:12.321364 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:38:12.321394 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 16:38:12.321533 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:38:12.321562 1 main.go:227] handling current node\nI0521 16:38:22.328066 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:38:22.328118 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 16:38:22.328351 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:38:22.328374 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 16:38:22.328510 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:38:22.328534 1 main.go:227] handling current node\nI0521 16:38:32.334650 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:38:32.334715 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 16:38:32.334988 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:38:32.335018 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 16:38:32.335145 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:38:32.335173 1 main.go:227] handling current node\nI0521 16:38:42.420630 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:38:42.420698 1 main.go:250] Node 
kali-control-plane has CIDR [10.244.0.0/24] \nI0521 16:38:42.420895 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:38:42.420918 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 16:38:42.421027 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:38:42.421051 1 main.go:227] handling current node\nI0521 16:38:52.427902 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:38:52.427950 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 16:38:52.428177 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:38:52.428202 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 16:38:52.428326 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:38:52.428351 1 main.go:227] handling current node\nI0521 16:39:02.435118 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:39:02.435179 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 16:39:02.435396 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:39:02.435425 1 main.go:250] Node kali-worker has CIDR [10.244.1.0/24] \nI0521 16:39:02.435551 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:39:02.435580 1 main.go:227] handling current node\n==== END logs for container kindnet-cni of pod kube-system/kindnet-n7f64 ====\n==== START logs for container kindnet-cni of pod kube-system/kindnet-vlqfv ====\nI0521 15:13:54.324057 1 main.go:316] probe TCP address kali-control-plane:6443\nI0521 15:13:54.620273 1 main.go:102] connected to apiserver: https://kali-control-plane:6443\nI0521 15:13:54.620304 1 main.go:107] hostIP = 172.18.0.2\npodIP = 172.18.0.2\nI0521 15:13:54.620602 1 main.go:116] setting mtu 1500 for CNI \nI0521 15:13:54.620630 1 main.go:146] kindnetd IP family: \"ipv4\"\nI0521 15:13:54.620693 1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]\nI0521 15:13:55.721989 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 
15:13:55.722094 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 15:13:55.722505 1 routes.go:46] Adding route {Ifindex: 0 Dst: 10.244.0.0/24 Src: Gw: 172.18.0.3 Flags: [] Table: 0} \nI0521 15:13:55.722681 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:13:55.722727 1 main.go:227] handling current node\nI0521 15:13:55.932094 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:13:55.932146 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 15:13:55.932320 1 routes.go:46] Adding route {Ifindex: 0 Dst: 10.244.2.0/24 Src: Gw: 172.18.0.4 Flags: [] Table: 0} \nI0521 15:14:05.939045 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:14:05.939094 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 15:14:05.939348 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:14:05.939378 1 main.go:227] handling current node\nI0521 15:14:05.939401 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:14:05.939416 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 15:14:15.945652 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:14:15.945712 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 15:14:15.945969 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:14:15.946057 1 main.go:227] handling current node\nI0521 15:14:15.946080 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:14:15.946119 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 15:14:25.952291 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:14:25.952339 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 15:14:25.952550 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:14:25.952578 1 main.go:227] handling current node\nI0521 15:14:25.952601 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:14:25.952621 1 
main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 15:14:35.958131 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:14:35.958180 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 15:14:35.958409 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:14:35.958436 1 main.go:227] handling current node\nI0521 15:14:35.958460 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:14:35.958475 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 15:14:45.963987 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:14:45.964035 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 15:14:45.964309 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:14:45.964338 1 main.go:227] handling current node\nI0521 15:14:45.964361 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:14:45.964375 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 15:14:55.970643 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:14:55.970688 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 15:14:55.970885 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:14:55.970906 1 main.go:227] handling current node\nI0521 15:14:55.970927 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:14:55.970942 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 15:15:05.978339 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:15:05.978386 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 15:15:05.978617 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:15:05.978645 1 main.go:227] handling current node\nI0521 15:15:05.978693 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:15:05.978710 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 15:15:15.983984 1 main.go:223] Handling 
node with IPs: map[172.18.0.3:{}]\nI0521 15:15:15.984032 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 15:15:15.984232 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:15:15.984260 1 main.go:227] handling current node\nI0521 15:15:15.984295 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:15:15.984311 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 15:15:25.990225 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:15:25.990272 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 15:15:25.990556 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:15:25.990584 1 main.go:227] handling current node\nI0521 15:15:25.990607 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:15:25.990625 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 15:15:36.005412 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:15:36.005485 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 15:15:36.005756 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:15:36.005798 1 main.go:227] handling current node\nI0521 15:15:36.005893 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:15:36.005927 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 15:15:46.020172 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:15:46.020253 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 15:15:46.020594 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:15:46.020624 1 main.go:227] handling current node\nI0521 15:15:46.020647 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:15:46.020660 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 15:15:56.026780 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:15:56.026834 1 main.go:250] Node kali-control-plane has CIDR 
[10.244.0.0/24] \nI0521 15:15:56.027080 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:15:56.027107 1 main.go:227] handling current node\nI0521 15:15:56.027133 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:15:56.027146 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 15:16:06.031786 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:16:06.031828 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 15:16:06.032035 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:16:06.032062 1 main.go:227] handling current node\nI0521 15:16:06.032082 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:16:06.032093 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 15:16:16.037673 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:16:16.037733 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 15:16:16.038004 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:16:16.038036 1 main.go:227] handling current node\nI0521 15:16:16.038057 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:16:16.038069 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 15:16:26.044758 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:16:26.044817 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 15:16:26.045028 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:16:26.045058 1 main.go:227] handling current node\nI0521 15:16:26.045081 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:16:26.045100 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 15:16:36.050776 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:16:36.050828 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 15:16:36.051029 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 
15:16:36.051057 1 main.go:227] handling current node\nI0521 15:16:36.051077 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:16:36.051095 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 15:16:46.057366 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:16:46.057423 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 15:16:46.057653 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:16:46.057685 1 main.go:227] handling current node\nI0521 15:16:46.057708 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:16:46.057733 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 15:16:56.064575 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:16:56.064635 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 15:16:56.064912 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:16:56.064946 1 main.go:227] handling current node\nI0521 15:16:56.064971 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:16:56.064989 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 15:17:06.071152 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:17:06.071204 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 15:17:06.071429 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:17:06.071456 1 main.go:227] handling current node\nI0521 15:17:06.071482 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:17:06.071498 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 15:17:16.082783 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:17:16.082865 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 15:17:16.083138 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:17:16.083173 1 main.go:227] handling current node\nI0521 15:17:16.083199 1 main.go:223] Handling node 
with IPs: map[172.18.0.4:{}]\nI0521 15:17:16.083214 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 15:17:26.088570 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:17:26.088624 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 15:17:26.088834 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:17:26.088863 1 main.go:227] handling current node\nI0521 15:17:26.088884 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:17:26.088909 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 15:17:36.094870 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:17:36.094928 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 15:17:36.095139 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:17:36.095170 1 main.go:227] handling current node\nI0521 15:17:36.095192 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:17:36.095211 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 15:17:46.101123 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:17:46.101182 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 15:17:46.101446 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:17:46.101479 1 main.go:227] handling current node\nI0521 15:17:46.101502 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:17:46.101524 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 15:17:56.107692 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:17:56.107748 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 15:17:56.107997 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:17:56.108028 1 main.go:227] handling current node\nI0521 15:17:56.108051 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:17:56.108071 1 main.go:250] Node kali-worker2 has CIDR 
[10.244.2.0/24] \nI0521 15:18:06.114497 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:18:06.114565 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 15:18:06.114810 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:18:06.114844 1 main.go:227] handling current node\nI0521 15:18:06.114867 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:18:06.114880 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 15:18:16.123321 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:18:16.123367 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 15:18:16.123660 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:18:16.123692 1 main.go:227] handling current node\nI0521 15:18:16.123715 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:18:16.123727 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 15:18:26.129498 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:18:26.129544 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 15:18:26.129789 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:18:26.129863 1 main.go:227] handling current node\nI0521 15:18:26.129887 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:18:26.129902 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 15:18:36.136776 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:18:36.136825 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 15:18:36.137055 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:18:36.137083 1 main.go:227] handling current node\nI0521 15:18:36.137106 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:18:36.137121 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 15:18:46.144110 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 
I0521 15:18:46.144159 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24]
I0521 15:18:46.144375 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]
I0521 15:18:46.144404 1 main.go:227] handling current node
I0521 15:18:46.144428 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]
I0521 15:18:46.144443 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24]
I0521 15:18:56.150857 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]
I0521 15:18:56.150911 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24]
I0521 15:18:56.151121 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]
I0521 15:18:56.151149 1 main.go:227] handling current node
I0521 15:18:56.151171 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]
I0521 15:18:56.151187 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24]
[... the same six-message node-sync cycle (nodes 172.18.0.3 / kali-control-plane, 172.18.0.2 / current node, 172.18.0.4 / kali-worker2) repeats every ~10s through 15:32:46; entries are identical except for timestamps ...]
I0521 15:32:46.856531 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]
I0521 15:32:46.856584 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24]
I0521 15:32:46.856799 1 main.go:223] Handling node with IPs: 
map[172.18.0.2:{}]\nI0521 15:32:46.856829 1 main.go:227] handling current node\nI0521 15:32:46.856854 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:32:46.856874 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 15:32:56.862959 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:32:56.863015 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 15:32:56.863244 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:32:56.863275 1 main.go:227] handling current node\nI0521 15:32:56.863297 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:32:56.863319 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 15:33:06.869528 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:33:06.869585 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 15:33:06.869877 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:33:06.869911 1 main.go:227] handling current node\nI0521 15:33:06.869933 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:33:06.869958 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 15:33:16.876401 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:33:16.876473 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 15:33:16.876725 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:33:16.876757 1 main.go:227] handling current node\nI0521 15:33:16.876780 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:33:16.876792 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 15:33:26.882922 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:33:26.882981 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 15:33:26.883200 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:33:26.883231 1 main.go:227] handling current node\nI0521 15:33:26.883262 1 
main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:33:26.883278 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 15:33:36.889323 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:33:36.889381 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 15:33:36.889627 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:33:36.889660 1 main.go:227] handling current node\nI0521 15:33:36.889682 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:33:36.889698 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 15:33:46.896389 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:33:46.896464 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 15:33:46.896810 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:33:46.896928 1 main.go:227] handling current node\nI0521 15:33:46.897123 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:33:46.897191 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 15:33:56.905155 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:33:56.905217 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 15:33:56.905481 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:33:56.905524 1 main.go:227] handling current node\nI0521 15:33:56.905556 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:33:56.905572 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 15:34:06.912546 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:34:06.912605 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 15:34:06.912838 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:34:06.912869 1 main.go:227] handling current node\nI0521 15:34:06.912892 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:34:06.912914 1 main.go:250] Node 
kali-worker2 has CIDR [10.244.2.0/24] \nI0521 15:34:16.919456 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:34:16.919526 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 15:34:16.919766 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:34:16.919797 1 main.go:227] handling current node\nI0521 15:34:16.919819 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:34:16.919831 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 15:34:26.925768 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:34:26.925870 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 15:34:26.926108 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:34:26.926139 1 main.go:227] handling current node\nI0521 15:34:26.926161 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:34:26.926174 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 15:34:36.932539 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:34:36.932598 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 15:34:36.932834 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:34:36.932866 1 main.go:227] handling current node\nI0521 15:34:36.932889 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:34:36.932909 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 15:34:46.939519 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:34:46.939581 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 15:34:46.939792 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:34:46.939825 1 main.go:227] handling current node\nI0521 15:34:46.939847 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:34:46.939868 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 15:34:56.945875 1 main.go:223] Handling node with IPs: 
map[172.18.0.3:{}]\nI0521 15:34:56.945928 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 15:34:56.946156 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:34:56.946183 1 main.go:227] handling current node\nI0521 15:34:56.946208 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:34:56.946226 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 15:35:07.220165 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:35:07.220239 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 15:35:07.220476 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:35:07.220502 1 main.go:227] handling current node\nI0521 15:35:07.220529 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:35:07.220544 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 15:35:17.226794 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:35:17.226860 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 15:35:17.227098 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:35:17.227127 1 main.go:227] handling current node\nI0521 15:35:17.227153 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:35:17.227168 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 15:35:27.234103 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:35:27.234150 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 15:35:27.234425 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:35:27.234453 1 main.go:227] handling current node\nI0521 15:35:27.234477 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:35:27.234497 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 15:35:37.241067 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:35:37.241137 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] 
\nI0521 15:35:37.241373 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:35:37.241409 1 main.go:227] handling current node\nI0521 15:35:37.241434 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:35:37.241453 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 15:35:47.248100 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:35:47.248156 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 15:35:47.248380 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:35:47.248412 1 main.go:227] handling current node\nI0521 15:35:47.248435 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:35:47.248456 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 15:35:57.255265 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:35:57.255318 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 15:35:57.255555 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:35:57.255588 1 main.go:227] handling current node\nI0521 15:35:57.255612 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:35:57.255631 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 15:36:07.262714 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:36:07.262760 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 15:36:07.262976 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:36:07.263003 1 main.go:227] handling current node\nI0521 15:36:07.263025 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:36:07.263040 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 15:36:17.270142 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:36:17.270192 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 15:36:17.270424 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:36:17.270452 1 
main.go:227] handling current node\nI0521 15:36:17.270479 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:36:17.270497 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 15:36:27.321128 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:36:27.321271 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 15:36:27.321584 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:36:27.321620 1 main.go:227] handling current node\nI0521 15:36:27.321648 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:36:27.321669 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 15:36:37.328673 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:36:37.328721 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 15:36:37.328928 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:36:37.328952 1 main.go:227] handling current node\nI0521 15:36:37.328974 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:36:37.328987 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 15:36:47.335545 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:36:47.335610 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 15:36:47.335872 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:36:47.335907 1 main.go:227] handling current node\nI0521 15:36:47.335946 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:36:47.335974 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 15:36:57.342810 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:36:57.342873 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 15:36:57.343143 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:36:57.343175 1 main.go:227] handling current node\nI0521 15:36:57.343200 1 main.go:223] Handling node with IPs: 
map[172.18.0.4:{}]\nI0521 15:36:57.343221 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 15:37:07.349877 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:37:07.349928 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 15:37:07.350171 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:37:07.350197 1 main.go:227] handling current node\nI0521 15:37:07.350223 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:37:07.350241 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 15:37:17.357546 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:37:17.357608 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 15:37:17.357917 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:37:17.357954 1 main.go:227] handling current node\nI0521 15:37:17.357988 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:37:17.358003 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 15:37:27.364911 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:37:27.364971 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 15:37:27.365199 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:37:27.365231 1 main.go:227] handling current node\nI0521 15:37:27.365253 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:37:27.365274 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 15:37:37.372344 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:37:37.372400 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 15:37:37.372618 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:37:37.372648 1 main.go:227] handling current node\nI0521 15:37:37.372671 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:37:37.372690 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 
15:37:47.379742 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:37:47.379806 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 15:37:47.380053 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:37:47.380085 1 main.go:227] handling current node\nI0521 15:37:47.380108 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:37:47.380137 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 15:37:57.387092 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:37:57.387166 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 15:37:57.387390 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:37:57.387420 1 main.go:227] handling current node\nI0521 15:37:57.387443 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:37:57.387456 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 15:38:07.395897 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:38:07.396004 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 15:38:07.420161 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:38:07.420235 1 main.go:227] handling current node\nI0521 15:38:07.420267 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:38:07.420283 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 15:38:17.427423 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:38:17.427482 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 15:38:17.427711 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:38:17.427743 1 main.go:227] handling current node\nI0521 15:38:17.427765 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:38:17.427784 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 15:38:27.434518 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:38:27.434579 1 
main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 15:38:27.434808 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:38:27.434840 1 main.go:227] handling current node\nI0521 15:38:27.434865 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:38:27.434885 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 15:38:37.441449 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:38:37.441511 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 15:38:37.441735 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:38:37.441767 1 main.go:227] handling current node\nI0521 15:38:37.441791 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:38:37.441843 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 15:38:47.448737 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:38:47.448790 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 15:38:47.449019 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:38:47.449046 1 main.go:227] handling current node\nI0521 15:38:47.449072 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:38:47.449086 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 15:38:57.456021 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:38:57.456086 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 15:38:57.456336 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:38:57.456368 1 main.go:227] handling current node\nI0521 15:38:57.456393 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:38:57.456412 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 15:39:07.463325 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:39:07.463375 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 15:39:07.463595 1 main.go:223] 
Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:39:07.463622 1 main.go:227] handling current node\nI0521 15:39:07.463648 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:39:07.463663 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 15:39:17.470319 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:39:17.470370 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 15:39:17.470594 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:39:17.470621 1 main.go:227] handling current node\nI0521 15:39:17.470674 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:39:17.470691 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 15:39:27.477088 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:39:27.477150 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 15:39:27.477378 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:39:27.477408 1 main.go:227] handling current node\nI0521 15:39:27.477434 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:39:27.477449 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 15:39:37.483964 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:39:37.484015 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 15:39:37.484235 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:39:37.484262 1 main.go:227] handling current node\nI0521 15:39:37.484287 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:39:37.484305 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 15:39:47.520747 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:39:47.520832 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 15:39:47.521104 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:39:47.521135 1 main.go:227] handling current 
node\nI0521 15:39:47.521161 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:39:47.521179 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 15:39:57.527191 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:39:57.527253 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 15:39:57.527484 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:39:57.527517 1 main.go:227] handling current node\nI0521 15:39:57.527540 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:39:57.527559 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 15:40:07.535095 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:40:07.535156 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 15:40:07.535382 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:40:07.535413 1 main.go:227] handling current node\nI0521 15:40:07.535434 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:40:07.535454 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 15:40:17.542776 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:40:17.542834 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 15:40:17.543067 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:40:17.543099 1 main.go:227] handling current node\nI0521 15:40:17.543121 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:40:17.543146 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 15:40:27.550315 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:40:27.550372 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 15:40:27.550636 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:40:27.550669 1 main.go:227] handling current node\nI0521 15:40:27.550691 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 
15:40:27.550712 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 15:40:37.558285 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:40:37.558342 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 15:40:37.558573 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:40:37.558676 1 main.go:227] handling current node\nI0521 15:40:37.558698 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:40:37.558711 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 15:40:47.566297 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:40:47.566356 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 15:40:47.566612 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:40:47.566650 1 main.go:227] handling current node\nI0521 15:40:47.566672 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:40:47.566684 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 15:40:57.573798 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:40:57.573900 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 15:40:57.574122 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:40:57.574156 1 main.go:227] handling current node\nI0521 15:40:57.574178 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:40:57.574200 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 15:41:07.719750 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:41:07.719825 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 15:41:07.720071 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:41:07.720098 1 main.go:227] handling current node\nI0521 15:41:07.720124 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:41:07.720142 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 15:41:17.726415 1 
main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:41:17.726466 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 15:41:17.726691 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:41:17.726718 1 main.go:227] handling current node\nI0521 15:41:17.726743 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:41:17.726761 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 15:41:27.732820 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:41:27.732872 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 15:41:27.733131 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:41:27.733160 1 main.go:227] handling current node\nI0521 15:41:27.733184 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:41:27.733197 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 15:41:37.739428 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:41:37.739488 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 15:41:37.739697 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:41:37.739730 1 main.go:227] handling current node\nI0521 15:41:37.739787 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:41:37.739809 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 15:41:47.745757 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:41:47.745831 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 15:41:47.746059 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:41:47.746086 1 main.go:227] handling current node\nI0521 15:41:47.746112 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:41:47.746125 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 15:41:57.752592 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:41:57.752651 1 main.go:250] Node 
kali-control-plane has CIDR [10.244.0.0/24] \nI0521 15:41:57.752873 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:41:57.752905 1 main.go:227] handling current node\nI0521 15:41:57.752929 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:41:57.752947 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 15:42:07.759044 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:42:07.759090 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 15:42:07.759315 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:42:07.759342 1 main.go:227] handling current node\nI0521 15:42:07.759365 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:42:07.759381 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 15:42:17.765518 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:42:17.765566 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 15:42:17.765998 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:42:17.766029 1 main.go:227] handling current node\nI0521 15:42:17.766052 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:42:17.766064 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 15:42:27.773351 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:42:27.773445 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 15:42:27.820028 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:42:27.820118 1 main.go:227] handling current node\nI0521 15:42:27.820144 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:42:27.820159 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 15:42:37.826822 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:42:37.826896 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 15:42:37.827166 1 main.go:223] Handling node with IPs: 
map[172.18.0.2:{}]
I0521 15:42:37.827203       1 main.go:227] handling current node
I0521 15:42:37.827242       1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]
I0521 15:42:37.827268       1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] 
I0521 15:42:47.833653       1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]
I0521 15:42:47.833704       1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] 
I0521 15:42:47.833958       1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]
I0521 15:42:47.833986       1 main.go:227] handling current node
I0521 15:42:47.834011       1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]
I0521 15:42:47.834025       1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] 
[... identical six-line node-handling cycle repeated every ~10s from 15:42:57 through 15:57:08 elided ...]
I0521 15:57:18.845291       1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]
I0521 15:57:18.845344       1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] 
I0521 15:57:18.845570       1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]
I0521 15:57:18.845597       1 main.go:227] handling current node
I0521 15:57:18.845622       1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]
I0521 15:57:18.845638       1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] 
I0521 
15:57:28.852237 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:57:28.852303 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 15:57:28.852595 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:57:28.852628 1 main.go:227] handling current node\nI0521 15:57:28.852795 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:57:28.852906 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 15:57:38.925246 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:57:38.925293 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 15:57:38.925511 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:57:38.925538 1 main.go:227] handling current node\nI0521 15:57:38.925561 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:57:38.925573 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 15:57:48.930853 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:57:48.930892 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 15:57:48.931065 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:57:48.931087 1 main.go:227] handling current node\nI0521 15:57:48.931115 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:57:48.931128 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 15:57:58.935869 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:57:58.935908 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 15:57:58.936094 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:57:58.936117 1 main.go:227] handling current node\nI0521 15:57:58.936135 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:57:58.936146 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 15:58:08.941722 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:58:08.941759 1 
main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 15:58:08.942006 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:58:08.942030 1 main.go:227] handling current node\nI0521 15:58:08.942050 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:58:08.942065 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 15:58:18.946877 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:58:18.946951 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 15:58:18.947254 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:58:18.947301 1 main.go:227] handling current node\nI0521 15:58:18.947335 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:58:18.947365 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 15:58:28.953862 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:58:28.953906 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 15:58:28.954108 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:58:28.954132 1 main.go:227] handling current node\nI0521 15:58:28.954159 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:58:28.954173 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 15:58:38.959753 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:58:38.959801 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 15:58:38.960008 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:58:38.960037 1 main.go:227] handling current node\nI0521 15:58:38.960057 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:58:38.960076 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 15:58:48.966284 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:58:48.966334 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 15:58:48.991864 1 main.go:223] 
Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:58:48.991905 1 main.go:227] handling current node\nI0521 15:58:48.991929 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:58:48.991950 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 15:58:59.005097 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:58:59.005139 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 15:58:59.005344 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:58:59.005414 1 main.go:227] handling current node\nI0521 15:58:59.005434 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:58:59.005447 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 15:59:09.010482 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:59:09.010528 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 15:59:09.010750 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:59:09.010775 1 main.go:227] handling current node\nI0521 15:59:09.010795 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:59:09.010805 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 15:59:19.017031 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:59:19.017083 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 15:59:19.017388 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:59:19.017414 1 main.go:227] handling current node\nI0521 15:59:19.017437 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:59:19.017461 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 15:59:29.022684 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:59:29.022726 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 15:59:29.022948 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:59:29.022970 1 main.go:227] handling current 
node\nI0521 15:59:29.022991 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:59:29.023003 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 15:59:39.028018 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:59:39.028067 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 15:59:39.028294 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:59:39.028321 1 main.go:227] handling current node\nI0521 15:59:39.028343 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:59:39.028355 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 15:59:49.033753 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:59:49.033849 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 15:59:49.034061 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:59:49.034091 1 main.go:227] handling current node\nI0521 15:59:49.034114 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:59:49.034132 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 15:59:59.040455 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 15:59:59.040522 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 15:59:59.040824 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 15:59:59.040854 1 main.go:227] handling current node\nI0521 15:59:59.040878 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 15:59:59.040893 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 16:00:09.047637 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:00:09.047690 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 16:00:09.047942 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:00:09.047975 1 main.go:227] handling current node\nI0521 16:00:09.047995 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 
16:00:09.048015 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 16:00:19.053230 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:00:19.053271 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 16:00:19.053514 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:00:19.053537 1 main.go:227] handling current node\nI0521 16:00:19.053558 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:00:19.053570 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 16:00:29.058979 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:00:29.059023 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 16:00:29.059301 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:00:29.059326 1 main.go:227] handling current node\nI0521 16:00:29.059348 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:00:29.059363 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 16:00:39.064528 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:00:39.064566 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 16:00:39.064813 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:00:39.064834 1 main.go:227] handling current node\nI0521 16:00:39.064853 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:00:39.064863 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 16:00:49.069596 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:00:49.069636 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 16:00:49.069856 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:00:49.069879 1 main.go:227] handling current node\nI0521 16:00:49.069899 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:00:49.070070 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 16:00:59.076507 1 
main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:00:59.076552 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 16:00:59.076812 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:00:59.076839 1 main.go:227] handling current node\nI0521 16:00:59.076862 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:00:59.076875 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 16:01:09.084224 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:01:09.084297 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 16:01:09.084586 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:01:09.084622 1 main.go:227] handling current node\nI0521 16:01:09.084649 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:01:09.084666 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 16:01:19.091329 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:01:19.091387 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 16:01:19.091634 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:01:19.091672 1 main.go:227] handling current node\nI0521 16:01:19.091693 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:01:19.091706 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 16:01:29.097472 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:01:29.097524 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 16:01:29.097745 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:01:29.097774 1 main.go:227] handling current node\nI0521 16:01:29.097798 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:01:29.097905 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 16:01:39.103559 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:01:39.103604 1 main.go:250] Node 
kali-control-plane has CIDR [10.244.0.0/24] \nI0521 16:01:39.103815 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:01:39.103839 1 main.go:227] handling current node\nI0521 16:01:39.103862 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:01:39.103874 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 16:01:49.110643 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:01:49.110725 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 16:01:49.110995 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:01:49.111033 1 main.go:227] handling current node\nI0521 16:01:49.111074 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:01:49.111100 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 16:01:59.116929 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:01:59.116982 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 16:01:59.117233 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:01:59.117262 1 main.go:227] handling current node\nI0521 16:01:59.117289 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:01:59.117310 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 16:02:09.123882 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:02:09.123929 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 16:02:09.124205 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:02:09.124234 1 main.go:227] handling current node\nI0521 16:02:09.124257 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:02:09.124270 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 16:02:19.131698 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:02:19.131750 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 16:02:19.132002 1 main.go:223] Handling node with IPs: 
map[172.18.0.2:{}]\nI0521 16:02:19.132034 1 main.go:227] handling current node\nI0521 16:02:19.132056 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:02:19.132068 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 16:02:29.139706 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:02:29.139770 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 16:02:29.140133 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:02:29.140165 1 main.go:227] handling current node\nI0521 16:02:29.140188 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:02:29.140209 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 16:02:39.145711 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:02:39.145760 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 16:02:39.578585 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:02:39.578662 1 main.go:227] handling current node\nI0521 16:02:39.578701 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:02:39.578718 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 16:02:49.753672 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:02:49.753729 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 16:02:49.754102 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:02:49.754139 1 main.go:227] handling current node\nI0521 16:02:49.754163 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:02:49.754182 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 16:02:59.759504 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:02:59.759545 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 16:02:59.759757 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:02:59.759780 1 main.go:227] handling current node\nI0521 16:02:59.759800 1 
main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:02:59.759812 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 16:03:09.766468 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:03:09.766531 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 16:03:09.766760 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:03:09.766792 1 main.go:227] handling current node\nI0521 16:03:09.766817 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:03:09.766830 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 16:03:19.771822 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:03:19.771877 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 16:03:19.772152 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:03:19.772179 1 main.go:227] handling current node\nI0521 16:03:19.772208 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:03:19.772225 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 16:03:29.777046 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:03:29.777088 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 16:03:29.777259 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:03:29.777280 1 main.go:227] handling current node\nI0521 16:03:29.777296 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:03:29.777306 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 16:03:39.782935 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:03:39.782989 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 16:03:39.783232 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:03:39.783261 1 main.go:227] handling current node\nI0521 16:03:39.783283 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:03:39.783296 1 main.go:250] Node 
kali-worker2 has CIDR [10.244.2.0/24] \nI0521 16:03:49.790686 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:03:49.790745 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 16:03:49.791077 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:03:49.791105 1 main.go:227] handling current node\nI0521 16:03:49.791126 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:03:49.791138 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 16:03:59.796362 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:03:59.796399 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 16:03:59.796586 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:03:59.796605 1 main.go:227] handling current node\nI0521 16:03:59.796632 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:03:59.796643 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 16:04:09.802976 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:04:09.803034 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 16:04:09.803269 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:04:09.803296 1 main.go:227] handling current node\nI0521 16:04:09.803316 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:04:09.803333 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 16:04:19.809695 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:04:19.809745 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 16:04:19.810067 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:04:19.810101 1 main.go:227] handling current node\nI0521 16:04:19.810122 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:04:19.810136 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 16:04:29.816333 1 main.go:223] Handling node with IPs: 
map[172.18.0.3:{}]\nI0521 16:04:29.816379 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 16:04:29.816642 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:04:29.816668 1 main.go:227] handling current node\nI0521 16:04:29.816691 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:04:29.816704 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 16:04:39.823737 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:04:39.823775 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 16:04:39.823995 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:04:39.824014 1 main.go:227] handling current node\nI0521 16:04:39.824038 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:04:39.824050 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 16:04:49.831349 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:04:49.831398 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 16:04:49.831699 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:04:49.831726 1 main.go:227] handling current node\nI0521 16:04:49.831756 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:04:49.831774 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 16:04:59.837692 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:04:59.837746 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 16:04:59.838057 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:04:59.838091 1 main.go:227] handling current node\nI0521 16:04:59.838117 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:04:59.838191 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 16:05:09.845004 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:05:09.845048 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] 
\nI0521 16:05:09.845267 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:05:09.845293 1 main.go:227] handling current node\nI0521 16:05:09.845315 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:05:09.845330 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 16:05:19.851494 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:05:19.851545 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 16:05:19.851756 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:05:19.851785 1 main.go:227] handling current node\nI0521 16:05:19.851808 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:05:19.851821 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 16:05:30.020865 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:05:30.020933 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 16:05:30.021204 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:05:30.021238 1 main.go:227] handling current node\nI0521 16:05:30.021261 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:05:30.021279 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 16:05:40.027066 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:05:40.027122 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 16:05:40.027339 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:05:40.027369 1 main.go:227] handling current node\nI0521 16:05:40.027392 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:05:40.027412 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 16:05:50.033732 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:05:50.033781 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 16:05:50.034044 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:05:50.034072 1 
main.go:227] handling current node\nI0521 16:05:50.034097 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:05:50.034113 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 16:06:00.039803 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:06:00.039849 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 16:06:00.040082 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:06:00.040111 1 main.go:227] handling current node\nI0521 16:06:00.040134 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:06:00.040152 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 16:06:10.046098 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:06:10.046159 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 16:06:10.046379 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:06:10.046410 1 main.go:227] handling current node\nI0521 16:06:10.046435 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:06:10.046454 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 16:06:20.052453 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:06:20.052510 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 16:06:20.052715 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:06:20.052746 1 main.go:227] handling current node\nI0521 16:06:20.052769 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:06:20.052785 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 16:06:30.059003 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:06:30.059050 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 16:06:30.059275 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:06:30.059305 1 main.go:227] handling current node\nI0521 16:06:30.059328 1 main.go:223] Handling node with IPs: 
map[172.18.0.4:{}]
I0521 16:06:30.059342       1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24]
I0521 16:06:40.065567       1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]
I0521 16:06:40.065627       1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24]
I0521 16:06:40.065882       1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]
I0521 16:06:40.065918       1 main.go:227] handling current node
I0521 16:06:40.065942       1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]
I0521 16:06:40.065954       1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24]
[... the same kindnet node-sync cycle repeats every ~10s from 16:06:50 through 16:21:11: each pass handles map[172.18.0.3:{}] (kali-control-plane, CIDR 10.244.0.0/24), map[172.18.0.2:{}] (current node), and map[172.18.0.4:{}] (kali-worker2, CIDR 10.244.2.0/24) ...]
I0521 16:21:21.340662       1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]
I0521 16:21:21.340718       1 main.go:250] Node 
kali-control-plane has CIDR [10.244.0.0/24] \nI0521 16:21:21.340934 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:21:21.340965 1 main.go:227] handling current node\nI0521 16:21:21.340987 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:21:21.341014 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 16:21:31.347243 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:21:31.347286 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 16:21:31.347498 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:21:31.347521 1 main.go:227] handling current node\nI0521 16:21:31.347540 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:21:31.347552 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 16:21:41.354360 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:21:41.354427 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 16:21:41.354853 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:21:41.354895 1 main.go:227] handling current node\nI0521 16:21:41.354928 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:21:41.354953 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 16:21:51.361875 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:21:51.361931 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 16:21:51.362134 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:21:51.362164 1 main.go:227] handling current node\nI0521 16:21:51.362186 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:21:51.362205 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 16:22:01.367785 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:22:01.367855 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 16:22:01.394675 1 main.go:223] Handling node with IPs: 
map[172.18.0.2:{}]\nI0521 16:22:01.394714 1 main.go:227] handling current node\nI0521 16:22:01.394736 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:22:01.394751 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 16:22:11.400659 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:22:11.400716 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 16:22:11.400958 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:22:11.400989 1 main.go:227] handling current node\nI0521 16:22:11.401013 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:22:11.401034 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 16:22:21.407318 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:22:21.407366 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 16:22:21.407600 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:22:21.407628 1 main.go:227] handling current node\nI0521 16:22:21.407651 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:22:21.407665 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 16:22:31.415825 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:22:31.415907 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 16:22:31.416206 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:22:31.416244 1 main.go:227] handling current node\nI0521 16:22:31.416295 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:22:31.416320 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 16:22:41.423683 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:22:41.423738 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 16:22:41.423929 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:22:41.423948 1 main.go:227] handling current node\nI0521 16:22:41.423970 1 
main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:22:41.423988 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 16:22:51.430979 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:22:51.431029 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 16:22:51.431250 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:22:51.431277 1 main.go:227] handling current node\nI0521 16:22:51.431302 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:22:51.431317 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 16:23:01.438145 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:23:01.438209 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 16:23:01.438468 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:23:01.438516 1 main.go:227] handling current node\nI0521 16:23:01.438550 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:23:01.438572 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 16:23:11.445344 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:23:11.445405 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 16:23:11.445633 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:23:11.445661 1 main.go:227] handling current node\nI0521 16:23:11.445686 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:23:11.445701 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 16:23:21.453036 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:23:21.453094 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 16:23:21.453306 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:23:21.453339 1 main.go:227] handling current node\nI0521 16:23:21.453362 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:23:21.453385 1 main.go:250] Node 
kali-worker2 has CIDR [10.244.2.0/24] \nI0521 16:23:31.460806 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:23:31.460864 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 16:23:31.461064 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:23:31.461095 1 main.go:227] handling current node\nI0521 16:23:31.461117 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:23:31.461139 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 16:23:41.468644 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:23:41.468696 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 16:23:41.468921 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:23:41.468948 1 main.go:227] handling current node\nI0521 16:23:41.468973 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:23:41.468992 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 16:23:51.475608 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:23:51.475670 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 16:23:51.475895 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:23:51.475928 1 main.go:227] handling current node\nI0521 16:23:51.475952 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:23:51.475965 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 16:24:01.483332 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:24:01.483389 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 16:24:01.483603 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:24:01.483636 1 main.go:227] handling current node\nI0521 16:24:01.483659 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:24:01.483680 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 16:24:11.492507 1 main.go:223] Handling node with IPs: 
map[172.18.0.3:{}]\nI0521 16:24:11.492582 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 16:24:11.492876 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:24:11.492912 1 main.go:227] handling current node\nI0521 16:24:11.492936 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:24:11.492962 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 16:24:21.499103 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:24:21.499158 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 16:24:21.499374 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:24:21.499406 1 main.go:227] handling current node\nI0521 16:24:21.499427 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:24:21.499439 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 16:24:31.506990 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:24:31.507043 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 16:24:31.507272 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:24:31.507298 1 main.go:227] handling current node\nI0521 16:24:31.507323 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:24:31.507627 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 16:24:41.514589 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:24:41.514646 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 16:24:41.514863 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:24:41.514894 1 main.go:227] handling current node\nI0521 16:24:41.514919 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:24:41.514940 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 16:24:51.521928 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:24:51.521998 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] 
\nI0521 16:24:51.522212 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:24:51.522245 1 main.go:227] handling current node\nI0521 16:24:51.522268 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:24:51.522291 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 16:25:01.529359 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:25:01.529416 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 16:25:01.529621 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:25:01.529653 1 main.go:227] handling current node\nI0521 16:25:01.529675 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:25:01.529697 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 16:25:11.537057 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:25:11.537115 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 16:25:11.537334 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:25:11.537366 1 main.go:227] handling current node\nI0521 16:25:11.537389 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:25:11.537408 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 16:25:21.544880 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:25:21.544937 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 16:25:21.545157 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:25:21.545197 1 main.go:227] handling current node\nI0521 16:25:21.545225 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:25:21.545238 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 16:25:31.551634 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:25:31.551676 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 16:25:31.551881 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:25:31.551903 1 
main.go:227] handling current node\nI0521 16:25:31.551923 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:25:31.551939 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 16:25:41.558477 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:25:41.558534 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 16:25:41.558758 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:25:41.558790 1 main.go:227] handling current node\nI0521 16:25:41.558813 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:25:41.558831 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 16:25:51.565252 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:25:51.565304 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 16:25:51.565521 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:25:51.565551 1 main.go:227] handling current node\nI0521 16:25:51.565574 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:25:51.565593 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 16:26:01.573490 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:26:01.573570 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 16:26:01.719211 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:26:01.719260 1 main.go:227] handling current node\nI0521 16:26:01.719289 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:26:01.719303 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 16:26:11.726047 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:26:11.726108 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 16:26:11.726338 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:26:11.726372 1 main.go:227] handling current node\nI0521 16:26:11.726396 1 main.go:223] Handling node with IPs: 
map[172.18.0.4:{}]\nI0521 16:26:11.726415 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 16:26:21.733234 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:26:21.733309 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 16:26:21.733535 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:26:21.733568 1 main.go:227] handling current node\nI0521 16:26:21.733594 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:26:21.733606 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 16:26:31.740361 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:26:31.740414 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 16:26:31.740648 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:26:31.740677 1 main.go:227] handling current node\nI0521 16:26:31.740706 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:26:31.740719 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 16:26:41.747714 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:26:41.747770 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 16:26:41.747983 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:26:41.748016 1 main.go:227] handling current node\nI0521 16:26:41.748039 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:26:41.748059 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 16:26:51.755818 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:26:51.755867 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 16:26:51.756095 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:26:51.756123 1 main.go:227] handling current node\nI0521 16:26:51.756149 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:26:51.756165 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 
16:27:01.767055 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:27:01.767102 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 16:27:01.767281 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:27:01.767303 1 main.go:227] handling current node\nI0521 16:27:01.767325 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:27:01.767339 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 16:27:11.773309 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:27:11.773363 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 16:27:11.773607 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:27:11.773636 1 main.go:227] handling current node\nI0521 16:27:11.773661 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:27:11.773676 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 16:27:21.779846 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:27:21.779897 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 16:27:21.780130 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:27:21.780157 1 main.go:227] handling current node\nI0521 16:27:21.780183 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:27:21.780196 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 16:27:31.786389 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:27:31.786438 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 16:27:31.786659 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:27:31.786686 1 main.go:227] handling current node\nI0521 16:27:31.786709 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:27:31.786729 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 16:27:41.920847 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:27:41.920950 1 
main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 16:27:41.921234 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:27:41.921268 1 main.go:227] handling current node\nI0521 16:27:41.921298 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:27:41.921324 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 16:27:51.928299 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:27:51.928353 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 16:27:51.928578 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:27:51.928606 1 main.go:227] handling current node\nI0521 16:27:51.928630 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:27:51.928648 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 16:28:01.934961 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:28:01.935009 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 16:28:01.935229 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:28:01.935257 1 main.go:227] handling current node\nI0521 16:28:01.935280 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:28:01.935297 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 16:28:11.941853 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:28:11.941903 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 16:28:11.942131 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:28:11.942158 1 main.go:227] handling current node\nI0521 16:28:11.942181 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:28:11.942196 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 16:28:21.948104 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:28:21.948152 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 16:28:21.948384 1 main.go:223] 
Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:28:21.948414 1 main.go:227] handling current node\nI0521 16:28:21.948437 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:28:21.948450 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 16:28:31.955878 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:28:31.955925 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 16:28:31.956156 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:28:31.956182 1 main.go:227] handling current node\nI0521 16:28:31.956205 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:28:31.956220 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 16:28:41.962711 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:28:41.962758 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 16:28:41.963034 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:28:41.963066 1 main.go:227] handling current node\nI0521 16:28:41.963096 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:28:41.963113 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 16:28:51.969655 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:28:51.969700 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 16:28:51.969948 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:28:51.970209 1 main.go:227] handling current node\nI0521 16:28:51.970232 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:28:51.970246 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 16:29:01.978061 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:29:01.978112 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 16:29:01.978324 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:29:01.978351 1 main.go:227] handling current 
node\nI0521 16:29:01.978376 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:29:01.978391 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 16:29:11.984194 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:29:11.984250 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 16:29:11.984487 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:29:11.984517 1 main.go:227] handling current node\nI0521 16:29:11.984540 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:29:11.984558 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 16:29:21.990790 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:29:21.990837 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 16:29:21.991051 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:29:21.991078 1 main.go:227] handling current node\nI0521 16:29:21.991100 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:29:21.991115 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 16:29:31.997156 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:29:31.997219 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 16:29:31.997432 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:29:31.997464 1 main.go:227] handling current node\nI0521 16:29:31.997489 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:29:31.997508 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 16:29:42.003868 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:29:42.003919 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 16:29:42.004136 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:29:42.004164 1 main.go:227] handling current node\nI0521 16:29:42.004190 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 
16:29:42.004204 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 16:29:52.009059 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:29:52.009107 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 16:29:52.009313 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:29:52.009345 1 main.go:227] handling current node\nI0521 16:29:52.009368 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:29:52.009382 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 16:30:02.016058 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:30:02.016106 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 16:30:02.016308 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:30:02.016336 1 main.go:227] handling current node\nI0521 16:30:02.016359 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:30:02.016374 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 16:30:12.022895 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:30:12.022957 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 16:30:12.023308 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:30:12.023887 1 main.go:227] handling current node\nI0521 16:30:12.023920 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:30:12.023939 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 16:30:22.031184 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:30:22.031233 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 16:30:22.031485 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:30:22.031511 1 main.go:227] handling current node\nI0521 16:30:22.031532 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:30:22.031544 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 16:30:32.038270 1 
main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:30:32.038334 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 16:30:32.038592 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:30:32.038634 1 main.go:227] handling current node\nI0521 16:30:32.038664 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:30:32.038683 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 16:30:42.045224 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:30:42.045283 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 16:30:42.045521 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:30:42.045552 1 main.go:227] handling current node\nI0521 16:30:42.045575 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:30:42.045595 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 16:30:52.051551 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:30:52.051610 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 16:30:52.051841 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:30:52.051871 1 main.go:227] handling current node\nI0521 16:30:52.051897 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:30:52.051909 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 16:31:02.058223 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:31:02.058274 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] \nI0521 16:31:02.058500 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]\nI0521 16:31:02.058526 1 main.go:227] handling current node\nI0521 16:31:02.058551 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]\nI0521 16:31:02.058564 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] \nI0521 16:31:12.064661 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]\nI0521 16:31:12.064703 1 main.go:250] Node 
kali-control-plane has CIDR [10.244.0.0/24] 
I0521 16:31:12.064912 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]
I0521 16:31:12.064934 1 main.go:227] handling current node
I0521 16:31:12.064953 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]
I0521 16:31:12.064963 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] 
I0521 16:31:22.070988 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]
I0521 16:31:22.071032 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] 
I0521 16:31:22.071277 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]
I0521 16:31:22.071313 1 main.go:227] handling current node
I0521 16:31:22.071335 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]
I0521 16:31:22.071347 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] 
[... the same six-line node-reconciliation cycle repeats every ~10s from 16:31:32 through 16:38:52 ...]
I0521 16:39:02.621770 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]
I0521 16:39:02.621861 1 main.go:250] Node kali-control-plane has CIDR [10.244.0.0/24] 
I0521 16:39:02.622081 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]
I0521 16:39:02.622109 1 main.go:227] handling current node
I0521 16:39:02.622135 1 main.go:223] Handling node with IPs: map[172.18.0.4:{}]
I0521 16:39:02.622148 1 main.go:250] Node kali-worker2 has CIDR [10.244.2.0/24] 
==== END logs for container kindnet-cni of pod kube-system/kindnet-vlqfv ====
==== START logs for container kube-apiserver of pod kube-system/kube-apiserver-kali-control-plane ====
Flag --insecure-port has been deprecated, This flag will be removed in a future version.
I0521 15:13:07.994244 1 server.go:625] external host was not specified, using 172.18.0.3
I0521 15:13:07.994585 1 server.go:163] Version: v1.19.11
I0521 15:13:08.464395 1 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
I0521 15:13:08.464432 1 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
I0521 15:13:08.466224 1 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
I0521 15:13:08.466249 1 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
I0521 15:13:08.468415 1 client.go:360] parsed scheme: "endpoint"
I0521 15:13:08.468455 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }]
W0521 15:13:08.468784 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
I0521 15:13:09.462250 1 client.go:360] parsed scheme: "endpoint"
I0521 15:13:09.462317 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }]
W0521 15:13:09.462732 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0521 15:13:09.469520 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
I0521 15:13:10.962083 1 client.go:360] parsed scheme: "endpoint"
I0521 15:13:10.962142 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }]
I0521 15:13:10.975504 1 client.go:360] parsed scheme: "passthrough"
I0521 15:13:10.975589 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }
I0521 15:13:10.975605 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0521 15:13:10.977381 1 client.go:360] parsed scheme: "endpoint"
I0521 15:13:10.977427 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }]
I0521 15:13:11.035624 1 master.go:271] Using reconciler: lease
I0521 15:13:11.036446 1 client.go:360] parsed scheme: "endpoint"
I0521 15:13:11.036486 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }]
[... the same two-line 'parsed scheme: "endpoint"' / 'ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }]' pair repeats for each storage client from 15:13:11.058845 through 15:13:12.092227 ...]
W0521 15:13:12.274881 1 genericapiserver.go:418] Skipping API batch/v2alpha1 because it has no resources.
W0521 15:13:12.296136 1 genericapiserver.go:418] Skipping API discovery.k8s.io/v1alpha1 because it has no resources.
W0521 15:13:12.320761 1 genericapiserver.go:418] Skipping API node.k8s.io/v1alpha1 because it has no resources.
W0521 15:13:12.347664 1 genericapiserver.go:418] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
W0521 15:13:12.351552 1 genericapiserver.go:418] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
W0521 15:13:12.368756 1 genericapiserver.go:418] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
W0521 15:13:12.404934 1 genericapiserver.go:418] Skipping API apps/v1beta2 because it has no resources.
W0521 15:13:12.404959 1 genericapiserver.go:418] Skipping API apps/v1beta1 
because it has no resources.\nI0521 15:13:12.420886 1 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.\nI0521 15:13:12.420907 1 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.\nI0521 15:13:12.425153 1 client.go:360] parsed scheme: \"endpoint\"\nI0521 15:13:12.425186 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }]\nI0521 15:13:12.436005 1 client.go:360] parsed scheme: \"endpoint\"\nI0521 15:13:12.436041 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }]\nI0521 15:13:14.752210 1 dynamic_cafile_content.go:167] Starting request-header::/etc/kubernetes/pki/front-proxy-ca.crt\nI0521 15:13:14.752246 1 dynamic_cafile_content.go:167] Starting client-ca-bundle::/etc/kubernetes/pki/ca.crt\nI0521 15:13:14.752560 1 dynamic_serving_content.go:130] Starting serving-cert::/etc/kubernetes/pki/apiserver.crt::/etc/kubernetes/pki/apiserver.key\nI0521 15:13:14.752920 1 secure_serving.go:197] Serving securely on [::]:6443\nI0521 15:13:14.752965 1 tlsconfig.go:240] Starting DynamicServingCertificateController\nI0521 15:13:14.752986 1 autoregister_controller.go:141] Starting autoregister controller\nI0521 15:13:14.752993 1 cache.go:32] Waiting for caches to sync for autoregister controller\nI0521 15:13:14.753035 1 customresource_discovery_controller.go:209] Starting DiscoveryController\nI0521 15:13:14.753071 1 controller.go:83] Starting OpenAPI AggregationController\nI0521 15:13:14.753133 1 
controller.go:86] Starting OpenAPI controller\nI0521 15:13:14.753166 1 naming_controller.go:291] Starting NamingConditionController\nI0521 15:13:14.753192 1 establishing_controller.go:76] Starting EstablishingController\nI0521 15:13:14.753200 1 apiservice_controller.go:97] Starting APIServiceRegistrationController\nI0521 15:13:14.753214 1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller\nI0521 15:13:14.753273 1 crd_finalizer.go:266] Starting CRDFinalizer\nI0521 15:13:14.753304 1 nonstructuralschema_controller.go:186] Starting NonStructuralSchemaConditionController\nI0521 15:13:14.753341 1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController\nI0521 15:13:14.753721 1 dynamic_serving_content.go:130] Starting aggregator-proxy-cert::/etc/kubernetes/pki/front-proxy-client.crt::/etc/kubernetes/pki/front-proxy-client.key\nI0521 15:13:14.754067 1 available_controller.go:475] Starting AvailableConditionController\nI0521 15:13:14.754150 1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller\nI0521 15:13:14.754755 1 crdregistration_controller.go:111] Starting crd-autoregister controller\nI0521 15:13:14.754783 1 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller\nI0521 15:13:14.754790 1 shared_informer.go:240] Waiting for caches to sync for crd-autoregister\nI0521 15:13:14.754801 1 shared_informer.go:240] Waiting for caches to sync for cluster_authentication_trust_controller\nI0521 15:13:14.754856 1 dynamic_cafile_content.go:167] Starting client-ca-bundle::/etc/kubernetes/pki/ca.crt\nI0521 15:13:14.755045 1 dynamic_cafile_content.go:167] Starting request-header::/etc/kubernetes/pki/front-proxy-ca.crt\nE0521 15:13:14.756534 1 controller.go:152] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/172.18.0.3, ResourceVersion: 0, 
AdditionalErrorMsg: \nI0521 15:13:14.853065 1 cache.go:39] Caches are synced for autoregister controller\nI0521 15:13:14.853354 1 cache.go:39] Caches are synced for APIServiceRegistrationController controller\nI0521 15:13:14.854357 1 cache.go:39] Caches are synced for AvailableConditionController controller\nI0521 15:13:14.854891 1 shared_informer.go:247] Caches are synced for cluster_authentication_trust_controller \nI0521 15:13:14.854925 1 shared_informer.go:247] Caches are synced for crd-autoregister \nI0521 15:13:15.752249 1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).\nI0521 15:13:15.752285 1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).\nI0521 15:13:15.759145 1 storage_scheduling.go:134] created PriorityClass system-node-critical with value 2000001000\nI0521 15:13:15.763893 1 storage_scheduling.go:134] created PriorityClass system-cluster-critical with value 2000000000\nI0521 15:13:15.763932 1 storage_scheduling.go:143] all system priority classes are created successfully or already exist.\nI0521 15:13:16.248527 1 controller.go:609] quota admission added evaluator for: roles.rbac.authorization.k8s.io\nI0521 15:13:16.300749 1 controller.go:609] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io\nW0521 15:13:16.404438 1 lease.go:233] Resetting endpoints for master service \"kubernetes\" to [172.18.0.3]\nI0521 15:13:16.405580 1 controller.go:609] quota admission added evaluator for: endpoints\nI0521 15:13:16.410670 1 controller.go:609] quota admission added evaluator for: endpointslices.discovery.k8s.io\nI0521 15:13:17.714734 1 controller.go:609] quota admission added evaluator for: serviceaccounts\nI0521 15:13:17.748502 1 controller.go:609] quota admission added evaluator for: deployments.apps\nI0521 15:13:17.841257 1 controller.go:609] quota admission added evaluator for: 
daemonsets.apps\nI0521 15:13:18.911320 1 controller.go:609] quota admission added evaluator for: leases.coordination.k8s.io\nI0521 15:13:35.104327 1 controller.go:609] quota admission added evaluator for: replicasets.apps\nI0521 15:13:35.129960 1 controller.go:609] quota admission added evaluator for: controllerrevisions.apps\nI0521 15:13:41.063212 1 client.go:360] parsed scheme: \"passthrough\"\nI0521 15:13:41.063293 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }\nI0521 15:13:41.063306 1 clientconn.go:948] ClientConn switching balancer to \"pick_first\"\nI0521 15:14:14.845706 1 client.go:360] parsed scheme: \"passthrough\"\nI0521 15:14:14.845797 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }\nI0521 15:14:14.845839 1 clientconn.go:948] ClientConn switching balancer to \"pick_first\"\nI0521 15:14:49.231271 1 client.go:360] parsed scheme: \"passthrough\"\nI0521 15:14:49.231349 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }\nI0521 15:14:49.231365 1 clientconn.go:948] ClientConn switching balancer to \"pick_first\"\nI0521 15:15:20.015339 1 client.go:360] parsed scheme: \"passthrough\"\nI0521 15:15:20.015413 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }\nI0521 15:15:20.015429 1 clientconn.go:948] ClientConn switching balancer to \"pick_first\"\nI0521 15:15:54.549999 1 client.go:360] parsed scheme: \"passthrough\"\nI0521 15:15:54.550072 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }\nI0521 15:15:54.550090 1 clientconn.go:948] ClientConn switching balancer to \"pick_first\"\nI0521 15:16:04.976930 1 controller.go:609] quota admission added evaluator for: jobs.batch\nI0521 15:16:07.259155 1 client.go:360] parsed scheme: \"endpoint\"\nI0521 15:16:07.259193 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: 
[{https://127.0.0.1:2379 0 }]\nI0521 15:16:07.323100 1 client.go:360] parsed scheme: \"endpoint\"\nI0521 15:16:07.323139 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }]\nI0521 15:16:07.372523 1 client.go:360] parsed scheme: \"endpoint\"\nI0521 15:16:07.372555 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }]\nI0521 15:16:07.412609 1 client.go:360] parsed scheme: \"endpoint\"\nI0521 15:16:07.412641 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }]\nI0521 15:16:26.695862 1 client.go:360] parsed scheme: \"passthrough\"\nI0521 15:16:26.695922 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }\nI0521 15:16:26.695937 1 clientconn.go:948] ClientConn switching balancer to \"pick_first\"\nI0521 15:16:56.751023 1 client.go:360] parsed scheme: \"passthrough\"\nI0521 15:16:56.751104 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }\nI0521 15:16:56.751120 1 clientconn.go:948] ClientConn switching balancer to \"pick_first\"\nI0521 15:17:27.642290 1 client.go:360] parsed scheme: \"passthrough\"\nI0521 15:17:27.642363 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }\nI0521 15:17:27.642380 1 clientconn.go:948] ClientConn switching balancer to \"pick_first\"\nI0521 15:18:06.720002 1 client.go:360] parsed scheme: \"passthrough\"\nI0521 15:18:06.720093 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }\nI0521 15:18:06.720110 1 clientconn.go:948] ClientConn switching balancer to \"pick_first\"\nI0521 15:18:46.364695 1 client.go:360] parsed scheme: \"passthrough\"\nI0521 15:18:46.364764 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }\nI0521 15:18:46.364781 1 clientconn.go:948] ClientConn switching balancer to 
\"pick_first\"\nI0521 15:19:29.262115 1 client.go:360] parsed scheme: \"passthrough\"\nI0521 15:19:29.262189 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }\nI0521 15:19:29.262214 1 clientconn.go:948] ClientConn switching balancer to \"pick_first\"\nI0521 15:20:06.469236 1 client.go:360] parsed scheme: \"passthrough\"\nI0521 15:20:06.469330 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }\nI0521 15:20:06.469347 1 clientconn.go:948] ClientConn switching balancer to \"pick_first\"\nI0521 15:20:51.146470 1 client.go:360] parsed scheme: \"passthrough\"\nI0521 15:20:51.146541 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }\nI0521 15:20:51.146558 1 clientconn.go:948] ClientConn switching balancer to \"pick_first\"\nI0521 15:21:22.828629 1 client.go:360] parsed scheme: \"passthrough\"\nI0521 15:21:22.828705 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }\nI0521 15:21:22.828723 1 clientconn.go:948] ClientConn switching balancer to \"pick_first\"\nI0521 15:21:57.442069 1 client.go:360] parsed scheme: \"passthrough\"\nI0521 15:21:57.442144 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }\nI0521 15:21:57.442161 1 clientconn.go:948] ClientConn switching balancer to \"pick_first\"\nI0521 15:22:36.261788 1 client.go:360] parsed scheme: \"passthrough\"\nI0521 15:22:36.261880 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }\nI0521 15:22:36.261898 1 clientconn.go:948] ClientConn switching balancer to \"pick_first\"\nI0521 15:23:18.742111 1 client.go:360] parsed scheme: \"passthrough\"\nI0521 15:23:18.742190 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }\nI0521 15:23:18.742206 1 clientconn.go:948] ClientConn switching balancer to \"pick_first\"\nI0521 
15:23:56.783674 1 client.go:360] parsed scheme: \"passthrough\"\nI0521 15:23:56.783761 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }\nI0521 15:23:56.783778 1 clientconn.go:948] ClientConn switching balancer to \"pick_first\"\nI0521 15:24:29.970761 1 client.go:360] parsed scheme: \"passthrough\"\nI0521 15:24:29.970846 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }\nI0521 15:24:29.970863 1 clientconn.go:948] ClientConn switching balancer to \"pick_first\"\nI0521 15:25:13.379067 1 client.go:360] parsed scheme: \"passthrough\"\nI0521 15:25:13.379138 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }\nI0521 15:25:13.379154 1 clientconn.go:948] ClientConn switching balancer to \"pick_first\"\nI0521 15:25:52.078984 1 client.go:360] parsed scheme: \"passthrough\"\nI0521 15:25:52.079055 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }\nI0521 15:25:52.079081 1 clientconn.go:948] ClientConn switching balancer to \"pick_first\"\nI0521 15:26:32.870776 1 client.go:360] parsed scheme: \"passthrough\"\nI0521 15:26:32.870851 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }\nI0521 15:26:32.870869 1 clientconn.go:948] ClientConn switching balancer to \"pick_first\"\nI0521 15:26:48.168226 1 controller.go:609] quota admission added evaluator for: statefulsets.apps\nI0521 15:27:05.115985 1 client.go:360] parsed scheme: \"passthrough\"\nI0521 15:27:05.116066 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }\nI0521 15:27:05.116081 1 clientconn.go:948] ClientConn switching balancer to \"pick_first\"\nI0521 15:27:35.249756 1 client.go:360] parsed scheme: \"passthrough\"\nI0521 15:27:35.249845 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }\nI0521 15:27:35.249863 1 
clientconn.go:948] ClientConn switching balancer to \"pick_first\"\nI0521 15:28:16.636765 1 client.go:360] parsed scheme: \"passthrough\"\nI0521 15:28:16.636841 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }\nI0521 15:28:16.636859 1 clientconn.go:948] ClientConn switching balancer to \"pick_first\"\nI0521 15:28:55.804276 1 client.go:360] parsed scheme: \"passthrough\"\nI0521 15:28:55.804511 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }\nI0521 15:28:55.804816 1 clientconn.go:948] ClientConn switching balancer to \"pick_first\"\nI0521 15:29:25.841582 1 client.go:360] parsed scheme: \"passthrough\"\nI0521 15:29:25.841653 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }\nI0521 15:29:25.841670 1 clientconn.go:948] ClientConn switching balancer to \"pick_first\"\nI0521 15:29:57.150623 1 client.go:360] parsed scheme: \"passthrough\"\nI0521 15:29:57.150711 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }\nI0521 15:29:57.150729 1 clientconn.go:948] ClientConn switching balancer to \"pick_first\"\nI0521 15:30:40.394157 1 client.go:360] parsed scheme: \"passthrough\"\nI0521 15:30:40.394240 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }\nI0521 15:30:40.394257 1 clientconn.go:948] ClientConn switching balancer to \"pick_first\"\nI0521 15:31:22.820573 1 client.go:360] parsed scheme: \"passthrough\"\nI0521 15:31:22.820655 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }\nI0521 15:31:22.820672 1 clientconn.go:948] ClientConn switching balancer to \"pick_first\"\nI0521 15:31:57.398121 1 client.go:360] parsed scheme: \"passthrough\"\nI0521 15:31:57.398205 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }\nI0521 15:31:57.398221 1 clientconn.go:948] ClientConn 
switching balancer to \"pick_first\"\nI0521 15:32:36.398269 1 client.go:360] parsed scheme: \"passthrough\"\nI0521 15:32:36.398339 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }\nI0521 15:32:36.398356 1 clientconn.go:948] ClientConn switching balancer to \"pick_first\"\nI0521 15:33:20.618453 1 client.go:360] parsed scheme: \"passthrough\"\nI0521 15:33:20.618534 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }\nI0521 15:33:20.618552 1 clientconn.go:948] ClientConn switching balancer to \"pick_first\"\nI0521 15:34:00.604980 1 client.go:360] parsed scheme: \"passthrough\"\nI0521 15:34:00.605058 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }\nI0521 15:34:00.605074 1 clientconn.go:948] ClientConn switching balancer to \"pick_first\"\nI0521 15:34:30.871242 1 client.go:360] parsed scheme: \"passthrough\"\nI0521 15:34:30.871326 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }\nI0521 15:34:30.871343 1 clientconn.go:948] ClientConn switching balancer to \"pick_first\"\nI0521 15:35:10.718790 1 client.go:360] parsed scheme: \"passthrough\"\nI0521 15:35:10.718871 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }\nI0521 15:35:10.718888 1 clientconn.go:948] ClientConn switching balancer to \"pick_first\"\nI0521 15:35:47.000793 1 client.go:360] parsed scheme: \"passthrough\"\nI0521 15:35:47.000865 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }\nI0521 15:35:47.000882 1 clientconn.go:948] ClientConn switching balancer to \"pick_first\"\nI0521 15:36:18.784310 1 client.go:360] parsed scheme: \"passthrough\"\nI0521 15:36:18.784379 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }\nI0521 15:36:18.784395 1 clientconn.go:948] ClientConn switching balancer to 
\"pick_first\"\nI0521 15:36:53.049673 1 client.go:360] parsed scheme: \"passthrough\"\nI0521 15:36:53.049758 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }\nI0521 15:36:53.049775 1 clientconn.go:948] ClientConn switching balancer to \"pick_first\"\nI0521 15:37:24.412134 1 client.go:360] parsed scheme: \"passthrough\"\nI0521 15:37:24.412208 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }\nI0521 15:37:24.412224 1 clientconn.go:948] ClientConn switching balancer to \"pick_first\"\nI0521 15:37:55.011368 1 client.go:360] parsed scheme: \"passthrough\"\nI0521 15:37:55.011443 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }\nI0521 15:37:55.011459 1 clientconn.go:948] ClientConn switching balancer to \"pick_first\"\nI0521 15:38:28.654851 1 client.go:360] parsed scheme: \"passthrough\"\nI0521 15:38:28.654928 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }\nI0521 15:38:28.654945 1 clientconn.go:948] ClientConn switching balancer to \"pick_first\"\nI0521 15:39:02.004810 1 client.go:360] parsed scheme: \"passthrough\"\nI0521 15:39:02.004895 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }\nI0521 15:39:02.004918 1 clientconn.go:948] ClientConn switching balancer to \"pick_first\"\nI0521 15:39:35.240840 1 client.go:360] parsed scheme: \"passthrough\"\nI0521 15:39:35.240909 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }\nI0521 15:39:35.240925 1 clientconn.go:948] ClientConn switching balancer to \"pick_first\"\nI0521 15:40:09.336368 1 client.go:360] parsed scheme: \"passthrough\"\nI0521 15:40:09.336440 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }\nI0521 15:40:09.336459 1 clientconn.go:948] ClientConn switching balancer to \"pick_first\"\nI0521 
15:40:48.619532 1 client.go:360] parsed scheme: \"passthrough\"\nI0521 15:40:48.619612 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }\nI0521 15:40:48.619629 1 clientconn.go:948] ClientConn switching balancer to \"pick_first\"\nI0521 15:41:18.822308 1 client.go:360] parsed scheme: \"passthrough\"\nI0521 15:41:18.822391 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }\nI0521 15:41:18.822410 1 clientconn.go:948] ClientConn switching balancer to \"pick_first\"\nI0521 15:41:51.134244 1 client.go:360] parsed scheme: \"passthrough\"\nI0521 15:41:51.134323 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }\nI0521 15:41:51.134340 1 clientconn.go:948] ClientConn switching balancer to \"pick_first\"\nI0521 15:42:31.986720 1 client.go:360] parsed scheme: \"passthrough\"\nI0521 15:42:31.986800 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }\nI0521 15:42:31.986826 1 clientconn.go:948] ClientConn switching balancer to \"pick_first\"\nI0521 15:43:10.286061 1 client.go:360] parsed scheme: \"passthrough\"\nI0521 15:43:10.286139 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }\nI0521 15:43:10.286156 1 clientconn.go:948] ClientConn switching balancer to \"pick_first\"\nI0521 15:43:55.114057 1 client.go:360] parsed scheme: \"passthrough\"\nI0521 15:43:55.114135 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }\nI0521 15:43:55.114151 1 clientconn.go:948] ClientConn switching balancer to \"pick_first\"\nI0521 15:44:39.972058 1 client.go:360] parsed scheme: \"passthrough\"\nI0521 15:44:39.972132 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }\nI0521 15:44:39.972149 1 clientconn.go:948] ClientConn switching balancer to \"pick_first\"\nI0521 15:45:24.118790 1 client.go:360] 
parsed scheme: \"passthrough\"\nI0521 15:45:24.118860 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }\nI0521 15:45:24.118876 1 clientconn.go:948] ClientConn switching balancer to \"pick_first\"\nI0521 15:46:06.902468 1 client.go:360] parsed scheme: \"passthrough\"\nI0521 15:46:06.902537 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }\nI0521 15:46:06.902553 1 clientconn.go:948] ClientConn switching balancer to \"pick_first\"\nI0521 15:46:45.500179 1 client.go:360] parsed scheme: \"passthrough\"\nI0521 15:46:45.500255 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }\nI0521 15:46:45.500272 1 clientconn.go:948] ClientConn switching balancer to \"pick_first\"\nI0521 15:47:24.305097 1 client.go:360] parsed scheme: \"passthrough\"\nI0521 15:47:24.305180 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }\nI0521 15:47:24.305197 1 clientconn.go:948] ClientConn switching balancer to \"pick_first\"\nI0521 15:48:04.794855 1 client.go:360] parsed scheme: \"passthrough\"\nI0521 15:48:04.794932 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }\nI0521 15:48:04.794949 1 clientconn.go:948] ClientConn switching balancer to \"pick_first\"\nI0521 15:48:42.615996 1 client.go:360] parsed scheme: \"passthrough\"\nI0521 15:48:42.616071 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }\nI0521 15:48:42.616088 1 clientconn.go:948] ClientConn switching balancer to \"pick_first\"\nI0521 15:49:19.142898 1 client.go:360] parsed scheme: \"passthrough\"\nI0521 15:49:19.142977 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }\nI0521 15:49:19.142994 1 clientconn.go:948] ClientConn switching balancer to \"pick_first\"\nI0521 15:49:59.945413 1 client.go:360] parsed scheme: 
\"passthrough\"\nI0521 15:49:59.945493 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }\nI0521 15:49:59.945509 1 clientconn.go:948] ClientConn switching balancer to \"pick_first\"\nI0521 15:50:41.998155 1 client.go:360] parsed scheme: \"passthrough\"\nI0521 15:50:41.998224 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }\nI0521 15:50:41.998240 1 clientconn.go:948] ClientConn switching balancer to \"pick_first\"\nI0521 15:51:24.280413 1 client.go:360] parsed scheme: \"passthrough\"\nI0521 15:51:24.280480 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }\nI0521 15:51:24.280496 1 clientconn.go:948] ClientConn switching balancer to \"pick_first\"\nI0521 15:52:02.057709 1 client.go:360] parsed scheme: \"passthrough\"\nI0521 15:52:02.057781 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }\nI0521 15:52:02.057800 1 clientconn.go:948] ClientConn switching balancer to \"pick_first\"\nI0521 15:52:46.311422 1 client.go:360] parsed scheme: \"passthrough\"\nI0521 15:52:46.311500 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }\nI0521 15:52:46.311517 1 clientconn.go:948] ClientConn switching balancer to \"pick_first\"\nI0521 15:53:24.730677 1 client.go:360] parsed scheme: \"passthrough\"\nI0521 15:53:24.730748 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }\nI0521 15:53:24.730764 1 clientconn.go:948] ClientConn switching balancer to \"pick_first\"\nI0521 15:54:08.070287 1 client.go:360] parsed scheme: \"passthrough\"\nI0521 15:54:08.070360 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }\nI0521 15:54:08.070376 1 clientconn.go:948] ClientConn switching balancer to \"pick_first\"\nI0521 15:54:46.261011 1 client.go:360] parsed scheme: \"passthrough\"\nI0521 
15:54:46.261076 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }
I0521 15:54:46.261092 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0521 15:55:31.258157 1 client.go:360] parsed scheme: "passthrough"
I0521 15:55:31.258225 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }
I0521 15:55:31.258240 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0521 15:56:15.607080 1 client.go:360] parsed scheme: "passthrough"
I0521 15:56:15.607154 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }
I0521 15:56:15.607173 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0521 15:56:47.442494 1 client.go:360] parsed scheme: "passthrough"
I0521 15:56:47.442556 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }
I0521 15:56:47.442572 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0521 15:57:28.799934 1 client.go:360] parsed scheme: "passthrough"
I0521 15:57:28.800010 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }
I0521 15:57:28.800028 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0521 15:57:53.472550 1 trace.go:205] Trace[1080594237]: "Get" url:/api/v1/namespaces/configmap-2323/pods/pod-configmaps-b24eb373-0019-46ca-ab88-d8568c34c427/log,user-agent:e2e.test/v1.19.11 (linux/amd64) kubernetes/c6a2f08 -- [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance],client:172.18.0.1 (21-May-2021 15:57:52.894) (total time: 577ms):
Trace[1080594237]: ---"Transformed response object" 575ms (15:57:00.472)
Trace[1080594237]: [577.613924ms] [577.613924ms] END
I0521 15:57:53.489881 1 client.go:360] parsed scheme: "endpoint"
I0521 15:57:53.489924 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }]
I0521 15:57:53.672388 1 trace.go:205] Trace[249328410]: "Get" url:/api/v1/namespaces/var-expansion-1585/pods/var-expansion-c18409db-f624-48f2-a08f-ded307c0fb69/log,user-agent:e2e.test/v1.19.11 (linux/amd64) kubernetes/c6a2f08 -- [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance],client:172.18.0.1 (21-May-2021 15:57:52.905) (total time: 766ms):
Trace[249328410]: ---"Transformed response object" 764ms (15:57:00.672)
Trace[249328410]: [766.785569ms] [766.785569ms] END
I0521 15:57:56.981688 1 client.go:360] parsed scheme: "endpoint"
I0521 15:57:56.981720 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }]
I0521 15:57:57.945518 1 client.go:360] parsed scheme: "endpoint"
I0521 15:57:57.945560 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }]
I0521 15:58:05.899701 1 client.go:360] parsed scheme: "passthrough"
I0521 15:58:05.899780 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }
I0521 15:58:05.899806 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0521 15:58:12.622100 1 controller.go:609] quota admission added evaluator for: namespaces
W0521 15:58:18.258504 1 dispatcher.go:142] rejected by webhook "validating-is-webhook-configuration-ready.k8s.io": &errors.StatusError{ErrStatus:v1.Status{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ListMeta:v1.ListMeta{SelfLink:"", ResourceVersion:"", Continue:"", RemainingItemCount:(*int64)(nil)}, Status:"Failure", Message:"admission webhook \"validating-is-webhook-configuration-ready.k8s.io\" denied the request: this webhook denies all requests", Reason:"", Details:(*v1.StatusDetails)(nil), Code:400}}
W0521 15:58:18.268132 1 dispatcher.go:142] rejected by webhook "deny-unwanted-configmap-data.k8s.io": &errors.StatusError{ErrStatus:v1.Status{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ListMeta:v1.ListMeta{SelfLink:"", ResourceVersion:"", Continue:"", RemainingItemCount:(*int64)(nil)}, Status:"Failure", Message:"admission webhook \"deny-unwanted-configmap-data.k8s.io\" denied the request: the configmap contains unwanted key and value", Reason:"the configmap contains unwanted key and value", Details:(*v1.StatusDetails)(nil), Code:400}}
W0521 15:58:18.290018 1 dispatcher.go:142] rejected by webhook "deny-unwanted-configmap-data.k8s.io": &errors.StatusError{ErrStatus:v1.Status{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ListMeta:v1.ListMeta{SelfLink:"", ResourceVersion:"", Continue:"", RemainingItemCount:(*int64)(nil)}, Status:"Failure", Message:"admission webhook \"deny-unwanted-configmap-data.k8s.io\" denied the request: the configmap contains unwanted key and value", Reason:"the configmap contains unwanted key and value", Details:(*v1.StatusDetails)(nil), Code:400}}
I0521 15:58:29.791894 1 client.go:360] parsed scheme: "endpoint"
I0521 15:58:29.791935 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }]
I0521 15:58:29.806145 1 client.go:360] parsed scheme: "endpoint"
I0521 15:58:29.806181 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }]
I0521 15:58:29.829702 1 controller.go:609] quota admission added evaluator for: e2e-test-webhook-46-crds.webhook.example.com
E0521 15:58:34.556337 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused", err:(*net.OpError)(0xc0070f96d0)}
E0521 15:58:35.560265 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused", err:(*net.OpError)(0xc008147810)}
E0521 15:58:36.560056 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused", err:(*net.OpError)(0xc003033540)}
E0521 15:58:37.560232 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused", err:(*net.OpError)(0xc006a80f50)}
E0521 15:58:38.560822 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused", err:(*net.OpError)(0xc007be9f40)}
E0521 15:58:39.560536 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused", err:(*net.OpError)(0xc00344f770)}
I0521 15:58:40.295973 1 client.go:360] parsed scheme: "passthrough"
I0521 15:58:40.296031 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }
I0521 15:58:40.296045 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
E0521 15:58:40.560837 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused", err:(*net.OpError)(0xc0085953b0)}
E0521 15:58:41.560426 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused", err:(*net.OpError)(0xc0080b8460)}
E0521 15:58:42.560391 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused", err:(*net.OpError)(0xc007b9cf50)}
E0521 15:58:43.561100 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused", err:(*net.OpError)(0xc003882be0)}
E0521 15:58:44.560713 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused", err:(*net.OpError)(0xc00616eff0)}
E0521 15:58:45.561036 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused", err:(*net.OpError)(0xc006270460)}
E0521 15:58:46.560673 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused", err:(*net.OpError)(0xc007112690)}
E0521 15:58:47.560748 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused", err:(*net.OpError)(0xc0073aa500)}
E0521 15:58:48.561547 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused", err:(*net.OpError)(0xc00662ba40)}
E0521 15:58:49.560903 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused", err:(*net.OpError)(0xc006a3c3c0)}
E0521 15:58:50.559782 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused", err:(*net.OpError)(0xc007f6c820)}
E0521 15:58:51.560643 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused", err:(*net.OpError)(0xc007a0b7c0)}
E0521 15:58:52.561133 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused", err:(*net.OpError)(0xc008216640)}
E0521 15:58:53.560977 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused", err:(*net.OpError)(0xc0082b70e0)}
E0521 15:58:54.560320 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused", err:(*net.OpError)(0xc0086a20f0)}
E0521 15:58:55.560264 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused", err:(*net.OpError)(0xc0089328c0)}
E0521 15:58:56.560314 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused", err:(*net.OpError)(0xc008a53cc0)}
E0521 15:58:57.560137 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused", err:(*net.OpError)(0xc008b7a370)}
E0521 15:58:58.560293 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused", err:(*net.OpError)(0xc0078a4190)}
E0521 15:58:59.561063 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused", err:(*net.OpError)(0xc0070d1400)}
E0521 15:59:00.560651 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused", err:(*net.OpError)(0xc007936cd0)}
E0521 15:59:01.561263 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused", err:(*net.OpError)(0xc00701cb40)}
E0521 15:59:02.561349 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused", err:(*net.OpError)(0xc002fb33b0)}
E0521 15:59:03.560509 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused", err:(*net.OpError)(0xc007224d20)}
E0521 15:59:04.560877 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused", err:(*net.OpError)(0xc003234be0)}
E0521 15:59:05.560570 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused", err:(*net.OpError)(0xc007998320)}
E0521 15:59:06.561198 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused", err:(*net.OpError)(0xc000a1c3c0)}
E0521 15:59:07.560385 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused", err:(*net.OpError)(0xc0081a8be0)}
E0521 15:59:08.560745 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused", err:(*net.OpError)(0xc0064414f0)}
E0521 15:59:09.560368 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused", err:(*net.OpError)(0xc0035a0640)}
E0521 15:59:10.561319 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused", err:(*net.OpError)(0xc007150960)}
E0521 15:59:11.560347 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused", err:(*net.OpError)(0xc008b8edc0)}
I0521 15:59:12.034493 1 client.go:360] parsed scheme: "passthrough"
I0521 15:59:12.034566 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }
I0521 15:59:12.034581 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
E0521 15:59:12.561033 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused", err:(*net.OpError)(0xc00768e550)}
E0521 15:59:13.560879 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused", err:(*net.OpError)(0xc006a78e60)}
E0521 15:59:14.560192 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused", err:(*net.OpError)(0xc0064aa5a0)}
E0521 15:59:15.560247 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused", err:(*net.OpError)(0xc007c444b0)}
I0521 15:59:15.664053 1 client.go:360] parsed scheme: "endpoint"
I0521 15:59:15.664099 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }]
I0521 15:59:15.703140 1 controller.go:609] quota admission added evaluator for: podtemplates
E0521 15:59:16.560238 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused", err:(*net.OpError)(0xc007b0e280)}
E0521 15:59:17.561308 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused", err:(*net.OpError)(0xc002e7af50)}
E0521 15:59:18.561037 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused", err:(*net.OpError)(0xc00898c2d0)}
E0521 15:59:19.561620 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused", err:(*net.OpError)(0xc003253900)}
E0521 15:59:20.561328 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused", err:(*net.OpError)(0xc00856e230)}
E0521 15:59:21.562189 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused", err:(*net.OpError)(0xc0030f4960)}
E0521 15:59:22.560996 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused", err:(*net.OpError)(0xc003635720)}
E0521 15:59:23.560489 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused", err:(*net.OpError)(0xc0085007d0)}
E0521 15:59:24.561021 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused", err:(*net.OpError)(0xc0069656d0)}
I0521 15:59:24.800157 1 client.go:360] parsed scheme: "endpoint"
I0521 15:59:24.800189 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }]
E0521 15:59:25.560188 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused", err:(*net.OpError)(0xc007346820)}
E0521 15:59:26.561162 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused", err:(*net.OpError)(0xc006d2b450)}
E0521 15:59:27.560539 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused", err:(*net.OpError)(0xc0015efef0)}
E0521 15:59:28.560553 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused", err:(*net.OpError)(0xc007128050)}
I0521 15:59:28.914422 1 controller.go:609] quota admission added evaluator for: e2e-test-crd-publish-openapi-8667-crds.crd-publish-openapi-test-unknown-in-nested.example.com
E0521 15:59:29.561276 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused", err:(*net.OpError)(0xc007be2b40)}
E0521 15:59:30.560773 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused", err:(*net.OpError)(0xc007d4b860)}
E0521 15:59:31.560847 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused", err:(*net.OpError)(0xc007da26e0)}
E0521 15:59:32.560527 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused", err:(*net.OpError)(0xc00780b8b0)}
E0521 15:59:33.560559 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused", err:(*net.OpError)(0xc007614ff0)}
E0521 15:59:34.560838 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused", err:(*net.OpError)(0xc000a4ab40)}
E0521 15:59:34.564312 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused", err:(*net.OpError)(0xc000a4b3b0)}
I0521 15:59:35.186447 1 client.go:360] parsed scheme: "endpoint"
I0521 15:59:35.186492 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }]
I0521 15:59:35.203162 1 client.go:360] parsed scheme: "endpoint"
I0521 15:59:35.203207 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }]
I0521 15:59:45.728468 1 client.go:360] parsed scheme: "endpoint"
I0521 15:59:45.728511 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }]
I0521 15:59:45.744704 1 client.go:360] parsed scheme: "endpoint"
I0521 15:59:45.744758 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }]
I0521 15:59:52.540113 1 client.go:360] parsed scheme: "passthrough"
I0521 15:59:52.540171 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }
I0521 15:59:52.540186 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
W0521 15:59:53.366646 1 cacher.go:148] Terminating all watchers from cacher *unstructured.Unstructured
E0521 16:00:11.108558 1 available_controller.go:508] v1alpha1.wardle.example.com failed with: failing or missing response from https://10.96.227.173:7443/apis/wardle.example.com/v1alpha1: bad status from https://10.96.227.173:7443/apis/wardle.example.com/v1alpha1: 403
E0521 16:00:11.110394 1 available_controller.go:508] v1alpha1.wardle.example.com failed with: failing or missing response from https://10.96.227.173:7443/apis/wardle.example.com/v1alpha1: bad status from https://10.96.227.173:7443/apis/wardle.example.com/v1alpha1: 403
E0521 16:00:11.115287 1 available_controller.go:508] v1alpha1.wardle.example.com failed with: failing or missing response from https://10.96.227.173:7443/apis/wardle.example.com/v1alpha1: bad status from https://10.96.227.173:7443/apis/wardle.example.com/v1alpha1: 403
E0521 16:00:11.137240 1 available_controller.go:508] v1alpha1.wardle.example.com failed with: failing or missing response from https://10.96.227.173:7443/apis/wardle.example.com/v1alpha1: bad status from https://10.96.227.173:7443/apis/wardle.example.com/v1alpha1: 403
E0521 16:00:11.179710 1 available_controller.go:508] v1alpha1.wardle.example.com failed with: failing or missing response from https://10.96.227.173:7443/apis/wardle.example.com/v1alpha1: bad status from https://10.96.227.173:7443/apis/wardle.example.com/v1alpha1: 403
I0521 16:00:11.811174 1 client.go:360] parsed scheme: "endpoint"
I0521 16:00:11.811214 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }]
I0521 16:00:13.876234 1 controller.go:132] OpenAPI AggregationController: action for item v1alpha1.wardle.example.com: Nothing (removed from the queue).
I0521 16:00:15.971315 1 controller.go:609] quota admission added evaluator for: e2e-test-crd-publish-openapi-6671-crds.crd-publish-openapi-test-foo.example.com
I0521 16:00:25.339033 1 client.go:360] parsed scheme: "passthrough"
I0521 16:00:25.339125 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }
I0521 16:00:25.339150 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0521 16:00:29.684460 1 controller.go:609] quota admission added evaluator for: events.events.k8s.io
I0521 16:00:32.239332 1 client.go:360] parsed scheme: "endpoint"
I0521 16:00:32.239371 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }]
I0521 16:00:32.251725 1 client.go:360] parsed scheme: "endpoint"
I0521 16:00:32.251771 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }]
I0521 16:00:32.397673 1 controller.go:609] quota admission added evaluator for: e2e-test-webhook-1069-crds.webhook.example.com
I0521 16:00:32.473176 1 client.go:360] parsed scheme: "endpoint"
I0521 16:00:32.473228 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }]
I0521 16:00:32.487456 1 client.go:360] parsed scheme: "endpoint"
I0521 16:00:32.487501 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }]
I0521 16:00:48.987032 1 controller.go:609] quota admission added evaluator for: ingresses.networking.k8s.io
W0521 16:00:54.762056 1 dispatcher.go:142] rejected by webhook "validating-is-webhook-configuration-ready.k8s.io": &errors.StatusError{ErrStatus:v1.Status{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ListMeta:v1.ListMeta{SelfLink:"", ResourceVersion:"", Continue:"", RemainingItemCount:(*int64)(nil)}, Status:"Failure", Message:"admission webhook \"validating-is-webhook-configuration-ready.k8s.io\" denied the request: this webhook denies all requests", Reason:"", Details:(*v1.StatusDetails)(nil), Code:400}}
I0521 16:00:55.765222 1 trace.go:205] Trace[1771024812]: "Call validating webhook" configuration:webhook-9408,webhook:allow-configmap-with-delay-webhook.k8s.io,resource:/v1, Resource=configmaps,subresource:,operation:CREATE,UID:91ad84c0-b80c-4777-bed5-c0745c831a4f (21-May-2021 16:00:54.764) (total time: 1000ms):
Trace[1771024812]: [1.000948108s] [1.000948108s] END
W0521 16:00:55.765300 1 dispatcher.go:134] Failed calling webhook, failing closed allow-configmap-with-delay-webhook.k8s.io: failed calling webhook "allow-configmap-with-delay-webhook.k8s.io": Post "https://e2e-test-webhook.webhook-9408.svc:8443/always-allow-delay-5s?timeout=1s": context deadline exceeded
I0521 16:00:55.765960 1 trace.go:205] Trace[2019350493]: "Create" url:/api/v1/namespaces/webhook-9408/configmaps,user-agent:e2e.test/v1.19.11 (linux/amd64) kubernetes/c6a2f08 -- [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance],client:172.18.0.1 (21-May-2021 16:00:54.763) (total time: 1002ms):
Trace[2019350493]: [1.002410941s] [1.002410941s] END
W0521 16:00:55.780024 1 dispatcher.go:142] rejected by webhook "validating-is-webhook-configuration-ready.k8s.io": &errors.StatusError{ErrStatus:v1.Status{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ListMeta:v1.ListMeta{SelfLink:"", ResourceVersion:"", Continue:"", RemainingItemCount:(*int64)(nil)}, Status:"Failure", Message:"admission webhook \"validating-is-webhook-configuration-ready.k8s.io\" denied the request: this webhook denies all requests", Reason:"", Details:(*v1.StatusDetails)(nil), Code:400}}
I0521 16:00:56.782658 1 trace.go:205] Trace[302107559]: "Call validating webhook" configuration:webhook-9408,webhook:allow-configmap-with-delay-webhook.k8s.io,resource:/v1, Resource=configmaps,subresource:,operation:CREATE,UID:3f823dcc-2429-4525-816a-ff612dbfeff6 (21-May-2021 16:00:55.782) (total time: 1000ms):
Trace[302107559]: [1.000589253s] [1.000589253s] END
W0521 16:00:56.782723 1 dispatcher.go:129] Failed calling webhook, failing open allow-configmap-with-delay-webhook.k8s.io: failed calling webhook "allow-configmap-with-delay-webhook.k8s.io": Post "https://e2e-test-webhook.webhook-9408.svc:8443/always-allow-delay-5s?timeout=1s": context deadline exceeded
E0521 16:00:56.782768 1 dispatcher.go:130] failed calling webhook "allow-configmap-with-delay-webhook.k8s.io": Post "https://e2e-test-webhook.webhook-9408.svc:8443/always-allow-delay-5s?timeout=1s": context deadline exceeded
I0521 16:00:56.784515 1 trace.go:205] Trace[801603802]: "Create" url:/api/v1/namespaces/webhook-9408/configmaps,user-agent:e2e.test/v1.19.11 (linux/amd64) kubernetes/c6a2f08 -- [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance],client:172.18.0.1 (21-May-2021 16:00:55.781) (total time: 1002ms):
Trace[801603802]: ---"Object stored in database" 1002ms (16:00:00.784)
Trace[801603802]: [1.002971365s] [1.002971365s] END
W0521 16:00:56.805583 1 dispatcher.go:142] rejected by webhook "validating-is-webhook-configuration-ready.k8s.io": &errors.StatusError{ErrStatus:v1.Status{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ListMeta:v1.ListMeta{SelfLink:"", ResourceVersion:"", Continue:"", RemainingItemCount:(*int64)(nil)}, Status:"Failure", Message:"admission webhook \"validating-is-webhook-configuration-ready.k8s.io\" denied the request: this webhook denies all requests", Reason:"", Details:(*v1.StatusDetails)(nil), Code:400}}
I0521 16:01:01.814970 1 trace.go:205] Trace[180251291]: "Call validating webhook" configuration:webhook-9408,webhook:allow-configmap-with-delay-webhook.k8s.io,resource:/v1, Resource=configmaps,subresource:,operation:CREATE,UID:b505a13f-30fc-430c-87a4-a67f0f9f9bbd (21-May-2021 16:00:56.807) (total time: 5007ms):
Trace[180251291]: ---"Request completed" 5007ms (16:01:00.814)
Trace[180251291]: [5.007257101s] [5.007257101s] END
I0521 16:01:01.816632 1 trace.go:205] Trace[1200528978]: "Create" url:/api/v1/namespaces/webhook-9408/configmaps,user-agent:e2e.test/v1.19.11 (linux/amd64) kubernetes/c6a2f08 -- [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance],client:172.18.0.1 (21-May-2021 16:00:56.807) (total time: 5009ms):
Trace[1200528978]: ---"Object stored in database" 5009ms (16:01:00.816)
Trace[1200528978]: [5.009370164s] [5.009370164s] END
W0521 16:01:01.833844 1 dispatcher.go:142] rejected by webhook "validating-is-webhook-configuration-ready.k8s.io": &errors.StatusError{ErrStatus:v1.Status{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ListMeta:v1.ListMeta{SelfLink:"", ResourceVersion:"", Continue:"", RemainingItemCount:(*int64)(nil)}, Status:"Failure", Message:"admission webhook \"validating-is-webhook-configuration-ready.k8s.io\" denied the request: this webhook denies all requests", Reason:"", Details:(*v1.StatusDetails)(nil), Code:400}}
I0521 16:01:03.263546 1 client.go:360] parsed scheme: "passthrough"
I0521 16:01:03.263657 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }
I0521 16:01:03.263692 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0521 16:01:06.837462 1 trace.go:205] Trace[1833465613]: "Call validating webhook" configuration:webhook-9408,webhook:allow-configmap-with-delay-webhook.k8s.io,resource:/v1, Resource=configmaps,subresource:,operation:CREATE,UID:04b08b4c-1336-43c2-93bc-c674d6440109 (21-May-2021 16:01:01.835) (total time: 5001ms):
Trace[1833465613]: ---"Request completed" 5001ms (16:01:00.837)
Trace[1833465613]: [5.00150833s] [5.00150833s] END
I0521 16:01:06.839342 1 trace.go:205] Trace[1925761941]: "Create" url:/api/v1/namespaces/webhook-9408/configmaps,user-agent:e2e.test/v1.19.11 (linux/amd64) kubernetes/c6a2f08 -- [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance],client:172.18.0.1 (21-May-2021 16:01:01.835) (total time: 5003ms):
Trace[1925761941]: ---"Object stored in database" 5003ms (16:01:00.839)
Trace[1925761941]: [5.003931139s] [5.003931139s] END
I0521 16:01:21.849629 1 client.go:360] parsed scheme: "endpoint"
I0521 16:01:21.849674 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }]
I0521 16:01:21.863804 1 client.go:360] parsed scheme: "endpoint"
I0521 16:01:21.863848 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }]
I0521 16:01:23.958307 1 trace.go:205] Trace[1286672375]: "Delete" url:/api/v1/namespaces/svc-latency-2745/endpoints (21-May-2021 16:01:23.373) (total time: 585ms):
Trace[1286672375]: [585.198064ms] [585.198064ms] END
I0521 16:01:24.557087 1 trace.go:205] Trace[613858849]: "Delete" url:/apis/discovery.k8s.io/v1beta1/namespaces/svc-latency-2745/endpointslices (21-May-2021 16:01:23.967) (total time: 589ms):
Trace[613858849]: [589.940584ms] [589.940584ms] END
I0521 16:01:26.843716 1 client.go:360] parsed scheme: "endpoint"
I0521 16:01:26.843753 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }]
I0521 16:01:27.166681 1 controller.go:609] quota admission added evaluator for: e2e-test-webhook-2901-crds.webhook.example.com
I0521 16:01:28.130742 1 client.go:360] parsed scheme: "endpoint"
I0521 16:01:28.130778 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }]
W0521 16:01:32.690869 1 cacher.go:148] Terminating all watchers from cacher *unstructured.Unstructured
I0521 16:01:32.704742 1 client.go:360] parsed scheme: "endpoint"
I0521 16:01:32.704775 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }]
I0521 16:01:36.370531 1 client.go:360] parsed scheme: "passthrough"
I0521 16:01:36.370604 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }
I0521 16:01:36.370618 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0521 16:01:45.280937 1 client.go:360] parsed scheme: "endpoint"
I0521 16:01:45.280975 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }]
I0521 16:01:45.419730 1 client.go:360] parsed scheme: "endpoint"
I0521 16:01:45.419767 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }]
I0521 16:01:48.438577 1 trace.go:205] Trace[1043439799]: "Call validating webhook" configuration:webhook-2443,webhook:validating-is-webhook-configuration-ready.k8s.io,resource:/v1, Resource=configmaps,subresource:,operation:CREATE,UID:e665ac01-7d66-409e-b9f8-98bbdf20ceb9 (21-May-2021 16:01:47.403) (total time: 1035ms):
Trace[1043439799]: ---"Request completed" 1035ms (16:01:00.438)
Trace[1043439799]: [1.035315779s] [1.035315779s] END
W0521 16:01:48.438616 1 dispatcher.go:142] rejected by webhook "validating-is-webhook-configuration-ready.k8s.io": &errors.StatusError{ErrStatus:v1.Status{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ListMeta:v1.ListMeta{SelfLink:"", ResourceVersion:"", Continue:"", RemainingItemCount:(*int64)(nil)}, Status:"Failure", Message:"admission webhook \"validating-is-webhook-configuration-ready.k8s.io\" denied the request: this webhook denies all requests", Reason:"", Details:(*v1.StatusDetails)(nil), Code:400}}
I0521 16:01:48.438762 1 trace.go:205] Trace[842751101]: "Create" url:/api/v1/namespaces/webhook-2443-markers/configmaps,user-agent:e2e.test/v1.19.11 (linux/amd64) kubernetes/c6a2f08 -- [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance],client:172.18.0.1 (21-May-2021 16:01:47.402) (total time: 1036ms):
Trace[842751101]: [1.036009071s] [1.036009071s] END
W0521 16:01:50.589782 1 dispatcher.go:142] rejected by webhook "deny-attaching-pod.k8s.io": &errors.StatusError{ErrStatus:v1.Status{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ListMeta:v1.ListMeta{SelfLink:"", ResourceVersion:"", Continue:"", RemainingItemCount:(*int64)(nil)}, Status:"Failure", Message:"admission webhook \"deny-attaching-pod.k8s.io\" denied the request: attaching to pod 'to-be-attached-pod' is not allowed", Reason:"", Details:(*v1.StatusDetails)(nil), Code:400}}
W0521 16:01:58.077037 1 cacher.go:148] Terminating all watchers from cacher *unstructured.Unstructured
I0521 16:02:16.461373 1 client.go:360] parsed scheme: "passthrough"
I0521 16:02:16.461447 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }
I0521 16:02:16.461464 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
E0521 16:02:23.717958 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused", err:(*net.OpError)(0xc009542280)}
E0521 16:02:24.722519 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused", err:(*net.OpError)(0xc0074e60f0)}
W0521 16:02:24.806540 1 dispatcher.go:142] rejected by webhook "validating-is-webhook-configuration-ready.k8s.io": &errors.StatusError{ErrStatus:v1.Status{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ListMeta:v1.ListMeta{SelfLink:"", ResourceVersion:"", Continue:"", RemainingItemCount:(*int64)(nil)}, Status:"Failure", Message:"admission webhook \"validating-is-webhook-configuration-ready.k8s.io\" denied the request: this webhook denies all requests", Reason:"", Details:(*v1.StatusDetails)(nil), Code:400}}
I0521 16:02:24.839845 1 client.go:360] parsed scheme: "endpoint"
I0521 16:02:24.839887 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }]
W0521 16:02:24.864318 1 dispatcher.go:142] rejected by webhook "deny-unwanted-custom-resource-data.k8s.io": &errors.StatusError{ErrStatus:v1.Status{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ListMeta:v1.ListMeta{SelfLink:"", ResourceVersion:"", Continue:"", RemainingItemCount:(*int64)(nil)}, Status:"Failure", Message:"admission webhook \"deny-unwanted-custom-resource-data.k8s.io\" denied the request: the custom resource contains unwanted data", Reason:"the custom resource contains unwanted data", Details:(*v1.StatusDetails)(nil), Code:400}}
I0521 16:02:24.870031 1 controller.go:609] quota admission added evaluator for: e2e-test-webhook-9786-crds.webhook.example.com
W0521 16:02:24.882182 1 dispatcher.go:142] rejected by webhook "deny-unwanted-custom-resource-data.k8s.io": &errors.StatusError{ErrStatus:v1.Status{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ListMeta:v1.ListMeta{SelfLink:"", ResourceVersion:"", Continue:"", RemainingItemCount:(*int64)(nil)}, Status:"Failure", Message:"admission webhook \"deny-unwanted-custom-resource-data.k8s.io\" denied the request: the custom resource contains unwanted data", Reason:"the custom resource contains unwanted data", Details:(*v1.StatusDetails)(nil), Code:400}}
W0521 16:02:24.885562 1 dispatcher.go:142] rejected by webhook "deny-unwanted-custom-resource-data.k8s.io": &errors.StatusError{ErrStatus:v1.Status{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ListMeta:v1.ListMeta{SelfLink:"", ResourceVersion:"", Continue:"", RemainingItemCount:(*int64)(nil)}, Status:"Failure", Message:"admission webhook \"deny-unwanted-custom-resource-data.k8s.io\" denied the request: the custom resource contains unwanted data", Reason:"the custom resource contains unwanted data", Details:(*v1.StatusDetails)(nil), Code:400}}
W0521 16:02:24.892885 1 dispatcher.go:142] rejected by webhook "deny-unwanted-custom-resource-data.k8s.io": &errors.StatusError{ErrStatus:v1.Status{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ListMeta:v1.ListMeta{SelfLink:"", ResourceVersion:"", Continue:"", RemainingItemCount:(*int64)(nil)}, Status:"Failure", Message:"admission webhook \"deny-unwanted-custom-resource-data.k8s.io\" denied the request: the custom resource cannot be deleted because it contains unwanted key and value", Reason:"the custom resource cannot be deleted because it contains unwanted key and value", Details:(*v1.StatusDetails)(nil), Code:400}}
E0521 16:02:25.721840 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused", err:(*net.OpError)(0xc00323e7d0)}
E0521 16:02:26.723464 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused", err:(*net.OpError)(0xc008195d10)}
E0521 16:02:27.721859 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused", err:(*net.OpError)(0xc0037f8d70)}
E0521 16:02:28.722690 1 status.go:71] apiserver received an error that is not 
an metav1.Status: &fmt.wrapError{msg:\"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused\", err:(*net.OpError)(0xc0077f8730)}\nE0521 16:02:29.722327 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:\"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused\", err:(*net.OpError)(0xc00880c140)}\nE0521 16:02:30.721382 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:\"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused\", err:(*net.OpError)(0xc007d0a050)}\nE0521 16:02:31.722089 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:\"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused\", err:(*net.OpError)(0xc0083af9f0)}\nE0521 16:02:32.721851 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:\"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused\", err:(*net.OpError)(0xc0064a7220)}\nE0521 16:02:33.541170 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:\"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused\", err:(*net.OpError)(0xc002edf0e0)}\nE0521 16:02:33.722373 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:\"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused\", err:(*net.OpError)(0xc008d89ea0)}\nE0521 16:02:34.545447 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:\"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused\", err:(*net.OpError)(0xc0090092c0)}\nE0521 16:02:34.722380 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:\"error trying to reach service: 
dial tcp 172.18.0.3:10252: connect: connection refused\", err:(*net.OpError)(0xc007eb7950)}\nE0521 16:02:35.546055 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:\"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused\", err:(*net.OpError)(0xc003263d10)}\nE0521 16:02:35.722140 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:\"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused\", err:(*net.OpError)(0xc008cd7a40)}\nE0521 16:02:36.544921 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:\"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused\", err:(*net.OpError)(0xc0069b0a50)}\nE0521 16:02:36.721525 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:\"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused\", err:(*net.OpError)(0xc007c61090)}\nE0521 16:02:37.546106 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:\"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused\", err:(*net.OpError)(0xc007ba6550)}\nE0521 16:02:37.722179 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:\"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused\", err:(*net.OpError)(0xc007497ae0)}\nE0521 16:02:38.545742 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:\"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused\", err:(*net.OpError)(0xc000a4b590)}\nE0521 16:02:38.722190 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:\"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused\", 
err:(*net.OpError)(0xc000a4bb80)}\nE0521 16:02:39.546034 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:\"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused\", err:(*net.OpError)(0xc006852550)}\nE0521 16:02:39.721676 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:\"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused\", err:(*net.OpError)(0xc0065984b0)}\nE0521 16:02:40.545341 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:\"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused\", err:(*net.OpError)(0xc003182aa0)}\nE0521 16:02:40.721889 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:\"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused\", err:(*net.OpError)(0xc0075d3540)}\nE0521 16:02:41.545453 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:\"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused\", err:(*net.OpError)(0xc00798f130)}\nE0521 16:02:41.721939 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:\"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused\", err:(*net.OpError)(0xc007fe6730)}\nE0521 16:02:42.545407 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:\"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused\", err:(*net.OpError)(0xc003676230)}\nE0521 16:02:42.722169 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:\"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused\", err:(*net.OpError)(0xc0089b1680)}\nE0521 16:02:43.544665 1 
status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:\"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused\", err:(*net.OpError)(0xc0037f85f0)}\nE0521 16:02:43.721882 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:\"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused\", err:(*net.OpError)(0xc008a16000)}\nE0521 16:02:44.545613 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:\"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused\", err:(*net.OpError)(0xc00905b040)}\nE0521 16:02:44.722507 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:\"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused\", err:(*net.OpError)(0xc00905b630)}\nE0521 16:02:45.544345 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:\"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused\", err:(*net.OpError)(0xc00323e1e0)}\nE0521 16:02:45.721909 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:\"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused\", err:(*net.OpError)(0xc0011b15e0)}\nE0521 16:02:46.544767 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:\"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused\", err:(*net.OpError)(0xc006446460)}\nE0521 16:02:46.721849 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:\"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused\", err:(*net.OpError)(0xc008c373b0)}\nE0521 16:02:47.545968 1 status.go:71] apiserver received an error that is not an metav1.Status: 
&fmt.wrapError{msg:\"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused\", err:(*net.OpError)(0xc008837b80)}\nE0521 16:02:47.721776 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:\"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused\", err:(*net.OpError)(0xc008deec30)}\nE0521 16:02:48.546178 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:\"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused\", err:(*net.OpError)(0xc0082979f0)}\nE0521 16:02:48.722976 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:\"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused\", err:(*net.OpError)(0xc00731d360)}\nE0521 16:02:49.545628 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:\"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused\", err:(*net.OpError)(0xc006556f00)}\nE0521 16:02:49.722739 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:\"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused\", err:(*net.OpError)(0xc006557040)}\nE0521 16:02:50.545559 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:\"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused\", err:(*net.OpError)(0xc00728dea0)}\nE0521 16:02:50.722335 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:\"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused\", err:(*net.OpError)(0xc00667ad20)}\nI0521 16:02:51.377449 1 client.go:360] parsed scheme: \"endpoint\"\nI0521 16:02:51.377483 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: 
[{https://127.0.0.1:2379 0 }]\nE0521 16:02:51.544814 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:\"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused\", err:(*net.OpError)(0xc008defdb0)}\nE0521 16:02:51.721386 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:\"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused\", err:(*net.OpError)(0xc007d182d0)}\nE0521 16:02:52.545156 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:\"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused\", err:(*net.OpError)(0xc0089aa500)}\nE0521 16:02:52.721678 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:\"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused\", err:(*net.OpError)(0xc00699f900)}\nI0521 16:02:53.042821 1 client.go:360] parsed scheme: \"passthrough\"\nI0521 16:02:53.042901 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }\nI0521 16:02:53.042917 1 clientconn.go:948] ClientConn switching balancer to \"pick_first\"\nE0521 16:02:53.546403 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:\"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused\", err:(*net.OpError)(0xc003768820)}\nE0521 16:02:53.721845 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:\"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused\", err:(*net.OpError)(0xc008a93810)}\nE0521 16:02:54.544831 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:\"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused\", 
err:(*net.OpError)(0xc008b85c20)}\nE0521 16:02:54.721337 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:\"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused\", err:(*net.OpError)(0xc006780ff0)}\nI0521 16:02:55.000406 1 controller.go:609] quota admission added evaluator for: e2e-test-crd-publish-openapi-8351-crds.crd-publish-openapi-test-unknown-at-root.example.com\nE0521 16:02:55.546512 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:\"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused\", err:(*net.OpError)(0xc00867a460)}\nE0521 16:02:55.722010 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:\"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused\", err:(*net.OpError)(0xc00891ee10)}\nI0521 16:02:56.208935 1 client.go:360] parsed scheme: \"endpoint\"\nI0521 16:02:56.208975 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }]\nI0521 16:02:56.223848 1 client.go:360] parsed scheme: \"endpoint\"\nI0521 16:02:56.223884 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }]\nI0521 16:02:56.236401 1 controller.go:609] quota admission added evaluator for: e2e-test-crd-webhook-8063-crds.stable.example.com\nE0521 16:02:56.544951 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:\"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused\", err:(*net.OpError)(0xc0090114a0)}\nE0521 16:02:56.721589 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:\"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused\", err:(*net.OpError)(0xc008558550)}\nW0521 16:02:56.802172 1 cacher.go:148] Terminating all watchers from cacher 
*unstructured.Unstructured\nE0521 16:02:57.545313 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:\"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused\", err:(*net.OpError)(0xc00729bb30)}\nE0521 16:02:57.721575 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:\"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused\", err:(*net.OpError)(0xc008d3d900)}\nE0521 16:02:58.541326 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:\"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused\", err:(*net.OpError)(0xc0062a71d0)}\nE0521 16:02:58.544371 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:\"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused\", err:(*net.OpError)(0xc009c62230)}\nE0521 16:02:58.721926 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:\"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused\", err:(*net.OpError)(0xc006753c70)}\nE0521 16:02:59.544788 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:\"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused\", err:(*net.OpError)(0xc008e834a0)}\nE0521 16:02:59.544789 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:\"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused\", err:(*net.OpError)(0xc00891ef50)}\nE0521 16:02:59.722924 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:\"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused\", err:(*net.OpError)(0xc009c7e3c0)}\nE0521 16:03:00.545987 1 status.go:71] 
apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:\"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused\", err:(*net.OpError)(0xc01ba3c9b0)}\nE0521 16:03:00.545987 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:\"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused\", err:(*net.OpError)(0xc008f17860)}\nE0521 16:03:00.721711 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:\"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused\", err:(*net.OpError)(0xc007039a40)}\nE0521 16:03:01.545399 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:\"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused\", err:(*net.OpError)(0xc007e8ef00)}\nE0521 16:03:01.545455 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:\"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused\", err:(*net.OpError)(0xc008727630)}\nE0521 16:03:01.722263 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:\"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused\", err:(*net.OpError)(0xc007e8f9f0)}\nE0521 16:03:02.546084 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:\"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused\", err:(*net.OpError)(0xc009119ef0)}\nE0521 16:03:02.546225 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:\"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused\", err:(*net.OpError)(0xc00773ae10)}\nE0521 16:03:02.722369 1 status.go:71] apiserver received an error that is not an metav1.Status: 
&fmt.wrapError{msg:\"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused\", err:(*net.OpError)(0xc0095e7090)}\nE0521 16:03:03.544428 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:\"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused\", err:(*net.OpError)(0xc008660370)}\nE0521 16:03:03.544434 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:\"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused\", err:(*net.OpError)(0xc007d18190)}\nE0521 16:03:03.722365 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:\"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused\", err:(*net.OpError)(0xc0031bbea0)}\nE0521 16:03:04.546167 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:\"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused\", err:(*net.OpError)(0xc008d06f00)}\nE0521 16:03:04.546272 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:\"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused\", err:(*net.OpError)(0xc0037bc960)}\nE0521 16:03:04.722589 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:\"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused\", err:(*net.OpError)(0xc008b98e60)}\nE0521 16:03:05.547277 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:\"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused\", err:(*net.OpError)(0xc003897090)}\nE0521 16:03:05.547582 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:\"error trying to reach service: dial tcp 
172.18.0.3:10252: connect: connection refused\", err:(*net.OpError)(0xc006931400)}\nE0521 16:03:05.721531 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:\"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused\", err:(*net.OpError)(0xc00878e910)}\nE0521 16:03:06.546111 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:\"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused\", err:(*net.OpError)(0xc009017d10)}\nE0521 16:03:06.546224 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:\"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused\", err:(*net.OpError)(0xc0075dc690)}\nE0521 16:03:06.722684 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:\"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused\", err:(*net.OpError)(0xc008a88460)}\nE0521 16:03:07.545869 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:\"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused\", err:(*net.OpError)(0xc0076b5270)}\nE0521 16:03:07.546339 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:\"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused\", err:(*net.OpError)(0xc0070bb450)}\nE0521 16:03:07.722315 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:\"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused\", err:(*net.OpError)(0xc00655f450)}\nE0521 16:03:08.545679 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:\"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused\", 
err:(*net.OpError)(0xc003635c70)}\nE0521 16:03:08.545748 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:\"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused\", err:(*net.OpError)(0xc0084c1d10)}\nE0521 16:03:08.722748 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:\"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused\", err:(*net.OpError)(0xc0078fc0a0)}\nE0521 16:03:09.545231 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:\"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused\", err:(*net.OpError)(0xc0070c0a50)}\nE0521 16:03:09.545231 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:\"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused\", err:(*net.OpError)(0xc00843e410)}\nE0521 16:03:09.722346 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:\"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused\", err:(*net.OpError)(0xc0062074f0)}\nE0521 16:03:10.546267 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:\"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused\", err:(*net.OpError)(0xc007837bd0)}\nE0521 16:03:10.546272 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:\"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused\", err:(*net.OpError)(0xc007e494a0)}\nE0521 16:03:10.723104 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:\"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused\", err:(*net.OpError)(0xc007e49630)}\nE0521 16:03:11.545120 1 
status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:\"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused\", err:(*net.OpError)(0xc00811f130)}\nE0521 16:03:11.545120 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:\"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused\", err:(*net.OpError)(0xc008f16aa0)}\nE0521 16:03:11.722042 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:\"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused\", err:(*net.OpError)(0xc0089ed220)}\nI0521 16:03:12.027516 1 client.go:360] parsed scheme: \"endpoint\"\nI0521 16:03:12.027567 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }]\nI0521 16:03:12.041841 1 client.go:360] parsed scheme: \"endpoint\"\nI0521 16:03:12.041869 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }]\nE0521 16:03:12.545748 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:\"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused\", err:(*net.OpError)(0xc008e48f00)}\nE0521 16:03:12.545861 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:\"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused\", err:(*net.OpError)(0xc0076617c0)}\nE0521 16:03:12.722432 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:\"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused\", err:(*net.OpError)(0xc007d4a230)}\nE0521 16:03:13.545250 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:\"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection 
refused\", err:(*net.OpError)(0xc00905a460)}\nE0521 16:03:13.545251 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:\"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused\", err:(*net.OpError)(0xc003222b90)}\nE0521 16:03:13.721386 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:\"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused\", err:(*net.OpError)(0xc007aadbd0)}\nE0521 16:03:14.544704 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:\"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused\", err:(*net.OpError)(0xc008b99180)}\nE0521 16:03:14.545204 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:\"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused\", err:(*net.OpError)(0xc008ed14a0)}\nE0521 16:03:14.721889 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:\"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused\", err:(*net.OpError)(0xc007f6c0f0)}\nE0521 16:03:15.544656 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:\"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused\", err:(*net.OpError)(0xc007ecfa40)}\nE0521 16:03:15.544656 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:\"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused\", err:(*net.OpError)(0xc0030f4a50)}\nE0521 16:03:15.721767 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:\"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused\", err:(*net.OpError)(0xc0061875e0)}\nE0521 16:03:16.545348 1 
status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:\"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused\", err:(*net.OpError)(0xc00362f180)}\nE0521 16:03:16.545348 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:\"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused\", err:(*net.OpError)(0xc0085c0960)}\nE0521 16:03:16.722010 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:\"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused\", err:(*net.OpError)(0xc007ce25a0)}\nE0521 16:03:17.544892 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:\"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused\", err:(*net.OpError)(0xc008dcf9a0)}\nE0521 16:03:17.544901 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:\"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused\", err:(*net.OpError)(0xc008dc34a0)}\nE0521 16:03:17.721774 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:\"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused\", err:(*net.OpError)(0xc008df7cc0)}\nE0521 16:03:18.545672 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:\"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused\", err:(*net.OpError)(0xc0090d9f40)}\nE0521 16:03:18.545688 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:\"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused\", err:(*net.OpError)(0xc002ecfc70)}\nE0521 16:03:18.722363 1 status.go:71] apiserver received an error that is not an metav1.Status: 
&fmt.wrapError{msg:\"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused\", err:(*net.OpError)(0xc007ff3220)}\nE0521 16:03:19.545305 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:\"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused\", err:(*net.OpError)(0xc0086617c0)}\nE0521 16:03:19.545305 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:\"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused\", err:(*net.OpError)(0xc007d18230)}\nE0521 16:03:19.721892 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:\"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused\", err:(*net.OpError)(0xc008ce6c30)}\nE0521 16:03:20.545086 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:\"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused\", err:(*net.OpError)(0xc0079e0410)}\nE0521 16:03:20.545200 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:\"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused\", err:(*net.OpError)(0xc008e659f0)}\nE0521 16:03:20.722045 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:\"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused\", err:(*net.OpError)(0xc00696fa40)}\nE0521 16:03:21.545523 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:\"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused\", err:(*net.OpError)(0xc0079828c0)}\nE0521 16:03:21.545533 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:\"error trying to reach service: dial tcp 
172.18.0.3:10252: connect: connection refused\", err:(*net.OpError)(0xc007f6c6e0)}\nE0521 16:03:21.721980 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:\"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused\", err:(*net.OpError)(0xc0064bb360)}\nE0521 16:03:22.545153 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:\"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused\", err:(*net.OpError)(0xc0032cb040)}\nE0521 16:03:22.545190 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:\"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused\", err:(*net.OpError)(0xc0031eb9a0)}\nE0521 16:03:22.721700 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:\"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused\", err:(*net.OpError)(0xc0090df8b0)}\nI0521 16:03:22.990314 1 client.go:360] parsed scheme: \"endpoint\"\nI0521 16:03:22.990362 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }]\nI0521 16:03:23.006508 1 client.go:360] parsed scheme: \"endpoint\"\nI0521 16:03:23.006546 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }]\nE0521 16:03:23.546030 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:\"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused\", err:(*net.OpError)(0xc0087ff590)}\nE0521 16:03:23.546032 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:\"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused\", err:(*net.OpError)(0xc0076cec30)}\nE0521 16:03:23.721895 1 status.go:71] apiserver received an error that is not an metav1.Status: 
&fmt.wrapError{msg:\"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused\", err:(*net.OpError)(0xc00323e050)}\nE0521 16:03:23.725170 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:\"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused\", err:(*net.OpError)(0xc0001c4370)}\nE0521 16:03:24.545141 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:\"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused\", err:(*net.OpError)(0xc0075aebe0)}\nE0521 16:03:24.545287 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:\"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused\", err:(*net.OpError)(0xc007cb78b0)}\nE0521 16:03:25.545200 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:\"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused\", err:(*net.OpError)(0xc008dda0a0)}\nE0521 16:03:25.545250 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:\"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused\", err:(*net.OpError)(0xc006265950)}\nE0521 16:03:26.545197 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:\"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused\", err:(*net.OpError)(0xc007971c70)}\nE0521 16:03:26.545422 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:\"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused\", err:(*net.OpError)(0xc00314f310)}\nI0521 16:03:26.548864 1 trace.go:205] Trace[1303474853]: \"Call validating webhook\" 
configuration:webhook-7612,webhook:validating-is-webhook-configuration-ready.k8s.io,resource:/v1, Resource=configmaps,subresource:,operation:CREATE,UID:b88843c1-77d4-424c-bfc5-bc7a4d64e384 (21-May-2021 16:03:25.509) (total time: 1039ms):\nTrace[1303474853]: ---\"Request completed\" 1038ms (16:03:00.548)\nTrace[1303474853]: [1.039019944s] [1.039019944s] END\nW0521 16:03:26.548904 1 dispatcher.go:142] rejected by webhook \"validating-is-webhook-configuration-ready.k8s.io\": &errors.StatusError{ErrStatus:v1.Status{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ListMeta:v1.ListMeta{SelfLink:\"\", ResourceVersion:\"\", Continue:\"\", RemainingItemCount:(*int64)(nil)}, Status:\"Failure\", Message:\"admission webhook \\\"validating-is-webhook-configuration-ready.k8s.io\\\" denied the request: this webhook denies all requests\", Reason:\"\", Details:(*v1.StatusDetails)(nil), Code:400}}\nI0521 16:03:26.549177 1 trace.go:205] Trace[1079003646]: \"Create\" url:/api/v1/namespaces/webhook-7612-markers/configmaps,user-agent:e2e.test/v1.19.11 (linux/amd64) kubernetes/c6a2f08 -- [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance],client:172.18.0.1 (21-May-2021 16:03:25.508) (total time: 1040ms):\nTrace[1079003646]: [1.040334074s] [1.040334074s] END\nW0521 16:03:26.569531 1 dispatcher.go:142] rejected by webhook \"deny-unwanted-pod-container-name-and-label.k8s.io\": &errors.StatusError{ErrStatus:v1.Status{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ListMeta:v1.ListMeta{SelfLink:\"\", ResourceVersion:\"\", Continue:\"\", RemainingItemCount:(*int64)(nil)}, Status:\"Failure\", Message:\"admission webhook \\\"deny-unwanted-pod-container-name-and-label.k8s.io\\\" denied the request: the pod contains unwanted label; the pod contains unwanted container name;\", Reason:\"\", Details:(*v1.StatusDetails)(nil), Code:400}}\nW0521 16:03:26.901940 1 cacher.go:148] Terminating all watchers from cacher 
*unstructured.Unstructured\nE0521 16:03:27.545429 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:\"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused\", err:(*net.OpError)(0xc0075bcbe0)}\nE0521 16:03:27.545644 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:\"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused\", err:(*net.OpError)(0xc006340d70)}\nI0521 16:03:28.335718 1 client.go:360] parsed scheme: \"passthrough\"\nI0521 16:03:28.335785 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }\nI0521 16:03:28.335800 1 clientconn.go:948] ClientConn switching balancer to \"pick_first\"\nE0521 16:03:28.545124 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:\"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused\", err:(*net.OpError)(0xc008e7bf90)}\nE0521 16:03:28.545124 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:\"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused\", err:(*net.OpError)(0xc0074a1860)}\nE0521 16:03:29.545719 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:\"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused\", err:(*net.OpError)(0xc0076ce370)}\nE0521 16:03:29.545719 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:\"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused\", err:(*net.OpError)(0xc0084efae0)}\nE0521 16:03:30.544864 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:\"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused\", 
err:(*net.OpError)(0xc0034a25f0)}\nE0521 16:03:30.545296 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:\"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused\", err:(*net.OpError)(0xc0037c7950)}\nE0521 16:03:31.545772 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:\"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused\", err:(*net.OpError)(0xc00867acd0)}\nE0521 16:03:31.545795 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:\"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused\", err:(*net.OpError)(0xc0078f7680)}\nE0521 16:03:32.544895 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:\"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused\", err:(*net.OpError)(0xc008024b40)}\nE0521 16:03:32.544895 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:\"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused\", err:(*net.OpError)(0xc007684b90)}\nE0521 16:03:33.547469 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:\"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused\", err:(*net.OpError)(0xc0034605a0)}\nE0521 16:03:33.550200 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:\"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused\", err:(*net.OpError)(0xc003460ff0)}\nE0521 16:03:33.554024 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:\"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused\", err:(*net.OpError)(0xc003461950)}\nE0521 16:03:34.545767 1 
status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:\"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused\", err:(*net.OpError)(0xc008e6df90)}\nE0521 16:03:35.545996 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:\"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused\", err:(*net.OpError)(0xc0079ad090)}\nE0521 16:03:36.546076 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:\"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused\", err:(*net.OpError)(0xc008c7b090)}\nI0521 16:03:36.571967 1 trace.go:205] Trace[1680506185]: \"Call validating webhook\" configuration:webhook-7612,webhook:deny-unwanted-pod-container-name-and-label.k8s.io,resource:/v1, Resource=pods,subresource:,operation:CREATE,UID:d6b721eb-722f-4fa5-ba03-0f22299674e5 (21-May-2021 16:03:26.571) (total time: 10000ms):\nTrace[1680506185]: [10.000369237s] [10.000369237s] END\nW0521 16:03:36.572010 1 dispatcher.go:134] Failed calling webhook, failing closed deny-unwanted-pod-container-name-and-label.k8s.io: failed calling webhook \"deny-unwanted-pod-container-name-and-label.k8s.io\": Post \"https://e2e-test-webhook.webhook-7612.svc:8443/pods?timeout=10s\": context deadline exceeded\nI0521 16:03:36.572429 1 trace.go:205] Trace[1162820630]: \"Create\" url:/api/v1/namespaces/webhook-7612/pods,user-agent:e2e.test/v1.19.11 (linux/amd64) kubernetes/c6a2f08 -- [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance],client:172.18.0.1 (21-May-2021 16:03:26.570) (total time: 10001ms):\nTrace[1162820630]: [10.001562169s] [10.001562169s] END\nW0521 16:03:36.590296 1 dispatcher.go:142] rejected by webhook \"deny-unwanted-configmap-data.k8s.io\": &errors.StatusError{ErrStatus:v1.Status{TypeMeta:v1.TypeMeta{Kind:\"\", 
APIVersion:\"\"}, ListMeta:v1.ListMeta{SelfLink:\"\", ResourceVersion:\"\", Continue:\"\", RemainingItemCount:(*int64)(nil)}, Status:\"Failure\", Message:\"admission webhook \\\"deny-unwanted-configmap-data.k8s.io\\\" denied the request: the configmap contains unwanted key and value\", Reason:\"the configmap contains unwanted key and value\", Details:(*v1.StatusDetails)(nil), Code:400}}\nW0521 16:03:36.630383 1 dispatcher.go:129] Failed calling webhook, failing open fail-open.k8s.io: failed calling webhook \"fail-open.k8s.io\": Post \"https://e2e-test-webhook.webhook-7612.svc:8443/configmaps?timeout=10s\": x509: certificate signed by unknown authority\nE0521 16:03:36.630425 1 dispatcher.go:130] failed calling webhook \"fail-open.k8s.io\": Post \"https://e2e-test-webhook.webhook-7612.svc:8443/configmaps?timeout=10s\": x509: certificate signed by unknown authority\nW0521 16:03:36.641219 1 dispatcher.go:129] Failed calling webhook, failing open fail-open.k8s.io: failed calling webhook \"fail-open.k8s.io\": Post \"https://e2e-test-webhook.webhook-7612.svc:8443/configmaps?timeout=10s\": x509: certificate signed by unknown authority\nE0521 16:03:36.641250 1 dispatcher.go:130] failed calling webhook \"fail-open.k8s.io\": Post \"https://e2e-test-webhook.webhook-7612.svc:8443/configmaps?timeout=10s\": x509: certificate signed by unknown authority\nW0521 16:03:36.648328 1 dispatcher.go:142] rejected by webhook \"deny-unwanted-configmap-data.k8s.io\": &errors.StatusError{ErrStatus:v1.Status{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ListMeta:v1.ListMeta{SelfLink:\"\", ResourceVersion:\"\", Continue:\"\", RemainingItemCount:(*int64)(nil)}, Status:\"Failure\", Message:\"admission webhook \\\"deny-unwanted-configmap-data.k8s.io\\\" denied the request: the configmap contains unwanted key and value\", Reason:\"the configmap contains unwanted key and value\", Details:(*v1.StatusDetails)(nil), Code:400}}\nW0521 16:03:36.651022 1 dispatcher.go:142] rejected by webhook 
\"deny-unwanted-configmap-data.k8s.io\": &errors.StatusError{ErrStatus:v1.Status{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ListMeta:v1.ListMeta{SelfLink:\"\", ResourceVersion:\"\", Continue:\"\", RemainingItemCount:(*int64)(nil)}, Status:\"Failure\", Message:\"admission webhook \\\"deny-unwanted-configmap-data.k8s.io\\\" denied the request: the configmap contains unwanted key and value\", Reason:\"the configmap contains unwanted key and value\", Details:(*v1.StatusDetails)(nil), Code:400}}\nW0521 16:03:36.654677 1 dispatcher.go:142] rejected by webhook \"deny-unwanted-configmap-data.k8s.io\": &errors.StatusError{ErrStatus:v1.Status{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ListMeta:v1.ListMeta{SelfLink:\"\", ResourceVersion:\"\", Continue:\"\", RemainingItemCount:(*int64)(nil)}, Status:\"Failure\", Message:\"admission webhook \\\"deny-unwanted-configmap-data.k8s.io\\\" denied the request: the configmap contains unwanted key and value\", Reason:\"the configmap contains unwanted key and value\", Details:(*v1.StatusDetails)(nil), Code:400}}\nW0521 16:03:36.657014 1 dispatcher.go:142] rejected by webhook \"deny-unwanted-configmap-data.k8s.io\": &errors.StatusError{ErrStatus:v1.Status{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ListMeta:v1.ListMeta{SelfLink:\"\", ResourceVersion:\"\", Continue:\"\", RemainingItemCount:(*int64)(nil)}, Status:\"Failure\", Message:\"admission webhook \\\"deny-unwanted-configmap-data.k8s.io\\\" denied the request: the configmap contains unwanted key and value\", Reason:\"the configmap contains unwanted key and value\", Details:(*v1.StatusDetails)(nil), Code:400}}\nW0521 16:03:36.667357 1 dispatcher.go:129] Failed calling webhook, failing open fail-open.k8s.io: failed calling webhook \"fail-open.k8s.io\": Post \"https://e2e-test-webhook.webhook-7612.svc:8443/configmaps?timeout=10s\": x509: certificate signed by unknown authority\nE0521 16:03:36.667397 1 dispatcher.go:130] failed calling webhook \"fail-open.k8s.io\": 
Post \"https://e2e-test-webhook.webhook-7612.svc:8443/configmaps?timeout=10s\": x509: certificate signed by unknown authority\nE0521 16:03:37.545771 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:\"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused\", err:(*net.OpError)(0xc0076b59a0)}\nE0521 16:03:38.545774 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:\"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused\", err:(*net.OpError)(0xc006548f50)}\nE0521 16:03:39.545188 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:\"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused\", err:(*net.OpError)(0xc00912f810)}\nE0521 16:03:40.545452 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:\"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused\", err:(*net.OpError)(0xc00904a910)}\nE0521 16:03:41.545293 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:\"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused\", err:(*net.OpError)(0xc0036921e0)}\nE0521 16:03:42.545001 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:\"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused\", err:(*net.OpError)(0xc0080c5f90)}\nE0521 16:03:43.544672 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:\"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused\", err:(*net.OpError)(0xc006d2b9f0)}\nE0521 16:03:44.545114 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:\"error trying to reach service: dial tcp 172.18.0.3:10252: connect: 
connection refused\", err:(*net.OpError)(0xc006a81450)}\nE0521 16:03:45.545524 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:\"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused\", err:(*net.OpError)(0xc007e72320)}\nE0521 16:03:46.544838 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:\"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused\", err:(*net.OpError)(0xc002feb9a0)}\nI0521 16:03:47.136672 1 client.go:360] parsed scheme: \"endpoint\"\nI0521 16:03:47.136709 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }]\nI0521 16:03:47.150199 1 client.go:360] parsed scheme: \"endpoint\"\nI0521 16:03:47.150231 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }]\nI0521 16:03:47.406685 1 controller.go:609] quota admission added evaluator for: e2e-test-crd-webhook-8499-crds.stable.example.com\nI0521 16:03:47.484059 1 client.go:360] parsed scheme: \"endpoint\"\nI0521 16:03:47.484096 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }]\nI0521 16:03:47.498941 1 client.go:360] parsed scheme: \"endpoint\"\nI0521 16:03:47.499003 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }]\nE0521 16:03:47.545120 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:\"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused\", err:(*net.OpError)(0xc007190d70)}\nE0521 16:03:48.546030 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:\"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused\", err:(*net.OpError)(0xc006931270)}\nI0521 16:03:49.431517 1 trace.go:205] Trace[61854499]: \"Call validating webhook\" 
configuration:webhook-7196-4,webhook:validating-is-webhook-configuration-ready.k8s.io,resource:/v1, Resource=configmaps,subresource:,operation:CREATE,UID:b21f9ff3-ad2d-444f-844f-55963d8d5a86 (21-May-2021 16:03:48.413) (total time: 1017ms):\nTrace[61854499]: ---\"Request completed\" 1017ms (16:03:00.431)\nTrace[61854499]: [1.017952641s] [1.017952641s] END\nW0521 16:03:49.431571 1 dispatcher.go:142] rejected by webhook \"validating-is-webhook-configuration-ready.k8s.io\": &errors.StatusError{ErrStatus:v1.Status{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ListMeta:v1.ListMeta{SelfLink:\"\", ResourceVersion:\"\", Continue:\"\", RemainingItemCount:(*int64)(nil)}, Status:\"Failure\", Message:\"admission webhook \\\"validating-is-webhook-configuration-ready.k8s.io\\\" denied the request: this webhook denies all requests\", Reason:\"\", Details:(*v1.StatusDetails)(nil), Code:400}}\nI0521 16:03:49.431526 1 trace.go:205] Trace[492017415]: \"Call validating webhook\" configuration:webhook-7196-3,webhook:validating-is-webhook-configuration-ready.k8s.io,resource:/v1, Resource=configmaps,subresource:,operation:CREATE,UID:ffdcb3b4-d7ad-435c-b124-a12f184a0bf5 (21-May-2021 16:03:48.413) (total time: 1017ms):\nTrace[492017415]: ---\"Request completed\" 1017ms (16:03:00.431)\nTrace[492017415]: [1.017890481s] [1.017890481s] END\nW0521 16:03:49.431629 1 dispatcher.go:142] rejected by webhook \"validating-is-webhook-configuration-ready.k8s.io\": &errors.StatusError{ErrStatus:v1.Status{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ListMeta:v1.ListMeta{SelfLink:\"\", ResourceVersion:\"\", Continue:\"\", RemainingItemCount:(*int64)(nil)}, Status:\"Failure\", Message:\"admission webhook \\\"validating-is-webhook-configuration-ready.k8s.io\\\" denied the request: this webhook denies all requests\", Reason:\"\", Details:(*v1.StatusDetails)(nil), Code:400}}\nI0521 16:03:49.432539 1 trace.go:205] Trace[1558760267]: \"Call validating webhook\" 
configuration:webhook-7196-1,webhook:validating-is-webhook-configuration-ready.k8s.io,resource:/v1, Resource=configmaps,subresource:,operation:CREATE,UID:fe18c165-86ca-4f83-9de1-fcb39eff5c02 (21-May-2021 16:03:48.413) (total time: 1018ms):\nTrace[1558760267]: ---\"Request completed\" 1018ms (16:03:00.432)\nTrace[1558760267]: [1.018924311s] [1.018924311s] END\nW0521 16:03:49.432585 1 dispatcher.go:142] rejected by webhook \"validating-is-webhook-configuration-ready.k8s.io\": &errors.StatusError{ErrStatus:v1.Status{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ListMeta:v1.ListMeta{SelfLink:\"\", ResourceVersion:\"\", Continue:\"\", RemainingItemCount:(*int64)(nil)}, Status:\"Failure\", Message:\"admission webhook \\\"validating-is-webhook-configuration-ready.k8s.io\\\" denied the request: this webhook denies all requests\", Reason:\"\", Details:(*v1.StatusDetails)(nil), Code:400}}\nI0521 16:03:49.433150 1 trace.go:205] Trace[134943008]: \"Call validating webhook\" configuration:webhook-7196-0,webhook:validating-is-webhook-configuration-ready.k8s.io,resource:/v1, Resource=configmaps,subresource:,operation:CREATE,UID:524f3c7b-d401-48e1-8e05-695e94f891ca (21-May-2021 16:03:48.413) (total time: 1019ms):\nTrace[134943008]: ---\"Request completed\" 1019ms (16:03:00.433)\nTrace[134943008]: [1.019492316s] [1.019492316s] END\nI0521 16:03:49.433211 1 trace.go:205] Trace[1401871633]: \"Call validating webhook\" configuration:webhook-7196-5,webhook:validating-is-webhook-configuration-ready.k8s.io,resource:/v1, Resource=configmaps,subresource:,operation:CREATE,UID:d607278f-a618-404b-a354-ea9d626f7074 (21-May-2021 16:03:48.413) (total time: 1019ms):\nTrace[1401871633]: ---\"Request completed\" 1019ms (16:03:00.433)\nTrace[1401871633]: [1.019511918s] [1.019511918s] END\nW0521 16:03:49.433250 1 dispatcher.go:142] rejected by webhook \"validating-is-webhook-configuration-ready.k8s.io\": &errors.StatusError{ErrStatus:v1.Status{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, 
ListMeta:v1.ListMeta{SelfLink:\"\", ResourceVersion:\"\", Continue:\"\", RemainingItemCount:(*int64)(nil)}, Status:\"Failure\", Message:\"admission webhook \\\"validating-is-webhook-configuration-ready.k8s.io\\\" denied the request: this webhook denies all requests\", Reason:\"\", Details:(*v1.StatusDetails)(nil), Code:400}}\nW0521 16:03:49.433188 1 dispatcher.go:142] rejected by webhook \"validating-is-webhook-configuration-ready.k8s.io\": &errors.StatusError{ErrStatus:v1.Status{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ListMeta:v1.ListMeta{SelfLink:\"\", ResourceVersion:\"\", Continue:\"\", RemainingItemCount:(*int64)(nil)}, Status:\"Failure\", Message:\"admission webhook \\\"validating-is-webhook-configuration-ready.k8s.io\\\" denied the request: this webhook denies all requests\", Reason:\"\", Details:(*v1.StatusDetails)(nil), Code:400}}\nI0521 16:03:49.433490 1 trace.go:205] Trace[266076039]: \"Call validating webhook\" configuration:webhook-7196-7,webhook:validating-is-webhook-configuration-ready.k8s.io,resource:/v1, Resource=configmaps,subresource:,operation:CREATE,UID:92595094-12db-4fa9-a43b-b08cb583dfdc (21-May-2021 16:03:48.413) (total time: 1019ms):\nTrace[266076039]: ---\"Request completed\" 1019ms (16:03:00.433)\nTrace[266076039]: [1.019949824s] [1.019949824s] END\nI0521 16:03:49.433512 1 trace.go:205] Trace[660035677]: \"Call validating webhook\" configuration:webhook-7196-6,webhook:validating-is-webhook-configuration-ready.k8s.io,resource:/v1, Resource=configmaps,subresource:,operation:CREATE,UID:a8b385a1-3748-425c-8a1f-93ff3ae15698 (21-May-2021 16:03:48.413) (total time: 1019ms):\nTrace[660035677]: ---\"Request completed\" 1019ms (16:03:00.433)\nTrace[660035677]: [1.019772002s] [1.019772002s] END\nW0521 16:03:49.433522 1 dispatcher.go:142] rejected by webhook \"validating-is-webhook-configuration-ready.k8s.io\": &errors.StatusError{ErrStatus:v1.Status{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ListMeta:v1.ListMeta{SelfLink:\"\", 
ResourceVersion:\"\", Continue:\"\", RemainingItemCount:(*int64)(nil)}, Status:\"Failure\", Message:\"admission webhook \\\"validating-is-webhook-configuration-ready.k8s.io\\\" denied the request: this webhook denies all requests\", Reason:\"\", Details:(*v1.StatusDetails)(nil), Code:400}}\nW0521 16:03:49.433534 1 dispatcher.go:142] rejected by webhook \"validating-is-webhook-configuration-ready.k8s.io\": &errors.StatusError{ErrStatus:v1.Status{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ListMeta:v1.ListMeta{SelfLink:\"\", ResourceVersion:\"\", Continue:\"\", RemainingItemCount:(*int64)(nil)}, Status:\"Failure\", Message:\"admission webhook \\\"validating-is-webhook-configuration-ready.k8s.io\\\" denied the request: this webhook denies all requests\", Reason:\"\", Details:(*v1.StatusDetails)(nil), Code:400}}\nI0521 16:03:49.434030 1 trace.go:205] Trace[1045527073]: \"Call validating webhook\" configuration:webhook-7196-2,webhook:validating-is-webhook-configuration-ready.k8s.io,resource:/v1, Resource=configmaps,subresource:,operation:CREATE,UID:46840d53-52c2-4a48-a589-4f4f150cd66a (21-May-2021 16:03:48.413) (total time: 1020ms):\nTrace[1045527073]: ---\"Request completed\" 1020ms (16:03:00.433)\nTrace[1045527073]: [1.020508977s] [1.020508977s] END\nW0521 16:03:49.434064 1 dispatcher.go:142] rejected by webhook \"validating-is-webhook-configuration-ready.k8s.io\": &errors.StatusError{ErrStatus:v1.Status{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ListMeta:v1.ListMeta{SelfLink:\"\", ResourceVersion:\"\", Continue:\"\", RemainingItemCount:(*int64)(nil)}, Status:\"Failure\", Message:\"admission webhook \\\"validating-is-webhook-configuration-ready.k8s.io\\\" denied the request: this webhook denies all requests\", Reason:\"\", Details:(*v1.StatusDetails)(nil), Code:400}}\nI0521 16:03:49.434316 1 trace.go:205] Trace[917550389]: \"Call validating webhook\" configuration:webhook-7196-8,webhook:validating-is-webhook-configuration-ready.k8s.io,resource:/v1, 
Resource=configmaps,subresource:,operation:CREATE,UID:0079c962-3c0c-44ce-9e6b-b429321a444b (21-May-2021 16:03:48.413) (total time: 1020ms):
Trace[917550389]: ---"Request completed" 1020ms (16:03:00.434)
Trace[917550389]: [1.020764655s] [1.020764655s] END
W0521 16:03:49.434353 1 dispatcher.go:142] rejected by webhook "validating-is-webhook-configuration-ready.k8s.io": &errors.StatusError{ErrStatus:v1.Status{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ListMeta:v1.ListMeta{SelfLink:"", ResourceVersion:"", Continue:"", RemainingItemCount:(*int64)(nil)}, Status:"Failure", Message:"admission webhook \"validating-is-webhook-configuration-ready.k8s.io\" denied the request: this webhook denies all requests", Reason:"", Details:(*v1.StatusDetails)(nil), Code:400}}
I0521 16:03:49.434441 1 trace.go:205] Trace[11373885]: "Call validating webhook" configuration:webhook-7196-9,webhook:validating-is-webhook-configuration-ready.k8s.io,resource:/v1, Resource=configmaps,subresource:,operation:CREATE,UID:846c8a4e-0bb1-44d0-af94-a3b15bfd4dcf (21-May-2021 16:03:48.413) (total time: 1021ms):
Trace[11373885]: ---"Request completed" 1020ms (16:03:00.434)
Trace[11373885]: [1.021001441s] [1.021001441s] END
W0521 16:03:49.434472 1 dispatcher.go:142] rejected by webhook "validating-is-webhook-configuration-ready.k8s.io": &errors.StatusError{ErrStatus:v1.Status{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ListMeta:v1.ListMeta{SelfLink:"", ResourceVersion:"", Continue:"", RemainingItemCount:(*int64)(nil)}, Status:"Failure", Message:"admission webhook \"validating-is-webhook-configuration-ready.k8s.io\" denied the request: this webhook denies all requests", Reason:"", Details:(*v1.StatusDetails)(nil), Code:400}}
E0521 16:03:49.434539 1 dispatcher.go:159] admission webhook "validating-is-webhook-configuration-ready.k8s.io" denied the request: this webhook denies all requests
E0521 16:03:49.434561 1 dispatcher.go:159] admission webhook "validating-is-webhook-configuration-ready.k8s.io" denied the request: this webhook denies all requests
E0521 16:03:49.435642 1 dispatcher.go:159] admission webhook "validating-is-webhook-configuration-ready.k8s.io" denied the request: this webhook denies all requests
E0521 16:03:49.436734 1 dispatcher.go:159] admission webhook "validating-is-webhook-configuration-ready.k8s.io" denied the request: this webhook denies all requests
E0521 16:03:49.437864 1 dispatcher.go:159] admission webhook "validating-is-webhook-configuration-ready.k8s.io" denied the request: this webhook denies all requests
E0521 16:03:49.438912 1 dispatcher.go:159] admission webhook "validating-is-webhook-configuration-ready.k8s.io" denied the request: this webhook denies all requests
E0521 16:03:49.440056 1 dispatcher.go:159] admission webhook "validating-is-webhook-configuration-ready.k8s.io" denied the request: this webhook denies all requests
E0521 16:03:49.441179 1 dispatcher.go:159] admission webhook "validating-is-webhook-configuration-ready.k8s.io" denied the request: this webhook denies all requests
E0521 16:03:49.442296 1 dispatcher.go:159] admission webhook "validating-is-webhook-configuration-ready.k8s.io" denied the request: this webhook denies all requests
I0521 16:03:49.443535 1 trace.go:205] Trace[491423215]: "Create" url:/api/v1/namespaces/webhook-7196-markers/configmaps,user-agent:e2e.test/v1.19.11 (linux/amd64) kubernetes/c6a2f08 -- [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance],client:172.18.0.1 (21-May-2021 16:03:48.412) (total time: 1031ms):
Trace[491423215]: [1.031128172s] [1.031128172s] END
W0521 16:03:49.459711 1 dispatcher.go:142] rejected by webhook "deny-unwanted-configmap-data.k8s.io": &errors.StatusError{ErrStatus:v1.Status{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ListMeta:v1.ListMeta{SelfLink:"", ResourceVersion:"", Continue:"", RemainingItemCount:(*int64)(nil)}, Status:"Failure", Message:"admission webhook \"deny-unwanted-configmap-data.k8s.io\" denied the request: the configmap contains unwanted key and value", Reason:"the configmap contains unwanted key and value", Details:(*v1.StatusDetails)(nil), Code:400}}
W0521 16:03:49.459755 1 dispatcher.go:142] rejected by webhook "deny-unwanted-configmap-data.k8s.io": &errors.StatusError{ErrStatus:v1.Status{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ListMeta:v1.ListMeta{SelfLink:"", ResourceVersion:"", Continue:"", RemainingItemCount:(*int64)(nil)}, Status:"Failure", Message:"admission webhook \"deny-unwanted-configmap-data.k8s.io\" denied the request: the configmap contains unwanted key and value", Reason:"the configmap contains unwanted key and value", Details:(*v1.StatusDetails)(nil), Code:400}}
W0521 16:03:49.459933 1 dispatcher.go:142] rejected by webhook "deny-unwanted-configmap-data.k8s.io": &errors.StatusError{ErrStatus:v1.Status{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ListMeta:v1.ListMeta{SelfLink:"", ResourceVersion:"", Continue:"", RemainingItemCount:(*int64)(nil)}, Status:"Failure", Message:"admission webhook \"deny-unwanted-configmap-data.k8s.io\" denied the request: the configmap contains unwanted key and value", Reason:"the configmap contains unwanted key and value", Details:(*v1.StatusDetails)(nil), Code:400}}
W0521 16:03:49.459934 1 dispatcher.go:142] rejected by webhook "deny-unwanted-configmap-data.k8s.io": &errors.StatusError{ErrStatus:v1.Status{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ListMeta:v1.ListMeta{SelfLink:"", ResourceVersion:"", Continue:"", RemainingItemCount:(*int64)(nil)}, Status:"Failure", Message:"admission webhook \"deny-unwanted-configmap-data.k8s.io\" denied the request: the configmap contains unwanted key and value", Reason:"the configmap contains unwanted key and value", Details:(*v1.StatusDetails)(nil), Code:400}}
W0521 16:03:49.459941 1 dispatcher.go:142] rejected by webhook "deny-unwanted-configmap-data.k8s.io": &errors.StatusError{ErrStatus:v1.Status{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ListMeta:v1.ListMeta{SelfLink:"", ResourceVersion:"", Continue:"", RemainingItemCount:(*int64)(nil)}, Status:"Failure", Message:"admission webhook \"deny-unwanted-configmap-data.k8s.io\" denied the request: the configmap contains unwanted key and value", Reason:"the configmap contains unwanted key and value", Details:(*v1.StatusDetails)(nil), Code:400}}
W0521 16:03:49.460015 1 dispatcher.go:142] rejected by webhook "deny-unwanted-configmap-data.k8s.io": &errors.StatusError{ErrStatus:v1.Status{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ListMeta:v1.ListMeta{SelfLink:"", ResourceVersion:"", Continue:"", RemainingItemCount:(*int64)(nil)}, Status:"Failure", Message:"admission webhook \"deny-unwanted-configmap-data.k8s.io\" denied the request: the configmap contains unwanted key and value", Reason:"the configmap contains unwanted key and value", Details:(*v1.StatusDetails)(nil), Code:400}}
W0521 16:03:49.460086 1 dispatcher.go:142] rejected by webhook "deny-unwanted-configmap-data.k8s.io": &errors.StatusError{ErrStatus:v1.Status{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ListMeta:v1.ListMeta{SelfLink:"", ResourceVersion:"", Continue:"", RemainingItemCount:(*int64)(nil)}, Status:"Failure", Message:"admission webhook \"deny-unwanted-configmap-data.k8s.io\" denied the request: the configmap contains unwanted key and value", Reason:"the configmap contains unwanted key and value", Details:(*v1.StatusDetails)(nil), Code:400}}
W0521 16:03:49.460134 1 dispatcher.go:142] rejected by webhook "deny-unwanted-configmap-data.k8s.io": &errors.StatusError{ErrStatus:v1.Status{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ListMeta:v1.ListMeta{SelfLink:"", ResourceVersion:"", Continue:"", RemainingItemCount:(*int64)(nil)}, Status:"Failure", Message:"admission webhook \"deny-unwanted-configmap-data.k8s.io\" denied the request: the configmap contains unwanted key and value", Reason:"the configmap contains unwanted key and value", Details:(*v1.StatusDetails)(nil), Code:400}}
W0521 16:03:49.460227 1 dispatcher.go:142] rejected by webhook "deny-unwanted-configmap-data.k8s.io": &errors.StatusError{ErrStatus:v1.Status{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ListMeta:v1.ListMeta{SelfLink:"", ResourceVersion:"", Continue:"", RemainingItemCount:(*int64)(nil)}, Status:"Failure", Message:"admission webhook \"deny-unwanted-configmap-data.k8s.io\" denied the request: the configmap contains unwanted key and value", Reason:"the configmap contains unwanted key and value", Details:(*v1.StatusDetails)(nil), Code:400}}
W0521 16:03:49.460237 1 dispatcher.go:142] rejected by webhook "deny-unwanted-configmap-data.k8s.io": &errors.StatusError{ErrStatus:v1.Status{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ListMeta:v1.ListMeta{SelfLink:"", ResourceVersion:"", Continue:"", RemainingItemCount:(*int64)(nil)}, Status:"Failure", Message:"admission webhook \"deny-unwanted-configmap-data.k8s.io\" denied the request: the configmap contains unwanted key and value", Reason:"the configmap contains unwanted key and value", Details:(*v1.StatusDetails)(nil), Code:400}}
E0521 16:03:49.460311 1 dispatcher.go:159] admission webhook "deny-unwanted-configmap-data.k8s.io" denied the request: the configmap contains unwanted key and value
E0521 16:03:49.460334 1 dispatcher.go:159] admission webhook "deny-unwanted-configmap-data.k8s.io" denied the request: the configmap contains unwanted key and value
E0521 16:03:49.461453 1 dispatcher.go:159] admission webhook "deny-unwanted-configmap-data.k8s.io" denied the request: the configmap contains unwanted key and value
E0521 16:03:49.462544 1 dispatcher.go:159] admission webhook "deny-unwanted-configmap-data.k8s.io" denied the request: the configmap contains unwanted key and value
E0521 16:03:49.463593 1 dispatcher.go:159] admission webhook "deny-unwanted-configmap-data.k8s.io" denied the request: the configmap contains unwanted key and value
E0521 16:03:49.464609 1 dispatcher.go:159] admission webhook "deny-unwanted-configmap-data.k8s.io" denied the request: the configmap contains unwanted key and value
E0521 16:03:49.465718 1 dispatcher.go:159] admission webhook "deny-unwanted-configmap-data.k8s.io" denied the request: the configmap contains unwanted key and value
E0521 16:03:49.466848 1 dispatcher.go:159] admission webhook "deny-unwanted-configmap-data.k8s.io" denied the request: the configmap contains unwanted key and value
E0521 16:03:49.467870 1 dispatcher.go:159] admission webhook "deny-unwanted-configmap-data.k8s.io" denied the request: the configmap contains unwanted key and value
E0521 16:03:49.546552 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused", err:(*net.OpError)(0xc0079ba550)}
E0521 16:03:50.545198 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused", err:(*net.OpError)(0xc0036ae550)}
E0521 16:03:51.545861 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused", err:(*net.OpError)(0xc0069eaaf0)}
E0521 16:03:52.545958 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused", err:(*net.OpError)(0xc006556af0)}
E0521 16:03:53.545141 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused", err:(*net.OpError)(0xc007aa0230)}
E0521 16:03:54.545421 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused", err:(*net.OpError)(0xc008e53270)}
E0521 16:03:55.545684 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused", err:(*net.OpError)(0xc008ed0c30)}
E0521 16:03:56.227295 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused", err:(*net.OpError)(0xc007de4640)}
E0521 16:03:56.545614 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused", err:(*net.OpError)(0xc007e9c960)}
E0521 16:03:57.232326 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused", err:(*net.OpError)(0xc0089aabe0)}
E0521 16:03:57.545858 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused", err:(*net.OpError)(0xc0089ab590)}
E0521 16:03:58.231574 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused", err:(*net.OpError)(0xc0073566e0)}
E0521 16:03:58.545464 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused", err:(*net.OpError)(0xc0064abae0)}
E0521 16:03:58.549125 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused", err:(*net.OpError)(0xc008112370)}
E0521 16:03:59.231909 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused", err:(*net.OpError)(0xc006206dc0)}
E0521 16:04:00.231515 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused", err:(*net.OpError)(0xc0067992c0)}
E0521 16:04:01.231724 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused", err:(*net.OpError)(0xc003859e50)}
W0521 16:04:01.797001 1 dispatcher.go:142] rejected by webhook "validating-is-webhook-configuration-ready.k8s.io": &errors.StatusError{ErrStatus:v1.Status{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ListMeta:v1.ListMeta{SelfLink:"", ResourceVersion:"", Continue:"", RemainingItemCount:(*int64)(nil)}, Status:"Failure", Message:"admission webhook \"validating-is-webhook-configuration-ready.k8s.io\" denied the request: this webhook denies all requests", Reason:"", Details:(*v1.StatusDetails)(nil), Code:400}}
E0521 16:04:02.231651 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused", err:(*net.OpError)(0xc0069bf5e0)}
E0521 16:04:03.231791 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused", err:(*net.OpError)(0xc007ccfb30)}
W0521 16:04:03.933669 1 dispatcher.go:142] rejected by webhook "validating-is-webhook-configuration-ready.k8s.io": &errors.StatusError{ErrStatus:v1.Status{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ListMeta:v1.ListMeta{SelfLink:"", ResourceVersion:"", Continue:"", RemainingItemCount:(*int64)(nil)}, Status:"Failure", Message:"admission webhook \"validating-is-webhook-configuration-ready.k8s.io\" denied the request: this webhook denies all requests", Reason:"", Details:(*v1.StatusDetails)(nil), Code:400}}
W0521 16:04:03.945593 1 dispatcher.go:134] Failed calling webhook, failing closed fail-closed.k8s.io: failed calling webhook "fail-closed.k8s.io": Post "https://e2e-test-webhook.webhook-8085.svc:8443/configmaps?timeout=10s": x509: certificate signed by unknown authority
E0521 16:04:04.230843 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused", err:(*net.OpError)(0xc0072ee370)}
I0521 16:04:04.558773 1 client.go:360] parsed scheme: "endpoint"
I0521 16:04:04.558810 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }]
I0521 16:04:04.604268 1 client.go:360] parsed scheme: "endpoint"
I0521 16:04:04.604302 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }]
I0521 16:04:04.653474 1 client.go:360] parsed scheme: "endpoint"
I0521 16:04:04.653505 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }]
E0521 16:04:05.231004 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused", err:(*net.OpError)(0xc008ea9f90)}
I0521 16:04:05.616655 1 client.go:360] parsed scheme: "endpoint"
I0521 16:04:05.616700 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }]
E0521 16:04:06.231229 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused", err:(*net.OpError)(0xc0086502d0)}
E0521 16:04:07.231292 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused", err:(*net.OpError)(0xc007fc3bd0)}
E0521 16:04:08.232127 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused", err:(*net.OpError)(0xc007ffea50)}
E0521 16:04:09.230449 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused", err:(*net.OpError)(0xc0074a1b80)}
I0521 16:04:09.654228 1 controller.go:609] quota admission added evaluator for: e2e-test-crd-publish-openapi-103-crds.crd-publish-openapi-test-empty.example.com
E0521 16:04:10.231185 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused", err:(*net.OpError)(0xc0071a1bd0)}
E0521 16:04:11.231436 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused", err:(*net.OpError)(0xc008734a00)}
E0521 16:04:12.231461 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused", err:(*net.OpError)(0xc0077f9900)}
I0521 16:04:12.769841 1 client.go:360] parsed scheme: "passthrough"
I0521 16:04:12.769901 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }
I0521 16:04:12.769917 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
E0521 16:04:13.231630 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused", err:(*net.OpError)(0xc00766f450)}
2021/05/21 16:04:13 httputil: ReverseProxy read error during body copy: unexpected EOF
E0521 16:04:14.231637 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused", err:(*net.OpError)(0xc0030464b0)}
E0521 16:04:15.232263 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused", err:(*net.OpError)(0xc0086a3810)}
E0521 16:04:16.231545 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused", err:(*net.OpError)(0xc0071912c0)}
E0521 16:04:17.231383 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused", err:(*net.OpError)(0xc008ba2e10)}
E0521 16:04:18.231558 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused", err:(*net.OpError)(0xc00341aa00)}
E0521 16:04:19.231705 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused", err:(*net.OpError)(0xc0090113b0)}
W0521 16:04:20.039032 1 dispatcher.go:142] rejected by webhook "validating-is-webhook-configuration-ready.k8s.io": &errors.StatusError{ErrStatus:v1.Status{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ListMeta:v1.ListMeta{SelfLink:"", ResourceVersion:"", Continue:"", RemainingItemCount:(*int64)(nil)}, Status:"Failure", Message:"admission webhook \"validating-is-webhook-configuration-ready.k8s.io\" denied the request: this webhook denies all requests", Reason:"", Details:(*v1.StatusDetails)(nil), Code:400}}
W0521 16:04:20.055576 1 dispatcher.go:142] rejected by webhook "deny-crd-with-unwanted-label.k8s.io": &errors.StatusError{ErrStatus:v1.Status{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ListMeta:v1.ListMeta{SelfLink:"", ResourceVersion:"", Continue:"", RemainingItemCount:(*int64)(nil)}, Status:"Failure", Message:"admission webhook \"deny-crd-with-unwanted-label.k8s.io\" denied the request: the crd contains unwanted label", Reason:"", Details:(*v1.StatusDetails)(nil), Code:400}}
E0521 16:04:20.232013 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused", err:(*net.OpError)(0xc00617f680)}
E0521 16:04:21.231685 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused", err:(*net.OpError)(0xc008d00690)}
E0521 16:04:22.231541 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused", err:(*net.OpError)(0xc003769bd0)}
E0521 16:04:23.231632 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused", err:(*net.OpError)(0xc006781090)}
E0521 16:04:24.231442 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused", err:(*net.OpError)(0xc0067aaa00)}
E0521 16:04:25.230867 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused", err:(*net.OpError)(0xc007a88190)}
E0521 16:04:26.232156 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused", err:(*net.OpError)(0xc006744640)}
E0521 16:04:27.231688 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused", err:(*net.OpError)(0xc0074801e0)}
E0521 16:04:28.231787 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused", err:(*net.OpError)(0xc002feb360)}
E0521 16:04:29.232297 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused", err:(*net.OpError)(0xc003858d20)}
E0521 16:04:30.231727 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused", err:(*net.OpError)(0xc0065acfa0)}
E0521 16:04:31.231442 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused", err:(*net.OpError)(0xc007599040)}
E0521 16:04:32.231289 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused", err:(*net.OpError)(0xc008f2ac80)}
E0521 16:04:33.231494 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused", err:(*net.OpError)(0xc008fffb80)}
E0521 16:04:34.231795 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused", err:(*net.OpError)(0xc007f736d0)}
E0521 16:04:35.231875 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused", err:(*net.OpError)(0xc0067271d0)}
I0521 16:04:35.280987 1 controller.go:609] quota admission added evaluator for: limitranges
E0521 16:04:36.231320 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused", err:(*net.OpError)(0xc007e54370)}
E0521 16:04:37.231138 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused", err:(*net.OpError)(0xc008734460)}
E0521 16:04:38.231027 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused", err:(*net.OpError)(0xc0088c76d0)}
E0521 16:04:39.231995 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused", err:(*net.OpError)(0xc0088ee460)}
E0521 16:04:40.230553 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused", err:(*net.OpError)(0xc002c8ae60)}
E0521 16:04:41.232248 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused", err:(*net.OpError)(0xc007247400)}
E0521 16:04:42.231780 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused", err:(*net.OpError)(0xc007039360)}
E0521 16:04:43.232989 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused", err:(*net.OpError)(0xc008c3edc0)}
E0521 16:04:44.231432 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused", err:(*net.OpError)(0xc008673810)}
E0521 16:04:45.231801 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused", err:(*net.OpError)(0xc0032c3540)}
E0521 16:04:46.231956 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused", err:(*net.OpError)(0xc008a89a40)}
E0521 16:04:47.231858 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused", err:(*net.OpError)(0xc00886db30)}
E0521 16:04:48.231457 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused", err:(*net.OpError)(0xc0035bd450)}
E0521 16:04:49.231849 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused", err:(*net.OpError)(0xc007e9d810)}
E0521 16:04:50.231570 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused", err:(*net.OpError)(0xc006549bd0)}
E0521 16:04:51.232020 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused", err:(*net.OpError)(0xc007b39680)}
E0521 16:04:52.232082 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused", err:(*net.OpError)(0xc0079da8c0)}
E0521 16:04:53.232055 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused", err:(*net.OpError)(0xc00642cbe0)}
E0521 16:04:54.231900 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused", err:(*net.OpError)(0xc007f7ae10)}
E0521 16:04:55.231429 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused", err:(*net.OpError)(0xc008a09cc0)}
E0521 16:04:56.232077 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused", err:(*net.OpError)(0xc007048190)}
E0521 16:04:56.236123 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused", err:(*net.OpError)(0xc0082af4a0)}
I0521 16:04:57.257710 1 client.go:360] parsed scheme: "passthrough"
I0521 16:04:57.257773 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }
I0521 16:04:57.257790 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
E0521 16:05:12.617521 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused", err:(*net.OpError)(0xc00701d9a0)}
E0521 16:05:13.622333 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused", err:(*net.OpError)(0xc006a39cc0)}
E0521 16:05:14.621854 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused", err:(*net.OpError)(0xc009f20690)}
I0521 16:05:14.918132 1 client.go:360] parsed scheme: "endpoint"
I0521 16:05:14.918170 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }]
E0521 16:05:15.622504 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused", err:(*net.OpError)(0xc007a72000)}
E0521 16:05:16.622083 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused", err:(*net.OpError)(0xc00381eaf0)}
E0521 16:05:17.622243 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused", err:(*net.OpError)(0xc0064f7f40)}
E0521 16:05:18.622324 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused", err:(*net.OpError)(0xc0091a5540)}
E0521 16:05:19.621938 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused", err:(*net.OpError)(0xc0078cdc20)}
E0521 16:05:20.622412 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused", err:(*net.OpError)(0xc008f4b900)}
E0521 16:05:21.622638 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused", err:(*net.OpError)(0xc00648cc80)}
E0521 16:05:22.622132 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused", err:(*net.OpError)(0xc0081dd9a0)}
E0521 16:05:23.622340 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused", err:(*net.OpError)(0xc008784500)}
E0521 16:05:24.621668 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused", err:(*net.OpError)(0xc008c6f270)}
E0521 16:05:25.622360 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused", err:(*net.OpError)(0xc0083c50e0)}
E0521 16:05:26.622056 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused", err:(*net.OpError)(0xc00863d6d0)}
E0521 16:05:27.622283 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused", err:(*net.OpError)(0xc0077a07d0)}
E0521 16:05:28.622436 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused", err:(*net.OpError)(0xc008918410)}
E0521 16:05:29.621544 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused", err:(*net.OpError)(0xc0091256d0)}
E0521 16:05:30.622014 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused", err:(*net.OpError)(0xc006556550)}
E0521 16:05:31.622227 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused", err:(*net.OpError)(0xc00910ad70)}
E0521 16:05:32.622408 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused", err:(*net.OpError)(0xc007cc54a0)}
E0521 16:05:33.622567 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused", err:(*net.OpError)(0xc008195040)}
E0521 16:05:34.622500 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused", err:(*net.OpError)(0xc00894c320)}
I0521 16:05:34.720003 1 client.go:360] parsed scheme: "passthrough"
I0521 16:05:34.720077 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }
I0521 16:05:34.720099 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
E0521 16:05:35.622409 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused", err:(*net.OpError)(0xc008a72000)}
E0521 16:05:36.623952 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused", err:(*net.OpError)(0xc008a73310)}
E0521 16:05:37.622796 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused", err:(*net.OpError)(0xc01c2313b0)}
E0521 16:05:38.622187 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused", err:(*net.OpError)(0xc008a52690)}
E0521 16:05:39.621708 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused", err:(*net.OpError)(0xc006a02cd0)}
E0521 16:05:40.621962 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused", err:(*net.OpError)(0xc00880da40)}
E0521 16:05:41.622285 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused", err:(*net.OpError)(0xc00702d4a0)}
E0521 16:05:42.622213 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused", err:(*net.OpError)(0xc006847d60)}
E0521 16:05:43.622086 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused", err:(*net.OpError)(0xc0064de230)}
E0521 16:05:44.622099 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused", err:(*net.OpError)(0xc00376dd10)}
E0521 16:05:45.622136 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused", err:(*net.OpError)(0xc006821f90)}
E0521 16:05:46.622457 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused", err:(*net.OpError)(0xc0080799f0)}
E0521 16:05:47.622636 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused", err:(*net.OpError)(0xc006a6f090)}
E0521 16:05:48.622454 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:"error trying to reach service: dial tcp 
172.18.0.3:10252: connect: connection refused\", err:(*net.OpError)(0xc00694cf50)}\nE0521 16:05:49.622448 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:\"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused\", err:(*net.OpError)(0xc00864a050)}\nE0521 16:05:50.622114 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:\"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused\", err:(*net.OpError)(0xc007346230)}\nE0521 16:05:51.622309 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:\"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused\", err:(*net.OpError)(0xc002f5a320)}\nE0521 16:05:52.621997 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:\"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused\", err:(*net.OpError)(0xc008c17ea0)}\nE0521 16:05:53.622565 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:\"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused\", err:(*net.OpError)(0xc0080bc000)}\nE0521 16:05:54.623008 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:\"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused\", err:(*net.OpError)(0xc0080bc8c0)}\nE0521 16:05:55.621564 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:\"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused\", err:(*net.OpError)(0xc00772a820)}\nE0521 16:05:56.622280 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:\"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused\", 
err:(*net.OpError)(0xc0075a2280)}\nE0521 16:05:57.622034 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:\"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused\", err:(*net.OpError)(0xc0098190e0)}\nE0521 16:05:58.622685 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:\"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused\", err:(*net.OpError)(0xc0074ea550)}\nE0521 16:05:59.623175 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:\"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused\", err:(*net.OpError)(0xc00785a190)}\nE0521 16:06:00.622695 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:\"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused\", err:(*net.OpError)(0xc002e709b0)}\nE0521 16:06:01.622031 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:\"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused\", err:(*net.OpError)(0xc0084ccb90)}\nE0521 16:06:02.622496 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:\"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused\", err:(*net.OpError)(0xc0030cc690)}\nE0521 16:06:03.622719 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:\"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused\", err:(*net.OpError)(0xc003256140)}\nE0521 16:06:04.622943 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:\"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused\", err:(*net.OpError)(0xc007be9130)}\nE0521 16:06:05.622347 1 
status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:\"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused\", err:(*net.OpError)(0xc008ae2140)}\nE0521 16:06:06.622413 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:\"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused\", err:(*net.OpError)(0xc009f2e050)}\nE0521 16:06:07.622274 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:\"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused\", err:(*net.OpError)(0xc0073d67d0)}\nE0521 16:06:08.622528 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:\"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused\", err:(*net.OpError)(0xc007a3e3c0)}\nE0521 16:06:09.622477 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:\"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused\", err:(*net.OpError)(0xc007ec5b80)}\nI0521 16:06:10.220072 1 client.go:360] parsed scheme: \"passthrough\"\nI0521 16:06:10.220143 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }\nI0521 16:06:10.220159 1 clientconn.go:948] ClientConn switching balancer to \"pick_first\"\nE0521 16:06:10.622172 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:\"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused\", err:(*net.OpError)(0xc003728d20)}\nE0521 16:06:11.622277 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:\"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused\", err:(*net.OpError)(0xc008702b40)}\nE0521 16:06:12.622616 1 status.go:71] apiserver 
received an error that is not an metav1.Status: &fmt.wrapError{msg:\"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused\", err:(*net.OpError)(0xc0062d3630)}\nE0521 16:06:12.627179 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:\"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused\", err:(*net.OpError)(0xc0082d2c30)}\nW0521 16:06:16.016429 1 cacher.go:148] Terminating all watchers from cacher *unstructured.Unstructured\nI0521 16:06:40.255393 1 client.go:360] parsed scheme: \"passthrough\"\nI0521 16:06:40.255457 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }\nI0521 16:06:40.255473 1 clientconn.go:948] ClientConn switching balancer to \"pick_first\"\nI0521 16:07:21.764713 1 client.go:360] parsed scheme: \"passthrough\"\nI0521 16:07:21.764776 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }\nI0521 16:07:21.764792 1 clientconn.go:948] ClientConn switching balancer to \"pick_first\"\nI0521 16:07:54.026906 1 client.go:360] parsed scheme: \"passthrough\"\nI0521 16:07:54.027006 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }\nI0521 16:07:54.027025 1 clientconn.go:948] ClientConn switching balancer to \"pick_first\"\nI0521 16:08:29.898236 1 client.go:360] parsed scheme: \"passthrough\"\nI0521 16:08:29.898305 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }\nI0521 16:08:29.898322 1 clientconn.go:948] ClientConn switching balancer to \"pick_first\"\nI0521 16:09:10.709151 1 client.go:360] parsed scheme: \"passthrough\"\nI0521 16:09:10.709214 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }\nI0521 16:09:10.709230 1 clientconn.go:948] ClientConn switching balancer to \"pick_first\"\nI0521 16:09:36.559506 1 trace.go:205] Trace[117416987]: 
\"Delete\" url:/apis/events.k8s.io/v1/namespaces/emptydir-wrapper-9870/events (21-May-2021 16:09:35.894) (total time: 665ms):\nTrace[117416987]: [665.1446ms] [665.1446ms] END\nI0521 16:09:55.690427 1 client.go:360] parsed scheme: \"passthrough\"\nI0521 16:09:55.690515 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }\nI0521 16:09:55.690541 1 clientconn.go:948] ClientConn switching balancer to \"pick_first\"\nI0521 16:10:40.093101 1 client.go:360] parsed scheme: \"passthrough\"\nI0521 16:10:40.093173 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }\nI0521 16:10:40.093189 1 clientconn.go:948] ClientConn switching balancer to \"pick_first\"\nI0521 16:11:20.410952 1 client.go:360] parsed scheme: \"passthrough\"\nI0521 16:11:20.411031 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }\nI0521 16:11:20.411048 1 clientconn.go:948] ClientConn switching balancer to \"pick_first\"\nI0521 16:12:00.718215 1 client.go:360] parsed scheme: \"passthrough\"\nI0521 16:12:00.718277 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }\nI0521 16:12:00.718293 1 clientconn.go:948] ClientConn switching balancer to \"pick_first\"\nI0521 16:12:39.865459 1 client.go:360] parsed scheme: \"passthrough\"\nI0521 16:12:39.865532 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }\nI0521 16:12:39.865550 1 clientconn.go:948] ClientConn switching balancer to \"pick_first\"\nI0521 16:13:09.991757 1 client.go:360] parsed scheme: \"passthrough\"\nI0521 16:13:09.991826 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }\nI0521 16:13:09.991842 1 clientconn.go:948] ClientConn switching balancer to \"pick_first\"\nI0521 16:13:48.850108 1 client.go:360] parsed scheme: \"passthrough\"\nI0521 16:13:48.850196 1 passthrough.go:48] ccResolverWrapper: 
sending update to cc: {[{https://127.0.0.1:2379 0 }] }\nI0521 16:13:48.850212 1 clientconn.go:948] ClientConn switching balancer to \"pick_first\"\nI0521 16:14:29.794050 1 client.go:360] parsed scheme: \"passthrough\"\nI0521 16:14:29.794124 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }\nI0521 16:14:29.794141 1 clientconn.go:948] ClientConn switching balancer to \"pick_first\"\nI0521 16:15:09.642030 1 client.go:360] parsed scheme: \"passthrough\"\nI0521 16:15:09.642101 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }\nI0521 16:15:09.642118 1 clientconn.go:948] ClientConn switching balancer to \"pick_first\"\nI0521 16:15:43.377147 1 client.go:360] parsed scheme: \"passthrough\"\nI0521 16:15:43.377216 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }\nI0521 16:15:43.377232 1 clientconn.go:948] ClientConn switching balancer to \"pick_first\"\nI0521 16:16:17.123521 1 client.go:360] parsed scheme: \"passthrough\"\nI0521 16:16:17.123588 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }\nI0521 16:16:17.123604 1 clientconn.go:948] ClientConn switching balancer to \"pick_first\"\nI0521 16:16:56.736956 1 client.go:360] parsed scheme: \"passthrough\"\nI0521 16:16:56.737021 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }\nI0521 16:16:56.737039 1 clientconn.go:948] ClientConn switching balancer to \"pick_first\"\nI0521 16:17:40.067699 1 client.go:360] parsed scheme: \"passthrough\"\nI0521 16:17:40.067801 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }\nI0521 16:17:40.067821 1 clientconn.go:948] ClientConn switching balancer to \"pick_first\"\nI0521 16:18:15.921713 1 client.go:360] parsed scheme: \"passthrough\"\nI0521 16:18:15.921778 1 passthrough.go:48] ccResolverWrapper: sending update to cc: 
{[{https://127.0.0.1:2379 0 }] }\nI0521 16:18:15.921792 1 clientconn.go:948] ClientConn switching balancer to \"pick_first\"\nI0521 16:18:50.037486 1 client.go:360] parsed scheme: \"passthrough\"\nI0521 16:18:50.037553 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }\nI0521 16:18:50.037569 1 clientconn.go:948] ClientConn switching balancer to \"pick_first\"\nI0521 16:19:10.930644 1 controller.go:609] quota admission added evaluator for: cronjobs.batch\nI0521 16:19:11.201570 1 client.go:360] parsed scheme: \"endpoint\"\nI0521 16:19:11.201611 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }]\nI0521 16:19:11.660434 1 client.go:360] parsed scheme: \"endpoint\"\nI0521 16:19:11.660466 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }]\nI0521 16:19:18.078891 1 client.go:360] parsed scheme: \"endpoint\"\nI0521 16:19:18.078925 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }]\nW0521 16:19:22.274921 1 cacher.go:148] Terminating all watchers from cacher *unstructured.Unstructured\nI0521 16:19:23.829214 1 client.go:360] parsed scheme: \"endpoint\"\nI0521 16:19:23.829254 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }]\nI0521 16:19:30.315707 1 client.go:360] parsed scheme: \"passthrough\"\nI0521 16:19:30.315770 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }\nI0521 16:19:30.315786 1 clientconn.go:948] ClientConn switching balancer to \"pick_first\"\nI0521 16:19:39.070026 1 trace.go:205] Trace[574162973]: \"Delete\" url:/api/v1/namespaces/chunking-1968/podtemplates (21-May-2021 16:19:37.747) (total time: 1322ms):\nTrace[574162973]: [1.322930357s] [1.322930357s] END\nW0521 16:19:47.740191 1 cacher.go:148] Terminating all watchers from cacher *unstructured.Unstructured\nI0521 16:19:50.867143 1 
controller.go:609] quota admission added evaluator for: e2e-test-resourcequota-599-crds.resourcequota.example.com\nW0521 16:19:55.913507 1 cacher.go:148] Terminating all watchers from cacher *unstructured.Unstructured\nE0521 16:19:59.312134 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:\"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused\", err:(*net.OpError)(0xc00881e780)}\nE0521 16:20:00.316614 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:\"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused\", err:(*net.OpError)(0xc001d30f50)}\nI0521 16:20:00.879011 1 client.go:360] parsed scheme: \"passthrough\"\nI0521 16:20:00.879074 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }\nI0521 16:20:00.879090 1 clientconn.go:948] ClientConn switching balancer to \"pick_first\"\nE0521 16:20:01.316692 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:\"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused\", err:(*net.OpError)(0xc00904b4f0)}\nE0521 16:20:02.316553 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:\"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused\", err:(*net.OpError)(0xc009044230)}\nE0521 16:20:03.316638 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:\"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused\", err:(*net.OpError)(0xc00727ef00)}\nE0521 16:20:04.316697 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:\"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused\", err:(*net.OpError)(0xc0074a1450)}\nE0521 16:20:05.317068 1 status.go:71] apiserver 
received an error that is not an metav1.Status: &fmt.wrapError{msg:\"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused\", err:(*net.OpError)(0xc007496a50)}\nE0521 16:20:05.756572 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:\"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused\", err:(*net.OpError)(0xc006a6e870)}\nE0521 16:20:06.316865 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:\"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused\", err:(*net.OpError)(0xc007496b90)}\nE0521 16:20:06.767153 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:\"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused\", err:(*net.OpError)(0xc007165450)}\nE0521 16:20:07.317532 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:\"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused\", err:(*net.OpError)(0xc007d185f0)}\nE0521 16:20:07.767324 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:\"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused\", err:(*net.OpError)(0xc007d18c30)}\nE0521 16:20:08.316572 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:\"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused\", err:(*net.OpError)(0xc0067106e0)}\nE0521 16:20:08.768236 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:\"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused\", err:(*net.OpError)(0xc0067110e0)}\nE0521 16:20:09.316472 1 status.go:71] apiserver received an error that is not an metav1.Status: 
&fmt.wrapError{msg:\"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused\", err:(*net.OpError)(0xc008b6ad20)}\nE0521 16:20:09.768344 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:\"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused\", err:(*net.OpError)(0xc008b6b0e0)}\nE0521 16:20:10.316569 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:\"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused\", err:(*net.OpError)(0xc008064370)}\nE0521 16:20:10.767590 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:\"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused\", err:(*net.OpError)(0xc007cc0550)}\nE0521 16:20:11.316801 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:\"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused\", err:(*net.OpError)(0xc008750780)}\nE0521 16:20:11.767070 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:\"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused\", err:(*net.OpError)(0xc008750e60)}\nE0521 16:20:12.316266 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:\"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused\", err:(*net.OpError)(0xc0071e87d0)}\nE0521 16:20:12.767369 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:\"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused\", err:(*net.OpError)(0xc007b0ed70)}\nE0521 16:20:13.316569 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:\"error trying to reach service: dial tcp 
172.18.0.3:10252: connect: connection refused\", err:(*net.OpError)(0xc007cc12c0)}\nE0521 16:20:13.767497 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:\"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused\", err:(*net.OpError)(0xc007cc1a40)}\nE0521 16:20:14.317141 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:\"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused\", err:(*net.OpError)(0xc006780b90)}\nE0521 16:20:14.767667 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:\"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused\", err:(*net.OpError)(0xc00682b220)}\nE0521 16:20:15.316304 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:\"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused\", err:(*net.OpError)(0xc0088c8a50)}\nE0521 16:20:15.767860 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:\"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused\", err:(*net.OpError)(0xc006514870)}\nE0521 16:20:16.317367 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:\"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused\", err:(*net.OpError)(0xc006a71090)}\nE0521 16:20:16.767334 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:\"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused\", err:(*net.OpError)(0xc0062d4460)}\nE0521 16:20:17.316863 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:\"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused\", 
err:(*net.OpError)(0xc0077d5e00)}\nE0521 16:20:17.769016 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:\"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused\", err:(*net.OpError)(0xc00693f450)}\nE0521 16:20:18.316896 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:\"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused\", err:(*net.OpError)(0xc006695630)}\nE0521 16:20:18.767983 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:\"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused\", err:(*net.OpError)(0xc008408c80)}\nE0521 16:20:19.317555 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:\"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused\", err:(*net.OpError)(0xc002efb950)}\nE0521 16:20:19.767212 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:\"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused\", err:(*net.OpError)(0xc0071a0640)}\nE0521 16:20:20.316701 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:\"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused\", err:(*net.OpError)(0xc002eeaa50)}\nE0521 16:20:20.767064 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:\"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused\", err:(*net.OpError)(0xc008995860)}\nE0521 16:20:21.316730 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:\"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused\", err:(*net.OpError)(0xc008578c30)}\nE0521 16:20:21.767144 1 
status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:\"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused\", err:(*net.OpError)(0xc0085793b0)}\nE0521 16:20:22.316513 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:\"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused\", err:(*net.OpError)(0xc0091fa780)}\nE0521 16:20:22.767016 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:\"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused\", err:(*net.OpError)(0xc0061c86e0)}\nE0521 16:20:23.316790 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:\"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused\", err:(*net.OpError)(0xc007d821e0)}\nE0521 16:20:23.768468 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:\"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused\", err:(*net.OpError)(0xc0079fd7c0)}\nE0521 16:20:24.316767 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:\"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused\", err:(*net.OpError)(0xc009842b40)}\nE0521 16:20:24.767192 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:\"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused\", err:(*net.OpError)(0xc009146050)}\nE0521 16:20:25.316359 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:\"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused\", err:(*net.OpError)(0xc009147e00)}\nE0521 16:20:25.767053 1 status.go:71] apiserver received an error that is not an metav1.Status: 
&fmt.wrapError{msg:"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused", err:(*net.OpError)(0xc008cec3c0)}
E0521 16:20:26.316551 1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:"error trying to reach service: dial tcp 172.18.0.3:10252: connect: connection refused", err:(*net.OpError)(0xc0078a4b40)}
I0521 16:20:33.857529 1 client.go:360] parsed scheme: "passthrough"
I0521 16:20:33.857593 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }
I0521 16:20:33.857609 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
[the identical status.go:71 "connection refused" error for 172.18.0.3:10252 repeated continuously, mostly twice per second, through 16:21:05.771710; the entries differ only in the (*net.OpError) pointer value]
[identical client.go:360 "passthrough" / passthrough.go:48 / clientconn.go:948 "pick_first" re-resolve triplets for https://127.0.0.1:2379 logged at 16:21:11.134551, 16:21:41.857317, 16:22:20.229161 and 16:22:56.247379]
I0521
16:23:35.878498 1 client.go:360] parsed scheme: "passthrough"
I0521 16:23:35.878568 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }
I0521 16:23:35.878584 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
[identical re-resolve triplets at 16:24:08.238472, 16:24:42.390699, 16:25:22.449347, 16:26:04.202770, 16:26:42.368126, 16:27:26.337926 and 16:27:58.210723]
I0521 16:28:30.308591 1 trace.go:205] Trace[423311311]: "Delete" url:/api/v1/namespaces/chunking-9511/podtemplates (21-May-2021 16:28:28.926) (total time: 1381ms):
Trace[423311311]: [1.381638789s] [1.381638789s] END
[identical re-resolve triplets at 16:28:38.535462, 16:29:16.238672 and 16:29:56.241790]
I0521 16:30:05.495589 1 controller.go:609] quota admission added evaluator for: poddisruptionbudgets.policy
[identical re-resolve triplets at 16:30:33.993358 and 16:31:07.678739]
I0521 16:31:41.658521 1
client.go:360] parsed scheme: "passthrough"
I0521 16:31:41.658574 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }
I0521 16:31:41.658590 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
[identical re-resolve triplet at 16:32:15.978382]
E0521 16:32:21.279250 1 fieldmanager.go:175] [SHOULD NOT HAPPEN] failed to update managedFields for /, Kind=: failed to convert new object (apps/v1, Kind=Deployment) to smd typed: .spec.template.spec.containers[name="httpd"].env: duplicate entries for key [name="A"]
[the same fieldmanager.go:175 duplicate-env-key error for (/v1, Kind=Pod) objects logged at 16:32:23.844626, 16:32:24.943266 and 16:32:29.106863]
E0521 16:32:41.013375 1 fieldmanager.go:175] [SHOULD NOT HAPPEN] failed to update managedFields for /, Kind=: failed to convert new object (apps/v1, Kind=ReplicaSet) to smd typed: errors:
 .spec.template.spec.containers[name="httpd"].env: duplicate entries for key [name="A"]
 .spec.template.spec.containers[name="httpd"].env: duplicate entries for key [name="A"]
[further (/v1, Kind=Pod) duplicate-env-key errors, each listing the .spec.containers[name="httpd"].env entry twice, at 16:32:42.931634, 16:32:44.949110 and 16:32:54.158847]
[identical re-resolve triplet at 16:32:46.349539]
I0521 16:32:55.577680 1 trace.go:205] Trace[1437692143]: "Delete" url:/api/v1/namespaces/deployment-4203/events (21-May-2021 16:32:54.234) (total time: 1343ms):
Trace[1437692143]: [1.343537919s] [1.343537919s] END
[identical re-resolve triplets at 16:33:20.984117, 16:34:04.348381, 16:34:48.039930, 16:35:18.278350, 16:35:56.193385, 16:36:38.665877, 16:37:13.411430, 16:37:52.493615 and 16:38:24.874854]
I0521
16:38:56.300332 1 client.go:360] parsed scheme: "passthrough"
I0521 16:38:56.300406 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }
I0521 16:38:56.300422 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
==== END logs for container kube-apiserver of pod kube-system/kube-apiserver-kali-control-plane ====
==== START logs for container kube-controller-manager of pod kube-system/kube-controller-manager-kali-control-plane ====
Flag --port has been deprecated, see --secure-port instead.
I0521 15:13:08.498776 1 serving.go:331] Generated self-signed cert in-memory
I0521 15:13:08.827793 1 controllermanager.go:175] Version: v1.19.11
I0521 15:13:08.828564 1 dynamic_cafile_content.go:167] Starting client-ca-bundle::/etc/kubernetes/pki/ca.crt
I0521 15:13:08.828571 1 dynamic_cafile_content.go:167] Starting request-header::/etc/kubernetes/pki/front-proxy-ca.crt
I0521 15:13:08.828923 1 secure_serving.go:197] Serving securely on 127.0.0.1:10257
I0521 15:13:08.828963 1 leaderelection.go:243] attempting to acquire leader lease kube-system/kube-controller-manager...
I0521 15:13:08.828964 1 tlsconfig.go:240] Starting DynamicServingCertificateController
E0521 15:13:14.769291 1 leaderelection.go:325] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get resource "endpoints" in API group "" in the namespace "kube-system"
I0521 15:13:19.002390 1 leaderelection.go:253] successfully acquired lease kube-system/kube-controller-manager
I0521 15:13:19.002516 1 event.go:291] "Event occurred" object="kube-system/kube-controller-manager" kind="Endpoints" apiVersion="v1" type="Normal" reason="LeaderElection" message="kali-control-plane_ee41ffd0-caca-4095-9e7f-8b7f2fb5b820 became leader"
I0521 15:13:19.002601 1 event.go:291] "Event occurred" object="kube-system/kube-controller-manager" kind="Lease" apiVersion="coordination.k8s.io/v1" type="Normal" reason="LeaderElection" message="kali-control-plane_ee41ffd0-caca-4095-9e7f-8b7f2fb5b820 became leader"
I0521 15:13:19.363641 1 shared_informer.go:240] Waiting for caches to sync for tokens
I0521 15:13:19.463957 1 shared_informer.go:247] Caches are synced for tokens
[controllermanager.go:549 logged Started "cronjob" (15:13:19.482801), "tokencleaner" (.506131), "endpointslice" (.525383), "replicationcontroller" (.548145), "serviceaccount" (.569828) and "namespace" (.602635), each followed by that controller's "Starting ..." and "Waiting for caches to sync ..." lines]
E0521 15:13:19.624492 1 core.go:90] Failed to start service controller: WARNING: no cloud provider provided, services of type LoadBalancer will fail
W0521 15:13:19.624535 1 controllermanager.go:541] Skipping "service"
W0521 15:13:19.624551 1 controllermanager.go:541] Skipping "ttl-after-finished"
I0521 15:13:19.867300 1 controllermanager.go:549] Started "statefulset"
I0521 15:13:19.867390 1 stateful_set.go:146] Starting stateful set controller
I0521 15:13:19.867403 1 shared_informer.go:240] Waiting for caches to sync for stateful set
I0521 15:13:20.117295 1 node_lifecycle_controller.go:77] Sending events to api server
E0521 15:13:20.117399 1 core.go:230] failed to start cloud node lifecycle controller: no cloud provider provided
W0521 15:13:20.117418 1 controllermanager.go:541] Skipping "cloud-node-lifecycle"
W0521 15:13:20.117437 1 core.go:244] configure-cloud-routes is set, but no cloud provider specified. Will not configure cloud provider routes.
W0521 15:13:20.117447 1 controllermanager.go:541] Skipping "route"
I0521 15:13:20.366925 1 controllermanager.go:549] Started "pvc-protection"
W0521 15:13:20.366968 1 controllermanager.go:541] Skipping "root-ca-cert-publisher"
W0521 15:13:20.366981 1 controllermanager.go:541] Skipping "ephemeral-volume"
I0521 15:13:20.367037 1 pvc_protection_controller.go:110] Starting PVC protection controller
I0521 15:13:20.367057 1 shared_informer.go:240] Waiting for caches to sync for PVC protection
[Started "job" (15:13:20.617945), "disruption" (15:13:21.016845), "ttl" (.267652), "nodelifecycle" (.417272) and "persistentvolume-binder" (.669206), each with its "Starting ..." / "Waiting for caches to sync ..." lines; the node lifecycle controller additionally logged "Sending events to api server." from node_lifecycle_controller.go:380 and taint_manager.go:163, and "Controller will reconcile labels."]
[resource_quota_monitor.go:228 created object count evaluators for controllerrevisions.apps, ingresses.extensions, deployments.apps, roles.rbac.authorization.k8s.io, ingresses.networking.k8s.io, poddisruptionbudgets.policy, rolebindings.rbac.authorization.k8s.io, endpoints, statefulsets.apps, leases.coordination.k8s.io, limitranges, daemonsets.apps, replicasets.apps, events.events.k8s.io, horizontalpodautoscalers.autoscaling, jobs.batch, endpointslices.discovery.k8s.io, serviceaccounts, cronjobs.batch, networkpolicies.networking.k8s.io and podtemplates between 15:13:22.269231 and 15:13:22.271630]
W0521 15:13:22.269601 1 shared_informer.go:494] resyncPeriod 54106817177212 is smaller than resyncCheckPeriod 72509647954057 and the informer has already started. Changing it to 72509647954057
I0521 15:13:22.271734 1 controllermanager.go:549] Started "resourcequota"
I0521 15:13:22.271808 1 resource_quota_controller.go:272] Starting resource quota controller
I0521 15:13:22.271834 1 shared_informer.go:240] Waiting for caches to sync for resource quota
I0521 15:13:22.271868 1 resource_quota_monitor.go:303] QuotaMonitor running
[Started "deployment" (15:13:22.300227), "csrcleaner" (.417088), "podgc" (.667987), "daemonset" (.920680) and "bootstrapsigner" (15:13:23.167373), each with its "Starting ..." / "Waiting for caches to sync ..." lines]
I0521 15:13:23.417756 1 request.go:645] Throttling request took 1.048172165s, request: GET:https://172.18.0.3:6443/apis/autoscaling/v2beta2?timeout=32s
I0521 15:13:23.419181 1 controllermanager.go:549] Started "persistentvolume-expander"
I0521 15:13:23.419409 1 expand_controller.go:303] Starting expand controller
I0521 15:13:23.419465 1 shared_informer.go:240] Waiting for caches to sync for expand
I0521 15:13:23.667330 1
controllermanager.go:549] Started \"clusterrole-aggregation\"\nI0521 15:13:23.667420 1 clusterroleaggregation_controller.go:149] Starting ClusterRoleAggregator\nI0521 15:13:23.667433 1 shared_informer.go:240] Waiting for caches to sync for ClusterRoleAggregator\nI0521 15:13:23.917210 1 controllermanager.go:549] Started \"pv-protection\"\nI0521 15:13:23.917300 1 pv_protection_controller.go:83] Starting PV protection controller\nI0521 15:13:23.917324 1 shared_informer.go:240] Waiting for caches to sync for PV protection\nI0521 15:13:24.168526 1 controllermanager.go:549] Started \"endpoint\"\nI0521 15:13:24.168612 1 endpoints_controller.go:184] Starting endpoint controller\nI0521 15:13:24.168624 1 shared_informer.go:240] Waiting for caches to sync for endpoint\nI0521 15:13:24.417167 1 controllermanager.go:549] Started \"replicaset\"\nI0521 15:13:24.417215 1 replica_set.go:182] Starting replicaset controller\nI0521 15:13:24.417245 1 shared_informer.go:240] Waiting for caches to sync for ReplicaSet\nI0521 15:13:24.567840 1 certificate_controller.go:118] Starting certificate controller \"csrsigning-kubelet-serving\"\nI0521 15:13:24.567883 1 shared_informer.go:240] Waiting for caches to sync for certificate-csrsigning-kubelet-serving\nI0521 15:13:24.567920 1 dynamic_serving_content.go:130] Starting csr-controller::/etc/kubernetes/pki/ca.crt::/etc/kubernetes/pki/ca.key\nI0521 15:13:24.568653 1 certificate_controller.go:118] Starting certificate controller \"csrsigning-kubelet-client\"\nI0521 15:13:24.568697 1 shared_informer.go:240] Waiting for caches to sync for certificate-csrsigning-kubelet-client\nI0521 15:13:24.568736 1 dynamic_serving_content.go:130] Starting csr-controller::/etc/kubernetes/pki/ca.crt::/etc/kubernetes/pki/ca.key\nI0521 15:13:24.569402 1 certificate_controller.go:118] Starting certificate controller \"csrsigning-kube-apiserver-client\"\nI0521 15:13:24.569430 1 shared_informer.go:240] Waiting for caches to sync for 
certificate-csrsigning-kube-apiserver-client\nI0521 15:13:24.569474 1 dynamic_serving_content.go:130] Starting csr-controller::/etc/kubernetes/pki/ca.crt::/etc/kubernetes/pki/ca.key\nI0521 15:13:24.570054 1 controllermanager.go:549] Started \"csrsigning\"\nI0521 15:13:24.570137 1 certificate_controller.go:118] Starting certificate controller \"csrsigning-legacy-unknown\"\nI0521 15:13:24.570153 1 shared_informer.go:240] Waiting for caches to sync for certificate-csrsigning-legacy-unknown\nI0521 15:13:24.570185 1 dynamic_serving_content.go:130] Starting csr-controller::/etc/kubernetes/pki/ca.crt::/etc/kubernetes/pki/ca.key\nI0521 15:13:24.716830 1 controllermanager.go:549] Started \"csrapproving\"\nI0521 15:13:24.716924 1 certificate_controller.go:118] Starting certificate controller \"csrapproving\"\nI0521 15:13:24.716939 1 shared_informer.go:240] Waiting for caches to sync for certificate-csrapproving\nI0521 15:13:24.867045 1 node_ipam_controller.go:91] Sending events to api server.\nI0521 15:13:34.871979 1 range_allocator.go:82] Sending events to api server.\nI0521 15:13:34.872268 1 range_allocator.go:116] No Secondary Service CIDR provided. 
Skipping filtering out secondary service addresses.\nI0521 15:13:34.872330 1 controllermanager.go:549] Started \"nodeipam\"\nI0521 15:13:34.872446 1 node_ipam_controller.go:159] Starting ipam controller\nI0521 15:13:34.872473 1 shared_informer.go:240] Waiting for caches to sync for node\nI0521 15:13:34.900424 1 controllermanager.go:549] Started \"attachdetach\"\nI0521 15:13:34.900550 1 attach_detach_controller.go:322] Starting attach detach controller\nI0521 15:13:34.900583 1 shared_informer.go:240] Waiting for caches to sync for attach detach\nI0521 15:13:34.926498 1 controllermanager.go:549] Started \"endpointslicemirroring\"\nI0521 15:13:34.926589 1 endpointslicemirroring_controller.go:211] Starting EndpointSliceMirroring controller\nI0521 15:13:34.926604 1 shared_informer.go:240] Waiting for caches to sync for endpoint_slice_mirroring\nI0521 15:13:34.955989 1 garbagecollector.go:128] Starting garbage collector controller\nI0521 15:13:34.956012 1 shared_informer.go:240] Waiting for caches to sync for garbage collector\nI0521 15:13:34.956039 1 graph_builder.go:282] GraphBuilder running\nI0521 15:13:34.956097 1 controllermanager.go:549] Started \"garbagecollector\"\nI0521 15:13:35.006310 1 controllermanager.go:549] Started \"horizontalpodautoscaling\"\nI0521 15:13:35.006372 1 horizontal.go:169] Starting HPA controller\nI0521 15:13:35.006400 1 shared_informer.go:240] Waiting for caches to sync for HPA\nI0521 15:13:35.006717 1 shared_informer.go:240] Waiting for caches to sync for resource quota\nI0521 15:13:35.019629 1 shared_informer.go:247] Caches are synced for expand \nW0521 15:13:35.021941 1 actual_state_of_world.go:534] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName=\"kali-control-plane\" does not exist\nI0521 15:13:35.025670 1 shared_informer.go:247] Caches are synced for endpoint_slice \nI0521 15:13:35.026631 1 shared_informer.go:247] Caches are synced for 
endpoint_slice_mirroring \nI0521 15:13:35.048365 1 shared_informer.go:247] Caches are synced for ReplicationController \nI0521 15:13:35.067149 1 shared_informer.go:247] Caches are synced for PVC protection \nI0521 15:13:35.067539 1 shared_informer.go:247] Caches are synced for ClusterRoleAggregator \nI0521 15:13:35.067687 1 shared_informer.go:247] Caches are synced for bootstrap_signer \nI0521 15:13:35.067940 1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kubelet-serving \nI0521 15:13:35.067943 1 shared_informer.go:247] Caches are synced for TTL \nI0521 15:13:35.068192 1 shared_informer.go:247] Caches are synced for GC \nI0521 15:13:35.068684 1 shared_informer.go:247] Caches are synced for endpoint \nI0521 15:13:35.068784 1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kubelet-client \nI0521 15:13:35.069551 1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kube-apiserver-client \nI0521 15:13:35.069649 1 shared_informer.go:247] Caches are synced for persistent volume \nI0521 15:13:35.070057 1 shared_informer.go:247] Caches are synced for service account \nI0521 15:13:35.070300 1 shared_informer.go:247] Caches are synced for certificate-csrsigning-legacy-unknown \nI0521 15:13:35.072580 1 shared_informer.go:247] Caches are synced for node \nI0521 15:13:35.072626 1 range_allocator.go:172] Starting range CIDR allocator\nI0521 15:13:35.072636 1 shared_informer.go:240] Waiting for caches to sync for cidrallocator\nI0521 15:13:35.072645 1 shared_informer.go:247] Caches are synced for cidrallocator \nI0521 15:13:35.081008 1 range_allocator.go:373] Set node kali-control-plane PodCIDR to [10.244.0.0/24]\nE0521 15:13:35.086685 1 clusterroleaggregation_controller.go:181] edit failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io \"edit\": the object has been modified; please apply your changes to the latest version and try again\nE0521 15:13:35.086962 1 
clusterroleaggregation_controller.go:181] view failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io \"view\": the object has been modified; please apply your changes to the latest version and try again\nE0521 15:13:35.099211 1 clusterroleaggregation_controller.go:181] edit failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io \"edit\": the object has been modified; please apply your changes to the latest version and try again\nI0521 15:13:35.100506 1 shared_informer.go:247] Caches are synced for deployment \nI0521 15:13:35.100687 1 shared_informer.go:247] Caches are synced for attach detach \nI0521 15:13:35.102808 1 shared_informer.go:247] Caches are synced for namespace \nI0521 15:13:35.106882 1 event.go:291] \"Event occurred\" object=\"local-path-storage/local-path-provisioner\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled up replica set local-path-provisioner-547f784dff to 1\"\nI0521 15:13:35.107358 1 event.go:291] \"Event occurred\" object=\"kube-system/coredns\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled up replica set coredns-f9fd979d6 to 2\"\nI0521 15:13:35.117053 1 shared_informer.go:247] Caches are synced for disruption \nI0521 15:13:35.117074 1 shared_informer.go:247] Caches are synced for certificate-csrapproving \nI0521 15:13:35.117086 1 disruption.go:339] Sending events to api server.\nI0521 15:13:35.117372 1 shared_informer.go:247] Caches are synced for ReplicaSet \nI0521 15:13:35.117399 1 shared_informer.go:247] Caches are synced for PV protection \nI0521 15:13:35.117528 1 shared_informer.go:247] Caches are synced for taint \nI0521 15:13:35.117599 1 node_lifecycle_controller.go:1429] Initializing eviction metric for zone: \nI0521 15:13:35.117632 1 taint_manager.go:187] Starting NoExecuteTaintManager\nW0521 15:13:35.117687 1 node_lifecycle_controller.go:1044] Missing 
timestamp for Node kali-control-plane. Assuming now as a timestamp.\nI0521 15:13:35.117745 1 node_lifecycle_controller.go:1195] Controller detected that all Nodes are not-Ready. Entering master disruption mode.\nI0521 15:13:35.117865 1 event.go:291] \"Event occurred\" object=\"kali-control-plane\" kind=\"Node\" apiVersion=\"v1\" type=\"Normal\" reason=\"RegisteredNode\" message=\"Node kali-control-plane event: Registered Node kali-control-plane in Controller\"\nI0521 15:13:35.118129 1 shared_informer.go:247] Caches are synced for job \nI0521 15:13:35.120850 1 shared_informer.go:247] Caches are synced for daemon sets \nI0521 15:13:35.123173 1 event.go:291] \"Event occurred\" object=\"kube-system/coredns-f9fd979d6\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: coredns-f9fd979d6-nfqfd\"\nI0521 15:13:35.126309 1 event.go:291] \"Event occurred\" object=\"local-path-storage/local-path-provisioner-547f784dff\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: local-path-provisioner-547f784dff-s88mx\"\nI0521 15:13:35.127766 1 event.go:291] \"Event occurred\" object=\"kube-system/coredns-f9fd979d6\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: coredns-f9fd979d6-mpnsm\"\nI0521 15:13:35.137021 1 event.go:291] \"Event occurred\" object=\"kube-system/kindnet\" kind=\"DaemonSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: kindnet-7b2zs\"\nI0521 15:13:35.138107 1 event.go:291] \"Event occurred\" object=\"kube-system/kube-proxy\" kind=\"DaemonSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: kube-proxy-c6n8g\"\nI0521 15:13:35.167700 1 shared_informer.go:247] Caches are synced for stateful set \nI0521 15:13:35.272109 1 shared_informer.go:247] Caches are synced for resource quota \nI0521 15:13:35.306476 1 
shared_informer.go:247] Caches are synced for HPA \nI0521 15:13:35.306906 1 shared_informer.go:247] Caches are synced for resource quota \nI0521 15:13:35.664328 1 shared_informer.go:240] Waiting for caches to sync for garbage collector\nI0521 15:13:35.756267 1 shared_informer.go:247] Caches are synced for garbage collector \nI0521 15:13:35.756309 1 garbagecollector.go:137] Garbage collector: all resource monitors have synced. Proceeding to collect garbage\nI0521 15:13:35.764651 1 shared_informer.go:247] Caches are synced for garbage collector \nI0521 15:13:50.118458 1 node_lifecycle_controller.go:1222] Controller detected that some Nodes are Ready. Exiting master disruption mode.\nW0521 15:13:50.228763 1 actual_state_of_world.go:534] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName=\"kali-worker\" does not exist\nI0521 15:13:50.235819 1 range_allocator.go:373] Set node kali-worker PodCIDR to [10.244.1.0/24]\nI0521 15:13:50.239229 1 event.go:291] \"Event occurred\" object=\"kube-system/kube-proxy\" kind=\"DaemonSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: kube-proxy-ggwmf\"\nI0521 15:13:50.239525 1 event.go:291] \"Event occurred\" object=\"kube-system/kindnet\" kind=\"DaemonSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: kindnet-vlqfv\"\nE0521 15:13:50.257438 1 daemon_controller.go:320] kube-system/kindnet failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kindnet\", GenerateName:\"\", Namespace:\"kube-system\", SelfLink:\"/apis/apps/v1/namespaces/kube-system/daemonsets/kindnet\", UID:\"4475fe22-8df5-4436-bc1d-18482df5a443\", ResourceVersion:\"485\", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63757206799, loc:(*time.Location)(0x6a53ca0)}}, 
DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"app\":\"kindnet\", \"k8s-app\":\"kindnet\", \"tier\":\"node\"}, Annotations:map[string]string{\"deprecated.daemonset.template.generation\":\"1\"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:\"\", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kubectl-create\", Operation:\"Update\", APIVersion:\"apps/v1\", Time:(*v1.Time)(0xc000eb0de0), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc000eb0f00)}, v1.ManagedFieldsEntry{Manager:\"kube-controller-manager\", Operation:\"Update\", APIVersion:\"apps/v1\", Time:(*v1.Time)(0xc000eb1020), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc000eb1140)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc000eb1260), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:\"\", GenerateName:\"\", Namespace:\"\", SelfLink:\"\", UID:\"\", ResourceVersion:\"\", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"app\":\"kindnet\", \"k8s-app\":\"kindnet\", \"tier\":\"node\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:\"\", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:\"cni-cfg\", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc000eb1380), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), 
RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:\"xtables-lock\", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc000eb14a0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), 
PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:\"lib-modules\", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc000eb15c0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:\"kindnet-cni\", Image:\"docker.io/kindest/kindnetd:v20210326-1e038dc5\", Command:[]string(nil), Args:[]string(nil), WorkingDir:\"\", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:\"HOST_IP\", Value:\"\", ValueFrom:(*v1.EnvVarSource)(0xc000eb1700)}, v1.EnvVar{Name:\"POD_IP\", Value:\"\", 
ValueFrom:(*v1.EnvVarSource)(0xc000eb1860)}, v1.EnvVar{Name:\"POD_SUBNET\", Value:\"10.244.0.0/16\", ValueFrom:(*v1.EnvVarSource)(nil)}, v1.EnvVar{Name:\"CONTROL_PLANE_ENDPOINT\", Value:\"kali-control-plane:6443\", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{\"cpu\":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:\"100m\", Format:\"DecimalSI\"}, \"memory\":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:\"50Mi\", Format:\"BinarySI\"}}, Requests:v1.ResourceList{\"cpu\":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:\"100m\", Format:\"DecimalSI\"}, \"memory\":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:\"50Mi\", Format:\"BinarySI\"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:\"cni-cfg\", ReadOnly:false, MountPath:\"/etc/cni/net.d\", SubPath:\"\", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:\"\"}, v1.VolumeMount{Name:\"xtables-lock\", ReadOnly:false, MountPath:\"/run/xtables.lock\", SubPath:\"\", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:\"\"}, v1.VolumeMount{Name:\"lib-modules\", ReadOnly:true, MountPath:\"/lib/modules\", SubPath:\"\", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:\"\"}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:\"/dev/termination-log\", TerminationMessagePolicy:\"File\", ImagePullPolicy:\"IfNotPresent\", SecurityContext:(*v1.SecurityContext)(0xc000c60600), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:\"Always\", TerminationGracePeriodSeconds:(*int64)(0xc0028a5ad8), 
ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:\"ClusterFirst\", NodeSelector:map[string]string(nil), ServiceAccountName:\"kindnet\", DeprecatedServiceAccount:\"kindnet\", AutomountServiceAccountToken:(*bool)(nil), NodeName:\"\", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0008ab960), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:\"\", Subdomain:\"\", Affinity:(*v1.Affinity)(nil), SchedulerName:\"default-scheduler\", Tolerations:[]v1.Toleration{v1.Toleration{Key:\"\", Operator:\"Exists\", Value:\"\", Effect:\"\", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:\"\", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:\"RollingUpdate\", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc0003cd128)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc0028a5b08)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:1, NumberMisscheduled:0, DesiredNumberScheduled:1, NumberReady:1, ObservedGeneration:1, UpdatedNumberScheduled:1, NumberAvailable:1, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps \"kindnet\": the object has been modified; please apply your changes to the latest version and try again\nW0521 15:13:50.437232 1 actual_state_of_world.go:534] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName=\"kali-worker2\" does not exist\nI0521 15:13:50.444385 1 event.go:291] \"Event occurred\" object=\"kube-system/kube-proxy\" kind=\"DaemonSet\" 
apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: kube-proxy-87457\"\nI0521 15:13:50.444499 1 range_allocator.go:373] Set node kali-worker2 PodCIDR to [10.244.2.0/24]\nI0521 15:13:50.445017 1 event.go:291] \"Event occurred\" object=\"kube-system/kindnet\" kind=\"DaemonSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: kindnet-n7f64\"\nE0521 15:13:50.461896 1 daemon_controller.go:320] kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kube-proxy\", GenerateName:\"\", Namespace:\"kube-system\", SelfLink:\"/apis/apps/v1/namespaces/kube-system/daemonsets/kube-proxy\", UID:\"41b3104f-a576-4641-b321-1d0dfa73f9da\", ResourceVersion:\"532\", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63757206797, loc:(*time.Location)(0x6a53ca0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"k8s-app\":\"kube-proxy\"}, Annotations:map[string]string{\"deprecated.daemonset.template.generation\":\"1\"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:\"\", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kubeadm\", Operation:\"Update\", APIVersion:\"apps/v1\", Time:(*v1.Time)(0xc001a12ca0), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc001a12cc0)}, v1.ManagedFieldsEntry{Manager:\"kube-controller-manager\", Operation:\"Update\", APIVersion:\"apps/v1\", Time:(*v1.Time)(0xc001a12ce0), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc001a12d00)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc001a12d20), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:\"\", GenerateName:\"\", Namespace:\"\", SelfLink:\"\", UID:\"\", ResourceVersion:\"\", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, 
loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"k8s-app\":\"kube-proxy\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:\"\", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:\"kube-proxy\", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0xc0018b9880), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:\"xtables-lock\", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc001a12d40), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), 
GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:\"lib-modules\", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc001a12dc0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), 
VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:\"kube-proxy\", Image:\"k8s.gcr.io/kube-proxy:v1.19.11\", Command:[]string{\"/usr/local/bin/kube-proxy\", \"--config=/var/lib/kube-proxy/config.conf\", \"--hostname-override=$(NODE_NAME)\"}, Args:[]string(nil), WorkingDir:\"\", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:\"NODE_NAME\", Value:\"\", ValueFrom:(*v1.EnvVarSource)(0xc001a12e40)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:\"kube-proxy\", ReadOnly:false, MountPath:\"/var/lib/kube-proxy\", SubPath:\"\", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:\"\"}, v1.VolumeMount{Name:\"xtables-lock\", ReadOnly:false, MountPath:\"/run/xtables.lock\", SubPath:\"\", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:\"\"}, v1.VolumeMount{Name:\"lib-modules\", ReadOnly:true, MountPath:\"/lib/modules\", SubPath:\"\", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:\"\"}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:\"/dev/termination-log\", TerminationMessagePolicy:\"File\", ImagePullPolicy:\"IfNotPresent\", SecurityContext:(*v1.SecurityContext)(0xc001711080), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), 
RestartPolicy:\"Always\", TerminationGracePeriodSeconds:(*int64)(0xc001dce008), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:\"ClusterFirst\", NodeSelector:map[string]string{\"kubernetes.io/os\":\"linux\"}, ServiceAccountName:\"kube-proxy\", DeprecatedServiceAccount:\"kube-proxy\", AutomountServiceAccountToken:(*bool)(nil), NodeName:\"\", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0004a79d0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:\"\", Subdomain:\"\", Affinity:(*v1.Affinity)(nil), SchedulerName:\"default-scheduler\", Tolerations:[]v1.Toleration{v1.Toleration{Key:\"CriticalAddonsOnly\", Operator:\"Exists\", Value:\"\", Effect:\"\", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:\"\", Operator:\"Exists\", Value:\"\", Effect:\"\", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:\"system-node-critical\", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:\"RollingUpdate\", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc0022bc428)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc001dce078)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:2, NumberMisscheduled:0, DesiredNumberScheduled:2, NumberReady:1, ObservedGeneration:1, UpdatedNumberScheduled:2, NumberAvailable:1, NumberUnavailable:1, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps \"kube-proxy\": the object has been modified; please apply your changes to the latest version and try again\nI0521 15:13:55.118878 1 event.go:291] \"Event occurred\" 
object=\"kali-worker\" kind=\"Node\" apiVersion=\"v1\" type=\"Normal\" reason=\"RegisteredNode\" message=\"Node kali-worker event: Registered Node kali-worker in Controller\"\nW0521 15:13:55.118962 1 node_lifecycle_controller.go:1044] Missing timestamp for Node kali-worker. Assuming now as a timestamp.\nI0521 15:13:55.119033 1 event.go:291] \"Event occurred\" object=\"kali-worker2\" kind=\"Node\" apiVersion=\"v1\" type=\"Normal\" reason=\"RegisteredNode\" message=\"Node kali-worker2 event: Registered Node kali-worker2 in Controller\"\nW0521 15:13:55.119046 1 node_lifecycle_controller.go:1044] Missing timestamp for Node kali-worker2. Assuming now as a timestamp.\nI0521 15:16:01.465058 1 event.go:291] \"Event occurred\" object=\"kube-system/create-loop-devs\" kind=\"DaemonSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: create-loop-devs-cwbn4\"\nI0521 15:16:01.468457 1 event.go:291] \"Event occurred\" object=\"kube-system/create-loop-devs\" kind=\"DaemonSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: create-loop-devs-26xt8\"\nI0521 15:16:01.470000 1 event.go:291] \"Event occurred\" object=\"kube-system/create-loop-devs\" kind=\"DaemonSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: create-loop-devs-8l686\"\nE0521 15:16:01.484330 1 daemon_controller.go:320] kube-system/create-loop-devs failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"create-loop-devs\", GenerateName:\"\", Namespace:\"kube-system\", SelfLink:\"/apis/apps/v1/namespaces/kube-system/daemonsets/create-loop-devs\", UID:\"14b411a5-28ad-4d2c-8713-6de6f2f844d8\", ResourceVersion:\"1049\", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63757206961, loc:(*time.Location)(0x6a53ca0)}}, DeletionTimestamp:(*v1.Time)(nil), 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"app\":\"create-loop-devs\"}, Annotations:map[string]string{\"deprecated.daemonset.template.generation\":\"1\", \"kubectl.kubernetes.io/last-applied-configuration\":\"{\\\"apiVersion\\\":\\\"apps/v1\\\",\\\"kind\\\":\\\"DaemonSet\\\",\\\"metadata\\\":{\\\"annotations\\\":{},\\\"labels\\\":{\\\"app\\\":\\\"create-loop-devs\\\"},\\\"name\\\":\\\"create-loop-devs\\\",\\\"namespace\\\":\\\"kube-system\\\"},\\\"spec\\\":{\\\"selector\\\":{\\\"matchLabels\\\":{\\\"name\\\":\\\"create-loop-devs\\\"}},\\\"template\\\":{\\\"metadata\\\":{\\\"labels\\\":{\\\"name\\\":\\\"create-loop-devs\\\"}},\\\"spec\\\":{\\\"containers\\\":[{\\\"command\\\":[\\\"sh\\\",\\\"-c\\\",\\\"while true; do\\\\n for i in $(seq 0 1000); do\\\\n if ! [ -e /dev/loop$i ]; then\\\\n mknod /dev/loop$i b 7 $i\\\\n fi\\\\n done\\\\n sleep 100000000\\\\ndone\\\\n\\\"],\\\"image\\\":\\\"alpine:3.6\\\",\\\"imagePullPolicy\\\":\\\"IfNotPresent\\\",\\\"name\\\":\\\"loopdev\\\",\\\"resources\\\":{},\\\"securityContext\\\":{\\\"privileged\\\":true},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/dev\\\",\\\"name\\\":\\\"dev\\\"}]}],\\\"tolerations\\\":[{\\\"effect\\\":\\\"NoSchedule\\\",\\\"operator\\\":\\\"Exists\\\"}],\\\"volumes\\\":[{\\\"hostPath\\\":{\\\"path\\\":\\\"/dev\\\"},\\\"name\\\":\\\"dev\\\"}]}}}}\\n\"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:\"\", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kubectl-client-side-apply\", Operation:\"Update\", APIVersion:\"apps/v1\", Time:(*v1.Time)(0xc003309a80), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc003309ac0)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc003309ae0), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:\"\", GenerateName:\"\", Namespace:\"\", SelfLink:\"\", UID:\"\", ResourceVersion:\"\", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, 
loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"name\":\"create-loop-devs\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:\"\", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:\"dev\", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc003309b00), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:\"loopdev\", Image:\"alpine:3.6\", Command:[]string{\"sh\", \"-c\", \"while true; do\\n for i in $(seq 0 1000); do\\n if ! 
[ -e /dev/loop$i ]; then\\n mknod /dev/loop$i b 7 $i\\n fi\\n done\\n sleep 100000000\\ndone\\n\"}, Args:[]string(nil), WorkingDir:\"\", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:\"dev\", ReadOnly:false, MountPath:\"/dev\", SubPath:\"\", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:\"\"}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:\"/dev/termination-log\", TerminationMessagePolicy:\"File\", ImagePullPolicy:\"IfNotPresent\", SecurityContext:(*v1.SecurityContext)(0xc00336b860), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:\"Always\", TerminationGracePeriodSeconds:(*int64)(0xc0008122a0), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:\"ClusterFirst\", NodeSelector:map[string]string(nil), ServiceAccountName:\"\", DeprecatedServiceAccount:\"\", AutomountServiceAccountToken:(*bool)(nil), NodeName:\"\", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc000d22c40), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:\"\", Subdomain:\"\", Affinity:(*v1.Affinity)(nil), SchedulerName:\"default-scheduler\", Tolerations:[]v1.Toleration{v1.Toleration{Key:\"\", Operator:\"Exists\", Value:\"\", Effect:\"NoSchedule\", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:\"\", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), 
TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:\"RollingUpdate\", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc001580208)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc0008122cc)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps \"create-loop-devs\": the object has been modified; please apply your changes to the latest version and try again\nI0521 15:16:01.830924 1 event.go:291] \"Event occurred\" object=\"kube-system/tune-sysctls\" kind=\"DaemonSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: tune-sysctls-zzq45\"\nI0521 15:16:01.846674 1 event.go:291] \"Event occurred\" object=\"kube-system/tune-sysctls\" kind=\"DaemonSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: tune-sysctls-m54ts\"\nI0521 15:16:01.847278 1 event.go:291] \"Event occurred\" object=\"kube-system/tune-sysctls\" kind=\"DaemonSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: tune-sysctls-8m4jc\"\nI0521 15:16:02.418395 1 event.go:291] \"Event occurred\" object=\"kube-system/kube-multus-ds\" kind=\"DaemonSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: kube-multus-ds-xtw9p\"\nI0521 15:16:02.426557 1 event.go:291] \"Event occurred\" object=\"kube-system/kube-multus-ds\" kind=\"DaemonSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: kube-multus-ds-zr9pd\"\nI0521 15:16:02.426766 1 event.go:291] \"Event occurred\" object=\"kube-system/kube-multus-ds\" kind=\"DaemonSet\" apiVersion=\"apps/v1\" 
type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: kube-multus-ds-f4mr9\"\nE0521 15:16:02.451556 1 daemon_controller.go:320] kube-system/kube-multus-ds failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kube-multus-ds\", GenerateName:\"\", Namespace:\"kube-system\", SelfLink:\"/apis/apps/v1/namespaces/kube-system/daemonsets/kube-multus-ds\", UID:\"928bc64f-c0c9-475a-b436-4ec77811dd11\", ResourceVersion:\"1096\", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63757206962, loc:(*time.Location)(0x6a53ca0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"app\":\"multus\", \"name\":\"multus\", \"tier\":\"node\"}, Annotations:map[string]string{\"deprecated.daemonset.template.generation\":\"1\", \"kubectl.kubernetes.io/last-applied-configuration\":\"{\\\"apiVersion\\\":\\\"apps/v1\\\",\\\"kind\\\":\\\"DaemonSet\\\",\\\"metadata\\\":{\\\"annotations\\\":{},\\\"labels\\\":{\\\"app\\\":\\\"multus\\\",\\\"name\\\":\\\"multus\\\",\\\"tier\\\":\\\"node\\\"},\\\"name\\\":\\\"kube-multus-ds\\\",\\\"namespace\\\":\\\"kube-system\\\"},\\\"spec\\\":{\\\"selector\\\":{\\\"matchLabels\\\":{\\\"name\\\":\\\"multus\\\"}},\\\"template\\\":{\\\"metadata\\\":{\\\"labels\\\":{\\\"app\\\":\\\"multus\\\",\\\"name\\\":\\\"multus\\\",\\\"tier\\\":\\\"node\\\"}},\\\"spec\\\":{\\\"containers\\\":[{\\\"args\\\":[\\\"--multus-conf-file=auto\\\",\\\"--cni-version=0.3.1\\\"],\\\"command\\\":[\\\"/entrypoint.sh\\\"],\\\"image\\\":\\\"ghcr.io/k8snetworkplumbingwg/multus-cni:stable\\\",\\\"name\\\":\\\"kube-multus\\\",\\\"resources\\\":{\\\"limits\\\":{\\\"cpu\\\":\\\"100m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"requests\\\":{\\\"cpu\\\":\\\"100m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"securityContext\\\":{\\\"privileged\\\":true},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"cni\\\
"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/tmp/multus-conf\\\",\\\"name\\\":\\\"multus-cfg\\\"}]}],\\\"hostNetwork\\\":true,\\\"serviceAccountName\\\":\\\"multus\\\",\\\"terminationGracePeriodSeconds\\\":10,\\\"tolerations\\\":[{\\\"effect\\\":\\\"NoSchedule\\\",\\\"operator\\\":\\\"Exists\\\"}],\\\"volumes\\\":[{\\\"hostPath\\\":{\\\"path\\\":\\\"/etc/cni/net.d\\\"},\\\"name\\\":\\\"cni\\\"},{\\\"hostPath\\\":{\\\"path\\\":\\\"/opt/cni/bin\\\"},\\\"name\\\":\\\"cnibin\\\"},{\\\"configMap\\\":{\\\"items\\\":[{\\\"key\\\":\\\"cni-conf.json\\\",\\\"path\\\":\\\"70-multus.conf\\\"}],\\\"name\\\":\\\"multus-cni-config\\\"},\\\"name\\\":\\\"multus-cfg\\\"}]}},\\\"updateStrategy\\\":{\\\"type\\\":\\\"RollingUpdate\\\"}}}\\n\"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:\"\", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kubectl-client-side-apply\", Operation:\"Update\", APIVersion:\"apps/v1\", Time:(*v1.Time)(0xc002dd40c0), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc002dd40e0)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc002dd4100), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:\"\", GenerateName:\"\", Namespace:\"\", SelfLink:\"\", UID:\"\", ResourceVersion:\"\", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"app\":\"multus\", \"name\":\"multus\", \"tier\":\"node\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:\"\", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:\"cni\", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc002dd4120), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), 
GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:\"cnibin\", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc002dd4140), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), 
AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:\"multus-cfg\", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0xc001906c80), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:\"kube-multus\", 
Image:\"ghcr.io/k8snetworkplumbingwg/multus-cni:stable\", Command:[]string{\"/entrypoint.sh\"}, Args:[]string{\"--multus-conf-file=auto\", \"--cni-version=0.3.1\"}, WorkingDir:\"\", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{\"cpu\":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:\"100m\", Format:\"DecimalSI\"}, \"memory\":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:\"50Mi\", Format:\"BinarySI\"}}, Requests:v1.ResourceList{\"cpu\":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:\"100m\", Format:\"DecimalSI\"}, \"memory\":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:\"50Mi\", Format:\"BinarySI\"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:\"cni\", ReadOnly:false, MountPath:\"/host/etc/cni/net.d\", SubPath:\"\", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:\"\"}, v1.VolumeMount{Name:\"cnibin\", ReadOnly:false, MountPath:\"/host/opt/cni/bin\", SubPath:\"\", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:\"\"}, v1.VolumeMount{Name:\"multus-cfg\", ReadOnly:false, MountPath:\"/tmp/multus-conf\", SubPath:\"\", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:\"\"}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:\"/dev/termination-log\", TerminationMessagePolicy:\"File\", ImagePullPolicy:\"IfNotPresent\", SecurityContext:(*v1.SecurityContext)(0xc00187d0e0), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:\"Always\", 
TerminationGracePeriodSeconds:(*int64)(0xc003580238), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:\"ClusterFirst\", NodeSelector:map[string]string(nil), ServiceAccountName:\"multus\", DeprecatedServiceAccount:\"multus\", AutomountServiceAccountToken:(*bool)(nil), NodeName:\"\", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc000d23420), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:\"\", Subdomain:\"\", Affinity:(*v1.Affinity)(nil), SchedulerName:\"default-scheduler\", Tolerations:[]v1.Toleration{v1.Toleration{Key:\"\", Operator:\"Exists\", Value:\"\", Effect:\"NoSchedule\", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:\"\", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:\"RollingUpdate\", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc0015803a0)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc003580280)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps \"kube-multus-ds\": the object has been modified; please apply your changes to the latest version and try again\nI0521 15:16:03.758826 1 event.go:291] \"Event occurred\" object=\"metallb-system/speaker\" kind=\"DaemonSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: speaker-jlmfn\"\nI0521 15:16:03.761625 1 event.go:291] 
\"Event occurred\" object=\"metallb-system/controller\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled up replica set controller-675995489c to 1\"\nI0521 15:16:03.763763 1 event.go:291] \"Event occurred\" object=\"metallb-system/speaker\" kind=\"DaemonSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: speaker-kjmdr\"\nI0521 15:16:03.764273 1 event.go:291] \"Event occurred\" object=\"metallb-system/speaker\" kind=\"DaemonSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: speaker-x7d27\"\nI0521 15:16:03.765757 1 event.go:291] \"Event occurred\" object=\"metallb-system/controller-675995489c\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: controller-675995489c-scdfn\"\nE0521 15:16:03.783007 1 daemon_controller.go:320] metallb-system/speaker failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"speaker\", GenerateName:\"\", Namespace:\"metallb-system\", SelfLink:\"/apis/apps/v1/namespaces/metallb-system/daemonsets/speaker\", UID:\"d7daf5aa-21be-488b-92a4-c4eaee7a388b\", ResourceVersion:\"1150\", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63757206963, loc:(*time.Location)(0x6a53ca0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"app\":\"metallb\", \"component\":\"speaker\"}, Annotations:map[string]string{\"deprecated.daemonset.template.generation\":\"1\", 
\"kubectl.kubernetes.io/last-applied-configuration\":\"{\\\"apiVersion\\\":\\\"apps/v1\\\",\\\"kind\\\":\\\"DaemonSet\\\",\\\"metadata\\\":{\\\"annotations\\\":{},\\\"labels\\\":{\\\"app\\\":\\\"metallb\\\",\\\"component\\\":\\\"speaker\\\"},\\\"name\\\":\\\"speaker\\\",\\\"namespace\\\":\\\"metallb-system\\\"},\\\"spec\\\":{\\\"selector\\\":{\\\"matchLabels\\\":{\\\"app\\\":\\\"metallb\\\",\\\"component\\\":\\\"speaker\\\"}},\\\"template\\\":{\\\"metadata\\\":{\\\"annotations\\\":{\\\"prometheus.io/port\\\":\\\"7472\\\",\\\"prometheus.io/scrape\\\":\\\"true\\\"},\\\"labels\\\":{\\\"app\\\":\\\"metallb\\\",\\\"component\\\":\\\"speaker\\\"}},\\\"spec\\\":{\\\"containers\\\":[{\\\"args\\\":[\\\"--port=7472\\\",\\\"--config=config\\\"],\\\"env\\\":[{\\\"name\\\":\\\"METALLB_NODE_NAME\\\",\\\"valueFrom\\\":{\\\"fieldRef\\\":{\\\"fieldPath\\\":\\\"spec.nodeName\\\"}}},{\\\"name\\\":\\\"METALLB_HOST\\\",\\\"valueFrom\\\":{\\\"fieldRef\\\":{\\\"fieldPath\\\":\\\"status.hostIP\\\"}}},{\\\"name\\\":\\\"METALLB_ML_BIND_ADDR\\\",\\\"valueFrom\\\":{\\\"fieldRef\\\":{\\\"fieldPath\\\":\\\"status.podIP\\\"}}},{\\\"name\\\":\\\"METALLB_ML_LABELS\\\",\\\"value\\\":\\\"app=metallb,component=speaker\\\"},{\\\"name\\\":\\\"METALLB_ML_SECRET_KEY\\\",\\\"valueFrom\\\":{\\\"secretKeyRef\\\":{\\\"key\\\":\\\"secretkey\\\",\\\"name\\\":\\\"memberlist\\\"}}}],\\\"image\\\":\\\"quay.io/metallb/speaker:main\\\",\\\"name\\\":\\\"speaker\\\",\\\"ports\\\":[{\\\"containerPort\\\":7472,\\\"name\\\":\\\"monitoring\\\"},{\\\"containerPort\\\":7946,\\\"name\\\":\\\"memberlist-tcp\\\"},{\\\"containerPort\\\":7946,\\\"name\\\":\\\"memberlist-udp\\\",\\\"protocol\\\":\\\"UDP\\\"}],\\\"securityContext\\\":{\\\"allowPrivilegeEscalation\\\":false,\\\"capabilities\\\":{\\\"add\\\":[\\\"NET_RAW\\\"],\\\"drop\\\":[\\\"ALL\\\"]},\\\"readOnlyRootFilesystem\\\":true}}],\\\"hostNetwork\\\":true,\\\"nodeSelector\\\":{\\\"kubernetes.io/os\\\":\\\"linux\\\"},\\\"serviceAccountName\\\":\\\"speaker\\\",\\\"terminati
onGracePeriodSeconds\\\":2,\\\"tolerations\\\":[{\\\"effect\\\":\\\"NoSchedule\\\",\\\"key\\\":\\\"node-role.kubernetes.io/master\\\",\\\"operator\\\":\\\"Exists\\\"}]}}}}\\n\"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:\"\", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kubectl-client-side-apply\", Operation:\"Update\", APIVersion:\"apps/v1\", Time:(*v1.Time)(0xc0024cd380), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc0024cd3a0)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc0024cd3c0), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:\"\", GenerateName:\"\", Namespace:\"\", SelfLink:\"\", UID:\"\", ResourceVersion:\"\", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"app\":\"metallb\", \"component\":\"speaker\"}, Annotations:map[string]string{\"prometheus.io/port\":\"7472\", \"prometheus.io/scrape\":\"true\"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:\"\", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume(nil), InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:\"speaker\", Image:\"quay.io/metallb/speaker:main\", Command:[]string(nil), Args:[]string{\"--port=7472\", \"--config=config\"}, WorkingDir:\"\", Ports:[]v1.ContainerPort{v1.ContainerPort{Name:\"monitoring\", HostPort:7472, ContainerPort:7472, Protocol:\"TCP\", HostIP:\"\"}, v1.ContainerPort{Name:\"memberlist-tcp\", HostPort:7946, ContainerPort:7946, Protocol:\"TCP\", HostIP:\"\"}, v1.ContainerPort{Name:\"memberlist-udp\", HostPort:7946, ContainerPort:7946, Protocol:\"UDP\", HostIP:\"\"}}, EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:\"METALLB_NODE_NAME\", Value:\"\", ValueFrom:(*v1.EnvVarSource)(0xc0024cd400)}, v1.EnvVar{Name:\"METALLB_HOST\", 
Value:\"\", ValueFrom:(*v1.EnvVarSource)(0xc0024cd440)}, v1.EnvVar{Name:\"METALLB_ML_BIND_ADDR\", Value:\"\", ValueFrom:(*v1.EnvVarSource)(0xc0024cd480)}, v1.EnvVar{Name:\"METALLB_ML_LABELS\", Value:\"app=metallb,component=speaker\", ValueFrom:(*v1.EnvVarSource)(nil)}, v1.EnvVar{Name:\"METALLB_ML_SECRET_KEY\", Value:\"\", ValueFrom:(*v1.EnvVarSource)(0xc0024cd4c0)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount(nil), VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:\"/dev/termination-log\", TerminationMessagePolicy:\"File\", ImagePullPolicy:\"IfNotPresent\", SecurityContext:(*v1.SecurityContext)(0xc00340c300), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:\"Always\", TerminationGracePeriodSeconds:(*int64)(0xc0033a1d88), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:\"ClusterFirst\", NodeSelector:map[string]string{\"kubernetes.io/os\":\"linux\"}, ServiceAccountName:\"speaker\", DeprecatedServiceAccount:\"speaker\", AutomountServiceAccountToken:(*bool)(nil), NodeName:\"\", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc000759880), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:\"\", Subdomain:\"\", Affinity:(*v1.Affinity)(nil), SchedulerName:\"default-scheduler\", Tolerations:[]v1.Toleration{v1.Toleration{Key:\"node-role.kubernetes.io/master\", Operator:\"Exists\", Value:\"\", Effect:\"NoSchedule\", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:\"\", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), 
Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:\"RollingUpdate\", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc001804a58)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc0033a1ddc)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps \"speaker\": the object has been modified; please apply your changes to the latest version and try again\nI0521 15:16:04.986297 1 event.go:291] \"Event occurred\" object=\"projectcontour/contour-certgen-v1.15.1\" kind=\"Job\" apiVersion=\"batch/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: contour-certgen-v1.15.1-7m8mh\"\nI0521 15:16:05.037495 1 event.go:291] \"Event occurred\" object=\"projectcontour/contour\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled up replica set contour-6648989f79 to 2\"\nI0521 15:16:05.045486 1 event.go:291] \"Event occurred\" object=\"projectcontour/contour-6648989f79\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: contour-6648989f79-c2th6\"\nI0521 15:16:05.050511 1 event.go:291] \"Event occurred\" object=\"projectcontour/contour-6648989f79\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: contour-6648989f79-6s225\"\nI0521 15:16:05.056687 1 event.go:291] \"Event occurred\" object=\"projectcontour/envoy\" kind=\"DaemonSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: envoy-gkg7t\"\nI0521 15:16:05.061125 1 event.go:291] \"Event occurred\" 
object=\"projectcontour/envoy\" kind=\"DaemonSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: envoy-rs2lk\"\nI0521 15:16:05.181693 1 event.go:291] \"Event occurred\" object=\"projectcontour/envoy\" kind=\"DaemonSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: envoy-788lx\"\nI0521 15:16:05.186657 1 event.go:291] \"Event occurred\" object=\"projectcontour/envoy\" kind=\"DaemonSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"Deleted pod: envoy-gkg7t\"\nI0521 15:16:05.188036 1 event.go:291] \"Event occurred\" object=\"projectcontour/envoy\" kind=\"DaemonSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"Deleted pod: envoy-rs2lk\"\nE0521 15:16:05.213186 1 daemon_controller.go:320] projectcontour/envoy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"envoy\", GenerateName:\"\", Namespace:\"projectcontour\", SelfLink:\"/apis/apps/v1/namespaces/projectcontour/daemonsets/envoy\", UID:\"ac7a3356-909f-44af-ae62-7163a05f72ec\", ResourceVersion:\"1273\", Generation:2, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63757206965, loc:(*time.Location)(0x6a53ca0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"app\":\"envoy\"}, Annotations:map[string]string{\"deprecated.daemonset.template.generation\":\"2\", 
\"kubectl.kubernetes.io/last-applied-configuration\":\"{\\\"apiVersion\\\":\\\"apps/v1\\\",\\\"kind\\\":\\\"DaemonSet\\\",\\\"metadata\\\":{\\\"annotations\\\":{},\\\"labels\\\":{\\\"app\\\":\\\"envoy\\\"},\\\"name\\\":\\\"envoy\\\",\\\"namespace\\\":\\\"projectcontour\\\"},\\\"spec\\\":{\\\"selector\\\":{\\\"matchLabels\\\":{\\\"app\\\":\\\"envoy\\\"}},\\\"template\\\":{\\\"metadata\\\":{\\\"annotations\\\":{\\\"prometheus.io/path\\\":\\\"/stats/prometheus\\\",\\\"prometheus.io/port\\\":\\\"8002\\\",\\\"prometheus.io/scrape\\\":\\\"true\\\"},\\\"labels\\\":{\\\"app\\\":\\\"envoy\\\"}},\\\"spec\\\":{\\\"automountServiceAccountToken\\\":false,\\\"containers\\\":[{\\\"args\\\":[\\\"envoy\\\",\\\"shutdown-manager\\\"],\\\"command\\\":[\\\"/bin/contour\\\"],\\\"image\\\":\\\"docker.io/projectcontour/contour:v1.15.1\\\",\\\"imagePullPolicy\\\":\\\"IfNotPresent\\\",\\\"lifecycle\\\":{\\\"preStop\\\":{\\\"exec\\\":{\\\"command\\\":[\\\"/bin/contour\\\",\\\"envoy\\\",\\\"shutdown\\\"]}}},\\\"livenessProbe\\\":{\\\"httpGet\\\":{\\\"path\\\":\\\"/healthz\\\",\\\"port\\\":8090},\\\"initialDelaySeconds\\\":3,\\\"periodSeconds\\\":10},\\\"name\\\":\\\"shutdown-manager\\\"},{\\\"args\\\":[\\\"-c\\\",\\\"/config/envoy.json\\\",\\\"--service-cluster $(CONTOUR_NAMESPACE)\\\",\\\"--service-node $(ENVOY_POD_NAME)\\\",\\\"--log-level 
info\\\"],\\\"command\\\":[\\\"envoy\\\"],\\\"env\\\":[{\\\"name\\\":\\\"CONTOUR_NAMESPACE\\\",\\\"valueFrom\\\":{\\\"fieldRef\\\":{\\\"apiVersion\\\":\\\"v1\\\",\\\"fieldPath\\\":\\\"metadata.namespace\\\"}}},{\\\"name\\\":\\\"ENVOY_POD_NAME\\\",\\\"valueFrom\\\":{\\\"fieldRef\\\":{\\\"apiVersion\\\":\\\"v1\\\",\\\"fieldPath\\\":\\\"metadata.name\\\"}}}],\\\"image\\\":\\\"docker.io/envoyproxy/envoy:v1.18.3\\\",\\\"imagePullPolicy\\\":\\\"IfNotPresent\\\",\\\"lifecycle\\\":{\\\"preStop\\\":{\\\"httpGet\\\":{\\\"path\\\":\\\"/shutdown\\\",\\\"port\\\":8090,\\\"scheme\\\":\\\"HTTP\\\"}}},\\\"name\\\":\\\"envoy\\\",\\\"ports\\\":[{\\\"containerPort\\\":8080,\\\"hostPort\\\":80,\\\"name\\\":\\\"http\\\",\\\"protocol\\\":\\\"TCP\\\"},{\\\"containerPort\\\":8443,\\\"hostPort\\\":443,\\\"name\\\":\\\"https\\\",\\\"protocol\\\":\\\"TCP\\\"}],\\\"readinessProbe\\\":{\\\"httpGet\\\":{\\\"path\\\":\\\"/ready\\\",\\\"port\\\":8002},\\\"initialDelaySeconds\\\":3,\\\"periodSeconds\\\":4},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/config\\\",\\\"name\\\":\\\"envoy-config\\\",\\\"readOnly\\\":true},{\\\"mountPath\\\":\\\"/certs\\\",\\\"name\\\":\\\"envoycert\\\",\\\"readOnly\\\":true}]}],\\\"initContainers\\\":[{\\\"args\\\":[\\\"bootstrap\\\",\\\"/config/envoy.json\\\",\\\"--xds-address=contour\\\",\\\"--xds-port=8001\\\",\\\"--xds-resource-version=v3\\\",\\\"--resources-dir=/config/resources\\\",\\\"--envoy-cafile=/certs/ca.crt\\\",\\\"--envoy-cert-file=/certs/tls.crt\\\",\\\"--envoy-key-file=/certs/tls.key\\\"],\\\"command\\\":[\\\"contour\\\"],\\\"env\\\":[{\\\"name\\\":\\\"CONTOUR_NAMESPACE\\\",\\\"valueFrom\\\":{\\\"fieldRef\\\":{\\\"fieldPath\\\":\\\"metadata.namespace\\\"}}}],\\\"image\\\":\\\"docker.io/projectcontour/contour:v1.15.1\\\",\\\"imagePullPolicy\\\":\\\"IfNotPresent\\\",\\\"name\\\":\\\"envoy-initconfig\\\",\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/config\\\",\\\"name\\\":\\\"envoy-config\\\"},{\\\"mountPath\\\":\\\"/certs\\\",\\\"name\\\":\\\"envoycert\\
\",\\\"readOnly\\\":true}]}],\\\"restartPolicy\\\":\\\"Always\\\",\\\"serviceAccountName\\\":\\\"envoy\\\",\\\"terminationGracePeriodSeconds\\\":300,\\\"volumes\\\":[{\\\"emptyDir\\\":{},\\\"name\\\":\\\"envoy-config\\\"},{\\\"name\\\":\\\"envoycert\\\",\\\"secret\\\":{\\\"secretName\\\":\\\"envoycert\\\"}}]}},\\\"updateStrategy\\\":{\\\"rollingUpdate\\\":{\\\"maxUnavailable\\\":\\\"10%\\\"},\\\"type\\\":\\\"RollingUpdate\\\"}}}\\n\"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:\"\", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-controller-manager\", Operation:\"Update\", APIVersion:\"apps/v1\", Time:(*v1.Time)(0xc003291000), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc003291020)}, v1.ManagedFieldsEntry{Manager:\"kubectl-client-side-apply\", Operation:\"Update\", APIVersion:\"apps/v1\", Time:(*v1.Time)(0xc003291040), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc003291060)}, v1.ManagedFieldsEntry{Manager:\"kubectl-patch\", Operation:\"Update\", APIVersion:\"apps/v1\", Time:(*v1.Time)(0xc003291080), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc0032910a0)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc0032910c0), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:\"\", GenerateName:\"\", Namespace:\"\", SelfLink:\"\", UID:\"\", ResourceVersion:\"\", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"app\":\"envoy\"}, Annotations:map[string]string{\"prometheus.io/path\":\"/stats/prometheus\", \"prometheus.io/port\":\"8002\", \"prometheus.io/scrape\":\"true\"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:\"\", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:\"envoy-config\", 
VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(0xc0032910e0), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:\"envoycert\", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc002f83980), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), 
CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:\"envoy-initconfig\", Image:\"docker.io/projectcontour/contour:v1.15.1\", Command:[]string{\"contour\"}, Args:[]string{\"bootstrap\", \"/config/envoy.json\", \"--xds-address=contour\", \"--xds-port=8001\", \"--xds-resource-version=v3\", \"--resources-dir=/config/resources\", \"--envoy-cafile=/certs/ca.crt\", \"--envoy-cert-file=/certs/tls.crt\", \"--envoy-key-file=/certs/tls.key\"}, WorkingDir:\"\", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:\"CONTOUR_NAMESPACE\", Value:\"\", ValueFrom:(*v1.EnvVarSource)(0xc003291260)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:\"envoy-config\", ReadOnly:false, MountPath:\"/config\", SubPath:\"\", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:\"\"}, v1.VolumeMount{Name:\"envoycert\", ReadOnly:true, MountPath:\"/certs\", SubPath:\"\", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:\"\"}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:\"/dev/termination-log\", 
TerminationMessagePolicy:\"File\", ImagePullPolicy:\"IfNotPresent\", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:\"shutdown-manager\", Image:\"docker.io/projectcontour/contour:v1.15.1\", Command:[]string{\"/bin/contour\"}, Args:[]string{\"envoy\", \"shutdown-manager\"}, WorkingDir:\"\", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount(nil), VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(0xc0019f32c0), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(0xc000eaa110), TerminationMessagePath:\"/dev/termination-log\", TerminationMessagePolicy:\"File\", ImagePullPolicy:\"IfNotPresent\", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:\"envoy\", Image:\"docker.io/envoyproxy/envoy:v1.18.3\", Command:[]string{\"envoy\"}, Args:[]string{\"-c\", \"/config/envoy.json\", \"--service-cluster $(CONTOUR_NAMESPACE)\", \"--service-node $(ENVOY_POD_NAME)\", \"--log-level info\"}, WorkingDir:\"\", Ports:[]v1.ContainerPort{v1.ContainerPort{Name:\"http\", HostPort:80, ContainerPort:8080, Protocol:\"TCP\", HostIP:\"\"}, v1.ContainerPort{Name:\"https\", HostPort:443, ContainerPort:8443, Protocol:\"TCP\", HostIP:\"\"}}, EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:\"CONTOUR_NAMESPACE\", Value:\"\", ValueFrom:(*v1.EnvVarSource)(0xc0032911a0)}, v1.EnvVar{Name:\"ENVOY_POD_NAME\", Value:\"\", ValueFrom:(*v1.EnvVarSource)(0xc0032911e0)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:\"envoy-config\", ReadOnly:true, MountPath:\"/config\", SubPath:\"\", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:\"\"}, 
v1.VolumeMount{Name:\"envoycert\", ReadOnly:true, MountPath:\"/certs\", SubPath:\"\", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:\"\"}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(0xc0019f3320), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(0xc000eaa150), TerminationMessagePath:\"/dev/termination-log\", TerminationMessagePolicy:\"File\", ImagePullPolicy:\"IfNotPresent\", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:\"Always\", TerminationGracePeriodSeconds:(*int64)(0xc003246f38), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:\"ClusterFirst\", NodeSelector:map[string]string{\"ingress-ready\":\"true\"}, ServiceAccountName:\"envoy\", DeprecatedServiceAccount:\"envoy\", AutomountServiceAccountToken:(*bool)(0xc003246fdc), NodeName:\"\", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0008aa070), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:\"\", Subdomain:\"\", Affinity:(*v1.Affinity)(nil), SchedulerName:\"default-scheduler\", Tolerations:[]v1.Toleration{v1.Toleration{Key:\"node-role.kubernetes.io/master\", Operator:\"Equal\", Value:\"\", Effect:\"NoSchedule\", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:\"\", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:\"RollingUpdate\", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc0006d2730)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc003247010)}, 
Status:v1.DaemonSetStatus{CurrentNumberScheduled:2, NumberMisscheduled:0, DesiredNumberScheduled:2, NumberReady:0, ObservedGeneration:1, UpdatedNumberScheduled:2, NumberAvailable:0, NumberUnavailable:2, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps \"envoy\": the object has been modified; please apply your changes to the latest version and try again\nI0521 15:16:05.741131 1 event.go:291] \"Event occurred\" object=\"kubernetes-dashboard/kubernetes-dashboard\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled up replica set kubernetes-dashboard-9f9799597 to 1\"\nI0521 15:16:05.751372 1 event.go:291] \"Event occurred\" object=\"kubernetes-dashboard/kubernetes-dashboard-9f9799597\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: kubernetes-dashboard-9f9799597-fr9hn\"\nI0521 15:16:05.767568 1 event.go:291] \"Event occurred\" object=\"kubernetes-dashboard/dashboard-metrics-scraper\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled up replica set dashboard-metrics-scraper-79c5968bdc to 1\"\nI0521 15:16:05.770947 1 event.go:291] \"Event occurred\" object=\"kubernetes-dashboard/dashboard-metrics-scraper-79c5968bdc\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: dashboard-metrics-scraper-79c5968bdc-tfgzj\"\nI0521 15:16:07.223716 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for tlscertificatedelegations.projectcontour.io\nI0521 15:16:07.223919 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for extensionservices.projectcontour.io\nI0521 15:16:07.223979 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for httpproxies.projectcontour.io\nI0521 15:16:07.224034 1 
resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for network-attachment-definitions.k8s.cni.cncf.io\nI0521 15:16:07.224150 1 shared_informer.go:240] Waiting for caches to sync for resource quota\nI0521 15:16:07.524429 1 shared_informer.go:247] Caches are synced for resource quota \nI0521 15:16:08.180764 1 shared_informer.go:240] Waiting for caches to sync for garbage collector\nI0521 15:16:08.180844 1 shared_informer.go:247] Caches are synced for garbage collector \nI0521 15:16:27.450745 1 event.go:291] \"Event occurred\" object=\"projectcontour/contour-certgen-v1.15.1\" kind=\"Job\" apiVersion=\"batch/v1\" type=\"Normal\" reason=\"Completed\" message=\"Job completed\"\nI0521 15:18:21.403489 1 event.go:291] \"Event occurred\" object=\"kubectl-4040/frontend\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled up replica set frontend-58d458fdbd to 3\"\nI0521 15:18:21.409648 1 event.go:291] \"Event occurred\" object=\"kubectl-4040/frontend-58d458fdbd\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: frontend-58d458fdbd-kh7kp\"\nI0521 15:18:21.414702 1 event.go:291] \"Event occurred\" object=\"kubectl-4040/frontend-58d458fdbd\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: frontend-58d458fdbd-tr5fl\"\nI0521 15:18:21.414779 1 event.go:291] \"Event occurred\" object=\"kubectl-4040/frontend-58d458fdbd\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: frontend-58d458fdbd-cxbpg\"\nI0521 15:18:21.683473 1 event.go:291] \"Event occurred\" object=\"kubectl-4040/agnhost-primary\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled up replica set agnhost-primary-76f75c9b74 to 1\"\nI0521 15:18:21.687336 1 event.go:291] \"Event occurred\" 
object=\"kubectl-4040/agnhost-primary-76f75c9b74\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: agnhost-primary-76f75c9b74-v9xsq\"\nI0521 15:18:21.953886 1 event.go:291] \"Event occurred\" object=\"kubectl-4040/agnhost-replica\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled up replica set agnhost-replica-7d6489798 to 2\"\nI0521 15:18:21.957852 1 event.go:291] \"Event occurred\" object=\"kubectl-4040/agnhost-replica-7d6489798\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: agnhost-replica-7d6489798-96wbp\"\nI0521 15:18:21.962103 1 event.go:291] \"Event occurred\" object=\"kubectl-4040/agnhost-replica-7d6489798\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: agnhost-replica-7d6489798-kh64b\"\nE0521 15:18:37.850201 1 tokens_controller.go:261] error synchronizing serviceaccount kubectl-4040/default: secrets \"default-token-xpmst\" is forbidden: unable to create new content in namespace kubectl-4040 because it is being terminated\nI0521 15:18:48.544168 1 namespace_controller.go:185] Namespace has been deleted kubectl-4040\nI0521 15:18:50.436849 1 namespace_controller.go:185] Namespace has been deleted pods-4662\nI0521 15:19:05.218825 1 namespace_controller.go:185] Namespace has been deleted rally-ebde9e4d-dfv2bizs\nE0521 15:19:25.560104 1 tokens_controller.go:261] error synchronizing serviceaccount c-rally-5cd295ac-ufoajiky/c-rally-5cd295ac-ufoajiky: secrets \"c-rally-5cd295ac-ufoajiky-token-pdn7c\" is forbidden: unable to create new content in namespace c-rally-5cd295ac-ufoajiky because it is being terminated\nE0521 15:19:25.561318 1 tokens_controller.go:261] error synchronizing serviceaccount c-rally-5cd295ac-ufoajiky/default: secrets \"default-token-w9qjz\" is forbidden: unable to create new content in namespace 
c-rally-5cd295ac-ufoajiky because it is being terminated\nI0521 15:19:29.377796 1 event.go:291] \"Event occurred\" object=\"c-rally-28456e8c-6rig10zo/rally-28456e8c-b1za1kc6\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: rally-28456e8c-b1za1kc6-4jw8m\"\nI0521 15:19:29.381588 1 event.go:291] \"Event occurred\" object=\"c-rally-28456e8c-6rig10zo/rally-28456e8c-b1za1kc6\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: rally-28456e8c-b1za1kc6-x9nd6\"\nI0521 15:19:30.717344 1 namespace_controller.go:185] Namespace has been deleted c-rally-5cd295ac-ufoajiky\nI0521 15:20:03.568514 1 event.go:291] \"Event occurred\" object=\"c-rally-3a036e6e-62mlebr1/rally-3a036e6e-tdt3op8t\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: rally-3a036e6e-tdt3op8t-hf8rq\"\nI0521 15:20:03.574129 1 event.go:291] \"Event occurred\" object=\"c-rally-3a036e6e-62mlebr1/rally-3a036e6e-tdt3op8t\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: rally-3a036e6e-tdt3op8t-gvrwr\"\nI0521 15:20:05.078643 1 namespace_controller.go:185] Namespace has been deleted c-rally-28456e8c-6rig10zo\nI0521 15:20:05.602483 1 event.go:291] \"Event occurred\" object=\"c-rally-3a036e6e-62mlebr1/rally-3a036e6e-tdt3op8t\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: rally-3a036e6e-tdt3op8t-swpqh\"\nE0521 15:20:07.629640 1 replica_set.go:201] ReplicaSet has no controller: &ReplicaSet{ObjectMeta:{rally-3a036e6e-tdt3op8t c-rally-3a036e6e-62mlebr1 /api/v1/namespaces/c-rally-3a036e6e-62mlebr1/replicationcontrollers/rally-3a036e6e-tdt3op8t 9bd20a52-7efc-4c2e-a2c2-87b9c4294d3d 2815 3 2021-05-21 15:20:03 +0000 UTC map[app:rally-3a036e6e-d7ifd8ki] map[] [] [] [{OpenAPI-Generator Update v1 
2021-05-21 15:20:03 +0000 UTC FieldsV1 {\"f:metadata\":{\"f:labels\":{\".\":{},\"f:app\":{}}},\"f:spec\":{\"f:replicas\":{},\"f:selector\":{\".\":{},\"f:app\":{}},\"f:template\":{\".\":{},\"f:metadata\":{\".\":{},\"f:creationTimestamp\":{},\"f:labels\":{\".\":{},\"f:app\":{}},\"f:name\":{}},\"f:spec\":{\".\":{},\"f:containers\":{\".\":{},\"k:{\\\"name\\\":\\\"rally-3a036e6e-tdt3op8t\\\"}\":{\".\":{},\"f:image\":{},\"f:imagePullPolicy\":{},\"f:name\":{},\"f:resources\":{},\"f:terminationMessagePath\":{},\"f:terminationMessagePolicy\":{}}},\"f:dnsPolicy\":{},\"f:restartPolicy\":{},\"f:schedulerName\":{},\"f:securityContext\":{},\"f:serviceAccount\":{},\"f:serviceAccountName\":{},\"f:terminationGracePeriodSeconds\":{}}}}}} {kube-controller-manager Update v1 2021-05-21 15:20:04 +0000 UTC FieldsV1 {\"f:status\":{\"f:availableReplicas\":{},\"f:fullyLabeledReplicas\":{},\"f:observedGeneration\":{},\"f:readyReplicas\":{},\"f:replicas\":{}}}}]},Spec:ReplicaSetSpec{Replicas:*2,Selector:&v1.LabelSelector{MatchLabels:map[string]string{app: rally-3a036e6e-d7ifd8ki,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{rally-3a036e6e-tdt3op8t 0 0001-01-01 00:00:00 +0000 UTC map[app:rally-3a036e6e-d7ifd8ki] map[] [] [] []} {[] [] [{rally-3a036e6e-tdt3op8t k8s.gcr.io/pause:3.3 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc002e91c90 ClusterFirst map[] c-rally-3a036e6e-62mlebr1 c-rally-3a036e6e-62mlebr1 false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:3,FullyLabeledReplicas:3,ObservedGeneration:2,ReadyReplicas:3,AvailableReplicas:3,Conditions:[]ReplicaSetCondition{},},}\nI0521 15:20:07.635628 1 event.go:291] \"Event occurred\" 
object=\"c-rally-3a036e6e-62mlebr1/rally-3a036e6e-tdt3op8t\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"Deleted pod: rally-3a036e6e-tdt3op8t-swpqh\"\nI0521 15:20:23.738599 1 event.go:291] \"Event occurred\" object=\"c-rally-9afae090-j4k7xf2n/rally-9afae090-6of48cno\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: rally-9afae090-6of48cno-7fz8z\"\nI0521 15:20:25.150214 1 namespace_controller.go:185] Namespace has been deleted c-rally-3a036e6e-62mlebr1\nI0521 15:20:35.835677 1 event.go:291] \"Event occurred\" object=\"c-rally-b499d029-gxwy5wv8/rally-b499d029-9gwhnyt7\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: rally-b499d029-9gwhnyt7-m9ghd\"\nI0521 15:20:37.018569 1 namespace_controller.go:185] Namespace has been deleted c-rally-9afae090-j4k7xf2n\nI0521 15:20:37.865848 1 event.go:291] \"Event occurred\" object=\"c-rally-b499d029-gxwy5wv8/rally-b499d029-9gwhnyt7\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: rally-b499d029-9gwhnyt7-vczfr\"\nE0521 15:20:39.894270 1 replica_set.go:201] ReplicaSet has no controller: &ReplicaSet{ObjectMeta:{rally-b499d029-9gwhnyt7 c-rally-b499d029-gxwy5wv8 /apis/apps/v1/namespaces/c-rally-b499d029-gxwy5wv8/replicasets/rally-b499d029-9gwhnyt7 eda8de9f-25bc-44e8-915d-e14992045fe2 3057 3 2021-05-21 15:20:35 +0000 UTC map[app:rally-b499d029-rgxbqlg9] map[] [] [] [{OpenAPI-Generator Update apps/v1 2021-05-21 15:20:35 +0000 UTC FieldsV1 
{"f:metadata":{"f:labels":{".":{},"f:app":{}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:app":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:app":{}},"f:name":{}},"f:spec":{"f:containers":{"k:{\"name\":\"rally-b499d029-9gwhnyt7\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:serviceAccount":{},"f:serviceAccountName":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2021-05-21 15:20:37 +0000 UTC FieldsV1 {"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{app: rally-b499d029-rgxbqlg9,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{rally-b499d029-9gwhnyt7 0 0001-01-01 00:00:00 +0000 UTC map[app:rally-b499d029-rgxbqlg9] map[] [] [] []} {[] [] [{rally-b499d029-9gwhnyt7 k8s.gcr.io/pause:3.3 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc003334080 ClusterFirst map[] c-rally-b499d029-gxwy5wv8 c-rally-b499d029-gxwy5wv8 false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:2,FullyLabeledReplicas:2,ObservedGeneration:2,ReadyReplicas:2,AvailableReplicas:2,Conditions:[]ReplicaSetCondition{},},}
I0521 15:20:39.900088 1 event.go:291] "Event occurred" object="c-rally-b499d029-gxwy5wv8/rally-b499d029-9gwhnyt7" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: rally-b499d029-9gwhnyt7-vczfr"
E0521 15:20:47.080121 1 tokens_controller.go:261] error synchronizing serviceaccount c-rally-b499d029-gxwy5wv8/c-rally-b499d029-gxwy5wv8: secrets "c-rally-b499d029-gxwy5wv8-token-q9thx" is forbidden: unable to create new content in namespace c-rally-b499d029-gxwy5wv8 because it is being terminated
E0521 15:20:47.083592 1 tokens_controller.go:261] error synchronizing serviceaccount c-rally-b499d029-gxwy5wv8/default: secrets "default-token-xmvfk" is forbidden: unable to create new content in namespace c-rally-b499d029-gxwy5wv8 because it is being terminated
I0521 15:20:57.384018 1 namespace_controller.go:185] Namespace has been deleted c-rally-b499d029-gxwy5wv8
E0521 15:21:07.279255 1 tokens_controller.go:261] error synchronizing serviceaccount c-rally-c86844db-1v9fi29f/c-rally-c86844db-1v9fi29f: secrets "c-rally-c86844db-1v9fi29f-token-5g5ql" is forbidden: unable to create new content in namespace c-rally-c86844db-1v9fi29f because it is being terminated
E0521 15:21:07.282516 1 tokens_controller.go:261] error synchronizing serviceaccount c-rally-c86844db-1v9fi29f/default: secrets "default-token-9bz8z" is forbidden: unable to create new content in namespace c-rally-c86844db-1v9fi29f because it is being terminated
I0521 15:21:12.354517 1 namespace_controller.go:185] Namespace has been deleted c-rally-c86844db-1v9fi29f
E0521 15:21:56.092632 1 tokens_controller.go:261] error synchronizing serviceaccount c-rally-0f4c6201-lbs1r6ye/c-rally-0f4c6201-lbs1r6ye: secrets "c-rally-0f4c6201-lbs1r6ye-token-llvl4" is forbidden: unable to create new content in namespace c-rally-0f4c6201-lbs1r6ye because it is being terminated
E0521 15:21:56.096548 1 tokens_controller.go:261] error synchronizing serviceaccount c-rally-0f4c6201-lbs1r6ye/default: secrets "default-token-tgsv4" is forbidden: unable to create new content in namespace c-rally-0f4c6201-lbs1r6ye because it is being terminated
I0521 15:22:01.198154 1 namespace_controller.go:185] Namespace has been deleted c-rally-0f4c6201-lbs1r6ye
I0521 15:22:47.337247 1 namespace_controller.go:185] Namespace has been deleted c-rally-bf2ff4ef-8mwfmd15
E0521 15:23:25.528088 1 tokens_controller.go:261] error synchronizing serviceaccount c-rally-c320f185-wpi7630k/c-rally-c320f185-wpi7630k: secrets "c-rally-c320f185-wpi7630k-token-9cmpp" is forbidden: unable to create new content in namespace c-rally-c320f185-wpi7630k because it is being terminated
E0521 15:23:25.533869 1 tokens_controller.go:261] error synchronizing serviceaccount c-rally-c320f185-wpi7630k/default: secrets "default-token-4mw5m" is forbidden: unable to create new content in namespace c-rally-c320f185-wpi7630k because it is being terminated
I0521 15:23:30.714759 1 namespace_controller.go:185] Namespace has been deleted c-rally-c320f185-wpi7630k
E0521 15:24:09.808457 1 tokens_controller.go:261] error synchronizing serviceaccount c-rally-d149430d-qumuqk6u/c-rally-d149430d-qumuqk6u: secrets "c-rally-d149430d-qumuqk6u-token-gl7v8" is forbidden: unable to create new content in namespace c-rally-d149430d-qumuqk6u because it is being terminated
E0521 15:24:09.811412 1 tokens_controller.go:261] error synchronizing serviceaccount c-rally-d149430d-qumuqk6u/default: secrets "default-token-82dvr" is forbidden: unable to create new content in namespace c-rally-d149430d-qumuqk6u because it is being terminated
I0521 15:24:14.925322 1 namespace_controller.go:185] Namespace has been deleted c-rally-d149430d-qumuqk6u
I0521 15:24:59.053984 1 namespace_controller.go:185] Namespace has been deleted c-rally-e411f2ad-yd7760lz
I0521 15:25:41.753380 1 event.go:291] "Event occurred" object="c-rally-21e95fae-m6cnxsrq/rally-21e95fae-fmpxl6sl" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set rally-21e95fae-fmpxl6sl-6957484498 to 2"
I0521 15:25:41.762325 1 event.go:291] "Event occurred" object="c-rally-21e95fae-m6cnxsrq/rally-21e95fae-fmpxl6sl-6957484498" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: rally-21e95fae-fmpxl6sl-6957484498-2dwjv"
I0521 15:25:41.769715 1 event.go:291] "Event occurred" object="c-rally-21e95fae-m6cnxsrq/rally-21e95fae-fmpxl6sl-6957484498" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: rally-21e95fae-fmpxl6sl-6957484498-jgr26"
I0521 15:25:43.434474 1 namespace_controller.go:185] Namespace has been deleted c-rally-4efb7b99-xrd0orfn
E0521 15:25:49.886369 1 tokens_controller.go:261] error synchronizing serviceaccount c-rally-21e95fae-m6cnxsrq/c-rally-21e95fae-m6cnxsrq: secrets "c-rally-21e95fae-m6cnxsrq-token-2lcsp" is forbidden: unable to create new content in namespace c-rally-21e95fae-m6cnxsrq because it is being terminated
E0521 15:25:49.888772 1 tokens_controller.go:261] error synchronizing serviceaccount c-rally-21e95fae-m6cnxsrq/default: secrets "default-token-4lprd" is forbidden: unable to create new content in namespace c-rally-21e95fae-m6cnxsrq because it is being terminated
I0521 15:25:57.890281 1 event.go:291] "Event occurred" object="c-rally-a84ce386-z4zx13qd/rally-a84ce386-gn4hy7j7" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set rally-a84ce386-gn4hy7j7-67fd94c58c to 1"
I0521 15:25:57.899210 1 event.go:291] "Event occurred" object="c-rally-a84ce386-z4zx13qd/rally-a84ce386-gn4hy7j7-67fd94c58c" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: rally-a84ce386-gn4hy7j7-67fd94c58c-z6l2c"
I0521 15:25:59.942801 1 event.go:291] "Event occurred" object="c-rally-a84ce386-z4zx13qd/rally-a84ce386-gn4hy7j7" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set rally-a84ce386-gn4hy7j7-65d76f7b7d to 1"
I0521 15:25:59.947297 1 event.go:291] "Event occurred" object="c-rally-a84ce386-z4zx13qd/rally-a84ce386-gn4hy7j7-65d76f7b7d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: rally-a84ce386-gn4hy7j7-65d76f7b7d-lq5pj"
I0521 15:26:00.256951 1 namespace_controller.go:185] Namespace has been deleted c-rally-21e95fae-m6cnxsrq
I0521 15:26:01.601477 1 event.go:291] "Event occurred" object="c-rally-a84ce386-z4zx13qd/rally-a84ce386-gn4hy7j7" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set rally-a84ce386-gn4hy7j7-67fd94c58c to 0"
I0521 15:26:01.606224 1 event.go:291] "Event occurred" object="c-rally-a84ce386-z4zx13qd/rally-a84ce386-gn4hy7j7-67fd94c58c" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: rally-a84ce386-gn4hy7j7-67fd94c58c-z6l2c"
E0521 15:26:08.124401 1 tokens_controller.go:261] error synchronizing serviceaccount c-rally-a84ce386-z4zx13qd/c-rally-a84ce386-z4zx13qd: secrets "c-rally-a84ce386-z4zx13qd-token-mt674" is forbidden: unable to create new content in namespace c-rally-a84ce386-z4zx13qd because it is being terminated
E0521 15:26:08.127132 1 tokens_controller.go:261] error synchronizing serviceaccount c-rally-a84ce386-z4zx13qd/default: secrets "default-token-2c57g" is forbidden: unable to create new content in namespace c-rally-a84ce386-z4zx13qd because it is being terminated
I0521 15:26:48.184859 1 event.go:291] "Event occurred" object="c-rally-809ad2dd-6wko1yye/rally-809ad2dd-ma2sslgz" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod rally-809ad2dd-ma2sslgz-0 in StatefulSet rally-809ad2dd-ma2sslgz successful"
I0521 15:26:49.691440 1 event.go:291] "Event occurred" object="c-rally-809ad2dd-6wko1yye/rally-809ad2dd-ma2sslgz" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod rally-809ad2dd-ma2sslgz-1 in StatefulSet rally-809ad2dd-ma2sslgz successful"
I0521 15:26:50.813708 1 namespace_controller.go:185] Namespace has been deleted c-rally-a84ce386-z4zx13qd
I0521 15:26:51.213151 1 stateful_set.go:419] StatefulSet has been deleted c-rally-809ad2dd-6wko1yye/rally-809ad2dd-ma2sslgz
E0521 15:26:57.321452 1 tokens_controller.go:261] error synchronizing serviceaccount c-rally-809ad2dd-6wko1yye/c-rally-809ad2dd-6wko1yye: secrets "c-rally-809ad2dd-6wko1yye-token-wp8wn" is forbidden: unable to create new content in namespace c-rally-809ad2dd-6wko1yye because it is being terminated
E0521 15:26:57.324233 1 tokens_controller.go:261] error synchronizing serviceaccount c-rally-809ad2dd-6wko1yye/default: secrets "default-token-qfdg4" is forbidden: unable to create new content in namespace c-rally-809ad2dd-6wko1yye because it is being terminated
I0521 15:27:06.317756 1 event.go:291] "Event occurred" object="c-rally-b699a93d-mji3kefi/rally-b699a93d-hlyzer73" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod rally-b699a93d-hlyzer73-0 in StatefulSet rally-b699a93d-hlyzer73 successful"
I0521 15:27:07.673132 1 namespace_controller.go:185] Namespace has been deleted c-rally-809ad2dd-6wko1yye
I0521 15:27:08.346832 1 event.go:291] "Event occurred" object="c-rally-b699a93d-mji3kefi/rally-b699a93d-hlyzer73" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod rally-b699a93d-hlyzer73-1 in StatefulSet rally-b699a93d-hlyzer73 successful"
I0521 15:27:10.381933 1 event.go:291] "Event occurred" object="c-rally-b699a93d-mji3kefi/rally-b699a93d-hlyzer73" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="delete Pod rally-b699a93d-hlyzer73-1 in StatefulSet rally-b699a93d-hlyzer73 successful"
I0521 15:27:11.395690 1 stateful_set.go:419] StatefulSet has been deleted c-rally-b699a93d-mji3kefi/rally-b699a93d-hlyzer73
E0521 15:27:17.518139 1 tokens_controller.go:261] error synchronizing serviceaccount c-rally-b699a93d-mji3kefi/c-rally-b699a93d-mji3kefi: secrets "c-rally-b699a93d-mji3kefi-token-2qd49" is forbidden: unable to create new content in namespace c-rally-b699a93d-mji3kefi because it is being terminated
E0521 15:27:17.519856 1 tokens_controller.go:261] error synchronizing serviceaccount c-rally-b699a93d-mji3kefi/default: secrets "default-token-6ljml" is forbidden: unable to create new content in namespace c-rally-b699a93d-mji3kefi because it is being terminated
I0521 15:27:26.464105 1 event.go:291] "Event occurred" object="c-rally-6b3d9ab6-iabaav29/rally-6b3d9ab6-27g6udfy" kind="Job" apiVersion="batch/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: rally-6b3d9ab6-27g6udfy-ghzl4"
I0521 15:27:27.872005 1 namespace_controller.go:185] Namespace has been deleted c-rally-b699a93d-mji3kefi
I0521 15:27:27.944186 1 event.go:291] "Event occurred" object="c-rally-6b3d9ab6-iabaav29/rally-6b3d9ab6-27g6udfy" kind="Job" apiVersion="batch/v1" type="Normal" reason="Completed" message="Job completed"
E0521 15:27:34.574907 1 tokens_controller.go:261] error synchronizing serviceaccount c-rally-6b3d9ab6-iabaav29/c-rally-6b3d9ab6-iabaav29: secrets "c-rally-6b3d9ab6-iabaav29-token-sxwmc" is forbidden: unable to create new content in namespace c-rally-6b3d9ab6-iabaav29 because it is being terminated
E0521 15:27:34.577127 1 tokens_controller.go:261] error synchronizing serviceaccount c-rally-6b3d9ab6-iabaav29/default: secrets "default-token-rm55p" is forbidden: unable to create new content in namespace c-rally-6b3d9ab6-iabaav29 because it is being terminated
I0521 15:27:39.776708 1 namespace_controller.go:185] Namespace has been deleted c-rally-6b3d9ab6-iabaav29
I0521 15:27:42.666746 1 event.go:291] "Event occurred" object="c-rally-36ee3726-fnklrc9q/rally-36ee3726-xjvatcd7" kind="Job" apiVersion="batch/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: rally-36ee3726-xjvatcd7-pjhj2"
I0521 15:27:46.985410 1 event.go:291] "Event occurred" object="c-rally-36ee3726-fnklrc9q/rally-36ee3726-xjvatcd7" kind="Job" apiVersion="batch/v1" type="Normal" reason="Completed" message="Job completed"
I0521 15:28:02.086807 1 namespace_controller.go:185] Namespace has been deleted c-rally-36ee3726-fnklrc9q
I0521 15:28:03.818874 1 event.go:291] "Event occurred" object="c-rally-e80c89af-ojstqmxt/rally-e80c89af-fjnlight" kind="Job" apiVersion="batch/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: rally-e80c89af-fjnlight-pqb9l"
I0521 15:28:05.023725 1 event.go:291] "Event occurred" object="c-rally-e80c89af-ojstqmxt/rally-e80c89af-fjnlight" kind="Job" apiVersion="batch/v1" type="Normal" reason="Completed" message="Job completed"
E0521 15:28:06.869513 1 garbagecollector.go:309] error syncing item &garbagecollector.node{identity:garbagecollector.objectReference{OwnerReference:v1.OwnerReference{APIVersion:"discovery.k8s.io/v1beta1", Kind:"EndpointSlice", Name:"rally-e80c89af-fjnlight-pclfh", UID:"72a813d4-7f6c-4f5a-8819-eaa4b8647a0b", Controller:(*bool)(nil), BlockOwnerDeletion:(*bool)(nil)}, Namespace:"c-rally-e80c89af-ojstqmxt"}, dependentsLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:1, readerWait:0}, dependents:map[*garbagecollector.node]struct {}{}, deletingDependents:false, deletingDependentsLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:0, readerWait:0}, beingDeleted:false, beingDeletedLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:0, readerWait:0}, virtual:false, virtualLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:0, readerWait:0}, owners:[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Endpoints", Name:"rally-e80c89af-fjnlight", UID:"a662ca06-4f3a-4be2-bc8a-2f0db6a1aaf2", Controller:(*bool)(0xc0039a3a2c), BlockOwnerDeletion:(*bool)(0xc0039a3a2d)}}}: endpointslices.discovery.k8s.io "rally-e80c89af-fjnlight-pclfh" not found
I0521 15:28:19.170287 1 namespace_controller.go:185] Namespace has been deleted c-rally-e80c89af-ojstqmxt
I0521 15:30:13.960078 1 namespace_controller.go:185] Namespace has been deleted c-rally-0b28c0d7-711n0pes
E0521 15:30:26.157318 1 tokens_controller.go:261] error synchronizing serviceaccount autoscaling-335/default: secrets "default-token-d2kjg" is forbidden: unable to create new content in namespace autoscaling-335 because it is being terminated
E0521 15:30:26.192699 1 tokens_controller.go:261] error synchronizing serviceaccount autoscaling-1673/default: secrets "default-token-xn5f9" is forbidden: unable to create new content in namespace autoscaling-1673 because it is being terminated
E0521 15:30:26.241283 1 tokens_controller.go:261] error synchronizing serviceaccount node-lease-test-2252/default: secrets "default-token-xhc8h" is forbidden: unable to create new content in namespace node-lease-test-2252 because it is being terminated
E0521 15:30:31.145645 1 tokens_controller.go:261] error synchronizing serviceaccount examples-8227/default: secrets "default-token-8b5pl" is forbidden: unable to create new content in namespace examples-8227 because it is being terminated
I0521 15:30:31.238726 1 namespace_controller.go:185] Namespace has been deleted autoscaling-335
I0521 15:30:31.298759 1 namespace_controller.go:185] Namespace has been deleted autoscaling-1673
I0521 15:30:31.332986 1 namespace_controller.go:185] Namespace has been deleted node-lease-test-2252
E0521 15:30:31.627516 1 tokens_controller.go:261] error synchronizing serviceaccount autoscaling-8690/default: secrets "default-token-r55lj" is forbidden: unable to create new content in namespace autoscaling-8690 because it is being terminated
E0521 15:30:32.071665 1 tokens_controller.go:261] error synchronizing serviceaccount container-runtime-6238/default: secrets "default-token-sc8xd" is forbidden: unable to create new content in namespace container-runtime-6238 because it is being terminated
I0521 15:30:33.797494 1 namespace_controller.go:185] Namespace has been deleted sysctl-1146
E0521 15:30:34.353492 1 tokens_controller.go:261] error synchronizing serviceaccount security-context-test-6284/default: secrets "default-token-mkxc2" is forbidden: unable to create new content in namespace security-context-test-6284 because it is being terminated
E0521 15:30:35.269886 1 tokens_controller.go:261] error synchronizing serviceaccount e2e-privileged-pod-4410/default: secrets "default-token-gsprr" is forbidden: unable to create new content in namespace e2e-privileged-pod-4410 because it is being terminated
E0521 15:30:35.326665 1 tokens_controller.go:261] error synchronizing serviceaccount autoscaling-1035/default: secrets "default-token-584t9" is forbidden: unable to create new content in namespace autoscaling-1035 because it is being terminated
I0521 15:30:35.932395 1 namespace_controller.go:185] Namespace has been deleted security-context-test-6995
I0521 15:30:36.241682 1 namespace_controller.go:185] Namespace has been deleted examples-8227
I0521 15:30:36.332807 1 namespace_controller.go:185] Namespace has been deleted autoscaling-3279
I0521 15:30:36.754713 1 namespace_controller.go:185] Namespace has been deleted autoscaling-8690
I0521 15:30:37.238606 1 namespace_controller.go:185] Namespace has been deleted container-runtime-6238
I0521 15:30:37.306869 1 namespace_controller.go:185] Namespace has been deleted security-context-test-2622
I0521 15:30:37.351084 1 namespace_controller.go:185] Namespace has been deleted sysctl-8736
I0521 15:30:37.487997 1 namespace_controller.go:185] Namespace has been deleted container-probe-9622
E0521 15:30:37.691431 1 tokens_controller.go:261] error synchronizing serviceaccount security-context-test-3087/default: secrets "default-token-cdghk" is forbidden: unable to create new content in namespace security-context-test-3087 because it is being terminated
E0521 15:30:37.701414 1 tokens_controller.go:261] error synchronizing serviceaccount examples-232/default: secrets "default-token-sxkpr" is forbidden: unable to create new content in namespace examples-232 because it is being terminated
E0521 15:30:38.059015 1 tokens_controller.go:261] error synchronizing serviceaccount localssd-3219/default: secrets "default-token-fdzrs" is forbidden: unable to create new content in namespace localssd-3219 because it is being terminated
E0521 15:30:38.128301 1 tokens_controller.go:261] error synchronizing serviceaccount autoscaling-4818/default: secrets "default-token-dhwdm" is forbidden: unable to create new content in namespace autoscaling-4818 because it is being terminated
I0521 15:30:38.514230 1 namespace_controller.go:185] Namespace has been deleted container-runtime-2243
I0521 15:30:38.893321 1 namespace_controller.go:185] Namespace has been deleted sysctl-5231
E0521 15:30:38.915009 1 tokens_controller.go:261] error synchronizing serviceaccount container-runtime-9747/default: secrets "default-token-2k5js" is forbidden: unable to create new content in namespace container-runtime-9747 because it is being terminated
I0521 15:30:39.516923 1 namespace_controller.go:185] Namespace has been deleted security-context-test-6284
I0521 15:30:39.595056 1 namespace_controller.go:185] Namespace has been deleted security-context-test-7585
I0521 15:30:40.443095 1 namespace_controller.go:185] Namespace has been deleted autoscaling-1035
I0521 15:30:41.774172 1 namespace_controller.go:185] Namespace has been deleted security-context-test-6461
E0521 15:30:42.069978 1 tokens_controller.go:261] error synchronizing serviceaccount container-runtime-1996/default: secrets "default-token-xdmlm" is forbidden: unable to create new content in namespace container-runtime-1996 because it is being terminated
I0521 15:30:42.770872 1 namespace_controller.go:185] Namespace has been deleted examples-232
I0521 15:30:42.869725 1 namespace_controller.go:185] Namespace has been deleted node-lease-test-1612
I0521 15:30:42.916740 1 namespace_controller.go:185] Namespace has been deleted container-runtime-9557
I0521 15:30:43.010160 1 namespace_controller.go:185] Namespace has been deleted security-context-test-1519
I0521 15:30:43.193408 1 namespace_controller.go:185] Namespace has been deleted localssd-3219
I0521 15:30:43.248277 1 namespace_controller.go:185] Namespace has been deleted autoscaling-4818
I0521 15:30:44.065932 1 namespace_controller.go:185] Namespace has been deleted container-runtime-9747
I0521 15:30:44.134102 1 namespace_controller.go:185] Namespace has been deleted node-pools-6279
I0521 15:30:45.015212 1 namespace_controller.go:185] Namespace has been deleted security-context-test-6364
E0521 15:30:45.245507 1 tokens_controller.go:261] error synchronizing serviceaccount sysctl-3614/default: secrets "default-token-dm59g" is forbidden: unable to create new content in namespace sysctl-3614 because it is being terminated
I0521 15:30:47.165056 1 namespace_controller.go:185] Namespace has been deleted container-runtime-1996
I0521 15:30:48.005608 1 namespace_controller.go:185] Namespace has been deleted security-context-test-3087
I0521 15:30:50.323625 1 namespace_controller.go:185] Namespace has been deleted sysctl-3614
E0521 15:30:56.155801 1 tokens_controller.go:261] error synchronizing serviceaccount pods-5608/default: secrets "default-token-c9k25" is forbidden: unable to create new content in namespace pods-5608 because it is being terminated
I0521 15:30:56.712217 1 namespace_controller.go:185] Namespace has been deleted security-context-test-6904
I0521 15:30:57.720970 1 namespace_controller.go:185] Namespace has been deleted container-probe-1854
I0521 15:31:01.173898 1 namespace_controller.go:185] Namespace has been deleted node-lease-test-5024
I0521 15:31:17.982170 1 namespace_controller.go:185] Namespace has been deleted e2e-privileged-pod-4410
I0521 15:31:38.870773 1 namespace_controller.go:185] Namespace has been deleted pods-5608
E0521 15:31:41.007832 1 tokens_controller.go:261] error synchronizing serviceaccount examples-9061/default: secrets "default-token-qh4sp" is forbidden: unable to create new content in namespace examples-9061 because it is being terminated
E0521 15:32:07.759841 1 namespace_controller.go:162] deletion of namespace examples-9061 failed: unexpected items still remain in namespace: examples-9061 for gvr: /v1, Resource=pods
E0521 15:32:07.942264 1 namespace_controller.go:162] deletion of namespace examples-9061 failed: unexpected items still remain in namespace: examples-9061 for gvr: /v1, Resource=pods
E0521 15:32:08.127447 1 namespace_controller.go:162] deletion of namespace examples-9061 failed: unexpected items still remain in namespace: examples-9061 for gvr: /v1, Resource=pods
E0521 15:32:08.320609 1 namespace_controller.go:162] deletion of namespace examples-9061 failed: unexpected items still remain in namespace: examples-9061 for gvr: /v1, Resource=pods
E0521 15:32:08.536030 1 namespace_controller.go:162] deletion of namespace examples-9061 failed: unexpected items still remain in namespace: examples-9061 for gvr: /v1, Resource=pods
E0521 15:32:08.798612 1 namespace_controller.go:162] deletion of namespace examples-9061 failed: unexpected items still remain in namespace: examples-9061 for gvr: /v1, Resource=pods
E0521 15:32:09.132556 1 namespace_controller.go:162] deletion of namespace examples-9061 failed: unexpected items still remain in namespace: examples-9061 for gvr: /v1, Resource=pods
E0521 15:32:09.629926 1 namespace_controller.go:162] deletion of namespace examples-9061 failed: unexpected items still remain in namespace: examples-9061 for gvr: /v1, Resource=pods
E0521 15:32:10.448930 1 namespace_controller.go:162] deletion of namespace examples-9061 failed: unexpected items still remain in namespace: examples-9061 for gvr: /v1, Resource=pods
E0521 15:32:11.918280 1 namespace_controller.go:162] deletion of namespace examples-9061 failed: unexpected items still remain in namespace: examples-9061 for gvr: /v1, Resource=pods
I0521 15:32:19.666219 1 namespace_controller.go:185] Namespace has been deleted examples-9061
E0521 15:34:44.736527 1 tokens_controller.go:261] error synchronizing serviceaccount container-probe-9417/default: secrets "default-token-6d48m" is forbidden: unable to create new content in namespace container-probe-9417 because it is being terminated
I0521 15:34:49.912419 1 namespace_controller.go:185] Namespace has been deleted container-probe-9417
E0521 15:37:09.183985 1 tokens_controller.go:261] error synchronizing serviceaccount pods-9682/default: secrets "default-token-8rrfq" is forbidden: unable to create new content in namespace pods-9682 because it is being terminated
I0521 15:37:35.729433 1 namespace_controller.go:185] Namespace has been deleted pods-9682
I0521 15:57:53.095103 1 event.go:291] "Event occurred" object="deployment-6467/test-rollover-controller" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: test-rollover-controller-2k2cz"
I0521 15:57:53.797073 1 event.go:291] "Event occurred" object="statefulset-2849/ss2" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod ss2-0 in StatefulSet ss2 successful"
I0521 15:57:54.262558 1 event.go:291] "Event occurred" object="services-742/externalsvc" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: externalsvc-9pg9z"
I0521 15:57:54.265990 1 event.go:291] "Event occurred" object="services-742/externalsvc" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: externalsvc-jgf8l"
E0521 15:57:57.981445 1 tokens_controller.go:261] error synchronizing serviceaccount security-context-test-9606/default: secrets "default-token-g99q5" is forbidden: unable to create new content in namespace security-context-test-9606 because it is being terminated
E0521 15:57:58.146154 1 tokens_controller.go:261] error synchronizing serviceaccount projected-6809/default: secrets "default-token-t22dk" is forbidden: unable to create new content in namespace projected-6809 because it is being terminated
E0521 15:57:58.797460 1 tokens_controller.go:261] error synchronizing serviceaccount var-expansion-1585/default: secrets "default-token-vl9kc" is forbidden: unable to create new content in namespace var-expansion-1585 because it is being terminated
E0521 15:57:59.104918 1 tokens_controller.go:261] error synchronizing serviceaccount secrets-1820/default: secrets "default-token-sn6kb" is forbidden: unable to create new content in namespace secrets-1820 because it is being terminated
I0521 15:58:00.946619 1 resource_quota_controller.go:306] Resource quota has been deleted resourcequota-6205/test-quota
I0521 15:58:01.820718 1 event.go:291] "Event occurred" object="statefulset-2849/ss2" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod ss2-1 in StatefulSet ss2 successful"
I0521 15:58:02.081941 1 namespace_controller.go:185] Namespace has been deleted container-runtime-9957
I0521 15:58:03.110242 1 namespace_controller.go:185] Namespace has been deleted security-context-test-9606
I0521 15:58:03.195232 1 namespace_controller.go:185] Namespace has been deleted downward-api-964
I0521 15:58:03.251817 1 namespace_controller.go:185] Namespace has been deleted projected-6809
I0521 15:58:03.477269 1 namespace_controller.go:185] Namespace has been deleted projected-183
I0521 15:58:03.899585 1 namespace_controller.go:185] Namespace has been deleted var-expansion-1585
I0521 15:58:04.121153 1 event.go:291] "Event occurred" object="deployment-6467/test-rollover-deployment" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set test-rollover-deployment-78bc8b888c to 1"
I0521 15:58:04.123337 1 namespace_controller.go:185] Namespace has been deleted custom-resource-definition-4791
I0521 15:58:04.124471 1 event.go:291] "Event occurred" object="deployment-6467/test-rollover-deployment-78bc8b888c" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: test-rollover-deployment-78bc8b888c-xbfkd"
I0521 15:58:04.222663 1 namespace_controller.go:185] Namespace has been deleted secrets-1820
I0521 15:58:06.050958 1 namespace_controller.go:185] Namespace has been deleted resourcequota-6205
I0521 15:58:06.149874 1 event.go:291] "Event occurred" object="deployment-6467/test-rollover-deployment" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set test-rollover-deployment-78bc8b888c to 0"
I0521 15:58:06.161267 1 event.go:291] "Event occurred" object="deployment-6467/test-rollover-deployment-78bc8b888c" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: test-rollover-deployment-78bc8b888c-xbfkd"
I0521 15:58:06.162076 1 event.go:291] "Event occurred" object="deployment-6467/test-rollover-deployment" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set test-rollover-deployment-5797c7764 to 1"
I0521 15:58:06.164403 1 event.go:291] "Event occurred" object="deployment-6467/test-rollover-deployment-5797c7764" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: test-rollover-deployment-5797c7764-f2trn"
I0521 15:58:08.681473 1 namespace_controller.go:185] Namespace has been deleted pods-8105
I0521 15:58:09.504333 1 event.go:291] "Event occurred" object="kubectl-6970/update-demo-nautilus" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: update-demo-nautilus-mfbjg"
I0521 15:58:09.508197 1 event.go:291] "Event occurred" object="kubectl-6970/update-demo-nautilus" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: update-demo-nautilus-xdp6w"
I0521 15:58:10.601183 1 event.go:291] "Event occurred" object="statefulset-2849/ss2" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod ss2-2 in StatefulSet ss2 successful"
I0521 15:58:13.107844 1 event.go:291] "Event occurred" object="webhook-6215/sample-webhook-deployment" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set sample-webhook-deployment-cbccbf6bb to 1"
I0521 15:58:13.113595 1 event.go:291] "Event occurred" object="webhook-6215/sample-webhook-deployment-cbccbf6bb" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: sample-webhook-deployment-cbccbf6bb-k549r"
E0521 15:58:15.493567 1 replica_set.go:201] ReplicaSet has no controller: &ReplicaSet{ObjectMeta:{update-demo-nautilus kubectl-6970 /api/v1/namespaces/kubectl-6970/replicationcontrollers/update-demo-nautilus 680bc31c-357c-4a3a-8d1f-56b67be241b1 13396 2 2021-05-21 15:58:09 +0000 UTC map[name:update-demo version:nautilus] map[] [] [] [{kubectl-create Update v1 2021-05-21 15:58:09 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{},"f:version":{}}},"f:spec":{"f:replicas":{},"f:selector":{".":{},"f:name":{},"f:version":{}},"f:template":{".":{},"f:metadata":{".":{},"f:creationTimestamp":{},"f:labels":{".":{},"f:name":{},"f:version":{}}},"f:spec":{".":{},"f:containers":{".":{},"k:{\"name\":\"update-demo\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":80,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:protocol":{}}},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update v1 2021-05-21 15:58:12 +0000 UTC FieldsV1 {"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: update-demo,version: nautilus,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:update-demo version:nautilus] map[] [] [] []} {[] [] [{update-demo gcr.io/kubernetes-e2e-test-images/nautilus:1.0 [] [] [{ 0 80 TCP }] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc00254feb8 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:2,FullyLabeledReplicas:2,ObservedGeneration:1,ReadyReplicas:2,AvailableReplicas:2,Conditions:[]ReplicaSetCondition{},},}
I0521 15:58:15.497698 1 event.go:291] "Event occurred" object="kubectl-6970/update-demo-nautilus" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: update-demo-nautilus-mfbjg"
E0521 15:58:15.559797 1 tokens_controller.go:261] error synchronizing serviceaccount services-742/default: secrets "default-token-6zxv9" is forbidden: unable to create new content in namespace services-742 because it is being terminated
I0521 15:58:17.356027 1 event.go:291] "Event occurred" object="deployment-6467/test-rollover-deployment" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set test-rollover-controller to 0"
I0521 15:58:17.362558 1 event.go:291] "Event occurred" object="deployment-6467/test-rollover-controller" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: test-rollover-controller-2k2cz"
E0521 15:58:17.677790 1 tokens_controller.go:261] error synchronizing serviceaccount emptydir-4056/default: secrets "default-token-vv5mf" is forbidden: unable to create new content in namespace emptydir-4056 because it is being terminated
I0521 15:58:19.364111 1 namespace_controller.go:185] Namespace has been deleted pod-network-test-8205
I0521 15:58:20.739844 1 namespace_controller.go:185] Namespace has been deleted services-742
I0521 15:58:22.132185 1 event.go:291] "Event occurred" object="kubectl-6970/update-demo-nautilus" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: update-demo-nautilus-djxt7"
I0521 15:58:22.757777 1 namespace_controller.go:185] Namespace has been deleted emptydir-4056
E0521 15:58:23.333675 1 tokens_controller.go:261] error synchronizing serviceaccount deployment-6467/default: secrets "default-token-w8xdr" is forbidden: unable to create new content in namespace deployment-6467 because it is being terminated
E0521 15:58:23.343705 1 tokens_controller.go:261] error synchronizing serviceaccount webhook-6215-markers/default: secrets "default-token-dnzb7" is forbidden: unable to create new content in namespace webhook-6215-markers because it is being terminated
E0521 15:58:23.436056 1 tokens_controller.go:261] error synchronizing serviceaccount webhook-6215/default: secrets "default-token-7rczt" is forbidden: unable to create new content in namespace webhook-6215 because it is being terminated
I0521 15:58:23.686632 1 namespace_controller.go:185] Namespace has been deleted crd-publish-openapi-9254
I0521 15:58:23.856602 1 event.go:291] "Event occurred" object="statefulset-2849/ss2" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="delete Pod ss2-2 in StatefulSet ss2 successful"
I0521 15:58:24.192259 1 event.go:291] "Event occurred" object="webhook-8071/sample-webhook-deployment" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set sample-webhook-deployment-cbccbf6bb to 1"
I0521 15:58:24.196898 1 event.go:291] "Event occurred" object="webhook-8071/sample-webhook-deployment-cbccbf6bb" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: sample-webhook-deployment-cbccbf6bb-h6v9w"
I0521 15:58:28.413539 1 namespace_controller.go:185] Namespace has been deleted deployment-6467
I0521 15:58:28.437926 1 namespace_controller.go:185] Namespace has been deleted webhook-6215-markers
I0521 15:58:28.458458 1 namespace_controller.go:185] Namespace has been deleted webhook-6215
E0521 15:58:28.663519 1 tokens_controller.go:261] error synchronizing serviceaccount projected-5258/default: secrets "default-token-42xxf" is forbidden: unable to create new content in namespace projected-5258 because it is being terminated
I0521 15:58:29.623046 1 namespace_controller.go:185] Namespace has been deleted emptydir-7361
E0521 15:58:30.063488 1 tokens_controller.go:261] error synchronizing serviceaccount projected-8264/default: secrets "default-token-pbx7k" is forbidden: unable to create new content in namespace projected-8264 because it is being terminated
I0521 15:58:31.514288 1 event.go:291] "Event occurred" object="gc-4745/simpletest.deployment" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set simpletest.deployment-7f7555f8bc to 2"
I0521 15:58:31.520307 1 event.go:291] "Event occurred" object="gc-4745/simpletest.deployment-7f7555f8bc" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: simpletest.deployment-7f7555f8bc-brvl2"
I0521 15:58:31.523180 1 event.go:291] "Event occurred" object="gc-4745/simpletest.deployment-7f7555f8bc" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: simpletest.deployment-7f7555f8bc-wz7sf"
I0521 15:58:31.946894 1 event.go:291] "Event occurred" object="statefulset-2849/ss2" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod ss2-2 in StatefulSet ss2 successful"
I0521 15:58:33.319850 1 event.go:291] "Event occurred" object="kubectl-2477/agnhost-primary" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: agnhost-primary-nc846"
I0521 15:58:33.701384 1 namespace_controller.go:185] Namespace has been deleted projected-5258
I0521 15:58:33.883571 1 event.go:291] "Event occurred" object="statefulset-2849/ss2" kind="StatefulSet" apiVersion="apps/v1" type="Normal"
reason=\"SuccessfulCreate\" message=\"create Pod ss2-0 in StatefulSet ss2 successful\"\nE0521 15:58:34.261770 1 tokens_controller.go:261] error synchronizing serviceaccount projected-1188/default: serviceaccounts \"default\" not found\nE0521 15:58:34.420211 1 tokens_controller.go:261] error synchronizing serviceaccount kubectl-6970/default: secrets \"default-token-rc6qq\" is forbidden: unable to create new content in namespace kubectl-6970 because it is being terminated\nE0521 15:58:35.277924 1 namespace_controller.go:162] deletion of namespace projected-8264 failed: unexpected items still remain in namespace: projected-8264 for gvr: /v1, Resource=pods\nE0521 15:58:35.413983 1 tokens_controller.go:261] error synchronizing serviceaccount webhook-8071-markers/default: secrets \"default-token-gwqr2\" is forbidden: unable to create new content in namespace webhook-8071-markers because it is being terminated\nE0521 15:58:35.428540 1 namespace_controller.go:162] deletion of namespace projected-8264 failed: unexpected items still remain in namespace: projected-8264 for gvr: /v1, Resource=pods\nE0521 15:58:35.569760 1 namespace_controller.go:162] deletion of namespace projected-8264 failed: unexpected items still remain in namespace: projected-8264 for gvr: /v1, Resource=pods\nE0521 15:58:35.735420 1 namespace_controller.go:162] deletion of namespace projected-8264 failed: unexpected items still remain in namespace: projected-8264 for gvr: /v1, Resource=pods\nI0521 15:58:35.774507 1 event.go:291] \"Event occurred\" object=\"statefulset-2849/ss2\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod ss2-2 in StatefulSet ss2 successful\"\nE0521 15:58:35.924660 1 namespace_controller.go:162] deletion of namespace projected-8264 failed: unexpected items still remain in namespace: projected-8264 for gvr: /v1, Resource=pods\nE0521 15:58:36.177279 1 namespace_controller.go:162] deletion of namespace projected-8264 failed: 
unexpected items still remain in namespace: projected-8264 for gvr: /v1, Resource=pods\nE0521 15:58:36.480872 1 namespace_controller.go:162] deletion of namespace projected-8264 failed: unexpected items still remain in namespace: projected-8264 for gvr: /v1, Resource=pods\nE0521 15:58:36.518429 1 tokens_controller.go:261] error synchronizing serviceaccount secrets-7368/default: secrets \"default-token-jmz6n\" is forbidden: unable to create new content in namespace secrets-7368 because it is being terminated\nI0521 15:58:36.623795 1 event.go:291] \"Event occurred\" object=\"statefulset-3721/ss\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod ss-0 in StatefulSet ss successful\"\nE0521 15:58:36.951300 1 namespace_controller.go:162] deletion of namespace projected-8264 failed: unexpected items still remain in namespace: projected-8264 for gvr: /v1, Resource=pods\nI0521 15:58:37.226336 1 namespace_controller.go:185] Namespace has been deleted projected-5688\nE0521 15:58:37.738550 1 namespace_controller.go:162] deletion of namespace projected-8264 failed: unexpected items still remain in namespace: projected-8264 for gvr: /v1, Resource=pods\nE0521 15:58:39.172593 1 namespace_controller.go:162] deletion of namespace projected-8264 failed: unexpected items still remain in namespace: projected-8264 for gvr: /v1, Resource=pods\nI0521 15:58:39.293848 1 namespace_controller.go:185] Namespace has been deleted projected-1188\nI0521 15:58:39.335871 1 event.go:291] \"Event occurred\" object=\"statefulset-3721/ss\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Warning\" reason=\"RecreatingFailedPod\" message=\"StatefulSet statefulset-3721/ss is recreating failed Pod ss-0\"\nI0521 15:58:39.342222 1 event.go:291] \"Event occurred\" object=\"statefulset-3721/ss\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"delete Pod ss-0 in StatefulSet ss successful\"\nI0521 
15:58:39.345401 1 event.go:291] \"Event occurred\" object=\"statefulset-3721/ss\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod ss-0 in StatefulSet ss successful\"\nI0521 15:58:40.493102 1 namespace_controller.go:185] Namespace has been deleted webhook-8071-markers\nI0521 15:58:40.506183 1 namespace_controller.go:185] Namespace has been deleted webhook-8071\nI0521 15:58:41.335630 1 event.go:291] \"Event occurred\" object=\"statefulset-3721/ss\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Warning\" reason=\"RecreatingFailedPod\" message=\"StatefulSet statefulset-3721/ss is recreating failed Pod ss-0\"\nI0521 15:58:41.343432 1 event.go:291] \"Event occurred\" object=\"statefulset-3721/ss\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"delete Pod ss-0 in StatefulSet ss successful\"\nI0521 15:58:41.347186 1 event.go:291] \"Event occurred\" object=\"statefulset-3721/ss\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod ss-0 in StatefulSet ss successful\"\nE0521 15:58:41.352406 1 stateful_set.go:392] error syncing StatefulSet statefulset-3721/ss, requeuing: The POST operation against Pod could not be completed at this time, please try again.\nI0521 15:58:41.352539 1 event.go:291] \"Event occurred\" object=\"statefulset-3721/ss\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Warning\" reason=\"FailedCreate\" message=\"create Pod ss-0 in StatefulSet ss failed error: The POST operation against Pod could not be completed at this time, please try again.\"\nI0521 15:58:41.637336 1 namespace_controller.go:185] Namespace has been deleted secrets-7368\nE0521 15:58:41.881555 1 namespace_controller.go:162] deletion of namespace projected-8264 failed: unexpected items still remain in namespace: projected-8264 for gvr: /v1, Resource=pods\nE0521 15:58:42.573740 1 tokens_controller.go:261] error 
synchronizing serviceaccount kubectl-2477/default: secrets \"default-token-bw4mn\" is forbidden: unable to create new content in namespace kubectl-2477 because it is being terminated\nI0521 15:58:42.614028 1 namespace_controller.go:185] Namespace has been deleted downward-api-6424\nI0521 15:58:43.119162 1 namespace_controller.go:185] Namespace has been deleted var-expansion-5107\nI0521 15:58:43.138026 1 event.go:291] \"Event occurred\" object=\"statefulset-3721/ss\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Warning\" reason=\"RecreatingFailedPod\" message=\"StatefulSet statefulset-3721/ss is recreating failed Pod ss-0\"\nI0521 15:58:43.145942 1 event.go:291] \"Event occurred\" object=\"statefulset-3721/ss\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"delete Pod ss-0 in StatefulSet ss successful\"\nI0521 15:58:43.149851 1 event.go:291] \"Event occurred\" object=\"statefulset-3721/ss\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod ss-0 in StatefulSet ss successful\"\nI0521 15:58:45.163257 1 namespace_controller.go:185] Namespace has been deleted pod-network-test-2268\nE0521 15:58:45.594667 1 tokens_controller.go:261] error synchronizing serviceaccount pod-network-test-2258/default: secrets \"default-token-q79j2\" is forbidden: unable to create new content in namespace pod-network-test-2258 because it is being terminated\nI0521 15:58:46.689257 1 namespace_controller.go:185] Namespace has been deleted projected-7053\nE0521 15:58:46.785384 1 tokens_controller.go:261] error synchronizing serviceaccount secrets-6734/default: secrets \"default-token-8pp2v\" is forbidden: unable to create new content in namespace secrets-6734 because it is being terminated\nI0521 15:58:47.636623 1 namespace_controller.go:185] Namespace has been deleted services-9764\nI0521 15:58:49.386722 1 event.go:291] \"Event occurred\" object=\"statefulset-3721/ss\" 
kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"delete Pod ss-0 in StatefulSet ss successful\"\nI0521 15:58:50.738510 1 namespace_controller.go:185] Namespace has been deleted pod-network-test-2258\nI0521 15:58:51.783958 1 namespace_controller.go:185] Namespace has been deleted secret-namespace-6776\nI0521 15:58:51.797317 1 namespace_controller.go:185] Namespace has been deleted secrets-6734\nI0521 15:58:52.188037 1 namespace_controller.go:185] Namespace has been deleted projected-8264\nI0521 15:58:52.741356 1 namespace_controller.go:185] Namespace has been deleted emptydir-7314\nI0521 15:58:52.760171 1 namespace_controller.go:185] Namespace has been deleted kubectl-2477\nE0521 15:58:52.778570 1 tokens_controller.go:261] error synchronizing serviceaccount kubelet-test-1090/default: secrets \"default-token-rfq68\" is forbidden: unable to create new content in namespace kubelet-test-1090 because it is being terminated\nI0521 15:58:53.918741 1 event.go:291] \"Event occurred\" object=\"statefulset-2849/ss2\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"delete Pod ss2-1 in StatefulSet ss2 successful\"\nI0521 15:58:54.543975 1 event.go:291] \"Event occurred\" object=\"replication-controller-518/condition-test\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: condition-test-8dcch\"\nI0521 15:58:54.549278 1 event.go:291] \"Event occurred\" object=\"replication-controller-518/condition-test\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Warning\" reason=\"FailedCreate\" message=\"Error creating: pods \\\"condition-test-6qw2p\\\" is forbidden: exceeded quota: condition-test, requested: pods=1, used: pods=2, limited: pods=2\"\nI0521 15:58:54.550539 1 event.go:291] \"Event occurred\" object=\"replication-controller-518/condition-test\" kind=\"ReplicationController\" apiVersion=\"v1\" 
type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: condition-test-hs5mx\"\nE0521 15:58:54.553717 1 replica_set.go:532] sync \"replication-controller-518/condition-test\" failed with pods \"condition-test-6qw2p\" is forbidden: exceeded quota: condition-test, requested: pods=1, used: pods=2, limited: pods=2\nI0521 15:58:54.555666 1 event.go:291] \"Event occurred\" object=\"replication-controller-518/condition-test\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Warning\" reason=\"FailedCreate\" message=\"Error creating: pods \\\"condition-test-zw5hw\\\" is forbidden: exceeded quota: condition-test, requested: pods=1, used: pods=2, limited: pods=2\"\nE0521 15:58:54.559908 1 replica_set.go:532] sync \"replication-controller-518/condition-test\" failed with pods \"condition-test-zw5hw\" is forbidden: exceeded quota: condition-test, requested: pods=1, used: pods=2, limited: pods=2\nE0521 15:58:54.561438 1 replica_set.go:532] sync \"replication-controller-518/condition-test\" failed with pods \"condition-test-hbsw5\" is forbidden: exceeded quota: condition-test, requested: pods=1, used: pods=2, limited: pods=2\nI0521 15:58:54.561470 1 event.go:291] \"Event occurred\" object=\"replication-controller-518/condition-test\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Warning\" reason=\"FailedCreate\" message=\"Error creating: pods \\\"condition-test-hbsw5\\\" is forbidden: exceeded quota: condition-test, requested: pods=1, used: pods=2, limited: pods=2\"\nE0521 15:58:54.566032 1 replica_set.go:532] sync \"replication-controller-518/condition-test\" failed with pods \"condition-test-m786s\" is forbidden: exceeded quota: condition-test, requested: pods=1, used: pods=2, limited: pods=2\nI0521 15:58:54.566057 1 event.go:291] \"Event occurred\" object=\"replication-controller-518/condition-test\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Warning\" reason=\"FailedCreate\" message=\"Error creating: pods \\\"condition-test-m786s\\\" 
is forbidden: exceeded quota: condition-test, requested: pods=1, used: pods=2, limited: pods=2\"\nE0521 15:58:54.572081 1 replica_set.go:532] sync \"replication-controller-518/condition-test\" failed with pods \"condition-test-6rf9f\" is forbidden: exceeded quota: condition-test, requested: pods=1, used: pods=2, limited: pods=2\nI0521 15:58:54.572108 1 event.go:291] \"Event occurred\" object=\"replication-controller-518/condition-test\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Warning\" reason=\"FailedCreate\" message=\"Error creating: pods \\\"condition-test-6rf9f\\\" is forbidden: exceeded quota: condition-test, requested: pods=1, used: pods=2, limited: pods=2\"\nE0521 15:58:54.656151 1 replica_set.go:532] sync \"replication-controller-518/condition-test\" failed with pods \"condition-test-xplxq\" is forbidden: exceeded quota: condition-test, requested: pods=1, used: pods=2, limited: pods=2\nI0521 15:58:54.656169 1 event.go:291] \"Event occurred\" object=\"replication-controller-518/condition-test\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Warning\" reason=\"FailedCreate\" message=\"Error creating: pods \\\"condition-test-xplxq\\\" is forbidden: exceeded quota: condition-test, requested: pods=1, used: pods=2, limited: pods=2\"\nE0521 15:58:54.818766 1 replica_set.go:532] sync \"replication-controller-518/condition-test\" failed with pods \"condition-test-9dtkw\" is forbidden: exceeded quota: condition-test, requested: pods=1, used: pods=2, limited: pods=2\nI0521 15:58:54.818790 1 event.go:291] \"Event occurred\" object=\"replication-controller-518/condition-test\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Warning\" reason=\"FailedCreate\" message=\"Error creating: pods \\\"condition-test-9dtkw\\\" is forbidden: exceeded quota: condition-test, requested: pods=1, used: pods=2, limited: pods=2\"\nE0521 15:58:54.877976 1 tokens_controller.go:261] error synchronizing serviceaccount projected-7966/default: secrets 
\"default-token-xtpg6\" is forbidden: unable to create new content in namespace projected-7966 because it is being terminated\nE0521 15:58:55.108641 1 replica_set.go:532] sync \"replication-controller-518/condition-test\" failed with pods \"condition-test-9sw4f\" is forbidden: exceeded quota: condition-test, requested: pods=1, used: pods=2, limited: pods=2\nI0521 15:58:55.108702 1 event.go:291] \"Event occurred\" object=\"replication-controller-518/condition-test\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Warning\" reason=\"FailedCreate\" message=\"Error creating: pods \\\"condition-test-9sw4f\\\" is forbidden: exceeded quota: condition-test, requested: pods=1, used: pods=2, limited: pods=2\"\nE0521 15:58:55.141193 1 replica_set.go:532] sync \"replication-controller-518/condition-test\" failed with pods \"condition-test-fnsfg\" is forbidden: exceeded quota: condition-test, requested: pods=1, used: pods=2, limited: pods=2\nI0521 15:58:55.141240 1 event.go:291] \"Event occurred\" object=\"replication-controller-518/condition-test\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Warning\" reason=\"FailedCreate\" message=\"Error creating: pods \\\"condition-test-fnsfg\\\" is forbidden: exceeded quota: condition-test, requested: pods=1, used: pods=2, limited: pods=2\"\nE0521 15:58:55.145506 1 replica_set.go:532] sync \"replication-controller-518/condition-test\" failed with pods \"condition-test-rbc6s\" is forbidden: exceeded quota: condition-test, requested: pods=1, used: pods=2, limited: pods=2\nI0521 15:58:55.145586 1 event.go:291] \"Event occurred\" object=\"replication-controller-518/condition-test\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Warning\" reason=\"FailedCreate\" message=\"Error creating: pods \\\"condition-test-rbc6s\\\" is forbidden: exceeded quota: condition-test, requested: pods=1, used: pods=2, limited: pods=2\"\nI0521 15:58:55.526065 1 event.go:291] \"Event occurred\" 
object=\"replication-controller-518/condition-test\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Warning\" reason=\"FailedCreate\" message=\"Error creating: pods \\\"condition-test-c78bs\\\" is forbidden: exceeded quota: condition-test, requested: pods=1, used: pods=2, limited: pods=2\"\nE0521 15:58:55.530182 1 replica_set.go:532] sync \"replication-controller-518/condition-test\" failed with pods \"condition-test-c78bs\" is forbidden: exceeded quota: condition-test, requested: pods=1, used: pods=2, limited: pods=2\nI0521 15:58:55.531913 1 event.go:291] \"Event occurred\" object=\"replication-controller-518/condition-test\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Warning\" reason=\"FailedCreate\" message=\"Error creating: pods \\\"condition-test-bftjm\\\" is forbidden: exceeded quota: condition-test, requested: pods=1, used: pods=2, limited: pods=2\"\nE0521 15:58:55.536687 1 replica_set.go:532] sync \"replication-controller-518/condition-test\" failed with pods \"condition-test-bftjm\" is forbidden: exceeded quota: condition-test, requested: pods=1, used: pods=2, limited: pods=2\nE0521 15:58:55.538974 1 replica_set.go:532] sync \"replication-controller-518/condition-test\" failed with pods \"condition-test-ggtkm\" is forbidden: exceeded quota: condition-test, requested: pods=1, used: pods=2, limited: pods=2\nI0521 15:58:55.539045 1 event.go:291] \"Event occurred\" object=\"replication-controller-518/condition-test\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Warning\" reason=\"FailedCreate\" message=\"Error creating: pods \\\"condition-test-ggtkm\\\" is forbidden: exceeded quota: condition-test, requested: pods=1, used: pods=2, limited: pods=2\"\nI0521 15:58:55.805020 1 event.go:291] \"Event occurred\" object=\"services-7180/externalname-service\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: externalname-service-d6phd\"\nI0521 15:58:55.807709 1 
event.go:291] \"Event occurred\" object=\"services-7180/externalname-service\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: externalname-service-jmrw6\"\nE0521 15:58:58.667273 1 tokens_controller.go:261] error synchronizing serviceaccount pod-network-test-4792/default: secrets \"default-token-r95xg\" is forbidden: unable to create new content in namespace pod-network-test-4792 because it is being terminated\nI0521 15:58:58.705643 1 resource_quota_controller.go:306] Resource quota has been deleted resourcequota-4108/test-quota\nE0521 15:58:58.759105 1 tokens_controller.go:261] error synchronizing serviceaccount resourcequota-4108/default: secrets \"default-token-vhdcz\" is forbidden: unable to create new content in namespace resourcequota-4108 because it is being terminated\nI0521 15:58:59.962520 1 namespace_controller.go:185] Namespace has been deleted projected-7966\nI0521 15:59:00.428695 1 event.go:291] \"Event occurred\" object=\"statefulset-2849/ss2\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod ss2-1 in StatefulSet ss2 successful\"\nE0521 15:59:00.919958 1 tokens_controller.go:261] error synchronizing serviceaccount downward-api-293/default: secrets \"default-token-b8jkt\" is forbidden: unable to create new content in namespace downward-api-293 because it is being terminated\nI0521 15:59:00.923539 1 namespace_controller.go:185] Namespace has been deleted kubectl-6970\nI0521 15:59:01.614122 1 resource_quota_controller.go:306] Resource quota has been deleted replication-controller-518/condition-test\nE0521 15:59:01.766561 1 tokens_controller.go:261] error synchronizing serviceaccount replication-controller-518/default: secrets \"default-token-9vwgz\" is forbidden: unable to create new content in namespace replication-controller-518 because it is being terminated\nI0521 15:59:03.090516 1 event.go:291] \"Event occurred\" 
object=\"kubectl-433/update-demo-nautilus\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: update-demo-nautilus-z59j5\"\nI0521 15:59:03.094037 1 event.go:291] \"Event occurred\" object=\"kubectl-433/update-demo-nautilus\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: update-demo-nautilus-gb926\"\nI0521 15:59:03.108797 1 event.go:291] \"Event occurred\" object=\"proxy-4504/proxy-service-sgjpk\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: proxy-service-sgjpk-ttd72\"\nI0521 15:59:03.798244 1 namespace_controller.go:185] Namespace has been deleted resourcequota-4108\nI0521 15:59:05.942417 1 namespace_controller.go:185] Namespace has been deleted downward-api-293\nI0521 15:59:06.805341 1 namespace_controller.go:185] Namespace has been deleted replication-controller-518\nE0521 15:59:08.095396 1 tokens_controller.go:261] error synchronizing serviceaccount gc-7368/default: secrets \"default-token-5k7hs\" is forbidden: unable to create new content in namespace gc-7368 because it is being terminated\nI0521 15:59:08.126942 1 namespace_controller.go:185] Namespace has been deleted certificates-9467\nI0521 15:59:09.281948 1 event.go:291] \"Event occurred\" object=\"statefulset-2849/ss2\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"delete Pod ss2-0 in StatefulSet ss2 successful\"\nI0521 15:59:09.393728 1 stateful_set.go:419] StatefulSet has been deleted statefulset-3721/ss\nI0521 15:59:11.525405 1 event.go:291] \"Event occurred\" object=\"services-1621/affinity-clusterip\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: affinity-clusterip-57tq2\"\nI0521 15:59:11.528487 1 event.go:291] \"Event occurred\" object=\"services-1621/affinity-clusterip\" 
kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: affinity-clusterip-xvmj9\"\nI0521 15:59:11.528522 1 event.go:291] \"Event occurred\" object=\"services-1621/affinity-clusterip\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: affinity-clusterip-r7dnn\"\nE0521 15:59:12.798613 1 tokens_controller.go:261] error synchronizing serviceaccount configmap-2323/default: secrets \"default-token-rb8k9\" is forbidden: unable to create new content in namespace configmap-2323 because it is being terminated\nI0521 15:59:13.079711 1 namespace_controller.go:185] Namespace has been deleted services-7180\nI0521 15:59:13.245460 1 namespace_controller.go:185] Namespace has been deleted gc-7368\nE0521 15:59:14.592744 1 tokens_controller.go:261] error synchronizing serviceaccount statefulset-3721/default: secrets \"default-token-dnw8d\" is forbidden: unable to create new content in namespace statefulset-3721 because it is being terminated\nE0521 15:59:14.878062 1 tokens_controller.go:261] error synchronizing serviceaccount emptydir-wrapper-9479/default: secrets \"default-token-hzfcd\" is forbidden: unable to create new content in namespace emptydir-wrapper-9479 because it is being terminated\nI0521 15:59:15.637982 1 resource_quota_controller.go:306] Resource quota has been deleted resourcequota-9161/test-quota\nE0521 15:59:16.606369 1 tokens_controller.go:261] error synchronizing serviceaccount security-context-test-2503/default: secrets \"default-token-jcf6v\" is forbidden: unable to create new content in namespace security-context-test-2503 because it is being terminated\nI0521 15:59:19.586834 1 namespace_controller.go:185] Namespace has been deleted kubectl-433\nI0521 15:59:19.630906 1 namespace_controller.go:185] Namespace has been deleted statefulset-3721\nI0521 15:59:20.053511 1 namespace_controller.go:185] Namespace has been deleted 
emptydir-wrapper-9479\nI0521 15:59:20.212197 1 event.go:291] \"Event occurred\" object=\"statefulset-2849/ss2\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod ss2-0 in StatefulSet ss2 successful\"\nE0521 15:59:20.650700 1 tokens_controller.go:261] error synchronizing serviceaccount projected-7457/default: secrets \"default-token-fv4r6\" is forbidden: unable to create new content in namespace projected-7457 because it is being terminated\nE0521 15:59:20.770429 1 tokens_controller.go:261] error synchronizing serviceaccount resourcequota-9161/default: secrets \"default-token-g6cgk\" is forbidden: unable to create new content in namespace resourcequota-9161 because it is being terminated\nI0521 15:59:21.627793 1 namespace_controller.go:185] Namespace has been deleted projected-8917\nI0521 15:59:21.635529 1 namespace_controller.go:185] Namespace has been deleted security-context-test-2503\nI0521 15:59:23.114342 1 namespace_controller.go:185] Namespace has been deleted configmap-2323\nI0521 15:59:23.976956 1 event.go:291] \"Event occurred\" object=\"statefulset-2849/ss2\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"delete Pod ss2-2 in StatefulSet ss2 successful\"\nI0521 15:59:24.833790 1 event.go:291] \"Event occurred\" object=\"replicaset-7757/pod-adoption-release\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: pod-adoption-release-ng6nx\"\nE0521 15:59:24.875830 1 tokens_controller.go:261] error synchronizing serviceaccount containers-7017/default: secrets \"default-token-978tk\" is forbidden: unable to create new content in namespace containers-7017 because it is being terminated\nE0521 15:59:24.983672 1 namespace_controller.go:162] deletion of namespace containers-7017 failed: unexpected items still remain in namespace: containers-7017 for gvr: /v1, Resource=pods\nE0521 15:59:25.144729 1 
namespace_controller.go:162] deletion of namespace containers-7017 failed: unexpected items still remain in namespace: containers-7017 for gvr: /v1, Resource=pods
E0521 15:59:25.306514 1 namespace_controller.go:162] deletion of namespace containers-7017 failed: unexpected items still remain in namespace: containers-7017 for gvr: /v1, Resource=pods
I0521 15:59:25.410755 1 event.go:291] "Event occurred" object="services-1621/affinity-clusterip" kind="Endpoints" apiVersion="v1" type="Warning" reason="FailedToUpdateEndpoint" message="Failed to update endpoint services-1621/affinity-clusterip: Operation cannot be fulfilled on endpoints \"affinity-clusterip\": the object has been modified; please apply your changes to the latest version and try again"
E0521 15:59:25.478034 1 namespace_controller.go:162] deletion of namespace containers-7017 failed: unexpected items still remain in namespace: containers-7017 for gvr: /v1, Resource=pods
E0521 15:59:25.663951 1 namespace_controller.go:162] deletion of namespace containers-7017 failed: unexpected items still remain in namespace: containers-7017 for gvr: /v1, Resource=pods
I0521 15:59:25.757724 1 namespace_controller.go:185] Namespace has been deleted projected-7457
I0521 15:59:25.810915 1 namespace_controller.go:185] Namespace has been deleted resourcequota-9161
I0521 15:59:25.874641 1 namespace_controller.go:185] Namespace has been deleted podtemplate-9785
I0521 15:59:25.876992 1 namespace_controller.go:185] Namespace has been deleted proxy-4504
E0521 15:59:25.915550 1 namespace_controller.go:162] deletion of namespace containers-7017 failed: unexpected items still remain in namespace: containers-7017 for gvr: /v1, Resource=pods
I0521 15:59:25.987259 1 event.go:291] "Event occurred" object="kubectl-4738/agnhost-primary" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: agnhost-primary-5pxll"
E0521 15:59:26.244135 1 namespace_controller.go:162] deletion of namespace containers-7017 failed: unexpected items still remain in namespace: containers-7017 for gvr: /v1, Resource=pods
I0521 15:59:26.344560 1 namespace_controller.go:185] Namespace has been deleted custom-resource-definition-1202
E0521 15:59:26.727159 1 namespace_controller.go:162] deletion of namespace containers-7017 failed: unexpected items still remain in namespace: containers-7017 for gvr: /v1, Resource=pods
E0521 15:59:26.927955 1 tokens_controller.go:261] error synchronizing serviceaccount downward-api-6556/default: secrets "default-token-kgnnb" is forbidden: unable to create new content in namespace downward-api-6556 because it is being terminated
E0521 15:59:27.532851 1 namespace_controller.go:162] deletion of namespace containers-7017 failed: unexpected items still remain in namespace: containers-7017 for gvr: /v1, Resource=pods
I0521 15:59:28.056088 1 event.go:291] "Event occurred" object="statefulset-2849/ss2" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="delete Pod ss2-1 in StatefulSet ss2 successful"
E0521 15:59:28.553035 1 tokens_controller.go:261] error synchronizing serviceaccount svcaccounts-9376/default: secrets "default-token-tw7np" is forbidden: unable to create new content in namespace svcaccounts-9376 because it is being terminated
E0521 15:59:28.554095 1 tokens_controller.go:261] error synchronizing serviceaccount svcaccounts-9376/mount-test: secrets "mount-test-token-x295d" is forbidden: unable to create new content in namespace svcaccounts-9376 because it is being terminated
I0521 15:59:30.186728 1 namespace_controller.go:185] Namespace has been deleted pod-network-test-4792
E0521 15:59:30.698144 1 tokens_controller.go:261] error synchronizing serviceaccount projected-7611/default: secrets "default-token-l29js" is forbidden: unable to create new content in namespace projected-7611 because it is being terminated
I0521 15:59:32.074438 1 namespace_controller.go:185] Namespace has been deleted downward-api-6556
I0521 15:59:33.687065 1 namespace_controller.go:185] Namespace has been deleted svcaccounts-9376
I0521 15:59:33.710779 1 event.go:291] "Event occurred" object="job-9919/foo" kind="Job" apiVersion="batch/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: foo-wb5xd"
I0521 15:59:33.714746 1 event.go:291] "Event occurred" object="job-9919/foo" kind="Job" apiVersion="batch/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: foo-xzg5t"
I0521 15:59:33.967945 1 namespace_controller.go:185] Namespace has been deleted containers-7017
E0521 15:59:34.181787 1 tokens_controller.go:261] error synchronizing serviceaccount kubectl-4738/default: secrets "default-token-pvdzq" is forbidden: unable to create new content in namespace kubectl-4738 because it is being terminated
E0521 15:59:34.300501 1 tokens_controller.go:261] error synchronizing serviceaccount kubectl-8426/default: secrets "default-token-zphbw" is forbidden: unable to create new content in namespace kubectl-8426 because it is being terminated
I0521 15:59:35.368174 1 namespace_controller.go:185] Namespace has been deleted kubelet-test-1090
E0521 15:59:35.369221 1 namespace_controller.go:162] deletion of namespace projected-1984 failed: unexpected items still remain in namespace: projected-1984 for gvr: /v1, Resource=pods
E0521 15:59:35.543303 1 namespace_controller.go:162] deletion of namespace projected-1984 failed: unexpected items still remain in namespace: projected-1984 for gvr: /v1, Resource=pods
E0521 15:59:35.722334 1 namespace_controller.go:162] deletion of namespace projected-1984 failed: unexpected items still remain in namespace: projected-1984 for gvr: /v1, Resource=pods
I0521 15:59:35.799404 1 namespace_controller.go:185] Namespace has been deleted projected-7611
E0521 15:59:35.908252 1 namespace_controller.go:162] deletion of namespace projected-1984 failed: unexpected items still remain in namespace: projected-1984 for gvr: /v1, Resource=pods
E0521 15:59:36.102734 1 namespace_controller.go:162] deletion of namespace projected-1984 failed: unexpected items still remain in namespace: projected-1984 for gvr: /v1, Resource=pods
E0521 15:59:36.337128 1 namespace_controller.go:162] deletion of namespace projected-1984 failed: unexpected items still remain in namespace: projected-1984 for gvr: /v1, Resource=pods
E0521 15:59:36.655771 1 namespace_controller.go:162] deletion of namespace projected-1984 failed: unexpected items still remain in namespace: projected-1984 for gvr: /v1, Resource=pods
E0521 15:59:37.137123 1 namespace_controller.go:162] deletion of namespace projected-1984 failed: unexpected items still remain in namespace: projected-1984 for gvr: /v1, Resource=pods
E0521 15:59:37.935578 1 namespace_controller.go:162] deletion of namespace projected-1984 failed: unexpected items still remain in namespace: projected-1984 for gvr: /v1, Resource=pods
E0521 15:59:38.397209 1 tokens_controller.go:261] error synchronizing serviceaccount projected-9382/default: secrets "default-token-jxjqk" is forbidden: unable to create new content in namespace projected-9382 because it is being terminated
E0521 15:59:38.743896 1 tokens_controller.go:261] error synchronizing serviceaccount crd-publish-openapi-4871/default: serviceaccounts "default" not found
I0521 15:59:38.908671 1 event.go:291] "Event occurred" object="kubectl-8427/frontend" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set frontend-58d458fdbd to 3"
I0521 15:59:38.915104 1 event.go:291] "Event occurred" object="kubectl-8427/frontend-58d458fdbd" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: frontend-58d458fdbd-78br8"
I0521 15:59:38.919541 1 event.go:291] "Event occurred" object="kubectl-8427/frontend-58d458fdbd" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: frontend-58d458fdbd-6hz45"
I0521 15:59:38.919582 1 event.go:291] "Event occurred" object="kubectl-8427/frontend-58d458fdbd" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: frontend-58d458fdbd-nz64t"
I0521 15:59:39.164663 1 event.go:291] "Event occurred" object="kubectl-8427/agnhost-primary" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set agnhost-primary-76f75c9b74 to 1"
I0521 15:59:39.168637 1 event.go:291] "Event occurred" object="kubectl-8427/agnhost-primary-76f75c9b74" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: agnhost-primary-76f75c9b74-rdhqs"
E0521 15:59:39.349749 1 namespace_controller.go:162] deletion of namespace projected-1984 failed: unexpected items still remain in namespace: projected-1984 for gvr: /v1, Resource=pods
I0521 15:59:39.413054 1 namespace_controller.go:185] Namespace has been deleted kubectl-8426
I0521 15:59:39.423797 1 event.go:291] "Event occurred" object="kubectl-8427/agnhost-replica" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set agnhost-replica-7d6489798 to 2"
I0521 15:59:39.427747 1 event.go:291] "Event occurred" object="kubectl-8427/agnhost-replica-7d6489798" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: agnhost-replica-7d6489798-fgtgh"
I0521 15:59:39.431356 1 event.go:291] "Event occurred" object="kubectl-8427/agnhost-replica-7d6489798" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: agnhost-replica-7d6489798-4r596"
E0521 15:59:39.671515 1 tokens_controller.go:261] error synchronizing serviceaccount secrets-2972/default: secrets "default-token-t7hwj" is forbidden: unable to create new content in namespace secrets-2972 because it is being terminated
I0521 15:59:40.210323 1 event.go:291] "Event occurred" object="statefulset-2849/ss2" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="delete Pod ss2-0 in StatefulSet ss2 successful"
I0521 15:59:43.545629 1 namespace_controller.go:185] Namespace has been deleted projected-9382
I0521 15:59:43.799683 1 namespace_controller.go:185] Namespace has been deleted crd-publish-openapi-4871
I0521 15:59:43.983233 1 stateful_set.go:419] StatefulSet has been deleted statefulset-2849/ss2
I0521 15:59:44.760259 1 namespace_controller.go:185] Namespace has been deleted gc-4745
I0521 15:59:44.769849 1 namespace_controller.go:185] Namespace has been deleted secrets-2972
I0521 15:59:44.834415 1 namespace_controller.go:185] Namespace has been deleted tables-4899
I0521 15:59:46.411108 1 namespace_controller.go:185] Namespace has been deleted replicaset-7757
I0521 15:59:47.071296 1 namespace_controller.go:185] Namespace has been deleted projected-1984
I0521 15:59:47.885858 1 namespace_controller.go:185] Namespace has been deleted projected-8517
E0521 15:59:49.029592 1 tokens_controller.go:261] error synchronizing serviceaccount statefulset-2849/default: secrets "default-token-c7ql9" is forbidden: unable to create new content in namespace statefulset-2849 because it is being terminated
I0521 15:59:49.859173 1 namespace_controller.go:185] Namespace has been deleted secrets-2599
E0521 15:59:50.293294 1 tokens_controller.go:261] error synchronizing serviceaccount kubectl-8427/default: secrets "default-token-nwwkc" is forbidden: unable to create new content in namespace kubectl-8427 because it is being terminated
I0521 15:59:50.814267 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for e2e-test-crd-publish-openapi-4009-crds.crd-publish-openapi-test-multi-ver.example.com
I0521 15:59:50.814387 1 shared_informer.go:240] Waiting for caches to sync for resource quota
I0521 15:59:50.831880 1 namespace_controller.go:185] Namespace has been deleted services-1621
I0521 15:59:50.914607 1 shared_informer.go:247] Caches are synced for resource quota
E0521 15:59:51.214241 1 tokens_controller.go:261] error synchronizing serviceaccount emptydir-1225/default: secrets "default-token-rpdzr" is forbidden: unable to create new content in namespace emptydir-1225 because it is being terminated
I0521 15:59:52.073238 1 shared_informer.go:240] Waiting for caches to sync for garbage collector
I0521 15:59:52.073314 1 shared_informer.go:247] Caches are synced for garbage collector
E0521 15:59:52.158599 1 tokens_controller.go:261] error synchronizing serviceaccount subpath-6683/default: secrets "default-token-v2wbg" is forbidden: unable to create new content in namespace subpath-6683 because it is being terminated
E0521 15:59:53.368299 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
E0521 15:59:54.211430 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0521 15:59:54.353858 1 namespace_controller.go:185] Namespace has been deleted statefulset-2849
E0521 15:59:55.269055 1 tokens_controller.go:261] error synchronizing serviceaccount secrets-8446/default: secrets "default-token-7tcgh" is forbidden: unable to create new content in namespace secrets-8446 because it is being terminated
I0521 15:59:55.554880 1 namespace_controller.go:185] Namespace has been deleted kubectl-8427
E0521 15:59:56.194396 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0521 15:59:56.260367 1 namespace_controller.go:185] Namespace has been deleted emptydir-1225
I0521 15:59:57.242867 1 namespace_controller.go:185] Namespace has been deleted subpath-6683
I0521 15:59:59.032179 1 event.go:291] "Event occurred" object="kubectl-7076/httpd-deployment" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set httpd-deployment-86bff9b6d7 to 1"
I0521 15:59:59.039482 1 event.go:291] "Event occurred" object="kubectl-7076/httpd-deployment-86bff9b6d7" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: httpd-deployment-86bff9b6d7-x6sbk"
I0521 16:00:00.351681 1 namespace_controller.go:185] Namespace has been deleted secrets-8446
I0521 16:00:00.514347 1 namespace_controller.go:185] Namespace has been deleted kubectl-4738
E0521 16:00:02.213649 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0521 16:00:02.903390 1 event.go:291] "Event occurred" object="aggregator-4806/sample-apiserver-deployment" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set sample-apiserver-deployment-67dc674868 to 1"
I0521 16:00:02.910466 1 event.go:291] "Event occurred" object="aggregator-4806/sample-apiserver-deployment-67dc674868" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: sample-apiserver-deployment-67dc674868-7czs4"
E0521 16:00:03.744608 1 tokens_controller.go:261] error synchronizing serviceaccount crd-publish-openapi-5110/default: secrets "default-token-s8wxh" is forbidden: unable to create new content in namespace crd-publish-openapi-5110 because it is being terminated
I0521 16:00:04.427039 1 namespace_controller.go:185] Namespace has been deleted security-context-test-6430
E0521 16:00:04.650755 1 tokens_controller.go:261] error synchronizing serviceaccount kubectl-7076/default: secrets "default-token-8g6j8" is forbidden: unable to create new content in namespace kubectl-7076 because it is being terminated
I0521 16:00:06.816930 1 event.go:291] "Event occurred" object="services-5478/affinity-nodeport" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: affinity-nodeport-78phd"
I0521 16:00:06.826224 1 event.go:291] "Event occurred" object="services-5478/affinity-nodeport" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: affinity-nodeport-wsgpc"
I0521 16:00:06.826629 1 event.go:291] "Event occurred" object="services-5478/affinity-nodeport" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: affinity-nodeport-pj7gz"
I0521 16:00:07.075423 1 namespace_controller.go:185] Namespace has been deleted kubectl-7746
E0521 16:00:07.420492 1 tokens_controller.go:261] error synchronizing serviceaccount resourcequota-5227/default: secrets "default-token-7hfzl" is forbidden: unable to create new content in namespace resourcequota-5227 because it is being terminated
I0521 16:00:07.507818 1 resource_quota_controller.go:306] Resource quota has been deleted resourcequota-5227/test-quota
I0521 16:00:08.349504 1 resource_quota_controller.go:306] Resource quota has been deleted resourcequota-4447/test-quota
I0521 16:00:08.858088 1 event.go:291] "Event occurred" object="services-5478/affinity-nodeport" kind="Endpoints" apiVersion="v1" type="Warning" reason="FailedToUpdateEndpoint" message="Failed to update endpoint services-5478/affinity-nodeport: Operation cannot be fulfilled on endpoints \"affinity-nodeport\": the object has been modified; please apply your changes to the latest version and try again"
I0521 16:00:08.877177 1 namespace_controller.go:185] Namespace has been deleted crd-publish-openapi-5110
I0521 16:00:09.664585 1 event.go:291] "Event occurred" object="services-7359/affinity-clusterip-timeout" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: affinity-clusterip-timeout-vvffg"
I0521 16:00:09.668043 1 event.go:291] "Event occurred" object="services-7359/affinity-clusterip-timeout" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: affinity-clusterip-timeout-q8w7k"
I0521 16:00:09.668577 1 event.go:291] "Event occurred" object="services-7359/affinity-clusterip-timeout" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: affinity-clusterip-timeout-xn69f"
I0521 16:00:09.752922 1 namespace_controller.go:185] Namespace has been deleted kubectl-7076
E0521 16:00:09.798610 1 tokens_controller.go:261] error synchronizing serviceaccount subpath-4204/default: secrets "default-token-rp5g6" is forbidden: unable to create new content in namespace subpath-4204 because it is being terminated
I0521 16:00:11.600481 1 namespace_controller.go:185] Namespace has been deleted pods-2183
E0521 16:00:11.940422 1 namespace_controller.go:162] deletion of namespace configmap-7397 failed: unable to retrieve the complete list of server APIs: wardle.example.com/v1alpha1: the server could not find the requested resource
W0521 16:00:12.036958 1 endpointslice_controller.go:284] Error syncing endpoint slices for service "aggregator-4806/sample-api", retrying. Error: EndpointSlice informer cache is out of date
E0521 16:00:12.350113 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0521 16:00:12.528015 1 namespace_controller.go:185] Namespace has been deleted resourcequota-5227
I0521 16:00:12.960612 1 event.go:291] "Event occurred" object="replication-controller-8595/my-hostname-basic-99c8ad53-10ba-4d00-8fe6-49e31607c628" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: my-hostname-basic-99c8ad53-10ba-4d00-8fe6-49e31607c628-ssh2t"
I0521 16:00:13.007141 1 resource_quota_controller.go:306] Resource quota has been deleted resourcequota-2804/test-quota
E0521 16:00:13.033908 1 tokens_controller.go:261] error synchronizing serviceaccount resourcequota-2804/default: secrets "default-token-mlrpq" is forbidden: unable to create new content in namespace resourcequota-2804 because it is being terminated
I0521 16:00:13.366573 1 namespace_controller.go:185] Namespace has been deleted resourcequota-4447
I0521 16:00:13.452469 1 namespace_controller.go:185] Namespace has been deleted podtemplate-578
I0521 16:00:14.839516 1 namespace_controller.go:185] Namespace has been deleted subpath-4204
E0521 16:00:14.858711 1 tokens_controller.go:261] error synchronizing serviceaccount dns-8210/default: secrets "default-token-86mst" is forbidden: unable to create new content in namespace dns-8210 because it is being terminated
I0521 16:00:17.122242 1 namespace_controller.go:185] Namespace has been deleted configmap-7397
I0521 16:00:18.098097 1 namespace_controller.go:185] Namespace has been deleted resourcequota-2804
I0521 16:00:19.909879 1 namespace_controller.go:185] Namespace has been deleted dns-8210
I0521 16:00:20.210911 1 namespace_controller.go:185] Namespace has been deleted secrets-7400
I0521 16:00:20.670894 1 namespace_controller.go:185] Namespace has been deleted events-5761
E0521 16:00:21.233315 1 tokens_controller.go:261] error synchronizing serviceaccount emptydir-617/default: secrets "default-token-b7fqw" is forbidden: unable to create new content in namespace emptydir-617 because it is being terminated
I0521 16:00:21.416526 1 shared_informer.go:240] Waiting for caches to sync for resource quota
I0521 16:00:21.416592 1 shared_informer.go:247] Caches are synced for resource quota
I0521 16:00:21.534719 1 event.go:291] "Event occurred" object="deployment-9005/test-recreate-deployment" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set test-recreate-deployment-c96cf48f to 1"
I0521 16:00:21.540408 1 event.go:291] "Event occurred" object="deployment-9005/test-recreate-deployment-c96cf48f" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: test-recreate-deployment-c96cf48f-hh99z"
I0521 16:00:22.575534 1 shared_informer.go:240] Waiting for caches to sync for garbage collector
I0521 16:00:22.575615 1 shared_informer.go:247] Caches are synced for garbage collector
I0521 16:00:23.022881 1 namespace_controller.go:185] Namespace has been deleted aggregator-4806
I0521 16:00:23.229029 1 namespace_controller.go:185] Namespace has been deleted kubelet-test-6599
I0521 16:00:25.565878 1 event.go:291] "Event occurred" object="deployment-9005/test-recreate-deployment" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set test-recreate-deployment-c96cf48f to 0"
I0521 16:00:25.572399 1 event.go:291] "Event occurred" object="deployment-9005/test-recreate-deployment-c96cf48f" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: test-recreate-deployment-c96cf48f-hh99z"
I0521 16:00:25.587391 1 event.go:291] "Event occurred" object="deployment-9005/test-recreate-deployment" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set test-recreate-deployment-f79dd4667 to 1"
I0521 16:00:25.590720 1 event.go:291] "Event occurred" object="deployment-9005/test-recreate-deployment-f79dd4667" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: test-recreate-deployment-f79dd4667-rm7n6"
I0521 16:00:26.380429 1 namespace_controller.go:185] Namespace has been deleted emptydir-617
E0521 16:00:26.594261 1 tokens_controller.go:261] error synchronizing serviceaccount job-9919/default: secrets "default-token-zwkjt" is forbidden: unable to create new content in namespace job-9919 because it is being terminated
I0521 16:00:26.820350 1 event.go:291] "Event occurred" object="webhook-4840/sample-webhook-deployment" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set sample-webhook-deployment-cbccbf6bb to 1"
I0521 16:00:26.826850 1 event.go:291] "Event occurred" object="webhook-4840/sample-webhook-deployment-cbccbf6bb" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: sample-webhook-deployment-cbccbf6bb-fns25"
E0521 16:00:28.020600 1 tokens_controller.go:261] error synchronizing serviceaccount crd-publish-openapi-2465/default: secrets "default-token-8b4mp" is forbidden: unable to create new content in namespace crd-publish-openapi-2465 because it is being terminated
E0521 16:00:28.087190 1 tokens_controller.go:261] error synchronizing serviceaccount replication-controller-8595/default: secrets "default-token-w7ghh" is forbidden: unable to create new content in namespace replication-controller-8595 because it is being terminated
E0521 16:00:29.448461 1 tokens_controller.go:261] error synchronizing serviceaccount configmap-9937/default: secrets "default-token-724f4" is forbidden: unable to create new content in namespace configmap-9937 because it is being terminated
E0521 16:00:30.792635 1 tokens_controller.go:261] error synchronizing serviceaccount deployment-9005/default: secrets "default-token-96tq4" is forbidden: unable to create new content in namespace deployment-9005 because it is being terminated
I0521 16:00:30.898980 1 namespace_controller.go:185] Namespace has been deleted svcaccounts-2497
I0521 16:00:31.226529 1 event.go:291] "Event occurred" object="deployment-6688/webserver-deployment" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set webserver-deployment-dd94f59b7 to 10"
I0521 16:00:31.232058 1 event.go:291] "Event occurred" object="deployment-6688/webserver-deployment-dd94f59b7" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: webserver-deployment-dd94f59b7-k62lf"
I0521 16:00:31.236554 1 event.go:291] "Event occurred" object="deployment-6688/webserver-deployment-dd94f59b7" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: webserver-deployment-dd94f59b7-7jz2c"
I0521 16:00:31.236763 1 event.go:291] "Event occurred" object="deployment-6688/webserver-deployment-dd94f59b7" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: webserver-deployment-dd94f59b7-ftct8"
I0521 16:00:31.240329 1 event.go:291] "Event occurred" object="deployment-6688/webserver-deployment-dd94f59b7" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: webserver-deployment-dd94f59b7-wwpgm"
I0521 16:00:31.241007 1 event.go:291] "Event occurred" object="deployment-6688/webserver-deployment-dd94f59b7" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: webserver-deployment-dd94f59b7-zrxf2"
I0521 16:00:31.241669 1 event.go:291] "Event occurred" object="deployment-6688/webserver-deployment-dd94f59b7" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: webserver-deployment-dd94f59b7-7bbwj"
I0521 16:00:31.242116 1 event.go:291] "Event occurred" object="deployment-6688/webserver-deployment-dd94f59b7" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: webserver-deployment-dd94f59b7-tbmnm"
I0521 16:00:31.256601 1 event.go:291] "Event occurred" object="deployment-6688/webserver-deployment-dd94f59b7" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: webserver-deployment-dd94f59b7-ddq48"
I0521 16:00:31.256665 1 event.go:291] "Event occurred" object="deployment-6688/webserver-deployment-dd94f59b7" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: webserver-deployment-dd94f59b7-h6d6c"
I0521 16:00:31.256699 1 event.go:291] "Event occurred" object="deployment-6688/webserver-deployment-dd94f59b7" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: webserver-deployment-dd94f59b7-lpvwc"
E0521 16:00:31.527381 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0521 16:00:31.692191 1 namespace_controller.go:185] Namespace has been deleted job-9919
I0521 16:00:33.115242 1 namespace_controller.go:185] Namespace has been deleted crd-publish-openapi-2465
I0521 16:00:33.166982 1 namespace_controller.go:185] Namespace has been deleted replication-controller-8595
I0521 16:00:34.026164 1 namespace_controller.go:185] Namespace has been deleted services-5478
I0521 16:00:34.564057 1 namespace_controller.go:185] Namespace has been deleted configmap-9937
E0521 16:00:34.717782 1 tokens_controller.go:261] error synchronizing serviceaccount pods-269/default: secrets "default-token-k4j7q" is forbidden: unable to create new content in namespace pods-269 because it is being terminated
E0521 16:00:34.755154 1 tokens_controller.go:261] error synchronizing serviceaccount watch-5628/default: secrets "default-token-bq476" is forbidden: unable to create new content in namespace watch-5628 because it is being terminated
E0521 16:00:35.025062 1 tokens_controller.go:261] error synchronizing serviceaccount watch-1180/default: secrets "default-token-4p8gx" is forbidden: unable to create new content in namespace watch-1180 because it is being terminated
I0521 16:00:35.590988 1 namespace_controller.go:185] Namespace has been deleted var-expansion-1920
I0521 16:00:35.862752 1 namespace_controller.go:185] Namespace has been deleted deployment-9005
I0521 16:00:37.313502 1 namespace_controller.go:185] Namespace has been deleted secrets-7054
E0521 16:00:38.080033 1 tokens_controller.go:261] error synchronizing serviceaccount webhook-4840-markers/default: secrets "default-token-fxd5m" is forbidden: unable to create new content in namespace webhook-4840-markers because it is being terminated
E0521 16:00:38.099036 1 tokens_controller.go:261] error synchronizing serviceaccount webhook-4840/default: secrets "default-token-n2zjx" is forbidden: unable to create new content in namespace webhook-4840 because it is being terminated
I0521 16:00:39.261172 1 event.go:291] "Event occurred" object="deployment-6688/webserver-deployment" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set webserver-deployment-795d758f88 to 3"
I0521 16:00:39.264292 1 event.go:291] "Event occurred" object="deployment-6688/webserver-deployment-795d758f88" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: webserver-deployment-795d758f88-4s8hx"
I0521 16:00:39.267764 1 event.go:291] "Event occurred" object="deployment-6688/webserver-deployment-795d758f88" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: webserver-deployment-795d758f88-b9xwd"
I0521 16:00:39.268186 1 event.go:291] "Event occurred" object="deployment-6688/webserver-deployment-795d758f88" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: webserver-deployment-795d758f88-cq5tx"
I0521 16:00:39.271012 1 event.go:291] "Event occurred" object="deployment-6688/webserver-deployment" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set webserver-deployment-dd94f59b7 to 8"
I0521 16:00:39.278381 1 event.go:291] "Event occurred" object="deployment-6688/webserver-deployment-dd94f59b7" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: webserver-deployment-dd94f59b7-ftct8"
I0521 16:00:39.278413 1 event.go:291] "Event occurred" object="deployment-6688/webserver-deployment-dd94f59b7" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: webserver-deployment-dd94f59b7-7bbwj"
I0521 16:00:39.288679 1 event.go:291] "Event occurred" object="deployment-6688/webserver-deployment" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set webserver-deployment-795d758f88 to 5"
I0521 16:00:39.293035 1 event.go:291] "Event occurred" object="deployment-6688/webserver-deployment-795d758f88" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: webserver-deployment-795d758f88-th2gq"
I0521 16:00:39.296369 1 event.go:291] "Event occurred" object="deployment-6688/webserver-deployment-795d758f88" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: webserver-deployment-795d758f88-gjk7l"
I0521 16:00:39.830132 1 namespace_controller.go:185] Namespace has been deleted watch-5628
I0521 16:00:39.937093 1 namespace_controller.go:185] Namespace has been deleted events-8797
I0521 16:00:40.049761 1 namespace_controller.go:185] Namespace has been deleted watch-1180
W0521 16:00:40.200025 1 endpointslice_controller.go:284] Error syncing endpoint slices for service "services-7359/affinity-clusterip-timeout", retrying. Error: EndpointSlice informer cache is out of date
I0521 16:00:40.807007 1 namespace_controller.go:185] Namespace has been deleted pods-5691
E0521 16:00:40.933776 1 resource_quota_controller.go:252] Operation cannot be fulfilled on resourcequotas "test-quota": the object has been modified; please apply your changes to the latest version and try again
I0521 16:00:41.152077 1 namespace_controller.go:185] Namespace has been deleted watch-2995
I0521 16:00:41.295514 1 event.go:291] "Event occurred" object="deployment-6688/webserver-deployment" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set webserver-deployment-dd94f59b7 to 20"
I0521 16:00:41.299767 1 event.go:291] "Event occurred" object="deployment-6688/webserver-deployment-dd94f59b7" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: webserver-deployment-dd94f59b7-mg78h"
I0521 16:00:41.300126 1 event.go:291] "Event occurred" object="deployment-6688/webserver-deployment" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set webserver-deployment-795d758f88 to 13"
I0521 16:00:41.303618 1 event.go:291] "Event occurred" object="deployment-6688/webserver-deployment-dd94f59b7" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: webserver-deployment-dd94f59b7-px2r9"
I0521 16:00:41.304104 1 event.go:291] "Event occurred" object="deployment-6688/webserver-deployment-795d758f88" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: webserver-deployment-795d758f88-qr8hj"
I0521 16:00:41.304172 1 event.go:291] "Event occurred" object="deployment-6688/webserver-deployment-dd94f59b7" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: webserver-deployment-dd94f59b7-54nvk"
I0521 16:00:41.308941 1 event.go:291] "Event occurred" object="deployment-6688/webserver-deployment-dd94f59b7" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: webserver-deployment-dd94f59b7-wvtvp"
I0521 16:00:41.308988 1 event.go:291] "Event occurred" object="deployment-6688/webserver-deployment-795d758f88" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: webserver-deployment-795d758f88-j8bq7"
I0521 16:00:41.310065 1 event.go:291] "Event occurred" object="deployment-6688/webserver-deployment-dd94f59b7" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: webserver-deployment-dd94f59b7-bhnsx"
I0521 16:00:41.310292 1 event.go:291] "Event occurred" object="deployment-6688/webserver-deployment-dd94f59b7" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: webserver-deployment-dd94f59b7-7tk9n"
I0521 16:00:41.310469 1 event.go:291] "Event occurred" object="deployment-6688/webserver-deployment-dd94f59b7" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: webserver-deployment-dd94f59b7-bg684"
I0521 16:00:41.310498 1 event.go:291] "Event occurred" object="deployment-6688/webserver-deployment-795d758f88" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: webserver-deployment-795d758f88-7n5tt"
I0521 16:00:41.315292 1 event.go:291] "Event occurred" object="deployment-6688/webserver-deployment-795d758f88" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: webserver-deployment-795d758f88-cqmvn"
I0521 16:00:41.315510 1 event.go:291] "Event occurred" object="deployment-6688/webserver-deployment-795d758f88" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: webserver-deployment-795d758f88-jjqgn"
I0521 16:00:41.315812 1 event.go:291] "Event occurred" object="deployment-6688/webserver-deployment-dd94f59b7" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: webserver-deployment-dd94f59b7-ksff8"
I0521 16:00:41.315841 1 event.go:291] "Event occurred" object="deployment-6688/webserver-deployment-dd94f59b7" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: webserver-deployment-dd94f59b7-xwpmc"
I0521 16:00:41.317554 1 event.go:291] "Event occurred" object="deployment-6688/webserver-deployment-dd94f59b7" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: webserver-deployment-dd94f59b7-v67rv"
I0521 16:00:41.318104 1 event.go:291] "Event occurred" object="deployment-6688/webserver-deployment-dd94f59b7" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: webserver-deployment-dd94f59b7-v76kz"
I0521 16:00:41.318133 1 event.go:291] "Event occurred" object="deployment-6688/webserver-deployment-795d758f88" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: webserver-deployment-795d758f88-cbfsj"
I0521 16:00:41.318161 1 event.go:291] "Event occurred" object="deployment-6688/webserver-deployment-795d758f88" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: webserver-deployment-795d758f88-jzdxk"
I0521 16:00:41.318222 1 event.go:291] "Event occurred" object="deployment-6688/webserver-deployment-dd94f59b7" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: webserver-deployment-dd94f59b7-qkj95"
I0521 16:00:41.321958 1 event.go:291] "Event occurred" object="deployment-6688/webserver-deployment-795d758f88" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: webserver-deployment-795d758f88-mtcq7"
I0521 16:00:41.329469 1 namespace_controller.go:185] Namespace has been deleted downward-api-6640
E0521 16:00:41.350216 1 event.go:264] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"webserver-deployment-795d758f88.16812049996d8928", GenerateName:"", Namespace:"deployment-6688", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil),
ClusterName:\"\", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:\"ReplicaSet\", Namespace:\"deployment-6688\", Name:\"webserver-deployment-795d758f88\", UID:\"0f277e6a-9c6b-4481-b108-ead3e5d9b12a\", APIVersion:\"apps/v1\", ResourceVersion:\"18452\", FieldPath:\"\"}, Reason:\"SuccessfulCreate\", Message:\"Created pod: webserver-deployment-795d758f88-j8bq7\", Source:v1.EventSource{Component:\"replicaset-controller\", Host:\"\"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc022142a52692f28, ext:2853390292218, loc:(*time.Location)(0x6a53ca0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc022142a52692f28, ext:2853390292218, loc:(*time.Location)(0x6a53ca0)}}, Count:1, Type:\"Normal\", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:\"\", Related:(*v1.ObjectReference)(nil), ReportingController:\"\", ReportingInstance:\"\"}': 'events \"webserver-deployment-795d758f88.16812049996d8928\" is forbidden: unable to create new content in namespace deployment-6688 because it is being terminated' (will not retry!)\nE0521 16:00:41.647198 1 event.go:264] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"webserver-deployment-795d758f88.16812049998190d3\", GenerateName:\"\", Namespace:\"deployment-6688\", SelfLink:\"\", UID:\"\", ResourceVersion:\"\", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:\"\", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:\"ReplicaSet\", Namespace:\"deployment-6688\", Name:\"webserver-deployment-795d758f88\", UID:\"0f277e6a-9c6b-4481-b108-ead3e5d9b12a\", 
APIVersion:\"apps/v1\", ResourceVersion:\"18452\", FieldPath:\"\"}, Reason:\"SuccessfulCreate\", Message:\"Created pod: webserver-deployment-795d758f88-7n5tt\", Source:v1.EventSource{Component:\"replicaset-controller\", Host:\"\"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc022142a527d36d3, ext:2853391604928, loc:(*time.Location)(0x6a53ca0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc022142a527d36d3, ext:2853391604928, loc:(*time.Location)(0x6a53ca0)}}, Count:1, Type:\"Normal\", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:\"\", Related:(*v1.ObjectReference)(nil), ReportingController:\"\", ReportingInstance:\"\"}': 'events \"webserver-deployment-795d758f88.16812049998190d3\" is forbidden: unable to create new content in namespace deployment-6688 because it is being terminated' (will not retry!)\nE0521 16:00:41.697327 1 event.go:264] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"webserver-deployment-795d758f88.1681204999ce7072\", GenerateName:\"\", Namespace:\"deployment-6688\", SelfLink:\"\", UID:\"\", ResourceVersion:\"\", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:\"\", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:\"ReplicaSet\", Namespace:\"deployment-6688\", Name:\"webserver-deployment-795d758f88\", UID:\"0f277e6a-9c6b-4481-b108-ead3e5d9b12a\", APIVersion:\"apps/v1\", ResourceVersion:\"18452\", FieldPath:\"\"}, Reason:\"SuccessfulCreate\", Message:\"Created pod: webserver-deployment-795d758f88-cqmvn\", Source:v1.EventSource{Component:\"replicaset-controller\", Host:\"\"}, 
FirstTimestamp:v1.Time{Time:time.Time{wall:0xc022142a52ca1672, ext:2853396642855, loc:(*time.Location)(0x6a53ca0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc022142a52ca1672, ext:2853396642855, loc:(*time.Location)(0x6a53ca0)}}, Count:1, Type:\"Normal\", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:\"\", Related:(*v1.ObjectReference)(nil), ReportingController:\"\", ReportingInstance:\"\"}': 'events \"webserver-deployment-795d758f88.1681204999ce7072\" is forbidden: unable to create new content in namespace deployment-6688 because it is being terminated' (will not retry!)\nE0521 16:00:41.747266 1 event.go:264] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"webserver-deployment-795d758f88.16812049b095fcd5\", GenerateName:\"\", Namespace:\"deployment-6688\", SelfLink:\"\", UID:\"\", ResourceVersion:\"\", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:\"\", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:\"ReplicaSet\", Namespace:\"deployment-6688\", Name:\"webserver-deployment-795d758f88\", UID:\"0f277e6a-9c6b-4481-b108-ead3e5d9b12a\", APIVersion:\"apps/v1\", ResourceVersion:\"18452\", FieldPath:\"\"}, Reason:\"SuccessfulCreate\", Message:\"(combined from similar events): Created pod: webserver-deployment-795d758f88-jjqgn\", Source:v1.EventSource{Component:\"replicaset-controller\", Host:\"\"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc022142a6991a2d5, ext:2853778819210, loc:(*time.Location)(0x6a53ca0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc022142a6991a2d5, ext:2853778819210, 
loc:(*time.Location)(0x6a53ca0)}}, Count:1, Type:\"Normal\", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:\"\", Related:(*v1.ObjectReference)(nil), ReportingController:\"\", ReportingInstance:\"\"}': 'events \"webserver-deployment-795d758f88.16812049b095fcd5\" is forbidden: unable to create new content in namespace deployment-6688 because it is being terminated' (will not retry!)\nE0521 16:00:42.047478 1 event.go:264] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"webserver-deployment-795d758f88.16812049b095fcd5\", GenerateName:\"\", Namespace:\"deployment-6688\", SelfLink:\"\", UID:\"\", ResourceVersion:\"\", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:\"\", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:\"ReplicaSet\", Namespace:\"deployment-6688\", Name:\"webserver-deployment-795d758f88\", UID:\"0f277e6a-9c6b-4481-b108-ead3e5d9b12a\", APIVersion:\"apps/v1\", ResourceVersion:\"18452\", FieldPath:\"\"}, Reason:\"SuccessfulCreate\", Message:\"(combined from similar events): Created pod: webserver-deployment-795d758f88-cbfsj\", Source:v1.EventSource{Component:\"replicaset-controller\", Host:\"\"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc022142a6991a2d5, ext:2853778819210, loc:(*time.Location)(0x6a53ca0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc022142a78a5b4ca, ext:2854031792801, loc:(*time.Location)(0x6a53ca0)}}, Count:2, Type:\"Normal\", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:\"\", Related:(*v1.ObjectReference)(nil), 
ReportingController:\"\", ReportingInstance:\"\"}': 'events \"webserver-deployment-795d758f88.16812049b095fcd5\" is forbidden: unable to create new content in namespace deployment-6688 because it is being terminated' (will not retry!)\nE0521 16:00:42.147378 1 event.go:264] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"webserver-deployment-795d758f88.16812049b095fcd5\", GenerateName:\"\", Namespace:\"deployment-6688\", SelfLink:\"\", UID:\"\", ResourceVersion:\"\", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:\"\", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:\"ReplicaSet\", Namespace:\"deployment-6688\", Name:\"webserver-deployment-795d758f88\", UID:\"0f277e6a-9c6b-4481-b108-ead3e5d9b12a\", APIVersion:\"apps/v1\", ResourceVersion:\"18452\", FieldPath:\"\"}, Reason:\"SuccessfulCreate\", Message:\"(combined from similar events): Created pod: webserver-deployment-795d758f88-jzdxk\", Source:v1.EventSource{Component:\"replicaset-controller\", Host:\"\"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc022142a6991a2d5, ext:2853778819210, loc:(*time.Location)(0x6a53ca0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc022142a82d69d80, ext:2854129028494, loc:(*time.Location)(0x6a53ca0)}}, Count:3, Type:\"Normal\", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:\"\", Related:(*v1.ObjectReference)(nil), ReportingController:\"\", ReportingInstance:\"\"}': 'events \"webserver-deployment-795d758f88.16812049b095fcd5\" is forbidden: unable to create new content in namespace deployment-6688 because it is being terminated' (will 
not retry!)\nE0521 16:00:42.297895 1 event.go:264] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"webserver-deployment-795d758f88.16812049b095fcd5\", GenerateName:\"\", Namespace:\"deployment-6688\", SelfLink:\"\", UID:\"\", ResourceVersion:\"\", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:\"\", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:\"ReplicaSet\", Namespace:\"deployment-6688\", Name:\"webserver-deployment-795d758f88\", UID:\"0f277e6a-9c6b-4481-b108-ead3e5d9b12a\", APIVersion:\"apps/v1\", ResourceVersion:\"18452\", FieldPath:\"\"}, Reason:\"SuccessfulCreate\", Message:\"(combined from similar events): Created pod: webserver-deployment-795d758f88-mtcq7\", Source:v1.EventSource{Component:\"replicaset-controller\", Host:\"\"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc022142a6991a2d5, ext:2853778819210, loc:(*time.Location)(0x6a53ca0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc022142a8c0493ad, ext:2854283035552, loc:(*time.Location)(0x6a53ca0)}}, Count:4, Type:\"Normal\", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:\"\", Related:(*v1.ObjectReference)(nil), ReportingController:\"\", ReportingInstance:\"\"}': 'events \"webserver-deployment-795d758f88.16812049b095fcd5\" is forbidden: unable to create new content in namespace deployment-6688 because it is being terminated' (will not retry!)\nI0521 16:00:43.198826 1 namespace_controller.go:185] Namespace has been deleted webhook-4840-markers\nI0521 16:00:43.218478 1 namespace_controller.go:185] Namespace has been deleted webhook-4840\nE0521 
16:00:44.254722 1 tokens_controller.go:261] error synchronizing serviceaccount downward-api-7203/default: secrets \"default-token-5nrkl\" is forbidden: unable to create new content in namespace downward-api-7203 because it is being terminated\nE0521 16:00:46.679089 1 tokens_controller.go:261] error synchronizing serviceaccount downward-api-4631/default: secrets \"default-token-bdwjt\" is forbidden: unable to create new content in namespace downward-api-4631 because it is being terminated\nE0521 16:00:46.857997 1 garbagecollector.go:309] error syncing item &garbagecollector.node{identity:garbagecollector.objectReference{OwnerReference:v1.OwnerReference{APIVersion:\"v1\", Kind:\"Pod\", Name:\"webserver-deployment-dd94f59b7-wwpgm\", UID:\"4185fd32-81fe-4ac1-964e-2b9781afd97d\", Controller:(*bool)(nil), BlockOwnerDeletion:(*bool)(nil)}, Namespace:\"deployment-6688\"}, dependentsLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:1, readerWait:0}, dependents:map[*garbagecollector.node]struct {}{}, deletingDependents:false, deletingDependentsLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:0, readerWait:0}, beingDeleted:true, beingDeletedLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:0, readerWait:0}, virtual:false, virtualLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:0, readerWait:0}, owners:[]v1.OwnerReference{v1.OwnerReference{APIVersion:\"apps/v1\", Kind:\"ReplicaSet\", Name:\"webserver-deployment-dd94f59b7\", UID:\"61652710-4641-4c17-92b8-e41035f0d1d8\", Controller:(*bool)(0xc003407a30), BlockOwnerDeletion:(*bool)(0xc003407a31)}}}: pods \"webserver-deployment-dd94f59b7-wwpgm\" not found\nI0521 16:00:48.029862 1 resource_quota_controller.go:306] Resource quota has been deleted resourcequota-1985/test-quota\nE0521 16:00:48.384894 1 tokens_controller.go:261] error synchronizing serviceaccount 
projected-1096/default: secrets \"default-token-v2t2z\" is forbidden: unable to create new content in namespace projected-1096 because it is being terminated\nI0521 16:00:49.326335 1 namespace_controller.go:185] Namespace has been deleted downward-api-7203\nI0521 16:00:49.724645 1 event.go:291] \"Event occurred\" object=\"webhook-9408/sample-webhook-deployment\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled up replica set sample-webhook-deployment-cbccbf6bb to 1\"\nI0521 16:00:49.730589 1 event.go:291] \"Event occurred\" object=\"webhook-9408/sample-webhook-deployment-cbccbf6bb\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: sample-webhook-deployment-cbccbf6bb-7sfwr\"\nI0521 16:00:51.839641 1 namespace_controller.go:185] Namespace has been deleted deployment-6688\nI0521 16:00:53.205115 1 namespace_controller.go:185] Namespace has been deleted resourcequota-1985\nI0521 16:00:53.477602 1 namespace_controller.go:185] Namespace has been deleted projected-1096\nE0521 16:00:53.497973 1 tokens_controller.go:261] error synchronizing serviceaccount svcaccounts-7902/default: secrets \"default-token-wjnlc\" is forbidden: unable to create new content in namespace svcaccounts-7902 because it is being terminated\nI0521 16:00:53.711473 1 event.go:291] \"Event occurred\" object=\"statefulset-5760/ss\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod ss-0 in StatefulSet ss successful\"\nE0521 16:00:54.204287 1 tokens_controller.go:261] error synchronizing serviceaccount ingress-5616/default: secrets \"default-token-xp2p8\" is forbidden: unable to create new content in namespace ingress-5616 because it is being terminated\nE0521 16:00:54.721536 1 tokens_controller.go:261] error synchronizing serviceaccount configmap-7082/default: secrets \"default-token-zpnvl\" is forbidden: unable to create new content 
in namespace configmap-7082 because it is being terminated\nI0521 16:00:59.162282 1 namespace_controller.go:185] Namespace has been deleted services-7359\nI0521 16:00:59.238768 1 namespace_controller.go:185] Namespace has been deleted ingress-5616\nI0521 16:00:59.700576 1 event.go:291] \"Event occurred\" object=\"statefulset-5070/ss\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod ss-0 in StatefulSet ss successful\"\nI0521 16:00:59.872070 1 namespace_controller.go:185] Namespace has been deleted configmap-7082\nE0521 16:01:00.802830 1 tokens_controller.go:261] error synchronizing serviceaccount projected-6969/default: secrets \"default-token-ldzrn\" is forbidden: unable to create new content in namespace projected-6969 because it is being terminated\nI0521 16:01:01.115150 1 namespace_controller.go:185] Namespace has been deleted pods-269\nI0521 16:01:01.216976 1 namespace_controller.go:185] Namespace has been deleted var-expansion-629\nI0521 16:01:03.564284 1 namespace_controller.go:185] Namespace has been deleted projected-8641\nI0521 16:01:03.672062 1 namespace_controller.go:185] Namespace has been deleted svcaccounts-7902\nI0521 16:01:03.836215 1 namespace_controller.go:185] Namespace has been deleted subpath-4649\nE0521 16:01:04.677260 1 tokens_controller.go:261] error synchronizing serviceaccount projected-2615/default: secrets \"default-token-85k4k\" is forbidden: unable to create new content in namespace projected-2615 because it is being terminated\nI0521 16:01:05.958416 1 namespace_controller.go:185] Namespace has been deleted projected-6969\nI0521 16:01:06.033487 1 namespace_controller.go:185] Namespace has been deleted configmap-6475\nI0521 16:01:06.257756 1 namespace_controller.go:185] Namespace has been deleted subpath-7156\nE0521 16:01:07.228009 1 tokens_controller.go:261] error synchronizing serviceaccount downward-api-2678/default: secrets \"default-token-lmhxr\" is forbidden: unable to 
create new content in namespace downward-api-2678 because it is being terminated\nI0521 16:01:08.331468 1 event.go:291] \"Event occurred\" object=\"svc-latency-2745/svc-latency-rc\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: svc-latency-rc-968q6\"\nI0521 16:01:09.772103 1 namespace_controller.go:185] Namespace has been deleted projected-2615\nI0521 16:01:10.834808 1 namespace_controller.go:185] Namespace has been deleted kubectl-6437\nE0521 16:01:11.896600 1 tokens_controller.go:261] error synchronizing serviceaccount webhook-9408/default: secrets \"default-token-5jqvt\" is forbidden: unable to create new content in namespace webhook-9408 because it is being terminated\nE0521 16:01:11.959064 1 tokens_controller.go:261] error synchronizing serviceaccount webhook-9408-markers/default: secrets \"default-token-b5tq7\" is forbidden: unable to create new content in namespace webhook-9408-markers because it is being terminated\nI0521 16:01:12.311921 1 namespace_controller.go:185] Namespace has been deleted downward-api-2678\nI0521 16:01:12.356275 1 event.go:291] \"Event occurred\" object=\"services-9371/nodeport-test\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: nodeport-test-5k2fg\"\nI0521 16:01:12.359097 1 event.go:291] \"Event occurred\" object=\"services-9371/nodeport-test\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: nodeport-test-tbs2q\"\nI0521 16:01:13.077374 1 namespace_controller.go:185] Namespace has been deleted downward-api-4631\nE0521 16:01:13.428898 1 tokens_controller.go:261] error synchronizing serviceaccount emptydir-2402/default: secrets \"default-token-kb6zm\" is forbidden: unable to create new content in namespace emptydir-2402 because it is being terminated\nI0521 16:01:13.995118 1 event.go:291] \"Event occurred\" 
object=\"statefulset-5760/ss\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod ss-1 in StatefulSet ss successful\"\nI0521 16:01:14.000456 1 event.go:291] \"Event occurred\" object=\"statefulset-5760/ss\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod ss-2 in StatefulSet ss successful\"\nE0521 16:01:14.338737 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI0521 16:01:16.612203 1 event.go:291] \"Event occurred\" object=\"services-8629/externalname-service\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: externalname-service-jq5s6\"\nI0521 16:01:16.615571 1 event.go:291] \"Event occurred\" object=\"services-8629/externalname-service\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: externalname-service-7w775\"\nI0521 16:01:17.017602 1 namespace_controller.go:185] Namespace has been deleted webhook-9408-markers\nI0521 16:01:17.033953 1 namespace_controller.go:185] Namespace has been deleted webhook-9408\nE0521 16:01:17.327682 1 tokens_controller.go:261] error synchronizing serviceaccount containers-902/default: secrets \"default-token-b7kr5\" is forbidden: unable to create new content in namespace containers-902 because it is being terminated\nI0521 16:01:17.641152 1 namespace_controller.go:185] Namespace has been deleted container-probe-477\nI0521 16:01:18.493049 1 namespace_controller.go:185] Namespace has been deleted emptydir-2402\nI0521 16:01:18.663254 1 namespace_controller.go:185] Namespace has been deleted e2e-kubelet-etc-hosts-3924\nE0521 16:01:19.358398 1 tokens_controller.go:261] error synchronizing serviceaccount emptydir-2099/default: 
secrets \"default-token-28rv4\" is forbidden: unable to create new content in namespace emptydir-2099 because it is being terminated\nE0521 16:01:19.530961 1 tokens_controller.go:261] error synchronizing serviceaccount ingressclass-6889/default: secrets \"default-token-r5cpt\" is forbidden: unable to create new content in namespace ingressclass-6889 because it is being terminated\nI0521 16:01:19.601647 1 event.go:291] \"Event occurred\" object=\"webhook-9859/sample-webhook-deployment\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled up replica set sample-webhook-deployment-cbccbf6bb to 1\"\nI0521 16:01:19.605884 1 event.go:291] \"Event occurred\" object=\"webhook-9859/sample-webhook-deployment-cbccbf6bb\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: sample-webhook-deployment-cbccbf6bb-fbdz4\"\nI0521 16:01:20.411309 1 namespace_controller.go:185] Namespace has been deleted kubectl-174\nE0521 16:01:21.616801 1 tokens_controller.go:261] error synchronizing serviceaccount downward-api-9047/default: secrets \"default-token-b69v6\" is forbidden: unable to create new content in namespace downward-api-9047 because it is being terminated\nI0521 16:01:21.894308 1 resource_quota_controller.go:306] Resource quota has been deleted resourcequota-5199/quota-not-terminating\nI0521 16:01:21.897402 1 resource_quota_controller.go:306] Resource quota has been deleted resourcequota-5199/quota-terminating\nE0521 16:01:22.023585 1 tokens_controller.go:261] error synchronizing serviceaccount resourcequota-5199/default: secrets \"default-token-d9tcn\" is forbidden: unable to create new content in namespace resourcequota-5199 because it is being terminated\nI0521 16:01:22.468982 1 namespace_controller.go:185] Namespace has been deleted containers-902\nI0521 16:01:22.520834 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for 
e2e-test-crd-publish-openapi-8711-crds.crd-publish-openapi-test-multi-ver.example.com\nI0521 16:01:22.520976 1 shared_informer.go:240] Waiting for caches to sync for resource quota\nI0521 16:01:22.621210 1 shared_informer.go:247] Caches are synced for resource quota \nE0521 16:01:24.017769 1 tokens_controller.go:261] error synchronizing serviceaccount configmap-7457/default: secrets \"default-token-x8fnj\" is forbidden: unable to create new content in namespace configmap-7457 because it is being terminated\nI0521 16:01:24.282421 1 shared_informer.go:240] Waiting for caches to sync for garbage collector\nI0521 16:01:24.282490 1 shared_informer.go:247] Caches are synced for garbage collector \nI0521 16:01:24.499405 1 namespace_controller.go:185] Namespace has been deleted emptydir-2099\nI0521 16:01:24.631036 1 namespace_controller.go:185] Namespace has been deleted ingressclass-6889\nE0521 16:01:26.407590 1 tokens_controller.go:261] error synchronizing serviceaccount svc-latency-2745/default: serviceaccounts \"default\" not found\nE0521 16:01:26.421571 1 namespace_controller.go:162] deletion of namespace svc-latency-2745 failed: unexpected items still remain in namespace: svc-latency-2745 for gvr: /v1, Resource=pods\nE0521 16:01:26.437714 1 tokens_controller.go:261] error synchronizing serviceaccount services-9371/default: secrets \"default-token-c9gsp\" is forbidden: unable to create new content in namespace services-9371 because it is being terminated\nE0521 16:01:26.583235 1 namespace_controller.go:162] deletion of namespace svc-latency-2745 failed: unexpected items still remain in namespace: svc-latency-2745 for gvr: /v1, Resource=pods\nW0521 16:01:26.614005 1 endpointslice_controller.go:284] Error syncing endpoint slices for service \"services-9371/nodeport-test\", retrying. 
Error: failed to update nodeport-test-dsnwh EndpointSlice for Service services-9371/nodeport-test: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "nodeport-test-dsnwh": StorageError: invalid object, Code: 4, Key: /registry/endpointslices/services-9371/nodeport-test-dsnwh, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: 7b42a291-61f6-4456-a824-a57b71a427e2, UID in object meta: 
I0521 16:01:26.614148 1 event.go:291] "Event occurred" object="services-9371/nodeport-test" kind="Service" apiVersion="v1" type="Warning" reason="FailedToUpdateEndpointSlices" message="Error updating Endpoint Slices for Service services-9371/nodeport-test: failed to update nodeport-test-dsnwh EndpointSlice for Service services-9371/nodeport-test: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"nodeport-test-dsnwh\": StorageError: invalid object, Code: 4, Key: /registry/endpointslices/services-9371/nodeport-test-dsnwh, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: 7b42a291-61f6-4456-a824-a57b71a427e2, UID in object meta: "
E0521 16:01:26.714699 1 event.go:264] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"nodeport-test.1681205425d275b1", GenerateName:"", Namespace:"services-9371", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Service", Namespace:"services-9371", Name:"nodeport-test", UID:"c00ec7c1-cb03-40b0-8fdf-76c82c99858c", APIVersion:"v1", ResourceVersion:"19961", FieldPath:""}, Reason:"FailedToUpdateEndpointSlices", Message:"Error updating Endpoint Slices for Service services-9371/nodeport-test: failed to update nodeport-test-dsnwh EndpointSlice for Service services-9371/nodeport-test: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"nodeport-test-dsnwh\": StorageError: invalid object, Code: 4, Key: /registry/endpointslices/services-9371/nodeport-test-dsnwh, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: 7b42a291-61f6-4456-a824-a57b71a427e2, UID in object meta: ", Source:v1.EventSource{Component:"endpoint-slice-controller", Host:""}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc0221435a49899b1, ext:2898695389561, loc:(*time.Location)(0x6a53ca0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc0221435a49899b1, ext:2898695389561, loc:(*time.Location)(0x6a53ca0)}}, Count:1, Type:"Warning", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "services-9371" not found' (will not retry!)
E0521 16:01:26.745284 1 namespace_controller.go:162] deletion of namespace svc-latency-2745 failed: unexpected items still remain in namespace: svc-latency-2745 for gvr: /v1, Resource=pods
I0521 16:01:26.758918 1 namespace_controller.go:185] Namespace has been deleted downward-api-9047
E0521 16:01:26.983160 1 namespace_controller.go:162] deletion of namespace svc-latency-2745 failed: unexpected items still remain in namespace: svc-latency-2745 for gvr: /v1, Resource=pods
I0521 16:01:27.041976 1 namespace_controller.go:185] Namespace has been deleted resourcequota-5199
E0521 16:01:27.185644 1 namespace_controller.go:162] deletion of namespace svc-latency-2745 failed: unexpected items still remain in namespace: svc-latency-2745 for gvr: /v1, Resource=pods
E0521 16:01:27.419016 1 namespace_controller.go:162] deletion of namespace svc-latency-2745 failed: unexpected items still remain in namespace: svc-latency-2745 for gvr: /v1, Resource=pods
I0521 16:01:27.555055 1 namespace_controller.go:185] Namespace has been deleted downward-api-5345
E0521 16:01:27.725749 1 namespace_controller.go:162] deletion of namespace svc-latency-2745 failed: unexpected items still remain in namespace: svc-latency-2745 for gvr: /v1, Resource=pods
E0521 16:01:28.241844 1 namespace_controller.go:162] deletion of namespace svc-latency-2745 failed: unexpected items still remain in namespace: svc-latency-2745 for gvr: /v1, Resource=pods
I0521 16:01:29.096771 1 namespace_controller.go:185] Namespace has been deleted configmap-7457
E0521 16:01:30.365301 1 tokens_controller.go:261] error synchronizing serviceaccount services-8629/default: secrets "default-token-6qq6s" is forbidden: unable to create new content in namespace services-8629 because it is being terminated
I0521 16:01:31.238204 1 event.go:291] "Event occurred" object="statefulset-5070/ss" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod ss-1 in StatefulSet ss successful"
I0521 16:01:31.568111 1 namespace_controller.go:185] Namespace has been deleted kubelet-test-1393
I0521 16:01:31.590271 1 namespace_controller.go:185] Namespace has been deleted services-9371
E0521 16:01:32.692661 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
E0521 16:01:32.730446 1 tokens_controller.go:261] error synchronizing serviceaccount webhook-9859/default: secrets "default-token-6599l" is forbidden: unable to create new content in namespace webhook-9859 because it is being terminated
I0521 16:01:32.860129 1 event.go:291] "Event occurred" object="statefulset-5070/ss" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod ss-2 in StatefulSet ss successful"
E0521 16:01:32.956131 1 tokens_controller.go:261] error synchronizing serviceaccount var-expansion-377/default: secrets "default-token-wqs6q" is forbidden: unable to create new content in namespace var-expansion-377 because it is being terminated
I0521 16:01:33.235777 1 namespace_controller.go:185] Namespace has been deleted configmap-2567
E0521 16:01:33.495985 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0521 16:01:34.051554 1 namespace_controller.go:185] Namespace has been deleted svc-latency-2745
E0521 16:01:35.322752 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0521 16:01:35.415034 1 namespace_controller.go:185] Namespace has been deleted services-8629
I0521 16:01:35.416181 1 namespace_controller.go:185] Namespace has been deleted emptydir-2784
I0521 16:01:37.768488 1 namespace_controller.go:185] Namespace has been deleted pods-6919
I0521 16:01:37.845394 1 namespace_controller.go:185] Namespace has been deleted webhook-9859-markers
I0521 16:01:37.865738 1 namespace_controller.go:185] Namespace has been deleted webhook-9859
I0521 16:01:38.092786 1 namespace_controller.go:185] Namespace has been deleted var-expansion-377
E0521 16:01:38.459845 1 tokens_controller.go:261] error synchronizing serviceaccount init-container-5250/default: secrets "default-token-xdd42" is forbidden: unable to create new content in namespace init-container-5250 because it is being terminated
E0521 16:01:38.824049 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0521 16:01:39.502250 1 namespace_controller.go:185] Namespace has been deleted projected-1369
I0521 16:01:40.278580 1 event.go:291] "Event occurred" object="job-3934/adopt-release" kind="Job" apiVersion="batch/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: adopt-release-przmr"
I0521 16:01:40.286190 1 event.go:291] "Event occurred" object="job-3934/adopt-release" kind="Job" apiVersion="batch/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: adopt-release-7nv5g"
I0521 16:01:43.571217 1 namespace_controller.go:185] Namespace has been deleted init-container-5250
I0521 16:01:44.378637 1 event.go:291] "Event occurred" object="webhook-2443/sample-webhook-deployment" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set sample-webhook-deployment-cbccbf6bb to 1"
I0521 16:01:44.383983 1 event.go:291] "Event occurred" object="webhook-2443/sample-webhook-deployment-cbccbf6bb" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: sample-webhook-deployment-cbccbf6bb-wk7v7"
I0521 16:01:44.809310 1 event.go:291] "Event occurred" object="webhook-3635/sample-webhook-deployment" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set sample-webhook-deployment-cbccbf6bb to 1"
I0521 16:01:44.814664 1 event.go:291] "Event occurred" object="webhook-3635/sample-webhook-deployment-cbccbf6bb" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: sample-webhook-deployment-cbccbf6bb-v2h4z"
I0521 16:01:45.318320 1 event.go:291] "Event occurred" object="job-3934/adopt-release" kind="Job" apiVersion="batch/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: adopt-release-2kgmj"
I0521 16:01:45.466154 1 event.go:291] "Event occurred" object="statefulset-5760/ss" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="delete Pod ss-2 in StatefulSet ss successful"
I0521 16:01:45.469843 1 event.go:291] "Event occurred" object="statefulset-5760/ss" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="delete Pod ss-1 in StatefulSet ss successful"
I0521 16:01:45.481040 1 event.go:291] "Event occurred" object="statefulset-5760/ss" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="delete Pod ss-0 in StatefulSet ss successful"
E0521 16:01:45.944457 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0521 16:01:46.622080 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0521 16:01:50.497175 1 namespace_controller.go:185] Namespace has been deleted kubectl-509
I0521 16:01:50.699130 1 event.go:291] "Event occurred" object="services-6358/affinity-nodeport-transition" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: affinity-nodeport-transition-mthdd"
I0521 16:01:50.718311 1 event.go:291] "Event occurred" object="services-6358/affinity-nodeport-transition" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: affinity-nodeport-transition-nzmfc"
I0521 16:01:50.718398 1 event.go:291] "Event occurred" object="services-6358/affinity-nodeport-transition" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: affinity-nodeport-transition-dsjsz"
I0521 16:01:52.199950 1 event.go:291] "Event occurred" object="services-8414/affinity-clusterip-transition" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: affinity-clusterip-transition-npkxk"
I0521 16:01:52.204361 1 event.go:291] "Event occurred" object="services-8414/affinity-clusterip-transition" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: affinity-clusterip-transition-drxnf"
I0521 16:01:52.204439 1 event.go:291] "Event occurred" object="services-8414/affinity-clusterip-transition" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: affinity-clusterip-transition-t5sg6"
E0521 16:01:52.490649 1 tokens_controller.go:261] error synchronizing serviceaccount job-3934/default: secrets "default-token-2dn4s" is forbidden: unable to create new content in namespace job-3934 because it is being terminated
I0521 16:01:53.223400 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for e2e-test-crd-publish-openapi-5753-crds.crd-publish-openapi-test-common-group.example.com
I0521 16:01:53.223487 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for e2e-test-crd-publish-openapi-9818-crds.crd-publish-openapi-test-common-group.example.com
I0521 16:01:53.223631 1 shared_informer.go:240] Waiting for caches to sync for resource quota
E0521 16:01:53.225562 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0521 16:01:53.620361 1 namespace_controller.go:185] Namespace has been deleted watch-9513
E0521 16:01:54.616686 1 tokens_controller.go:261] error synchronizing serviceaccount pods-2913/default: secrets "default-token-z7lgr" is forbidden: unable to create new content in namespace pods-2913 because it is being terminated
E0521 16:01:54.756789 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0521 16:01:55.387922 1 shared_informer.go:240] Waiting for caches to sync for garbage collector
I0521 16:01:55.388005 1 shared_informer.go:247] Caches are synced for garbage collector 
I0521 16:01:55.536831 1 stateful_set.go:419] StatefulSet has been deleted statefulset-5760/ss
E0521 16:01:57.008872 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0521 16:01:58.078959 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
I0521 16:01:59.124092 1 namespace_controller.go:185] Namespace has been deleted crd-publish-openapi-7284
E0521 16:01:59.211875 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0521 16:01:59.376927 1 namespace_controller.go:185] Namespace has been deleted kubectl-2809
I0521 16:02:00.146961 1 namespace_controller.go:185] Namespace has been deleted webhook-3635-markers
I0521 16:02:00.163244 1 namespace_controller.go:185] Namespace has been deleted webhook-3635
E0521 16:02:00.585442 1 tokens_controller.go:261] error synchronizing serviceaccount statefulset-5760/default: secrets "default-token-wffmt" is forbidden: unable to create new content in namespace statefulset-5760 because it is being terminated
I0521 16:02:00.757912 1 namespace_controller.go:185] Namespace has been deleted webhook-2443-markers
I0521 16:02:01.486011 1 namespace_controller.go:185] Namespace has been deleted configmap-2201
E0521 16:02:01.566486 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0521 16:02:02.238996 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0521 16:02:02.303774 1 namespace_controller.go:185] Namespace has been deleted emptydir-4583
I0521 16:02:02.485794 1 event.go:291] "Event occurred" object="statefulset-5070/ss" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="delete Pod ss-2 in StatefulSet ss successful"
E0521 16:02:04.484650 1 tokens_controller.go:261] error synchronizing serviceaccount projected-3983/default: secrets "default-token-d9ggr" is forbidden: unable to create new content in namespace projected-3983 because it is being terminated
I0521 16:02:05.808864 1 namespace_controller.go:185] Namespace has been deleted statefulset-5760
I0521 16:02:05.967009 1 namespace_controller.go:185] Namespace has been deleted webhook-2443
E0521 16:02:06.156918 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0521 16:02:08.189142 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0521 16:02:08.262621 1 tokens_controller.go:261] error synchronizing serviceaccount container-runtime-4336/default: secrets "default-token-4h6qf" is forbidden: unable to create new content in namespace container-runtime-4336 because it is being terminated
E0521 16:02:09.827307 1 namespace_controller.go:162] deletion of namespace projected-3983 failed: unexpected items still remain in namespace: projected-3983 for gvr: /v1, Resource=pods
I0521 16:02:09.844634 1 namespace_controller.go:185] Namespace has been deleted containers-3768
I0521 16:02:09.948280 1 namespace_controller.go:185] Namespace has been deleted services-9463
E0521 16:02:10.008372 1 namespace_controller.go:162] deletion of namespace projected-3983 failed: unexpected items still remain in namespace: projected-3983 for gvr: /v1, Resource=pods
E0521 16:02:10.193001 1 namespace_controller.go:162] deletion of namespace projected-3983 failed: unexpected items still remain in namespace: projected-3983 for gvr: /v1, Resource=pods
I0521 16:02:10.418772 1 event.go:291] "Event occurred" object="statefulset-5070/ss" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="delete Pod ss-1 in StatefulSet ss successful"
I0521 16:02:11.231536 1 namespace_controller.go:185] Namespace has been deleted crd-publish-openapi-7951
I0521 16:02:11.647568 1 event.go:291] "Event occurred" object="gc-1534/simpletest-rc-to-be-deleted" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: simpletest-rc-to-be-deleted-9sdzn"
I0521 16:02:11.651214 1 event.go:291] "Event occurred" object="gc-1534/simpletest-rc-to-be-deleted" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: simpletest-rc-to-be-deleted-v4nbh"
I0521 16:02:11.652044 1 event.go:291] "Event occurred" object="gc-1534/simpletest-rc-to-be-deleted" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: simpletest-rc-to-be-deleted-n7qnp"
I0521 16:02:11.655551 1 event.go:291] "Event occurred" object="gc-1534/simpletest-rc-to-be-deleted" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: simpletest-rc-to-be-deleted-jrzvt"
I0521 16:02:11.656420 1 event.go:291] "Event occurred" object="gc-1534/simpletest-rc-to-be-deleted" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: simpletest-rc-to-be-deleted-sf58l"
I0521 16:02:11.657073 1 event.go:291] "Event occurred" object="gc-1534/simpletest-rc-to-be-deleted" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: simpletest-rc-to-be-deleted-ghk5z"
I0521 16:02:11.657132 1 event.go:291] "Event occurred" object="gc-1534/simpletest-rc-to-be-deleted" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: simpletest-rc-to-be-deleted-ctrst"
I0521 16:02:11.660822 1 event.go:291] "Event occurred" object="gc-1534/simpletest-rc-to-be-deleted" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: simpletest-rc-to-be-deleted-crhwf"
I0521 16:02:11.660946 1 event.go:291] "Event occurred" object="gc-1534/simpletest-rc-to-be-deleted" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: simpletest-rc-to-be-deleted-zvzft"
I0521 16:02:11.661439 1 event.go:291] "Event occurred" object="gc-1534/simpletest-rc-to-be-deleted" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: simpletest-rc-to-be-deleted-ms8fz"
E0521 16:02:11.817159 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0521 16:02:13.398767 1 namespace_controller.go:185] Namespace has been deleted container-runtime-4336
E0521 16:02:13.502712 1 tokens_controller.go:261] error synchronizing serviceaccount downward-api-9651/default: secrets "default-token-qk2r5" is forbidden: unable to create new content in namespace downward-api-9651 because it is being terminated
I0521 16:02:14.799261 1 namespace_controller.go:185] Namespace has been deleted container-probe-5792
I0521 16:02:15.389004 1 namespace_controller.go:185] Namespace has been deleted projected-3983
I0521 16:02:16.600105 1 namespace_controller.go:185] Namespace has been deleted pods-598
E0521 16:02:16.721319 1 tokens_controller.go:261] error synchronizing serviceaccount container-lifecycle-hook-2774/default: secrets "default-token-z6wpt" is forbidden: unable to create new content in namespace container-lifecycle-hook-2774 because it is being terminated
E0521 16:02:16.790806 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0521 16:02:17.410244 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0521 16:02:17.823904 1 namespace_controller.go:185] Namespace has been deleted emptydir-6209
I0521 16:02:18.678300 1 namespace_controller.go:185] Namespace has been deleted downward-api-9651
I0521 16:02:20.213558 1 event.go:291] "Event occurred" object="statefulset-5070/ss" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="delete Pod ss-0 in StatefulSet ss successful"
I0521 16:02:20.806467 1 namespace_controller.go:185] Namespace has been deleted emptydir-7543
I0521 16:02:21.250216 1 event.go:291] "Event occurred" object="webhook-7761/sample-webhook-deployment" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set sample-webhook-deployment-cbccbf6bb to 1"
I0521 16:02:21.256297 1 event.go:291] "Event occurred" object="webhook-7761/sample-webhook-deployment-cbccbf6bb" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: sample-webhook-deployment-cbccbf6bb-xr86c"
E0521 16:02:21.397711 1 tokens_controller.go:261] error synchronizing serviceaccount kubectl-6341/default: secrets "default-token-cqlhf" is forbidden: unable to create new content in namespace kubectl-6341 because it is being terminated
E0521 16:02:22.882592 1 tokens_controller.go:261] error synchronizing serviceaccount services-6718/default: secrets "default-token-twtm2" is forbidden: unable to create new content in namespace services-6718 because it is being terminated
E0521 16:02:23.223921 1 shared_informer.go:243] unable to sync caches for resource quota
E0521 16:02:23.223969 1 resource_quota_controller.go:447] timed out waiting for quota monitor sync
I0521 16:02:25.512665 1 event.go:291] "Event occurred" object="gc-8415/simpletest.rc" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: simpletest.rc-x8xv9"
I0521 16:02:25.516397 1 event.go:291] "Event occurred" object="gc-8415/simpletest.rc" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: simpletest.rc-j744f"
I0521 16:02:25.520137 1 event.go:291] "Event occurred" object="gc-8415/simpletest.rc" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: simpletest.rc-jmc2k"
I0521 16:02:25.522855 1 event.go:291] "Event occurred" object="gc-8415/simpletest.rc" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: simpletest.rc-8nfll"
I0521 16:02:25.523813 1 event.go:291] "Event occurred" object="gc-8415/simpletest.rc" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: simpletest.rc-ckncp"
I0521 16:02:25.523858 1 event.go:291] "Event occurred" object="gc-8415/simpletest.rc" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: simpletest.rc-bsjzk"
I0521 16:02:25.524264 1 event.go:291] "Event occurred" object="gc-8415/simpletest.rc" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: simpletest.rc-5tgv7"
I0521 16:02:25.535892 1 event.go:291] "Event occurred" object="gc-8415/simpletest.rc" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: simpletest.rc-n5z89"
I0521 16:02:25.535928 1 event.go:291] "Event occurred" object="gc-8415/simpletest.rc" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: simpletest.rc-j7mmp"
I0521 16:02:25.535948 1 event.go:291] "Event occurred" object="gc-8415/simpletest.rc" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: simpletest.rc-5lvpf"
I0521 16:02:25.953487 1 shared_informer.go:240] Waiting for caches to sync for garbage collector
I0521 16:02:25.953581 1 shared_informer.go:247] Caches are synced for garbage collector 
I0521 16:02:26.244292 1 namespace_controller.go:185] Namespace has been deleted container-probe-5050
I0521 16:02:26.429316 1 namespace_controller.go:185] Namespace has been deleted kubectl-6341
I0521 16:02:26.989761 1 namespace_controller.go:185] Namespace has been deleted container-lifecycle-hook-2774
I0521 16:02:27.092588 1 request.go:645] Throttling request took 1.047916197s, request: GET:https://172.18.0.3:6443/apis/certificates.k8s.io/v1beta1?timeout=32s
I0521 16:02:27.969233 1 namespace_controller.go:185] Namespace has been deleted services-6718
E0521 16:02:29.622194 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0521 16:02:30.062124 1 namespace_controller.go:162] deletion of namespace job-3934 failed: unexpected items still remain in namespace: job-3934 for gvr: /v1, Resource=pods
E0521 16:02:30.226349 1 namespace_controller.go:162] deletion of namespace job-3934 failed: unexpected items still remain in namespace: job-3934 for gvr: /v1, Resource=pods
E0521 16:02:30.395269 1 namespace_controller.go:162] deletion of namespace job-3934 failed: unexpected items still remain in namespace: job-3934 for gvr: /v1, Resource=pods
E0521 16:02:30.467766 1 tokens_controller.go:261] error synchronizing serviceaccount webhook-7761/default: secrets "default-token-2tdpj" is forbidden: unable to create new content in namespace webhook-7761 because it is being terminated
E0521 16:02:30.538178 1 tokens_controller.go:261] error synchronizing serviceaccount webhook-7761-markers/default: secrets "default-token-kmrlx" is forbidden: unable to create new content in namespace webhook-7761-markers because it is being terminated
I0521 16:02:30.618790 1 namespace_controller.go:185] Namespace has been deleted kubectl-8120
I0521 16:02:31.830583 1 stateful_set.go:419] StatefulSet has been deleted statefulset-5070/ss
I0521 16:02:32.426127 1 resource_quota_controller.go:306] Resource quota has been deleted resourcequota-1757/test-quota
I0521 16:02:35.060892 1 namespace_controller.go:185] Namespace has been deleted pods-1219
I0521 16:02:35.564179 1 namespace_controller.go:185] Namespace has been deleted job-3934
I0521 16:02:35.585456 1 namespace_controller.go:185] Namespace has been deleted webhook-7761-markers
I0521 16:02:35.602914 1 namespace_controller.go:185] Namespace has been deleted webhook-7761
I0521 16:02:37.216283 1 namespace_controller.go:185] Namespace has been deleted pods-2913
I0521 16:02:37.547415 1 namespace_controller.go:185] Namespace has been deleted resourcequota-1757
E0521 16:02:38.304597 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0521 16:02:41.611459 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0521 16:02:42.080857 1 namespace_controller.go:185] Namespace has been deleted statefulset-5070
I0521 16:02:44.113903 1 namespace_controller.go:185] Namespace has been deleted downward-api-2516
E0521 16:02:45.625509 1 tokens_controller.go:261] error synchronizing serviceaccount services-6358/default: secrets "default-token-nf24g" is forbidden: unable to create new content in namespace services-6358 because it is being terminated
E0521 16:02:46.060941 1 tokens_controller.go:261] error synchronizing serviceaccount custom-resource-definition-5624/default: secrets "default-token-4mbmc" is forbidden: unable to create new content in namespace custom-resource-definition-5624 because it is being terminated
I0521 16:02:46.516534 1 event.go:291] "Event occurred" object="gc-469/simpletest.rc" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: simpletest.rc-zwvhc"
I0521 16:02:46.519701 1 event.go:291] "Event occurred" object="gc-469/simpletest.rc" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: simpletest.rc-5flfn"
I0521 16:02:50.670764 1 namespace_controller.go:185] Namespace has been deleted services-6358
I0521 16:02:51.179654 1 namespace_controller.go:185] Namespace has been deleted custom-resource-definition-5624
E0521 16:02:51.479478 1 tokens_controller.go:261] error synchronizing serviceaccount kubectl-2714/default: secrets "default-token-xxpkk" is forbidden: unable to create new content in namespace kubectl-2714 because it is being terminated
I0521 16:02:51.632400 1 namespace_controller.go:185] Namespace has been deleted container-probe-3537
I0521 16:02:52.625926 1 event.go:291] "Event occurred" object="crd-webhook-1341/sample-crd-conversion-webhook-deployment" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set sample-crd-conversion-webhook-deployment-85d57b96d6 to 1"
I0521 16:02:52.631890 1 event.go:291] "Event occurred" object="crd-webhook-1341/sample-crd-conversion-webhook-deployment-85d57b96d6" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: sample-crd-conversion-webhook-deployment-85d57b96d6-jtqdd"
I0521 16:02:53.777236 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for e2e-test-crd-publish-openapi-8351-crds.crd-publish-openapi-test-unknown-at-root.example.com
I0521 16:02:53.777363 1 shared_informer.go:240] Waiting for caches to sync for resource quota
I0521 16:02:53.877626 1 shared_informer.go:247] Caches are synced for resource quota 
I0521 16:02:55.419664 1 event.go:291] "Event occurred" object="webhook-3536/sample-webhook-deployment" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set sample-webhook-deployment-cbccbf6bb to 1"
I0521 16:02:55.424854 1 event.go:291] "Event occurred" object="webhook-3536/sample-webhook-deployment-cbccbf6bb" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: sample-webhook-deployment-cbccbf6bb-q28wp"
E0521 16:02:55.629573 1 tokens_controller.go:261] error synchronizing serviceaccount prestop-3891/default: secrets "default-token-7jkd2" is forbidden: unable to create new content in namespace prestop-3891 because it is being terminated
I0521 16:02:56.437906 1 namespace_controller.go:185] Namespace has been deleted services-8790
I0521 16:02:56.517659 1 namespace_controller.go:185] Namespace has been deleted kubectl-2714
I0521 16:02:56.560557 1 namespace_controller.go:185] Namespace has been deleted services-6687
E0521 16:02:56.803800 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
E0521 16:02:57.090659 1 tokens_controller.go:261] error synchronizing serviceaccount container-lifecycle-hook-6721/default: secrets "default-token-vq5jb" is forbidden: unable to create new content in namespace container-lifecycle-hook-6721 because it is being terminated
I0521 16:02:57.704084 1 request.go:645] Throttling request took 1.048461471s, request: GET:https://172.18.0.3:6443/apis/storage.k8s.io/v1beta1?timeout=32s
E0521 16:02:58.222880 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0521 16:02:59.513711 1 tokens_controller.go:261] error synchronizing serviceaccount projected-5699/default: secrets "default-token-b4dsv" is forbidden: unable to create new content in namespace projected-5699 because it is being terminated
E0521 16:03:00.188037 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0521 16:03:01.823377 1 tokens_controller.go:261] error synchronizing serviceaccount crd-webhook-1341/default: secrets "default-token-4c46l" is forbidden: unable to create new content in namespace crd-webhook-1341 because it is being terminated
I0521 16:03:02.326381 1 event.go:291] "Event occurred" object="deployment-5163/test-rolling-update-controller" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: test-rolling-update-controller-nhdq9"
I0521 16:03:02.512801 1 namespace_controller.go:185] Namespace has been deleted container-probe-8652
E0521 16:03:03.585585 1 tokens_controller.go:261] error synchronizing serviceaccount webhook-3536-markers/default: secrets "default-token-4xshn" is forbidden: unable to create new content in namespace webhook-3536-markers because it is being terminated
I0521 16:03:04.606914 1 namespace_controller.go:185] Namespace has been deleted projected-5699
E0521 16:03:04.822063 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0521 16:03:05.666054 1 tokens_controller.go:261] error synchronizing serviceaccount downward-api-3859/default: secrets "default-token-vph5b" is forbidden: unable to create new content in namespace downward-api-3859 because it is being terminated
E0521 16:03:06.878806 1 tokens_controller.go:261] error synchronizing serviceaccount custom-resource-definition-5824/default: secrets "default-token-l9s95" is forbidden: unable to create new content in namespace custom-resource-definition-5824 because it is being terminated
E0521 16:03:06.921484 1 tokens_controller.go:261] error synchronizing serviceaccount init-container-9500/default: secrets "default-token-8rzx6" is forbidden: unable to create new content in namespace init-container-9500 because it is being terminated
I0521 16:03:06.978740 1 namespace_controller.go:185] Namespace has been deleted crd-webhook-1341
I0521 16:03:07.345051 1 event.go:291] "Event occurred" object="deployment-5163/test-rolling-update-deployment" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set test-rolling-update-deployment-c4cb8d6d9 to 1"
I0521 16:03:07.347982 1 event.go:291] "Event occurred" object="deployment-5163/test-rolling-update-deployment-c4cb8d6d9" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: test-rolling-update-deployment-c4cb8d6d9-clb9z"
I0521 16:03:07.374667 1 namespace_controller.go:185] Namespace has been deleted container-lifecycle-hook-6721
E0521 16:03:07.832820 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0521 16:03:08.671436 1 namespace_controller.go:185] Namespace has been deleted webhook-3536-markers
I0521 16:03:08.685964 1 namespace_controller.go:185] Namespace has been deleted webhook-3536
I0521 16:03:08.806738 1 event.go:291] "Event occurred" object="deployment-5163/test-rolling-update-deployment" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set test-rolling-update-controller to 0"
I0521 16:03:08.812471 1 event.go:291] "Event occurred" object="deployment-5163/test-rolling-update-controller" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: test-rolling-update-controller-nhdq9"
E0521 16:03:08.967171 1 tokens_controller.go:261] error synchronizing serviceaccount downward-api-8237/default: secrets "default-token-t9b9z" is forbidden: unable to create 
new content in namespace downward-api-8237 because it is being terminated\nI0521 16:03:09.052268 1 event.go:291] \"Event occurred\" object=\"deployment-2311/test-cleanup-controller\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: test-cleanup-controller-rxrmd\"\nE0521 16:03:09.854560 1 tokens_controller.go:261] error synchronizing serviceaccount emptydir-4075/default: secrets \"default-token-s5t6v\" is forbidden: unable to create new content in namespace emptydir-4075 because it is being terminated\nE0521 16:03:10.740926 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI0521 16:03:10.811425 1 namespace_controller.go:185] Namespace has been deleted downward-api-3859\nE0521 16:03:11.278525 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI0521 16:03:11.956466 1 namespace_controller.go:185] Namespace has been deleted custom-resource-definition-5824\nI0521 16:03:12.432775 1 namespace_controller.go:185] Namespace has been deleted crd-publish-openapi-4717\nI0521 16:03:12.931959 1 namespace_controller.go:185] Namespace has been deleted secrets-4248\nI0521 16:03:12.950357 1 event.go:291] \"Event occurred\" object=\"statefulset-437/ss2\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod ss2-0 in StatefulSet ss2 successful\"\nI0521 16:03:14.069008 1 event.go:291] \"Event occurred\" object=\"deployment-2311/test-cleanup-deployment\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled up replica set test-cleanup-deployment-5d446bdd47 to 1\"\nI0521 16:03:14.073436 1 event.go:291] 
\"Event occurred\" object=\"deployment-2311/test-cleanup-deployment-5d446bdd47\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: test-cleanup-deployment-5d446bdd47-mdrnf\"\nI0521 16:03:14.080165 1 namespace_controller.go:185] Namespace has been deleted downward-api-8237\nE0521 16:03:14.080576 1 tokens_controller.go:261] error synchronizing serviceaccount container-runtime-7120/default: secrets \"default-token-jqk47\" is forbidden: unable to create new content in namespace container-runtime-7120 because it is being terminated\nI0521 16:03:14.198730 1 event.go:291] \"Event occurred\" object=\"gc-8133/simpletest.rc\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: simpletest.rc-qv6m2\"\nI0521 16:03:14.202262 1 event.go:291] \"Event occurred\" object=\"gc-8133/simpletest.rc\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: simpletest.rc-hkvjv\"\nI0521 16:03:14.202526 1 event.go:291] \"Event occurred\" object=\"gc-8133/simpletest.rc\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: simpletest.rc-fz9sm\"\nI0521 16:03:14.205355 1 event.go:291] \"Event occurred\" object=\"gc-8133/simpletest.rc\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: simpletest.rc-wxb86\"\nI0521 16:03:14.206125 1 event.go:291] \"Event occurred\" object=\"gc-8133/simpletest.rc\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: simpletest.rc-987l5\"\nI0521 16:03:14.206168 1 event.go:291] \"Event occurred\" object=\"gc-8133/simpletest.rc\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: simpletest.rc-8ldpr\"\nI0521 16:03:14.206192 1 
event.go:291] \"Event occurred\" object=\"gc-8133/simpletest.rc\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: simpletest.rc-9mjqx\"\nI0521 16:03:14.209138 1 event.go:291] \"Event occurred\" object=\"gc-8133/simpletest.rc\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: simpletest.rc-9sjs2\"\nI0521 16:03:14.210424 1 event.go:291] \"Event occurred\" object=\"gc-8133/simpletest.rc\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: simpletest.rc-5zwnk\"\nI0521 16:03:14.210493 1 event.go:291] \"Event occurred\" object=\"gc-8133/simpletest.rc\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: simpletest.rc-fllk7\"\nE0521 16:03:14.564951 1 tokens_controller.go:261] error synchronizing serviceaccount deployment-5163/default: secrets \"default-token-7ds6m\" is forbidden: unable to create new content in namespace deployment-5163 because it is being terminated\nE0521 16:03:14.599937 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI0521 16:03:15.015985 1 namespace_controller.go:185] Namespace has been deleted emptydir-4075\nI0521 16:03:16.572481 1 event.go:291] \"Event occurred\" object=\"statefulset-437/ss2\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod ss2-1 in StatefulSet ss2 successful\"\nI0521 16:03:16.975518 1 event.go:291] \"Event occurred\" object=\"deployment-2311/test-cleanup-deployment\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled down replica set test-cleanup-controller to 0\"\nE0521 16:03:16.977229 1 
event.go:264] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"test-cleanup-deployment.1681206dd7df8114\", GenerateName:\"\", Namespace:\"deployment-2311\", SelfLink:\"\", UID:\"\", ResourceVersion:\"\", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:\"\", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:\"Deployment\", Namespace:\"deployment-2311\", Name:\"test-cleanup-deployment\", UID:\"4534b13e-c0e8-4ca6-a401-8c77dadea28c\", APIVersion:\"apps/v1\", ResourceVersion:\"25090\", FieldPath:\"\"}, Reason:\"ScalingReplicaSet\", Message:\"Scaled down replica set test-cleanup-controller to 0\", Source:v1.EventSource{Component:\"deployment-controller\", Host:\"\"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc02214513a22d914, ext:3009056771306, loc:(*time.Location)(0x6a53ca0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc02214513a22d914, ext:3009056771306, loc:(*time.Location)(0x6a53ca0)}}, Count:1, Type:\"Normal\", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:\"\", Related:(*v1.ObjectReference)(nil), ReportingController:\"\", ReportingInstance:\"\"}': 'events \"test-cleanup-deployment.1681206dd7df8114\" is forbidden: unable to create new content in namespace deployment-2311 because it is being terminated' (will not retry!)\nI0521 16:03:16.982061 1 event.go:291] \"Event occurred\" object=\"deployment-2311/test-cleanup-controller\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"Deleted pod: test-cleanup-controller-rxrmd\"\nE0521 16:03:16.983476 1 event.go:264] Server 
rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"test-cleanup-controller.1681206dd842c522\", GenerateName:\"\", Namespace:\"deployment-2311\", SelfLink:\"\", UID:\"\", ResourceVersion:\"\", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:\"\", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:\"ReplicaSet\", Namespace:\"deployment-2311\", Name:\"test-cleanup-controller\", UID:\"f2868d1c-39a6-4bd5-862d-1893c7444cb5\", APIVersion:\"apps/v1\", ResourceVersion:\"25299\", FieldPath:\"\"}, Reason:\"SuccessfulDelete\", Message:\"Deleted pod: test-cleanup-controller-rxrmd\", Source:v1.EventSource{Component:\"replicaset-controller\", Host:\"\"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc02214513a861d22, ext:3009063276772, loc:(*time.Location)(0x6a53ca0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc02214513a861d22, ext:3009063276772, loc:(*time.Location)(0x6a53ca0)}}, Count:1, Type:\"Normal\", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:\"\", Related:(*v1.ObjectReference)(nil), ReportingController:\"\", ReportingInstance:\"\"}': 'events \"test-cleanup-controller.1681206dd842c522\" is forbidden: unable to create new content in namespace deployment-2311 because it is being terminated' (will not retry!)\nI0521 16:03:17.112563 1 namespace_controller.go:185] Namespace has been deleted emptydir-7437\nI0521 16:03:17.149135 1 namespace_controller.go:185] Namespace has been deleted init-container-9500\nI0521 16:03:18.799499 1 event.go:291] \"Event occurred\" object=\"statefulset-437/ss2\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" 
type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod ss2-2 in StatefulSet ss2 successful\"\nI0521 16:03:19.176644 1 namespace_controller.go:185] Namespace has been deleted container-runtime-7120\nI0521 16:03:19.583271 1 namespace_controller.go:185] Namespace has been deleted deployment-5163\nI0521 16:03:20.475858 1 event.go:291] \"Event occurred\" object=\"webhook-7612/sample-webhook-deployment\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled up replica set sample-webhook-deployment-cbccbf6bb to 1\"\nI0521 16:03:20.480596 1 event.go:291] \"Event occurred\" object=\"webhook-7612/sample-webhook-deployment-cbccbf6bb\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: sample-webhook-deployment-cbccbf6bb-2j52z\"\nI0521 16:03:20.536103 1 namespace_controller.go:185] Namespace has been deleted services-8414\nI0521 16:03:23.804410 1 event.go:291] \"Event occurred\" object=\"replicaset-1160/my-hostname-basic-bed22666-2c8d-4361-9993-5030f03bf230\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: my-hostname-basic-bed22666-2c8d-4361-9993-5030f03bf230-mlc6x\"\nI0521 16:03:24.297410 1 namespace_controller.go:185] Namespace has been deleted deployment-2311\nI0521 16:03:24.429959 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for e2e-test-crd-publish-openapi-134-crds.crd-publish-openapi-test-multi-to-single-ver.example.com\nI0521 16:03:24.430102 1 shared_informer.go:240] Waiting for caches to sync for resource quota\nI0521 16:03:24.530315 1 shared_informer.go:247] Caches are synced for resource quota \nE0521 16:03:24.638461 1 tokens_controller.go:261] error synchronizing serviceaccount container-lifecycle-hook-6417/default: secrets \"default-token-q7wdk\" is forbidden: unable to create new content in namespace container-lifecycle-hook-6417 because it is being 
terminated\nE0521 16:03:26.903481 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource\nE0521 16:03:27.167758 1 tokens_controller.go:261] error synchronizing serviceaccount projected-2381/default: secrets \"default-token-67f7w\" is forbidden: unable to create new content in namespace projected-2381 because it is being terminated\nE0521 16:03:27.621699 1 tokens_controller.go:261] error synchronizing serviceaccount discovery-7075/default: secrets \"default-token-plfx7\" is forbidden: unable to create new content in namespace discovery-7075 because it is being terminated\nE0521 16:03:27.751728 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI0521 16:03:28.127194 1 namespace_controller.go:185] Namespace has been deleted watch-5466\nI0521 16:03:28.172634 1 namespace_controller.go:185] Namespace has been deleted init-container-5394\nE0521 16:03:28.845771 1 tokens_controller.go:261] error synchronizing serviceaccount gc-1534/default: secrets \"default-token-wb5tl\" is forbidden: unable to create new content in namespace gc-1534 because it is being terminated\nE0521 16:03:30.410104 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0521 16:03:32.089001 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI0521 16:03:32.259543 1 namespace_controller.go:185] Namespace has been deleted projected-2381\nI0521 16:03:32.731610 1 namespace_controller.go:185] Namespace has been 
deleted discovery-7075\nI0521 16:03:32.847404 1 namespace_controller.go:185] Namespace has been deleted events-5323\nI0521 16:03:34.135981 1 namespace_controller.go:185] Namespace has been deleted gc-1534\nI0521 16:03:34.593957 1 event.go:291] \"Event occurred\" object=\"webhook-2715/sample-webhook-deployment\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled up replica set sample-webhook-deployment-cbccbf6bb to 1\"\nI0521 16:03:34.598135 1 event.go:291] \"Event occurred\" object=\"webhook-2715/sample-webhook-deployment-cbccbf6bb\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: sample-webhook-deployment-cbccbf6bb-cd4p7\"\nI0521 16:03:35.997199 1 event.go:291] \"Event occurred\" object=\"kubectl-9062/agnhost-primary\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: agnhost-primary-7r29j\"\nI0521 16:03:36.249181 1 event.go:291] \"Event occurred\" object=\"dns-4213/test-service-2\" kind=\"Endpoints\" apiVersion=\"v1\" type=\"Warning\" reason=\"FailedToUpdateEndpoint\" message=\"Failed to update endpoint dns-4213/test-service-2: Operation cannot be fulfilled on endpoints \\\"test-service-2\\\": the object has been modified; please apply your changes to the latest version and try again\"\nE0521 16:03:36.270502 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0521 16:03:36.674403 1 tokens_controller.go:261] error synchronizing serviceaccount exempted-namesapce/default: secrets \"default-token-dg6q5\" is forbidden: unable to create new content in namespace exempted-namesapce because it is being terminated\nE0521 16:03:37.137445 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch 
*v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI0521 16:03:38.281605 1 namespace_controller.go:185] Namespace has been deleted prestop-3891\nE0521 16:03:38.891349 1 tokens_controller.go:261] error synchronizing serviceaccount gc-8415/default: secrets \"default-token-mhblh\" is forbidden: unable to create new content in namespace gc-8415 because it is being terminated\nE0521 16:03:38.920929 1 tokens_controller.go:261] error synchronizing serviceaccount emptydir-3215/default: secrets \"default-token-b46bb\" is forbidden: unable to create new content in namespace emptydir-3215 because it is being terminated\nE0521 16:03:38.976142 1 tokens_controller.go:261] error synchronizing serviceaccount replicaset-1160/default: secrets \"default-token-mpkz9\" is forbidden: unable to create new content in namespace replicaset-1160 because it is being terminated\nE0521 16:03:41.367019 1 tokens_controller.go:261] error synchronizing serviceaccount dns-4213/default: secrets \"default-token-bzgbf\" is forbidden: unable to create new content in namespace dns-4213 because it is being terminated\nI0521 16:03:41.864189 1 event.go:291] \"Event occurred\" object=\"crd-webhook-3579/sample-crd-conversion-webhook-deployment\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled up replica set sample-crd-conversion-webhook-deployment-85d57b96d6 to 1\"\nI0521 16:03:41.870029 1 event.go:291] \"Event occurred\" object=\"crd-webhook-3579/sample-crd-conversion-webhook-deployment-85d57b96d6\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: sample-crd-conversion-webhook-deployment-85d57b96d6-2jwmb\"\nE0521 16:03:41.873983 1 tokens_controller.go:261] error synchronizing serviceaccount server-version-8382/default: secrets \"default-token-m4qmm\" is forbidden: unable to create new content in namespace server-version-8382 
because it is being terminated\nI0521 16:03:41.924165 1 namespace_controller.go:185] Namespace has been deleted crd-publish-openapi-4463\nI0521 16:03:42.109841 1 event.go:291] \"Event occurred\" object=\"replication-controller-882/pod-release\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: pod-release-g9p5w\"\nI0521 16:03:42.511707 1 event.go:291] \"Event occurred\" object=\"services-615/externalsvc\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: externalsvc-ccrj4\"\nI0521 16:03:42.515369 1 event.go:291] \"Event occurred\" object=\"services-615/externalsvc\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: externalsvc-nj6cq\"\nI0521 16:03:43.337688 1 event.go:291] \"Event occurred\" object=\"webhook-7196/sample-webhook-deployment\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled up replica set sample-webhook-deployment-cbccbf6bb to 1\"\nI0521 16:03:43.343619 1 event.go:291] \"Event occurred\" object=\"webhook-7196/sample-webhook-deployment-cbccbf6bb\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: sample-webhook-deployment-cbccbf6bb-pjlkx\"\nI0521 16:03:43.925961 1 namespace_controller.go:185] Namespace has been deleted gc-8415\nI0521 16:03:43.991761 1 namespace_controller.go:185] Namespace has been deleted replicaset-1160\nI0521 16:03:44.007660 1 namespace_controller.go:185] Namespace has been deleted emptydir-3215\nI0521 16:03:44.109954 1 namespace_controller.go:185] Namespace has been deleted pods-814\nI0521 16:03:44.234279 1 event.go:291] \"Event occurred\" object=\"statefulset-437/ss2\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"delete Pod ss2-2 in StatefulSet ss2 successful\"\nE0521 
16:03:44.401334 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI0521 16:03:45.167822 1 namespace_controller.go:185] Namespace has been deleted container-lifecycle-hook-6417\nI0521 16:03:45.830079 1 namespace_controller.go:185] Namespace has been deleted emptydir-634\nI0521 16:03:46.482694 1 namespace_controller.go:185] Namespace has been deleted dns-4213\nI0521 16:03:46.825672 1 namespace_controller.go:185] Namespace has been deleted exempted-namesapce\nI0521 16:03:46.832299 1 namespace_controller.go:185] Namespace has been deleted webhook-7612-markers\nI0521 16:03:46.845987 1 namespace_controller.go:185] Namespace has been deleted webhook-7612\nE0521 16:03:46.846772 1 tokens_controller.go:261] error synchronizing serviceaccount secrets-1026/default: secrets \"default-token-pmc7r\" is forbidden: unable to create new content in namespace secrets-1026 because it is being terminated\nI0521 16:03:46.950675 1 namespace_controller.go:185] Namespace has been deleted server-version-8382\nI0521 16:03:47.128765 1 event.go:291] \"Event occurred\" object=\"replication-controller-882/pod-release\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: pod-release-55dhk\"\nI0521 16:03:47.819437 1 namespace_controller.go:185] Namespace has been deleted webhook-2715-markers\nI0521 16:03:48.093635 1 namespace_controller.go:185] Namespace has been deleted init-container-332\nI0521 16:03:48.526054 1 namespace_controller.go:185] Namespace has been deleted projected-2327\nE0521 16:03:49.863961 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI0521 16:03:50.425950 1 event.go:291] \"Event occurred\" 
object=\"statefulset-437/ss2\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod ss2-2 in StatefulSet ss2 successful\"\nI0521 16:03:51.111876 1 namespace_controller.go:185] Namespace has been deleted dns-4983\nI0521 16:03:51.684877 1 namespace_controller.go:185] Namespace has been deleted pods-5793\nI0521 16:03:51.973010 1 namespace_controller.go:185] Namespace has been deleted secrets-1026\nI0521 16:03:52.279343 1 namespace_controller.go:185] Namespace has been deleted secrets-2438\nI0521 16:03:52.597248 1 namespace_controller.go:185] Namespace has been deleted security-context-test-1925\nI0521 16:03:53.000347 1 namespace_controller.go:185] Namespace has been deleted webhook-2715\nE0521 16:03:54.024948 1 tokens_controller.go:261] error synchronizing serviceaccount var-expansion-205/default: secrets \"default-token-249hx\" is forbidden: unable to create new content in namespace var-expansion-205 because it is being terminated\nI0521 16:03:55.032388 1 shared_informer.go:240] Waiting for caches to sync for resource quota\nI0521 16:03:55.032445 1 shared_informer.go:247] Caches are synced for resource quota \nI0521 16:03:55.676774 1 event.go:291] \"Event occurred\" object=\"statefulset-437/ss2\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"delete Pod ss2-1 in StatefulSet ss2 successful\"\nI0521 16:03:56.755525 1 event.go:291] \"Event occurred\" object=\"webhook-9375/sample-webhook-deployment\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled up replica set sample-webhook-deployment-cbccbf6bb to 1\"\nI0521 16:03:56.761792 1 event.go:291] \"Event occurred\" object=\"webhook-9375/sample-webhook-deployment-cbccbf6bb\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: sample-webhook-deployment-cbccbf6bb-mwjv4\"\nI0521 16:03:57.848361 1 
namespace_controller.go:185] Namespace has been deleted kubectl-9062\nE0521 16:03:58.050461 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI0521 16:03:58.244790 1 namespace_controller.go:185] Namespace has been deleted crd-webhook-3579\nE0521 16:03:58.252650 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI0521 16:03:58.357380 1 namespace_controller.go:185] Namespace has been deleted replication-controller-882\nI0521 16:03:58.892523 1 event.go:291] \"Event occurred\" object=\"webhook-8085/sample-webhook-deployment\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled up replica set sample-webhook-deployment-cbccbf6bb to 1\"\nI0521 16:03:58.899049 1 event.go:291] \"Event occurred\" object=\"webhook-8085/sample-webhook-deployment-cbccbf6bb\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: sample-webhook-deployment-cbccbf6bb-nzcx5\"\nI0521 16:03:59.144444 1 namespace_controller.go:185] Namespace has been deleted var-expansion-205\nE0521 16:03:59.344472 1 tokens_controller.go:261] error synchronizing serviceaccount configmap-8697/default: secrets \"default-token-tt69w\" is forbidden: unable to create new content in namespace configmap-8697 because it is being terminated\nI0521 16:03:59.686896 1 namespace_controller.go:185] Namespace has been deleted webhook-7196-markers\nI0521 16:03:59.705375 1 namespace_controller.go:185] Namespace has been deleted webhook-7196\nI0521 16:04:00.445393 1 event.go:291] \"Event occurred\" object=\"statefulset-437/ss2\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" 
reason=\"SuccessfulCreate\" message=\"create Pod ss2-1 in StatefulSet ss2 successful\"\nE0521 16:04:00.818670 1 tokens_controller.go:261] error synchronizing serviceaccount secrets-9886/default: secrets \"default-token-5swhl\" is forbidden: unable to create new content in namespace secrets-9886 because it is being terminated\nE0521 16:04:01.445386 1 tokens_controller.go:261] error synchronizing serviceaccount containers-6550/default: secrets \"default-token-9rh6r\" is forbidden: unable to create new content in namespace containers-6550 because it is being terminated\nI0521 16:04:02.814901 1 event.go:291] \"Event occurred\" object=\"statefulset-437/ss2\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"delete Pod ss2-0 in StatefulSet ss2 successful\"\nE0521 16:04:03.625793 1 tokens_controller.go:261] error synchronizing serviceaccount gc-469/default: secrets \"default-token-c854f\" is forbidden: unable to create new content in namespace gc-469 because it is being terminated\nE0521 16:04:03.952783 1 tokens_controller.go:261] error synchronizing serviceaccount fail-closed-namesapce/default: secrets \"default-token-scmzl\" is forbidden: unable to create new content in namespace fail-closed-namesapce because it is being terminated\nI0521 16:04:04.425575 1 namespace_controller.go:185] Namespace has been deleted configmap-8697\nE0521 16:04:04.612848 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI0521 16:04:05.220101 1 namespace_controller.go:185] Namespace has been deleted downward-api-9736\nE0521 16:04:05.580282 1 tokens_controller.go:261] error synchronizing serviceaccount services-615/default: secrets \"default-token-rpgw2\" is forbidden: unable to create new content in namespace services-615 because it is being terminated\nI0521 16:04:05.857541 1 
namespace_controller.go:185] Namespace has been deleted secrets-9886
I0521 16:04:06.542895 1 namespace_controller.go:185] Namespace has been deleted containers-6550
E0521 16:04:06.908138 1 tokens_controller.go:261] error synchronizing serviceaccount webhook-9375/default: secrets "default-token-2q886" is forbidden: unable to create new content in namespace webhook-9375 because it is being terminated
I0521 16:04:08.772622 1 namespace_controller.go:185] Namespace has been deleted gc-469
E0521 16:04:09.002020 1 tokens_controller.go:261] error synchronizing serviceaccount webhook-8085-markers/default: secrets "default-token-jh9bs" is forbidden: unable to create new content in namespace webhook-8085-markers because it is being terminated
E0521 16:04:09.060968 1 tokens_controller.go:261] error synchronizing serviceaccount projected-5662/default: secrets "default-token-4ngz5" is forbidden: unable to create new content in namespace projected-5662 because it is being terminated
I0521 16:04:09.279204 1 resource_quota_controller.go:306] Resource quota has been deleted resourcequota-9502/quota-besteffort
I0521 16:04:09.281989 1 resource_quota_controller.go:306] Resource quota has been deleted resourcequota-9502/quota-not-besteffort
I0521 16:04:10.208450 1 event.go:291] "Event occurred" object="statefulset-437/ss2" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod ss2-0 in StatefulSet ss2 successful"
E0521 16:04:10.222661 1 tokens_controller.go:261] error synchronizing serviceaccount custom-resource-definition-9259/default: secrets "default-token-c8pxv" is forbidden: unable to create new content in namespace custom-resource-definition-9259 because it is being terminated
I0521 16:04:10.743987 1 namespace_controller.go:185] Namespace has been deleted services-615
I0521 16:04:11.991308 1 namespace_controller.go:185] Namespace has been deleted webhook-9375-markers
I0521 16:04:12.012970 1 namespace_controller.go:185] Namespace has been deleted webhook-9375
I0521 16:04:14.109667 1 namespace_controller.go:185] Namespace has been deleted fail-closed-namesapce
I0521 16:04:14.124058 1 namespace_controller.go:185] Namespace has been deleted webhook-8085-markers
I0521 16:04:14.140297 1 namespace_controller.go:185] Namespace has been deleted webhook-8085
I0521 16:04:14.142033 1 namespace_controller.go:185] Namespace has been deleted projected-5662
I0521 16:04:14.361496 1 namespace_controller.go:185] Namespace has been deleted resourcequota-9502
I0521 16:04:14.992828 1 event.go:291] "Event occurred" object="webhook-6417/sample-webhook-deployment" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set sample-webhook-deployment-cbccbf6bb to 1"
I0521 16:04:15.000208 1 event.go:291] "Event occurred" object="webhook-6417/sample-webhook-deployment-cbccbf6bb" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: sample-webhook-deployment-cbccbf6bb-pwxp6"
I0521 16:04:15.357738 1 namespace_controller.go:185] Namespace has been deleted custom-resource-definition-9259
I0521 16:04:16.330524 1 namespace_controller.go:185] Namespace has been deleted replication-controller-6400
I0521 16:04:16.426000 1 namespace_controller.go:185] Namespace has been deleted downward-api-3544
I0521 16:04:17.182531 1 event.go:291] "Event occurred" object="webhook-9355/sample-webhook-deployment" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set sample-webhook-deployment-cbccbf6bb to 1"
I0521 16:04:17.188164 1 event.go:291] "Event occurred" object="webhook-9355/sample-webhook-deployment-cbccbf6bb" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: sample-webhook-deployment-cbccbf6bb-sqv6d"
E0521 16:04:17.399605 1 tokens_controller.go:261] error synchronizing serviceaccount events-9749/default: secrets "default-token-vczck" is forbidden: unable to create new content in namespace events-9749 because it is being terminated
E0521 16:04:19.279827 1 tokens_controller.go:261] error synchronizing serviceaccount projected-3324/default: secrets "default-token-9zmng" is forbidden: unable to create new content in namespace projected-3324 because it is being terminated
E0521 16:04:19.539728 1 tokens_controller.go:261] error synchronizing serviceaccount crd-publish-openapi-277/default: secrets "default-token-kkxbh" is forbidden: unable to create new content in namespace crd-publish-openapi-277 because it is being terminated
E0521 16:04:21.611387 1 tokens_controller.go:261] error synchronizing serviceaccount kubelet-test-2636/default: secrets "default-token-b54x4" is forbidden: unable to create new content in namespace kubelet-test-2636 because it is being terminated
E0521 16:04:23.027172 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0521 16:04:24.378229 1 namespace_controller.go:185] Namespace has been deleted projected-3324
I0521 16:04:24.642214 1 namespace_controller.go:185] Namespace has been deleted crd-publish-openapi-277
I0521 16:04:24.668542 1 namespace_controller.go:185] Namespace has been deleted configmap-942
E0521 16:04:25.113467 1 tokens_controller.go:261] error synchronizing serviceaccount webhook-6417-markers/default: secrets "default-token-5b6gq" is forbidden: unable to create new content in namespace webhook-6417-markers because it is being terminated
E0521 16:04:25.165099 1 tokens_controller.go:261] error synchronizing serviceaccount webhook-6417/default: secrets "default-token-6v8pj" is forbidden: unable to create new content in namespace webhook-6417 because it is being terminated
E0521 16:04:25.341039 1 tokens_controller.go:261] error synchronizing serviceaccount webhook-9355-markers/default: secrets "default-token-hc2lp" is forbidden: unable to create new content in namespace webhook-9355-markers because it is being terminated
E0521 16:04:27.766417 1 tokens_controller.go:261] error synchronizing serviceaccount dns-3216/default: secrets "default-token-f624k" is forbidden: unable to create new content in namespace dns-3216 because it is being terminated
E0521 16:04:29.307210 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0521 16:04:30.225962 1 namespace_controller.go:185] Namespace has been deleted webhook-6417-markers
I0521 16:04:30.243917 1 namespace_controller.go:185] Namespace has been deleted webhook-6417
I0521 16:04:30.410055 1 namespace_controller.go:185] Namespace has been deleted webhook-9355-markers
I0521 16:04:30.430453 1 namespace_controller.go:185] Namespace has been deleted webhook-9355
E0521 16:04:31.844792 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0521 16:04:32.868010 1 namespace_controller.go:185] Namespace has been deleted dns-3216
E0521 16:04:34.060466 1 tokens_controller.go:261] error synchronizing serviceaccount dns-6441/default: secrets "default-token-9n4gs" is forbidden: unable to create new content in namespace dns-6441 because it is being terminated
E0521 16:04:34.064273 1 tokens_controller.go:261] error synchronizing serviceaccount configmap-8448/default: secrets "default-token-zqffn" is forbidden: unable to create new content in namespace configmap-8448 because it is being terminated
I0521 16:04:34.646085 1 event.go:291] "Event occurred" object="statefulset-437/ss2" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="delete Pod ss2-2 in StatefulSet ss2 successful"
I0521 16:04:35.077025 1 namespace_controller.go:185] Namespace has been deleted configmap-5158
I0521 16:04:36.657627 1 namespace_controller.go:185] Namespace has been deleted dns-4160
I0521 16:04:37.185340 1 namespace_controller.go:185] Namespace has been deleted var-expansion-2216
E0521 16:04:38.171301 1 tokens_controller.go:261] error synchronizing serviceaccount projected-9751/default: secrets "default-token-r28cl" is forbidden: unable to create new content in namespace projected-9751 because it is being terminated
I0521 16:04:39.125419 1 namespace_controller.go:185] Namespace has been deleted configmap-8448
I0521 16:04:39.125780 1 namespace_controller.go:185] Namespace has been deleted dns-6441
I0521 16:04:40.214469 1 event.go:291] "Event occurred" object="statefulset-437/ss2" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod ss2-2 in StatefulSet ss2 successful"
E0521 16:04:40.240208 1 tokens_controller.go:261] error synchronizing serviceaccount kubelet-test-1331/default: secrets "default-token-d8r2r" is forbidden: unable to create new content in namespace kubelet-test-1331 because it is being terminated
I0521 16:04:41.351444 1 namespace_controller.go:185] Namespace has been deleted dns-7526
I0521 16:04:41.887161 1 event.go:291] "Event occurred" object="statefulset-437/ss2" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="delete Pod ss2-1 in StatefulSet ss2 successful"
E0521 16:04:42.286080 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0521 16:04:42.601595 1 event.go:291] "Event occurred" object="statefulset-1212/ss" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod ss-0 in StatefulSet ss successful"
E0521 16:04:43.459215 1 namespace_controller.go:162] deletion of namespace projected-9751 failed: unexpected items still remain in namespace: projected-9751 for gvr: /v1, Resource=pods
E0521 16:04:43.628240 1 namespace_controller.go:162] deletion of namespace projected-9751 failed: unexpected items still remain in namespace: projected-9751 for gvr: /v1, Resource=pods
I0521 16:04:43.645410 1 namespace_controller.go:185] Namespace has been deleted container-lifecycle-hook-4677
E0521 16:04:43.809895 1 namespace_controller.go:162] deletion of namespace projected-9751 failed: unexpected items still remain in namespace: projected-9751 for gvr: /v1, Resource=pods
E0521 16:04:44.007368 1 namespace_controller.go:162] deletion of namespace projected-9751 failed: unexpected items still remain in namespace: projected-9751 for gvr: /v1, Resource=pods
E0521 16:04:44.225634 1 namespace_controller.go:162] deletion of namespace projected-9751 failed: unexpected items still remain in namespace: projected-9751 for gvr: /v1, Resource=pods
E0521 16:04:44.468235 1 namespace_controller.go:162] deletion of namespace projected-9751 failed: unexpected items still remain in namespace: projected-9751 for gvr: /v1, Resource=pods
E0521 16:04:44.806870 1 namespace_controller.go:162] deletion of namespace projected-9751 failed: unexpected items still remain in namespace: projected-9751 for gvr: /v1, Resource=pods
E0521 16:04:45.303036 1 namespace_controller.go:162] deletion of namespace projected-9751 failed: unexpected items still remain in namespace: projected-9751 for gvr: /v1, Resource=pods
I0521 16:04:45.413147 1 namespace_controller.go:185] Namespace has been deleted kubelet-test-1331
E0521 16:04:46.123055 1 namespace_controller.go:162] deletion of namespace projected-9751 failed: unexpected items still remain in namespace: projected-9751 for gvr: /v1, Resource=pods
E0521 16:04:47.560026 1 namespace_controller.go:162] deletion of namespace projected-9751 failed: unexpected items still remain in namespace: projected-9751 for gvr: /v1, Resource=pods
E0521 16:04:50.301251 1 namespace_controller.go:162] deletion of namespace projected-9751 failed: unexpected items still remain in namespace: projected-9751 for gvr: /v1, Resource=pods
I0521 16:04:50.419896 1 event.go:291] "Event occurred" object="statefulset-437/ss2" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod ss2-1 in StatefulSet ss2 successful"
I0521 16:04:50.578040 1 event.go:291] "Event occurred" object="job-587/fail-once-local" kind="Job" apiVersion="batch/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: fail-once-local-87qpx"
I0521 16:04:50.582047 1 event.go:291] "Event occurred" object="job-587/fail-once-local" kind="Job" apiVersion="batch/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: fail-once-local-2d2pg"
E0521 16:04:51.397352 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0521 16:04:52.619753 1 event.go:291] "Event occurred" object="statefulset-1212/ss" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod ss-1 in StatefulSet ss successful"
I0521 16:04:52.641029 1 event.go:291] "Event occurred" object="statefulset-1212/ss" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="delete Pod ss-1 in StatefulSet ss successful"
I0521 16:04:52.657175 1 event.go:291] "Event occurred" object="statefulset-1212/ss" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="delete Pod ss-0 in StatefulSet ss successful"
I0521 16:04:52.708094 1 namespace_controller.go:185] Namespace has been deleted events-632
I0521 16:04:53.185915 1 event.go:291] "Event occurred" object="statefulset-437/ss2" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="delete Pod ss2-0 in StatefulSet ss2 successful"
I0521 16:04:53.271387 1 event.go:291] "Event occurred" object="job-587/fail-once-local" kind="Job" apiVersion="batch/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: fail-once-local-6lbhx"
E0521 16:04:53.407846 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0521 16:04:53.516777 1 tokens_controller.go:261] error synchronizing serviceaccount subpath-1380/default: secrets "default-token-vksmh" is forbidden: unable to create new content in namespace subpath-1380 because it is being terminated
I0521 16:04:53.783956 1 event.go:291] "Event occurred" object="job-587/fail-once-local" kind="Job" apiVersion="batch/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: fail-once-local-gbkw7"
E0521 16:04:55.642305 1 tokens_controller.go:261] error synchronizing serviceaccount configmap-9516/default: secrets "default-token-vzx9r" is forbidden: unable to create new content in namespace configmap-9516 because it is being terminated
I0521 16:04:56.778967 1 event.go:291] "Event occurred" object="job-587/fail-once-local" kind="Job" apiVersion="batch/v1" type="Normal" reason="Completed" message="Job completed"
E0521 16:04:58.028046 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0521 16:04:58.633259 1 namespace_controller.go:185] Namespace has been deleted subpath-1380
E0521 16:04:59.280053 1 namespace_controller.go:162] deletion of namespace kubelet-test-2636 failed: unexpected items still remain in namespace: kubelet-test-2636 for gvr: /v1, Resource=pods
E0521 16:04:59.450278 1 namespace_controller.go:162] deletion of namespace kubelet-test-2636 failed: unexpected items still remain in namespace: kubelet-test-2636 for gvr: /v1, Resource=pods
E0521 16:04:59.628980 1 namespace_controller.go:162] deletion of namespace kubelet-test-2636 failed: unexpected items still remain in namespace: kubelet-test-2636 for gvr: /v1, Resource=pods
E0521 16:04:59.809746 1 namespace_controller.go:162] deletion of namespace kubelet-test-2636 failed: unexpected items still remain in namespace: kubelet-test-2636 for gvr: /v1, Resource=pods
E0521 16:05:00.029667 1 namespace_controller.go:162] deletion of namespace kubelet-test-2636 failed: unexpected items still remain in namespace: kubelet-test-2636 for gvr: /v1, Resource=pods
I0521 16:05:00.062285 1 namespace_controller.go:185] Namespace has been deleted events-9749
E0521 16:05:00.294275 1 namespace_controller.go:162] deletion of namespace kubelet-test-2636 failed: unexpected items still remain in namespace: kubelet-test-2636 for gvr: /v1, Resource=pods
I0521 16:05:00.428883 1 event.go:291] "Event occurred" object="statefulset-437/ss2" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod ss2-0 in StatefulSet ss2 successful"
I0521 16:05:00.576685 1 namespace_controller.go:185] Namespace has been deleted projected-9751
I0521 16:05:02.652038 1 stateful_set.go:419] StatefulSet has been deleted statefulset-1212/ss
I0521 16:05:03.281225 1 event.go:291] "Event occurred" object="webhook-9973/sample-webhook-deployment" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set sample-webhook-deployment-cbccbf6bb to 1"
I0521 16:05:03.287099 1 event.go:291] "Event occurred" object="webhook-9973/sample-webhook-deployment-cbccbf6bb" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: sample-webhook-deployment-cbccbf6bb-c4669"
I0521 16:05:04.149240 1 event.go:291] "Event occurred" object="statefulset-437/ss2" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="delete Pod ss2-2 in StatefulSet ss2 successful"
I0521 16:05:05.634733 1 namespace_controller.go:185] Namespace has been deleted kubelet-test-2636
I0521 16:05:05.901120 1 namespace_controller.go:185] Namespace has been deleted configmap-9516
I0521 16:05:06.729391 1 namespace_controller.go:185] Namespace has been deleted gc-8133
W0521 16:05:07.324306 1 endpointslice_controller.go:284] Error syncing endpoint slices for service "dns-6327/dns-test-service-2", retrying. Error: EndpointSlice informer cache is out of date
E0521 16:05:07.773735 1 tokens_controller.go:261] error synchronizing serviceaccount statefulset-1212/default: secrets "default-token-r2mrk" is forbidden: unable to create new content in namespace statefulset-1212 because it is being terminated
E0521 16:05:08.746154 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0521 16:05:08.849000 1 namespace_controller.go:185] Namespace has been deleted job-587
I0521 16:05:09.565901 1 event.go:291] "Event occurred" object="gc-3187/simpletest.deployment" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set simpletest.deployment-59cfbf9b4d to 2"
I0521 16:05:09.573268 1 event.go:291] "Event occurred" object="gc-3187/simpletest.deployment-59cfbf9b4d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: simpletest.deployment-59cfbf9b4d-cqm6m"
I0521 16:05:09.576968 1 event.go:291] "Event occurred" object="gc-3187/simpletest.deployment-59cfbf9b4d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: simpletest.deployment-59cfbf9b4d-bqjgc"
I0521 16:05:10.498776 1 event.go:291] "Event occurred" object="statefulset-437/ss2" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="delete Pod ss2-1 in StatefulSet ss2 successful"
E0521 16:05:10.500559 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0521 16:05:11.699204 1 tokens_controller.go:261] error synchronizing serviceaccount kubectl-4701/default: secrets "default-token-7g8c5" is forbidden: unable to create new content in namespace kubectl-4701 because it is being terminated
E0521 16:05:12.375470 1 tokens_controller.go:261] error synchronizing serviceaccount dns-6327/default: secrets "default-token-hsftg" is forbidden: unable to create new content in namespace dns-6327 because it is being terminated
I0521 16:05:12.526576 1 event.go:291] "Event occurred" object="statefulset-437/ss2" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="delete Pod ss2-0 in StatefulSet ss2 successful"
I0521 16:05:12.910744 1 namespace_controller.go:185] Namespace has been deleted statefulset-1212
E0521 16:05:13.759146 1 tokens_controller.go:261] error synchronizing serviceaccount projected-8535/default: secrets "default-token-cfsn8" is forbidden: unable to create new content in namespace projected-8535 because it is being terminated
I0521 16:05:13.931099 1 namespace_controller.go:185] Namespace has been deleted limitrange-6044
E0521 16:05:14.632826 1 tokens_controller.go:261] error synchronizing serviceaccount lease-test-432/default: secrets "default-token-hbdzb" is forbidden: unable to create new content in namespace lease-test-432 because it is being terminated
I0521 16:05:16.498642 1 namespace_controller.go:185] Namespace has been deleted webhook-9973-markers
I0521 16:05:16.526318 1 namespace_controller.go:185] Namespace has been deleted webhook-9973
I0521 16:05:16.772070 1 namespace_controller.go:185] Namespace has been deleted kubectl-4701
I0521 16:05:17.583848 1 namespace_controller.go:185] Namespace has been deleted dns-6327
I0521 16:05:18.932007 1 namespace_controller.go:185] Namespace has been deleted projected-8535
E0521 16:05:19.386862 1 tokens_controller.go:261] error synchronizing serviceaccount container-probe-5211/default: secrets "default-token-7k6vn" is forbidden: unable to create new content in namespace container-probe-5211 because it is being terminated
I0521 16:05:19.605484 1 namespace_controller.go:185] Namespace has been deleted container-runtime-9916
I0521 16:05:19.683559 1 namespace_controller.go:185] Namespace has been deleted lease-test-432
I0521 16:05:21.055552 1 event.go:291] "Event occurred" object="services-7725/affinity-nodeport-timeout" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: affinity-nodeport-timeout-fpzt6"
I0521 16:05:21.060582 1 event.go:291] "Event occurred" object="services-7725/affinity-nodeport-timeout" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: affinity-nodeport-timeout-7pdpm"
I0521 16:05:21.060627 1 event.go:291] "Event occurred" object="services-7725/affinity-nodeport-timeout" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: affinity-nodeport-timeout-rn54p"
I0521 16:05:24.158907 1 stateful_set.go:419] StatefulSet has been deleted statefulset-437/ss2
E0521 16:05:26.895177 1 tokens_controller.go:261] error synchronizing serviceaccount container-runtime-5619/default: secrets "default-token-tmxsp" is forbidden: unable to create new content in namespace container-runtime-5619 because it is being terminated
E0521 16:05:27.889219 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0521 16:05:29.219770 1 tokens_controller.go:261] error synchronizing serviceaccount statefulset-437/default: secrets "default-token-2ddn8" is forbidden: unable to create new content in namespace statefulset-437 because it is being terminated
E0521 16:05:30.814123 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0521 16:05:31.668419 1 shared_informer.go:240] Waiting for caches to sync for garbage collector
I0521 16:05:31.768596 1 shared_informer.go:247] Caches are synced for garbage collector
I0521 16:05:32.091345 1 namespace_controller.go:185] Namespace has been deleted container-runtime-5619
E0521 16:05:32.503565 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0521 16:05:34.622896 1 namespace_controller.go:185] Namespace has been deleted statefulset-437
E0521 16:05:39.391103 1 namespace_controller.go:162] deletion of namespace projected-7190 failed: unexpected items still remain in namespace: projected-7190 for gvr: /v1, Resource=pods
E0521 16:05:39.571133 1 namespace_controller.go:162] deletion of namespace projected-7190 failed: unexpected items still remain in namespace: projected-7190 for gvr: /v1, Resource=pods
E0521 16:05:39.759398 1 namespace_controller.go:162] deletion of namespace projected-7190 failed: unexpected items still remain in namespace: projected-7190 for gvr: /v1, Resource=pods
E0521 16:05:39.958575 1 namespace_controller.go:162] deletion of namespace projected-7190 failed: unexpected items still remain in namespace: projected-7190 for gvr: /v1, Resource=pods
E0521 16:05:40.181329 1 namespace_controller.go:162] deletion of namespace projected-7190 failed: unexpected items still remain in namespace: projected-7190 for gvr: /v1, Resource=pods
E0521 16:05:40.445943 1 namespace_controller.go:162] deletion of namespace projected-7190 failed: unexpected items still remain in namespace: projected-7190 for gvr: /v1, Resource=pods
E0521 16:05:45.674126 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0521 16:05:45.780373 1 namespace_controller.go:185] Namespace has been deleted projected-7190
I0521 16:05:45.814364 1 namespace_controller.go:185] Namespace has been deleted container-probe-5211
I0521 16:06:00.835426 1 namespace_controller.go:185] Namespace has been deleted services-7725
E0521 16:06:02.235519 1 tokens_controller.go:261] error synchronizing serviceaccount var-expansion-9483/default: secrets "default-token-rrkd6" is forbidden: unable to create new content in namespace var-expansion-9483 because it is being terminated
E0521 16:06:06.078836 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0521 16:06:06.856345 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0521 16:06:07.336570 1 namespace_controller.go:185] Namespace has been deleted var-expansion-9483
E0521 16:06:11.839556 1 tokens_controller.go:261] error synchronizing serviceaccount configmap-6611/default: secrets "default-token-gpmsc" is forbidden: unable to create new content in namespace configmap-6611 because it is being terminated
E0521 16:06:12.848905 1 tokens_controller.go:261] error synchronizing serviceaccount container-probe-327/default: secrets "default-token-8dql4" is forbidden: unable to create new content in namespace container-probe-327 because it is being terminated
E0521 16:06:13.360212 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0521 16:06:16.018274 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
E0521 16:06:16.345053 1 tokens_controller.go:261] error synchronizing serviceaccount container-probe-8135/default: secrets "default-token-kc6bt" is forbidden: unable to create new content in namespace container-probe-8135 because it is being terminated
E0521 16:06:17.159211 1 namespace_controller.go:162] deletion of namespace configmap-6611 failed: unexpected items still remain in namespace: configmap-6611 for gvr: /v1, Resource=pods
E0521 16:06:17.339045 1 namespace_controller.go:162] deletion of namespace configmap-6611 failed: unexpected items still remain in namespace: configmap-6611 for gvr: /v1, Resource=pods
E0521 16:06:17.412772 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0521 16:06:17.523665 1 namespace_controller.go:162] deletion of namespace configmap-6611 failed: unexpected items still remain in namespace: configmap-6611 for gvr: /v1, Resource=pods
E0521 16:06:17.714276 1 namespace_controller.go:162] deletion of namespace configmap-6611 failed: unexpected items still remain in namespace: configmap-6611 for gvr: /v1, Resource=pods
E0521 16:06:17.925890 1 namespace_controller.go:162] deletion of namespace configmap-6611 failed: unexpected items still remain in namespace: configmap-6611 for gvr: /v1, Resource=pods
I0521 16:06:18.017009 1 namespace_controller.go:185] Namespace has been deleted container-probe-327
E0521 16:06:18.184661 1 namespace_controller.go:162] deletion of namespace configmap-6611 failed: unexpected items still remain in namespace: configmap-6611 for gvr: /v1, Resource=pods
E0521 16:06:18.525964 1 namespace_controller.go:162] deletion of namespace configmap-6611 failed: unexpected items still remain in namespace: configmap-6611 for gvr: /v1, Resource=pods
E0521 16:06:19.020606 1 namespace_controller.go:162] deletion of namespace configmap-6611 failed: unexpected items still remain in namespace: configmap-6611 for gvr: /v1, Resource=pods
E0521 16:06:19.840065 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0521 16:06:19.846305 1 namespace_controller.go:162] deletion of namespace configmap-6611 failed: unexpected items still remain in namespace: configmap-6611 for gvr: /v1, Resource=pods
I0521 16:06:21.425707 1 namespace_controller.go:185] Namespace has been deleted container-probe-8135
I0521 16:06:22.837296 1 namespace_controller.go:185] Namespace has been deleted gc-3187
E0521 16:06:25.688760 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0521 16:06:25.698822 1 namespace_controller.go:185] Namespace has been deleted crd-watch-5000
I0521 16:06:26.302724 1 namespace_controller.go:185] Namespace has been deleted configmap-6611
E0521 16:06:27.234363 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0521 16:06:30.439682 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0521 16:06:32.822692 1 shared_informer.go:240] Waiting for caches to sync for garbage collector
I0521 16:06:32.822780 1 shared_informer.go:247] Caches are synced for garbage collector
E0521 16:06:32.867259 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0521 16:06:34.551176 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0521 16:06:47.041333 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0521 16:06:52.307432 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0521 16:06:55.492408 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0521 16:07:05.910283 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0521 16:07:07.604628 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0521 16:07:18.824088 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0521 16:07:23.106711 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0521 16:07:28.056239 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0521 16:07:42.416134 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0521 16:07:44.093183 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0521 16:07:52.001055 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0521 16:07:58.513307 1 tokens_controller.go:261] error synchronizing serviceaccount sched-preemption-7782/default: secrets "default-token-xz9mh" is forbidden: unable to create new content in namespace sched-preemption-7782 because it is being terminated
E0521 16:07:58.740435 1 namespace_controller.go:162] deletion of namespace sched-preemption-7782 failed: unexpected items still remain in namespace: sched-preemption-7782 for gvr: /v1, Resource=pods
E0521 16:07:58.929642 1 namespace_controller.go:162] deletion of namespace sched-preemption-7782 failed: unexpected items still remain in namespace: sched-preemption-7782 for gvr: /v1, Resource=pods
E0521 16:07:59.118597 1 namespace_controller.go:162] deletion of namespace sched-preemption-7782 failed: unexpected items still remain in namespace: sched-preemption-7782 for gvr: /v1, Resource=pods
E0521 16:07:59.316197 1 namespace_controller.go:162] deletion of namespace sched-preemption-7782 failed: unexpected items still remain in namespace: sched-preemption-7782 for gvr: /v1, Resource=pods
E0521 16:07:59.541664 1 namespace_controller.go:162] deletion of namespace sched-preemption-7782 failed: unexpected items still remain in namespace: sched-preemption-7782 for gvr: /v1, Resource=pods
E0521 16:07:59.797953 1 namespace_controller.go:162] deletion of namespace sched-preemption-7782 failed: unexpected items still remain in namespace: sched-preemption-7782 for gvr: /v1, Resource=pods
E0521 16:08:00.134521 1 namespace_controller.go:162] deletion of namespace sched-preemption-7782 failed: unexpected items still remain in namespace: sched-preemption-7782 for gvr: /v1, Resource=pods
E0521 16:08:00.647173 1 namespace_controller.go:162] deletion of namespace sched-preemption-7782 failed: unexpected items still remain in namespace: sched-preemption-7782 for gvr: /v1, Resource=pods
E0521 16:08:01.324724 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0521 16:08:01.474021 1 namespace_controller.go:162] deletion of namespace sched-preemption-7782 failed: unexpected items still remain in namespace: sched-preemption-7782 for gvr: /v1, Resource=pods
E0521 16:08:02.933188 1 namespace_controller.go:162] deletion of namespace sched-preemption-7782 failed: unexpected items still remain in namespace: sched-preemption-7782 for gvr: /v1, Resource=pods
E0521 16:08:04.577786 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0521 16:08:05.676150 1 namespace_controller.go:162] deletion of namespace sched-preemption-7782 failed: unexpected items still remain in namespace: sched-preemption-7782 for gvr: /v1, Resource=pods
E0521 16:08:06.822855 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0521 16:08:06.948027 1 namespace_controller.go:162] deletion of namespace sched-pred-6469 failed: unexpected items still remain in namespace: sched-pred-6469 for gvr: /v1, Resource=pods
E0521 16:08:07.127858 1 namespace_controller.go:162] deletion of namespace sched-pred-6469 failed: unexpected items still remain in namespace: sched-pred-6469 for gvr: /v1, Resource=pods
E0521 16:08:07.320742 1 namespace_controller.go:162] deletion of namespace sched-pred-6469 failed: unexpected items still remain in namespace: sched-pred-6469 for gvr: /v1, Resource=pods
E0521 16:08:07.536858 1 namespace_controller.go:162] deletion of namespace sched-pred-6469 failed: unexpected items still remain in namespace: sched-pred-6469 for gvr: /v1, Resource=pods
E0521 16:08:07.769078 1 namespace_controller.go:162] deletion of namespace sched-pred-6469 failed:
unexpected items still remain in namespace: sched-pred-6469 for gvr: /v1, Resource=pods\nE0521 16:08:08.035609 1 namespace_controller.go:162] deletion of namespace sched-pred-6469 failed: unexpected items still remain in namespace: sched-pred-6469 for gvr: /v1, Resource=pods\nE0521 16:08:08.382952 1 namespace_controller.go:162] deletion of namespace sched-pred-6469 failed: unexpected items still remain in namespace: sched-pred-6469 for gvr: /v1, Resource=pods\nE0521 16:08:08.792304 1 tokens_controller.go:261] error synchronizing serviceaccount nsdeletetest-8778/default: secrets \"default-token-spfw2\" is forbidden: unable to create new content in namespace nsdeletetest-8778 because it is being terminated\nE0521 16:08:08.864950 1 namespace_controller.go:162] deletion of namespace sched-pred-6469 failed: unexpected items still remain in namespace: sched-pred-6469 for gvr: /v1, Resource=pods\nI0521 16:08:14.690088 1 namespace_controller.go:185] Namespace has been deleted sched-pred-6469\nI0521 16:08:15.977745 1 namespace_controller.go:185] Namespace has been deleted sched-preemption-7782\nE0521 16:08:18.088191 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI0521 16:08:19.135824 1 namespace_controller.go:185] Namespace has been deleted nsdeletetest-8778\nE0521 16:08:19.835106 1 tokens_controller.go:261] error synchronizing serviceaccount namespaces-6694/default: secrets \"default-token-sqd6x\" is forbidden: unable to create new content in namespace namespaces-6694 because it is being terminated\nE0521 16:08:23.412989 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI0521 16:08:24.986534 1 namespace_controller.go:185] Namespace has been deleted 
nsdeletetest-5989\nI0521 16:08:24.987635 1 namespace_controller.go:185] Namespace has been deleted namespaces-6694\nI0521 16:08:25.461311 1 event.go:291] \"Event occurred\" object=\"emptydir-wrapper-9870/wrapped-volume-race-5543d2ea-98d2-47b1-8dc1-d9b75ca43e56\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: wrapped-volume-race-5543d2ea-98d2-47b1-8dc1-d9b75ca43e56-hr2q4\"\nI0521 16:08:25.471493 1 event.go:291] \"Event occurred\" object=\"emptydir-wrapper-9870/wrapped-volume-race-5543d2ea-98d2-47b1-8dc1-d9b75ca43e56\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: wrapped-volume-race-5543d2ea-98d2-47b1-8dc1-d9b75ca43e56-bvmph\"\nI0521 16:08:25.471579 1 event.go:291] \"Event occurred\" object=\"emptydir-wrapper-9870/wrapped-volume-race-5543d2ea-98d2-47b1-8dc1-d9b75ca43e56\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: wrapped-volume-race-5543d2ea-98d2-47b1-8dc1-d9b75ca43e56-85mw5\"\nI0521 16:08:25.480141 1 event.go:291] \"Event occurred\" object=\"emptydir-wrapper-9870/wrapped-volume-race-5543d2ea-98d2-47b1-8dc1-d9b75ca43e56\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: wrapped-volume-race-5543d2ea-98d2-47b1-8dc1-d9b75ca43e56-j2cwc\"\nI0521 16:08:25.480391 1 event.go:291] \"Event occurred\" object=\"emptydir-wrapper-9870/wrapped-volume-race-5543d2ea-98d2-47b1-8dc1-d9b75ca43e56\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: wrapped-volume-race-5543d2ea-98d2-47b1-8dc1-d9b75ca43e56-zvkhn\"\nE0521 16:08:25.887496 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested 
resource\nI0521 16:08:28.242468 1 namespace_controller.go:185] Namespace has been deleted sched-pred-5005\nE0521 16:08:30.237866 1 tokens_controller.go:261] error synchronizing serviceaccount sched-pred-9610/default: secrets \"default-token-2h5p9\" is forbidden: unable to create new content in namespace sched-pred-9610 because it is being terminated\nI0521 16:08:35.381643 1 namespace_controller.go:185] Namespace has been deleted sched-pred-9610\nE0521 16:08:40.395893 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0521 16:08:43.425098 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI0521 16:08:44.052239 1 event.go:291] \"Event occurred\" object=\"emptydir-wrapper-9870/wrapped-volume-race-1da67ff7-6fa2-4132-989e-155511657bff\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: wrapped-volume-race-1da67ff7-6fa2-4132-989e-155511657bff-672lg\"\nI0521 16:08:44.060626 1 event.go:291] \"Event occurred\" object=\"emptydir-wrapper-9870/wrapped-volume-race-1da67ff7-6fa2-4132-989e-155511657bff\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: wrapped-volume-race-1da67ff7-6fa2-4132-989e-155511657bff-qr4k2\"\nI0521 16:08:44.067400 1 event.go:291] \"Event occurred\" object=\"emptydir-wrapper-9870/wrapped-volume-race-1da67ff7-6fa2-4132-989e-155511657bff\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: wrapped-volume-race-1da67ff7-6fa2-4132-989e-155511657bff-n5f69\"\nI0521 16:08:44.086260 1 event.go:291] \"Event occurred\" 
object=\"emptydir-wrapper-9870/wrapped-volume-race-1da67ff7-6fa2-4132-989e-155511657bff\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: wrapped-volume-race-1da67ff7-6fa2-4132-989e-155511657bff-qkdbd\"\nI0521 16:08:44.092197 1 event.go:291] \"Event occurred\" object=\"emptydir-wrapper-9870/wrapped-volume-race-1da67ff7-6fa2-4132-989e-155511657bff\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: wrapped-volume-race-1da67ff7-6fa2-4132-989e-155511657bff-xwz5d\"\nE0521 16:08:50.776438 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0521 16:09:00.338878 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI0521 16:09:03.871348 1 event.go:291] \"Event occurred\" object=\"emptydir-wrapper-9870/wrapped-volume-race-8be6ce8f-023a-47e6-8d07-e94c415deaeb\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: wrapped-volume-race-8be6ce8f-023a-47e6-8d07-e94c415deaeb-rk2fw\"\nI0521 16:09:03.881422 1 event.go:291] \"Event occurred\" object=\"emptydir-wrapper-9870/wrapped-volume-race-8be6ce8f-023a-47e6-8d07-e94c415deaeb\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: wrapped-volume-race-8be6ce8f-023a-47e6-8d07-e94c415deaeb-mncc8\"\nI0521 16:09:03.882299 1 event.go:291] \"Event occurred\" object=\"emptydir-wrapper-9870/wrapped-volume-race-8be6ce8f-023a-47e6-8d07-e94c415deaeb\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: 
wrapped-volume-race-8be6ce8f-023a-47e6-8d07-e94c415deaeb-ggd2d\"\nI0521 16:09:03.891297 1 event.go:291] \"Event occurred\" object=\"emptydir-wrapper-9870/wrapped-volume-race-8be6ce8f-023a-47e6-8d07-e94c415deaeb\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: wrapped-volume-race-8be6ce8f-023a-47e6-8d07-e94c415deaeb-nbg4k\"\nI0521 16:09:03.892094 1 event.go:291] \"Event occurred\" object=\"emptydir-wrapper-9870/wrapped-volume-race-8be6ce8f-023a-47e6-8d07-e94c415deaeb\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: wrapped-volume-race-8be6ce8f-023a-47e6-8d07-e94c415deaeb-xmkzx\"\nE0521 16:09:11.523742 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0521 16:09:15.663112 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0521 16:09:18.112689 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0521 16:09:18.375989 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI0521 16:09:30.923494 1 event.go:291] \"Event occurred\" object=\"daemonsets-5822/daemon-set\" kind=\"DaemonSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: daemon-set-j88c5\"\nI0521 16:09:32.944085 1 event.go:291] \"Event occurred\" object=\"daemonsets-5822/daemon-set\" 
kind=\"DaemonSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"Deleted pod: daemon-set-j88c5\"\nE0521 16:09:33.067386 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0521 16:09:35.870527 1 tokens_controller.go:261] error synchronizing serviceaccount emptydir-wrapper-9870/default: secrets \"default-token-f2zhj\" is forbidden: unable to create new content in namespace emptydir-wrapper-9870 because it is being terminated\nE0521 16:09:37.517889 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI0521 16:09:40.209261 1 event.go:291] \"Event occurred\" object=\"daemonsets-5822/daemon-set\" kind=\"DaemonSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: daemon-set-q2pnv\"\nI0521 16:09:41.695844 1 namespace_controller.go:185] Namespace has been deleted emptydir-wrapper-9870\nE0521 16:09:47.803140 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0521 16:09:49.419426 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0521 16:09:49.494997 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI0521 16:09:50.325871 1 event.go:291] \"Event occurred\" object=\"daemonsets-7708/daemon-set\" 
kind=\"DaemonSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: daemon-set-hcfww\"\nI0521 16:09:50.330197 1 event.go:291] \"Event occurred\" object=\"daemonsets-7708/daemon-set\" kind=\"DaemonSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: daemon-set-p7qkm\"\nE0521 16:09:51.432557 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI0521 16:09:52.349415 1 event.go:291] \"Event occurred\" object=\"daemonsets-7708/daemon-set\" kind=\"DaemonSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"Deleted pod: daemon-set-hcfww\"\nE0521 16:09:55.404900 1 tokens_controller.go:261] error synchronizing serviceaccount daemonsets-5822/default: secrets \"default-token-5qjb6\" is forbidden: unable to create new content in namespace daemonsets-5822 because it is being terminated\nI0521 16:10:00.208695 1 event.go:291] \"Event occurred\" object=\"daemonsets-7708/daemon-set\" kind=\"DaemonSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: daemon-set-srwks\"\nI0521 16:10:00.375932 1 event.go:291] \"Event occurred\" object=\"daemonsets-7708/daemon-set\" kind=\"DaemonSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"Deleted pod: daemon-set-srwks\"\nE0521 16:10:00.388696 1 daemon_controller.go:320] daemonsets-7708/daemon-set failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"daemon-set\", GenerateName:\"\", Namespace:\"daemonsets-7708\", SelfLink:\"/apis/apps/v1/namespaces/daemonsets-7708/daemonsets/daemon-set\", UID:\"f2c74e43-ad17-4f2a-b496-23d42ffa36ba\", ResourceVersion:\"32039\", Generation:3, 
CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63757210190, loc:(*time.Location)(0x6a53ca0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{\"deprecated.daemonset.template.generation\":\"3\"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:\"\", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"e2e.test\", Operation:\"Update\", APIVersion:\"apps/v1\", Time:(*v1.Time)(0xc003275e00), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc003275e20)}, v1.ManagedFieldsEntry{Manager:\"kube-controller-manager\", Operation:\"Update\", APIVersion:\"apps/v1\", Time:(*v1.Time)(0xc003275e40), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc003275e60)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc003275e80), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:\"\", GenerateName:\"\", Namespace:\"\", SelfLink:\"\", UID:\"\", ResourceVersion:\"\", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"daemonset-name\":\"daemon-set\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:\"\", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume(nil), InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:\"app\", Image:\"docker.io/library/httpd:2.4.38-alpine\", Command:[]string(nil), Args:[]string(nil), WorkingDir:\"\", Ports:[]v1.ContainerPort{v1.ContainerPort{Name:\"\", HostPort:0, ContainerPort:9376, Protocol:\"TCP\", HostIP:\"\"}}, EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount(nil), 
VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:\"/dev/termination-log\", TerminationMessagePolicy:\"File\", ImagePullPolicy:\"IfNotPresent\", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:\"Always\", TerminationGracePeriodSeconds:(*int64)(0xc0033231b8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:\"ClusterFirst\", NodeSelector:map[string]string(nil), ServiceAccountName:\"\", DeprecatedServiceAccount:\"\", AutomountServiceAccountToken:(*bool)(nil), NodeName:\"\", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0010d3500), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:\"\", Subdomain:\"\", Affinity:(*v1.Affinity)(nil), SchedulerName:\"default-scheduler\", Tolerations:[]v1.Toleration(nil), HostAliases:[]v1.HostAlias(nil), PriorityClassName:\"\", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:\"RollingUpdate\", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc0006d2920)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc0033231cc)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:2, NumberMisscheduled:0, DesiredNumberScheduled:2, NumberReady:1, ObservedGeneration:2, UpdatedNumberScheduled:1, NumberAvailable:1, NumberUnavailable:1, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps \"daemon-set\": the object has been modified; 
please apply your changes to the latest version and try again\nI0521 16:10:00.494883 1 namespace_controller.go:185] Namespace has been deleted daemonsets-5822\nE0521 16:10:03.430657 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI0521 16:10:10.208343 1 event.go:291] \"Event occurred\" object=\"daemonsets-7708/daemon-set\" kind=\"DaemonSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: daemon-set-7kt9j\"\nE0521 16:10:18.386511 1 tokens_controller.go:261] error synchronizing serviceaccount nsdeletetest-5489/default: secrets \"default-token-t7m54\" is forbidden: unable to create new content in namespace nsdeletetest-5489 because it is being terminated\nE0521 16:10:19.921578 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0521 16:10:21.103640 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI0521 16:10:23.525134 1 namespace_controller.go:185] Namespace has been deleted nsdeletetest-5489\nI0521 16:10:23.525692 1 namespace_controller.go:185] Namespace has been deleted daemonsets-7708\nE0521 16:10:23.642021 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0521 16:10:24.465939 1 tokens_controller.go:261] error synchronizing serviceaccount namespaces-1242/default: secrets \"default-token-vdd9h\" is forbidden: unable to create new content in namespace namespaces-1242 because it 
is being terminated\nE0521 16:10:24.471456 1 tokens_controller.go:261] error synchronizing serviceaccount nsdeletetest-7217/default: secrets \"default-token-lw749\" is forbidden: unable to create new content in namespace nsdeletetest-7217 because it is being terminated\nE0521 16:10:28.757948 1 namespace_controller.go:162] deletion of namespace sched-pred-4049 failed: unexpected items still remain in namespace: sched-pred-4049 for gvr: /v1, Resource=pods\nE0521 16:10:28.936650 1 namespace_controller.go:162] deletion of namespace sched-pred-4049 failed: unexpected items still remain in namespace: sched-pred-4049 for gvr: /v1, Resource=pods\nE0521 16:10:29.125997 1 namespace_controller.go:162] deletion of namespace sched-pred-4049 failed: unexpected items still remain in namespace: sched-pred-4049 for gvr: /v1, Resource=pods\nE0521 16:10:29.314997 1 namespace_controller.go:162] deletion of namespace sched-pred-4049 failed: unexpected items still remain in namespace: sched-pred-4049 for gvr: /v1, Resource=pods\nE0521 16:10:29.529711 1 namespace_controller.go:162] deletion of namespace sched-pred-4049 failed: unexpected items still remain in namespace: sched-pred-4049 for gvr: /v1, Resource=pods\nI0521 16:10:29.569617 1 namespace_controller.go:185] Namespace has been deleted namespaces-1242\nI0521 16:10:29.576098 1 namespace_controller.go:185] Namespace has been deleted nsdeletetest-7217\nE0521 16:10:29.787152 1 namespace_controller.go:162] deletion of namespace sched-pred-4049 failed: unexpected items still remain in namespace: sched-pred-4049 for gvr: /v1, Resource=pods\nE0521 16:10:30.128690 1 namespace_controller.go:162] deletion of namespace sched-pred-4049 failed: unexpected items still remain in namespace: sched-pred-4049 for gvr: /v1, Resource=pods\nE0521 16:10:32.562590 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the 
requested resource\nI0521 16:10:35.627826 1 namespace_controller.go:185] Namespace has been deleted sched-pred-4049\nE0521 16:10:38.153199 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0521 16:10:39.348131 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0521 16:10:47.739793 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0521 16:10:53.995591 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0521 16:11:07.701394 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0521 16:11:08.578951 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0521 16:11:12.217596 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0521 16:11:22.283406 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested 
resource\nE0521 16:11:23.019603 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0521 16:11:28.561911 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI0521 16:11:41.933909 1 event.go:291] \"Event occurred\" object=\"daemonsets-6017/daemon-set\" kind=\"DaemonSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: daemon-set-mvg2h\"\nI0521 16:11:41.938256 1 event.go:291] \"Event occurred\" object=\"daemonsets-6017/daemon-set\" kind=\"DaemonSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: daemon-set-68gfc\"\nI0521 16:11:43.951518 1 event.go:291] \"Event occurred\" object=\"daemonsets-6017/daemon-set\" kind=\"DaemonSet\" apiVersion=\"apps/v1\" type=\"Warning\" reason=\"FailedDaemonPod\" message=\"Found failed daemon pod daemonsets-6017/daemon-set-68gfc on node kali-worker2, will try to kill it\"\nI0521 16:11:43.959517 1 event.go:291] \"Event occurred\" object=\"daemonsets-6017/daemon-set\" kind=\"DaemonSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"Deleted pod: daemon-set-68gfc\"\nI0521 16:11:43.969148 1 event.go:291] \"Event occurred\" object=\"daemonsets-6017/daemon-set\" kind=\"DaemonSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: daemon-set-772r4\"\nE0521 16:11:45.635537 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0521 16:11:46.916819 1 tokens_controller.go:261] error synchronizing serviceaccount 
sched-preemption-9369/default: secrets \"default-token-4jzj9\" is forbidden: unable to create new content in namespace sched-preemption-9369 because it is being terminated\nE0521 16:11:46.959398 1 namespace_controller.go:162] deletion of namespace sched-preemption-9369 failed: unexpected items still remain in namespace: sched-preemption-9369 for gvr: /v1, Resource=pods\nE0521 16:11:47.132848 1 namespace_controller.go:162] deletion of namespace sched-preemption-9369 failed: unexpected items still remain in namespace: sched-preemption-9369 for gvr: /v1, Resource=pods\nE0521 16:11:47.317222 1 namespace_controller.go:162] deletion of namespace sched-preemption-9369 failed: unexpected items still remain in namespace: sched-preemption-9369 for gvr: /v1, Resource=pods\nE0521 16:11:47.503109 1 namespace_controller.go:162] deletion of namespace sched-preemption-9369 failed: unexpected items still remain in namespace: sched-preemption-9369 for gvr: /v1, Resource=pods\nE0521 16:11:47.721315 1 namespace_controller.go:162] deletion of namespace sched-preemption-9369 failed: unexpected items still remain in namespace: sched-preemption-9369 for gvr: /v1, Resource=pods\nE0521 16:11:47.985446 1 namespace_controller.go:162] deletion of namespace sched-preemption-9369 failed: unexpected items still remain in namespace: sched-preemption-9369 for gvr: /v1, Resource=pods\nE0521 16:11:48.330673 1 namespace_controller.go:162] deletion of namespace sched-preemption-9369 failed: unexpected items still remain in namespace: sched-preemption-9369 for gvr: /v1, Resource=pods\nE0521 16:11:48.829614 1 namespace_controller.go:162] deletion of namespace sched-preemption-9369 failed: unexpected items still remain in namespace: sched-preemption-9369 for gvr: /v1, Resource=pods\nE0521 16:11:49.660834 1 namespace_controller.go:162] deletion of namespace sched-preemption-9369 failed: unexpected items still remain in namespace: sched-preemption-9369 for gvr: /v1, Resource=pods\nI0521 16:11:52.015860 1 
namespace_controller.go:185] Namespace has been deleted namespaces-6112
I0521 16:11:52.020855 1 namespace_controller.go:185] Namespace has been deleted nspatchtest-646a53b1-fe88-4261-8659-175cde7206b9-5471
E0521 16:11:54.363759 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0521 16:11:56.131591 1 namespace_controller.go:185] Namespace has been deleted sched-preemption-9369
E0521 16:11:58.878152 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0521 16:11:58.897419 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0521 16:11:59.642642 1 namespace_controller.go:185] Namespace has been deleted daemonsets-6017
E0521 16:12:02.889537 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0521 16:12:05.967372 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0521 16:12:16.780905 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0521 16:12:31.615874 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0521 16:12:33.716466 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0521 16:12:39.337504 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0521 16:12:47.150673 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0521 16:12:48.051868 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0521 16:12:49.710046 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0521 16:12:55.465539 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0521 16:13:06.946020 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0521 16:13:07.829568 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0521 16:13:34.588744 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0521 16:13:34.766764 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0521 16:13:40.098757 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0521 16:13:42.708180 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0521 16:13:47.597411 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0521 16:13:51.763115 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0521 16:13:54.419402 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0521 16:14:12.675248 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0521 16:14:26.553728 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0521 16:14:32.307986 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0521 16:14:33.398631 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0521 16:14:40.127974 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0521 16:14:44.976642 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0521 16:14:49.378669 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0521 16:14:56.482015 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0521 16:15:06.991043 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0521 16:15:10.280285 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0521 16:15:13.201053 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0521 16:15:16.044848 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0521 16:15:22.173517 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0521 16:15:23.819981 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0521 16:15:30.642061 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0521 16:15:38.818885 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0521 16:15:56.084749 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0521 16:15:56.252906 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0521 16:16:04.941290 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0521 16:16:08.049270 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0521 16:16:10.825871 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0521 16:16:11.719269 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0521 16:16:26.495807 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0521 16:16:31.526253 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0521 16:16:43.044873 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0521 16:16:52.040566 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0521 16:16:53.596537 1 event.go:291] "Event occurred" object="daemonsets-7934/daemon-set" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: daemon-set-gfb44"
I0521 16:16:53.601331 1 event.go:291] "Event occurred" object="daemonsets-7934/daemon-set" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: daemon-set-57slb"
E0521 16:16:53.974086 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0521 16:16:54.216785 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0521 16:16:55.629987 1 event.go:291] "Event occurred" object="daemonsets-7934/daemon-set" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: daemon-set-gfb44"
E0521 16:16:58.178217 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0521 16:16:58.768541 1 namespace_controller.go:162] deletion of namespace sched-pred-9957 failed: unexpected items still remain in namespace: sched-pred-9957 for gvr: /v1, Resource=pods
E0521 16:16:58.949281 1 namespace_controller.go:162] deletion of namespace sched-pred-9957 failed: unexpected items still remain in namespace: sched-pred-9957 for gvr: /v1, Resource=pods
I0521 16:16:59.039355 1 event.go:291] "Event occurred" object="daemonsets-7934/daemon-set" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: daemon-set-jwkdz"
E0521 16:16:59.135682 1 namespace_controller.go:162] deletion of namespace sched-pred-9957 failed: unexpected items still remain in namespace: sched-pred-9957 for gvr: /v1, Resource=pods
E0521 16:16:59.335111 1 namespace_controller.go:162] deletion of namespace sched-pred-9957 failed: unexpected items still remain in namespace: sched-pred-9957 for gvr: /v1, Resource=pods
E0521 16:16:59.552669 1 namespace_controller.go:162] deletion of namespace sched-pred-9957 failed: unexpected items still remain in namespace: sched-pred-9957 for gvr: /v1, Resource=pods
E0521 16:16:59.810842 1 namespace_controller.go:162] deletion of namespace sched-pred-9957 failed: unexpected items still remain in namespace: sched-pred-9957 for gvr: /v1, Resource=pods
I0521 16:17:00.053157 1 event.go:291] "Event occurred" object="daemonsets-7934/daemon-set" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: daemon-set-57slb"
E0521 16:17:00.145407 1 namespace_controller.go:162] deletion of namespace sched-pred-9957 failed: unexpected items still remain in namespace: sched-pred-9957 for gvr: /v1, Resource=pods
E0521 16:17:00.645757 1 namespace_controller.go:162] deletion of namespace sched-pred-9957 failed: unexpected items still remain in namespace: sched-pred-9957 for gvr: /v1, Resource=pods
E0521 16:17:01.465407 1 namespace_controller.go:162] deletion of namespace sched-pred-9957 failed: unexpected items still remain in namespace: sched-pred-9957 for gvr: /v1, Resource=pods
E0521 16:17:02.458592 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0521 16:17:02.908015 1 namespace_controller.go:162] deletion of namespace sched-pred-9957 failed: unexpected items still remain in namespace: sched-pred-9957 for gvr: /v1, Resource=pods
E0521 16:17:05.648488 1 namespace_controller.go:162] deletion of namespace sched-pred-9957 failed: unexpected items still remain in namespace: sched-pred-9957 for gvr: /v1, Resource=pods
E0521 16:17:10.083859 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0521 16:17:10.417336 1 event.go:291] "Event occurred" object="daemonsets-7934/daemon-set" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: daemon-set-jhxrx"
E0521 16:17:14.662605 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0521 16:17:15.954723 1 namespace_controller.go:185] Namespace has been deleted sched-pred-9957
E0521 16:17:22.460494 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0521 16:17:25.502645 1 tokens_controller.go:261] error synchronizing serviceaccount daemonsets-7934/default: secrets "default-token-pdrjs" is forbidden: unable to create new content in namespace daemonsets-7934 because it is being terminated
I0521 16:17:30.767686 1 namespace_controller.go:185] Namespace has been deleted daemonsets-7934
E0521 16:17:31.924744 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0521 16:17:40.234574 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0521 16:17:44.807590 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0521 16:17:46.486837 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0521 16:17:59.203671 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0521 16:18:01.264215 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0521 16:18:04.073028 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0521 16:18:13.187469 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0521 16:18:22.227564 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0521 16:18:22.661046 1 event.go:291] "Event occurred" object="sched-preemption-path-1776/rs-pod1" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: rs-pod1-k97xl"
I0521 16:18:26.672127 1 event.go:291] "Event occurred" object="sched-preemption-path-1776/rs-pod2" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: rs-pod2-9zxf9"
I0521 16:18:28.684408 1 event.go:291] "Event occurred" object="sched-preemption-path-1776/rs-pod3" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: rs-pod3-57m4m"
I0521 16:18:30.706901 1 event.go:291] "Event occurred" object="sched-preemption-path-1776/rs-pod2" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: rs-pod2-dn8qt"
I0521 16:18:30.712124 1 event.go:291] "Event occurred" object="sched-preemption-path-1776/rs-pod1" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: rs-pod1-q7zrn"
E0521 16:18:33.017455 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0521 16:18:38.271659 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0521 16:18:41.862147 1 event.go:291] "Event occurred" object="daemonsets-4111/daemon-set" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: daemon-set-rbtlv"
I0521 16:18:41.866220 1 event.go:291] "Event occurred" object="daemonsets-4111/daemon-set" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: daemon-set-h2thn"
E0521 16:18:44.668842 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0521 16:18:46.989323 1 event.go:291] "Event occurred" object="daemonsets-4111/daemon-set" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: daemon-set-dvthx"
E0521 16:18:47.045647 1 namespace_controller.go:162] deletion of namespace sched-preemption-path-1776 failed: unexpected items still remain in namespace: sched-preemption-path-1776 for gvr: /v1, Resource=pods
E0521 16:18:47.224569 1 namespace_controller.go:162] deletion of namespace sched-preemption-path-1776 failed: unexpected items still remain in namespace: sched-preemption-path-1776 for gvr: /v1, Resource=pods
E0521 16:18:47.415706 1 namespace_controller.go:162] deletion of namespace sched-preemption-path-1776 failed: unexpected items still remain in namespace: sched-preemption-path-1776 for gvr: /v1, Resource=pods
E0521 16:18:47.614511 1 namespace_controller.go:162] deletion of namespace sched-preemption-path-1776 failed: unexpected items still remain in namespace: sched-preemption-path-1776 for gvr: /v1, Resource=pods
E0521 16:18:47.839142 1 namespace_controller.go:162] deletion of namespace sched-preemption-path-1776 failed: unexpected items still remain in namespace: sched-preemption-path-1776 for gvr: /v1, Resource=pods
E0521 16:18:48.097096 1 namespace_controller.go:162] deletion of namespace sched-preemption-path-1776 failed: unexpected items still remain in namespace: sched-preemption-path-1776 for gvr: /v1, Resource=pods
E0521 16:18:48.183815 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0521 16:18:48.439625 1 namespace_controller.go:162] deletion of namespace sched-preemption-path-1776 failed: unexpected items still remain in namespace: sched-preemption-path-1776 for gvr: /v1, Resource=pods
E0521 16:18:48.938535 1 namespace_controller.go:162] deletion of namespace sched-preemption-path-1776 failed: unexpected items still remain in namespace: sched-preemption-path-1776 for gvr: /v1, Resource=pods
E0521 16:18:49.761179 1 namespace_controller.go:162] deletion of namespace sched-preemption-path-1776 failed: unexpected items still remain in namespace: sched-preemption-path-1776 for gvr: /v1, Resource=pods
E0521 16:18:51.220451 1 namespace_controller.go:162] deletion of namespace sched-preemption-path-1776 failed: unexpected items still remain in namespace: sched-preemption-path-1776 for gvr: /v1, Resource=pods
I0521 16:18:51.925315 1 namespace_controller.go:185] Namespace has been deleted sched-preemption-9161
E0521 16:18:53.959754 1 namespace_controller.go:162] deletion of namespace sched-preemption-path-1776 failed: unexpected items still remain in namespace: sched-preemption-path-1776 for gvr: /v1, Resource=pods
E0521 16:18:55.727114 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0521 16:18:59.263250 1 namespace_controller.go:162] deletion of namespace sched-preemption-path-1776 failed: unexpected items still remain in namespace: sched-preemption-path-1776 for gvr: /v1, Resource=pods
E0521 16:19:01.619912 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0521 16:19:05.518206 1 tokens_controller.go:261] error synchronizing serviceaccount daemonsets-4111/default: secrets "default-token-29xsd" is forbidden: unable to create new content in namespace daemonsets-4111 because it is being terminated
I0521 16:19:10.755034 1 namespace_controller.go:185] Namespace has been deleted daemonsets-4111
E0521 16:19:14.303335 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0521 16:19:14.547021 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0521 16:19:14.642623 1 namespace_controller.go:185] Namespace has been deleted sched-preemption-path-1776
E0521 16:19:15.909698 1 tokens_controller.go:261] error synchronizing serviceaccount health-2295/default: secrets "default-token-vtskp" is forbidden: unable to create new content in namespace health-2295 because it is being terminated
I0521 16:19:15.985044 1 shared_informer.go:240] Waiting for caches to sync for garbage collector
E0521 16:19:15.993258 1 tokens_controller.go:261] error synchronizing serviceaccount clientset-9347/default: secrets "default-token-xmnhp" is forbidden: unable to create new content in namespace clientset-9347 because it is being terminated
E0521 16:19:15.996675 1 tokens_controller.go:261] error synchronizing serviceaccount tables-7034/default: secrets "default-token-fpj66" is forbidden: unable to create new content in namespace tables-7034 because it is being terminated
I0521 16:19:16.085196 1 shared_informer.go:247] Caches are synced for garbage collector
E0521 16:19:16.257219 1 tokens_controller.go:261] error synchronizing serviceaccount tables-2237/default: secrets "default-token-5fb7t" is forbidden: unable to create new content in namespace tables-2237 because it is being terminated
I0521 16:19:17.562831 1 event.go:291] "Event occurred" object="resourcequota-5873/test-claim" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="WaitForFirstConsumer" message="waiting for first consumer to be created before binding"
I0521 16:19:19.569702 1 event.go:291] "Event occurred" object="resourcequota-5873/test-claim" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="WaitForFirstConsumer" message="waiting for first consumer to be created before binding"
I0521 16:19:21.019383 1 namespace_controller.go:185] Namespace has been deleted health-2295
I0521 16:19:21.033401 1 namespace_controller.go:185] Namespace has been deleted tables-7034
I0521 16:19:21.118765 1 namespace_controller.go:185] Namespace has been deleted clientset-9347
I0521 16:19:21.333175 1 namespace_controller.go:185] Namespace has been deleted tables-2237
I0521 16:19:21.449877 1 namespace_controller.go:185] Namespace has been deleted tables-8969
I0521 16:19:21.598047 1 resource_quota_controller.go:306] Resource quota has been deleted resourcequota-priorityclass-8612/quota-priorityclass
I0521 16:19:21.767565 1 resource_quota_controller.go:306] Resource quota has been deleted resourcequota-priorityclass-6524/quota-priorityclass
E0521 16:19:21.831538 1 tokens_controller.go:261] error synchronizing serviceaccount resourcequota-priorityclass-6524/default: secrets "default-token-9rxnr" is forbidden: unable to create new content in namespace resourcequota-priorityclass-6524 because it is being terminated
E0521 16:19:22.149094 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0521 16:19:22.276437 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
I0521 16:19:22.286931 1 event.go:291] "Event occurred" object="gc-8243/simpletest.rc" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: simpletest.rc-xbxmp"
I0521 16:19:22.290425 1 event.go:291] "Event occurred" object="gc-8243/simpletest.rc" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: simpletest.rc-8d6ns"
I0521 16:19:22.923552 1 resource_quota_controller.go:306] Resource quota has been deleted resourcequota-priorityclass-8636/quota-priorityclass
I0521 16:19:23.108790 1 namespace_controller.go:185] Namespace has been deleted clientset-1564
E0521 16:19:23.116328 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0521 16:19:23.654398 1 tokens_controller.go:261] error synchronizing serviceaccount discovery-2779/default: secrets "default-token-q7w2v" is forbidden: unable to create new content in namespace discovery-2779 because it is being terminated
E0521 16:19:24.503415 1 tokens_controller.go:261] error synchronizing serviceaccount resourcequota-priorityclass-7181/default: secrets "default-token-hftk8" is forbidden: unable to create new content in namespace resourcequota-priorityclass-7181 because it is being terminated
I0521 16:19:24.566482 1 resource_quota_controller.go:306] Resource quota has been deleted resourcequota-priorityclass-7181/quota-priorityclass
E0521 16:19:24.786749 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0521 16:19:26.061149 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0521 16:19:26.601127 1 resource_quota_controller.go:306] Resource quota has been deleted resourcequota-5873/test-quota
I0521 16:19:26.645153 1 namespace_controller.go:185] Namespace has been deleted resourcequota-priorityclass-8612
E0521 16:19:26.653409 1 tokens_controller.go:261] error synchronizing serviceaccount resourcequota-5873/default: secrets "default-token-x44b8" is forbidden: unable to create new content in namespace resourcequota-5873 because it is being terminated
I0521 16:19:26.943217 1 namespace_controller.go:185] Namespace has been deleted resourcequota-priorityclass-6524
I0521 16:19:27.989635 1 namespace_controller.go:185] Namespace has been deleted resourcequota-priorityclass-8636
I0521 16:19:28.019128 1 resource_quota_controller.go:306] Resource quota has been deleted resourcequota-priorityclass-5162/quota-priorityclass
E0521 16:19:28.707608 1 pv_controller.go:1432] error finding provisioning plugin for claim resourcequota-1376/test-claim: storageclass.storage.k8s.io "gold" not found
I0521 16:19:28.707764 1 event.go:291] "Event occurred" object="resourcequota-1376/test-claim" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"gold\" not found"
I0521 16:19:28.850194 1 namespace_controller.go:185] Namespace has been deleted discovery-2779
I0521 16:19:29.668235 1 namespace_controller.go:185] Namespace has been deleted resourcequota-priorityclass-7181
I0521 16:19:30.688385 1 resource_quota_controller.go:306] Resource quota has been deleted resourcequota-priorityclass-4360/quota-priorityclass
E0521 16:19:30.714963 1 pv_controller.go:1432] error finding provisioning plugin for claim resourcequota-1376/test-claim: storageclass.storage.k8s.io "gold" not found
I0521 16:19:30.715054 1 event.go:291] "Event occurred" object="resourcequota-1376/test-claim" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"gold\" not found"
I0521 16:19:31.191785 1 resource_quota_controller.go:306] Resource quota has been deleted resourcequota-priorityclass-6696/quota-priorityclass
E0521 16:19:31.594399 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0521 16:19:31.752445 1 tokens_controller.go:261] error synchronizing serviceaccount scope-selectors-4766/default: secrets "default-token-lhxc9" is forbidden: unable to create new content in namespace scope-selectors-4766 because it is being terminated
I0521 16:19:31.782954 1 namespace_controller.go:185] Namespace has been deleted resourcequota-5873
E0521 16:19:31.799274 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0521 16:19:31.850254 1 resource_quota_controller.go:306] Resource quota has been deleted scope-selectors-4766/quota-not-terminating
I0521 16:19:31.853578 1 resource_quota_controller.go:306] Resource quota has been deleted scope-selectors-4766/quota-terminating
I0521 16:19:31.961662 1 namespace_controller.go:185] Namespace has been deleted gc-9278
I0521 16:19:32.056988 1 resource_quota_controller.go:306] Resource quota has been deleted scope-selectors-908/quota-besteffort
I0521 16:19:32.060561 1 resource_quota_controller.go:306] Resource quota has been deleted scope-selectors-908/quota-not-besteffort
I0521 16:19:33.144258 1 namespace_controller.go:185] Namespace has been deleted resourcequota-priorityclass-5162
I0521 16:19:35.772731 1 namespace_controller.go:185] Namespace has been deleted resourcequota-priorityclass-4360
I0521 16:19:36.221865 1 namespace_controller.go:185] Namespace has been deleted resourcequota-priorityclass-6696
I0521 16:19:36.859930 1 namespace_controller.go:185] Namespace has been deleted scope-selectors-4766
I0521 16:19:37.192637 1 namespace_controller.go:185] Namespace has been deleted scope-selectors-908
E0521 16:19:37.840661 1 tokens_controller.go:261] error synchronizing serviceaccount resourcequota-1376/default: secrets "default-token-n85np" is forbidden: unable to create new content in namespace resourcequota-1376 because it is being terminated
I0521 16:19:37.851481 1 resource_quota_controller.go:306] Resource quota has been deleted resourcequota-1376/test-quota
E0521 16:19:38.207021 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0521 16:19:40.145235 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0521 16:19:40.804615 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for e2e-test-resourcequota-599-crds.resourcequota.example.com
I0521 16:19:40.804716 1 shared_informer.go:240] Waiting for caches to sync for resource quota
I0521 16:19:40.904986 1 shared_informer.go:247] Caches are synced for resource quota
I0521 16:19:42.903195 1 namespace_controller.go:185] Namespace has been deleted resourcequota-1376
I0521 16:19:43.851898 1 resource_quota_controller.go:306] Resource quota has been deleted resourcequota-2673/quota-for-e2e-test-resourcequota-599-crds
I0521 16:19:44.241915 1 namespace_controller.go:185] Namespace has been deleted chunking-1968
I0521 16:19:47.292576 1 shared_informer.go:240] Waiting for caches to sync for garbage collector
I0521 16:19:47.292833 1 shared_informer.go:247] Caches are synced for garbage collector
E0521 16:19:47.742098 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
E0521 16:19:48.505759 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0521 16:19:48.775866 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0521 16:19:50.822065 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0521 16:19:55.762062 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0521 16:19:55.915361 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
E0521 16:19:57.024396 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0521 16:19:57.427130 1 namespace_controller.go:185] Namespace has been deleted gc-6061
E0521 16:20:00.138820 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0521 16:20:00.491600 1 resource_quota_controller.go:306] Resource quota has been deleted resourcequota-2673/test-quota
E0521 16:20:01.874809 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0521 16:20:02.442741 1 event.go:291] "Event occurred" object="gc-8418/simple" kind="CronJob" apiVersion="batch/v1beta1" type="Normal" reason="SuccessfulCreate" message="Created job simple-1621614000"
I0521
16:20:02.448839 1 event.go:291] \"Event occurred\" object=\"gc-8418/simple-1621614000\" kind=\"Job\" apiVersion=\"batch/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: simple-1621614000-hk86z\"\nI0521 16:20:02.451307 1 cronjob_controller.go:190] Unable to update status for gc-8418/simple (rv = 35153): Operation cannot be fulfilled on cronjobs.batch \"simple\": the object has been modified; please apply your changes to the latest version and try again\nE0521 16:20:02.740130 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0521 16:20:03.065358 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0521 16:20:03.591277 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI0521 16:20:05.605053 1 namespace_controller.go:185] Namespace has been deleted resourcequota-2673\nE0521 16:20:09.077180 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI0521 16:20:11.407081 1 shared_informer.go:240] Waiting for caches to sync for resource quota\nI0521 16:20:11.407142 1 shared_informer.go:247] Caches are synced for resource quota \nE0521 16:20:12.933433 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0521 16:20:15.100702 1 reflector.go:127] 
k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0521 16:20:15.978566 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI0521 16:20:17.795056 1 shared_informer.go:240] Waiting for caches to sync for garbage collector\nI0521 16:20:17.795137 1 shared_informer.go:247] Caches are synced for garbage collector \nE0521 16:20:18.183371 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0521 16:20:22.827976 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0521 16:20:23.936377 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0521 16:20:28.969455 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0521 16:20:38.213050 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0521 16:20:45.141142 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server 
could not find the requested resource\nE0521 16:20:45.660985 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0521 16:20:52.168472 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0521 16:21:00.474247 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0521 16:21:06.311083 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0521 16:21:08.927097 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI0521 16:21:09.586354 1 namespace_controller.go:185] Namespace has been deleted gc-8243\nE0521 16:21:10.869491 1 tokens_controller.go:261] error synchronizing serviceaccount gc-8418/default: secrets \"default-token-5pm56\" is forbidden: unable to create new content in namespace gc-8418 because it is being terminated\nE0521 16:21:12.124269 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI0521 16:21:16.003885 1 namespace_controller.go:185] Namespace has been deleted gc-8418\nE0521 16:21:16.385610 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch 
*v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0521 16:21:17.472979 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0521 16:21:21.692940 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0521 16:21:23.778812 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0521 16:21:28.668333 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0521 16:21:29.071074 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0521 16:21:36.635348 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0521 16:21:55.075689 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0521 16:21:55.585332 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested 
resource\nE0521 16:21:56.183392 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0521 16:21:58.103643 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0521 16:22:02.646527 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0521 16:22:03.282029 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0521 16:22:11.711419 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0521 16:22:16.449916 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0521 16:22:16.696813 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0521 16:22:26.015769 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0521 16:22:28.997610 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: 
Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0521 16:22:34.178233 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0521 16:22:43.044853 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0521 16:22:45.735948 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0521 16:22:48.617178 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0521 16:22:57.625057 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0521 16:22:59.067181 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0521 16:22:59.173126 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0521 16:23:00.082179 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find 
the requested resource\nE0521 16:23:07.971028 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0521 16:23:11.337746 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0521 16:23:19.819537 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0521 16:23:25.851531 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0521 16:23:37.684220 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0521 16:23:40.486037 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0521 16:23:42.951355 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0521 16:23:46.870281 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0521 16:23:47.310647 1 reflector.go:127] 
k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0521 16:23:53.524161 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0521 16:23:53.793586 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0521 16:23:59.246066 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0521 16:24:03.498915 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0521 16:24:09.449083 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0521 16:24:17.308665 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0521 16:24:19.252411 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0521 16:24:24.069081 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to 
list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0521 16:24:27.175740 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0521 16:24:40.220946 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0521 16:24:40.484543 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0521 16:24:40.562668 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0521 16:24:47.447076 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0521 16:24:49.649018 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0521 16:24:51.430997 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0521 16:24:53.288871 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0521 16:24:55.869653 1 
reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0521 16:25:12.710946 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0521 16:25:13.009401 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0521 16:25:15.532306 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0521 16:25:15.831980 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0521 16:25:24.600076 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0521 16:25:26.734974 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0521 16:25:29.975480 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0521 16:25:30.975804 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch 
*v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0521 16:25:40.711724 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0521 16:25:44.292392 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0521 16:25:45.021411 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0521 16:25:50.954305 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0521 16:25:51.440815 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0521 16:26:03.873696 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0521 16:26:09.315460 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0521 16:26:13.996124 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested 
resource\nE0521 16:26:19.461455 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0521 16:26:21.052288 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0521 16:26:25.999373 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0521 16:26:27.560612 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0521 16:26:29.804135 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0521 16:26:41.560032 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0521 16:26:43.872896 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0521 16:26:48.354105 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0521 16:27:01.261633 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: 
Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0521 16:27:03.292364 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0521 16:27:04.539907 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0521 16:27:14.418350 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0521 16:27:16.956747 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0521 16:27:18.064556 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0521 16:27:18.582425 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0521 16:27:18.637310 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0521 16:27:24.698135 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find 
the requested resource\nE0521 16:27:30.388573 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0521 16:27:33.806229 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0521 16:27:46.052435 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0521 16:27:59.458384 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0521 16:28:01.232616 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0521 16:28:05.401459 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0521 16:28:07.629843 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0521 16:28:08.262999 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0521 16:28:10.741385 1 reflector.go:127] 
k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0521 16:28:11.797946 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0521 16:28:13.898537 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0521 16:28:18.123042 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0521 16:28:28.914379 1 tokens_controller.go:261] error synchronizing serviceaccount chunking-9511/default: secrets "default-token-dnvrt" is forbidden: unable to create new content in namespace chunking-9511 because it is being terminated
E0521 16:28:31.218934 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0521 16:28:35.448110 1 namespace_controller.go:185] Namespace has been deleted chunking-9511
E0521 16:28:37.247116 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0521 16:28:37.722768 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0521 16:28:42.861138 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0521 16:28:45.697668 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0521 16:28:56.768189 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0521 16:29:00.387034 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0521 16:29:02.072622 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0521 16:29:03.615068 1 tokens_controller.go:261] error synchronizing serviceaccount nslifetest-99-2066/default: secrets "default-token-9ttlz" is forbidden: unable to create new content in namespace nslifetest-99-2066 because it is being terminated
E0521 16:29:03.617340 1 tokens_controller.go:261] error synchronizing serviceaccount nslifetest-21-1146/default: secrets "default-token-w6jcx" is forbidden: unable to create new content in namespace nslifetest-21-1146 because it is being terminated
E0521 16:29:03.679588 1 tokens_controller.go:261] error synchronizing serviceaccount nslifetest-72-8208/default: secrets "default-token-r5965" is forbidden: unable to create new content in namespace nslifetest-72-8208 because it is being terminated
E0521 16:29:03.804506 1 tokens_controller.go:261] error synchronizing serviceaccount nslifetest-86-614/default: secrets "default-token-q99kc" is forbidden: unable to create new content in namespace nslifetest-86-614 because it is being terminated
E0521 16:29:03.829693 1 tokens_controller.go:261] error synchronizing serviceaccount nslifetest-57-5996/default: secrets "default-token-vqgvp" is forbidden: unable to create new content in namespace nslifetest-57-5996 because it is being terminated
E0521 16:29:03.889233 1 tokens_controller.go:261] error synchronizing serviceaccount nslifetest-60-6439/default: serviceaccounts "default" not found
E0521 16:29:05.389728 1 tokens_controller.go:261] error synchronizing serviceaccount nslifetest-89-19/default: secrets "default-token-4bxjd" is forbidden: unable to create new content in namespace nslifetest-89-19 because it is being terminated
E0521 16:29:05.640245 1 tokens_controller.go:261] error synchronizing serviceaccount nslifetest-36-2298/default: serviceaccounts "default" not found
E0521 16:29:05.740205 1 tokens_controller.go:261] error synchronizing serviceaccount nslifetest-64-3362/default: serviceaccounts "default" not found
E0521 16:29:05.989342 1 tokens_controller.go:261] error synchronizing serviceaccount nslifetest-87-3475/default: secrets "default-token-5c4n5" is forbidden: unable to create new content in namespace nslifetest-87-3475 because it is being terminated
E0521 16:29:06.389698 1 tokens_controller.go:261] error synchronizing serviceaccount nslifetest-37-5276/default: secrets "default-token-f5xnp" is forbidden: unable to create new content in namespace nslifetest-37-5276 because it is being terminated
E0521 16:29:06.439714 1 tokens_controller.go:261] error synchronizing serviceaccount nslifetest-84-3592/default: secrets "default-token-4fd25" is forbidden: unable to create new content in namespace nslifetest-84-3592 because it is being terminated
E0521 16:29:06.789900 1 tokens_controller.go:261] error synchronizing serviceaccount nslifetest-32-9154/default: secrets "default-token-gjh4h" is forbidden: unable to create new content in namespace nslifetest-32-9154 because it is being terminated
E0521 16:29:07.139638 1 tokens_controller.go:261] error synchronizing serviceaccount nslifetest-53-9606/default: secrets "default-token-wk784" is forbidden: unable to create new content in namespace nslifetest-53-9606 because it is being terminated
E0521 16:29:07.439255 1 tokens_controller.go:261] error synchronizing serviceaccount nslifetest-88-6490/default: secrets "default-token-hn8c6" is forbidden: unable to create new content in namespace nslifetest-88-6490 because it is being terminated
E0521 16:29:07.899263 1 tokens_controller.go:261] error synchronizing serviceaccount nslifetest-27-7991/default: secrets "default-token-qg62f" is forbidden: unable to create new content in namespace nslifetest-27-7991 because it is being terminated
E0521 16:29:08.089579 1 tokens_controller.go:261] error synchronizing serviceaccount nslifetest-38-6885/default: secrets "default-token-vns47" is forbidden: unable to create new content in namespace nslifetest-38-6885 because it is being terminated
E0521 16:29:08.389733 1 tokens_controller.go:261] error synchronizing serviceaccount nslifetest-40-3697/default: secrets "default-token-sw75r" is forbidden: unable to create new content in namespace nslifetest-40-3697 because it is being terminated
E0521 16:29:08.515780 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0521 16:29:08.589382 1 tokens_controller.go:261] error synchronizing serviceaccount nslifetest-39-3278/default: secrets "default-token-2tnmq" is forbidden: unable to create new content in namespace nslifetest-39-3278 because it is being terminated
E0521 16:29:09.140138 1 tokens_controller.go:261] error synchronizing serviceaccount nslifetest-41-785/default: secrets "default-token-x9w5b" is forbidden: unable to create new content in namespace nslifetest-41-785 because it is being terminated
E0521 16:29:09.297103 1 tokens_controller.go:261] error synchronizing serviceaccount nslifetest-5-3598/default: secrets "default-token-xvdq4" is forbidden: unable to create new content in namespace nslifetest-5-3598 because it is being terminated
E0521 16:29:09.699686 1 tokens_controller.go:261] error synchronizing serviceaccount nslifetest-67-9133/default: secrets "default-token-cbndp" is forbidden: unable to create new content in namespace nslifetest-67-9133 because it is being terminated
E0521 16:29:09.910719 1 tokens_controller.go:261] error synchronizing serviceaccount nslifetest-79-5289/default: secrets "default-token-j2jhs" is forbidden: unable to create new content in namespace nslifetest-79-5289 because it is being terminated
E0521 16:29:09.949658 1 tokens_controller.go:261] error synchronizing serviceaccount nslifetest-78-4885/default: secrets "default-token-xcdnk" is forbidden: unable to create new content in namespace nslifetest-78-4885 because it is being terminated
E0521 16:29:10.297066 1 tokens_controller.go:261] error synchronizing serviceaccount nslifetest-56-9269/default: secrets "default-token-kjbxr" is forbidden: unable to create new content in namespace nslifetest-56-9269 because it is being terminated
E0521 16:29:11.054055 1 tokens_controller.go:261] error synchronizing serviceaccount nslifetest-51-1434/default: secrets "default-token-hg8nq" is forbidden: unable to create new content in namespace nslifetest-51-1434 because it is being terminated
E0521 16:29:11.211766 1 tokens_controller.go:261] error synchronizing serviceaccount nslifetest-42-3904/default: secrets "default-token-v5xs2" is forbidden: unable to create new content in namespace nslifetest-42-3904 because it is being terminated
E0521 16:29:11.886042 1 tokens_controller.go:261] error synchronizing serviceaccount nslifetest-83-2653/default: secrets "default-token-dx7dz" is forbidden: unable to create new content in namespace nslifetest-83-2653 because it is being terminated
E0521 16:29:11.942018 1 tokens_controller.go:261] error synchronizing serviceaccount nslifetest-28-8154/default: secrets "default-token-qq7sv" is forbidden: unable to create new content in namespace nslifetest-28-8154 because it is being terminated
I0521 16:29:12.392413 1 namespace_controller.go:185] Namespace has been deleted nslifetest-71-9782
I0521 16:29:12.392467 1 namespace_controller.go:185] Namespace has been deleted nslifetest-72-8208
I0521 16:29:12.392490 1 namespace_controller.go:185] Namespace has been deleted nslifetest-21-1146
I0521 16:29:12.392506 1 namespace_controller.go:185] Namespace has been deleted nslifetest-58-7833
I0521 16:29:12.392521 1 namespace_controller.go:185] Namespace has been deleted nslifetest-15-1420
I0521 16:29:12.392538 1 namespace_controller.go:185] Namespace has been deleted nslifetest-14-9397
I0521 16:29:12.392564 1 namespace_controller.go:185] Namespace has been deleted nslifetest-70-7436
I0521 16:29:12.392595 1 namespace_controller.go:185] Namespace has been deleted nslifetest-16-1770
I0521 16:29:12.392611 1 namespace_controller.go:185] Namespace has been deleted nslifetest-17-2807
I0521 16:29:12.392641 1 namespace_controller.go:185] Namespace has been deleted nslifetest-99-2066
I0521 16:29:12.392670 1 namespace_controller.go:185] Namespace has been deleted nslifetest-59-1244
I0521 16:29:12.392687 1 namespace_controller.go:185] Namespace has been deleted nslifetest-86-614
I0521 16:29:12.392712 1 namespace_controller.go:185] Namespace has been deleted nslifetest-2-9606
I0521 16:29:12.392737 1 namespace_controller.go:185] Namespace has been deleted nslifetest-57-5996
I0521 16:29:12.392763 1 namespace_controller.go:185] Namespace has been deleted nslifetest-63-5799
I0521 16:29:12.392792 1 namespace_controller.go:185] Namespace has been deleted nslifetest-85-9038
I0521 16:29:12.392823 1 namespace_controller.go:185] Namespace has been deleted nslifetest-98-4976
I0521 16:29:12.392850 1 namespace_controller.go:185] Namespace has been deleted nslifetest-60-6439
I0521 16:29:12.392865 1 namespace_controller.go:185] Namespace has been deleted nslifetest-43-185
I0521 16:29:12.392879 1 namespace_controller.go:185] Namespace has been deleted nslifetest-19-6202
I0521 16:29:12.392898 1 namespace_controller.go:185] Namespace has been deleted nslifetest-95-1584
I0521 16:29:12.392924 1 namespace_controller.go:185] Namespace has been deleted nslifetest-92-7906
I0521 16:29:12.392948 1 namespace_controller.go:185] Namespace has been deleted nslifetest-25-7867
I0521 16:29:12.392962 1 namespace_controller.go:185] Namespace has been deleted nslifetest-20-5035
I0521 16:29:12.392978 1 namespace_controller.go:185] Namespace has been deleted nslifetest-24-6922
I0521 16:29:12.393003 1 namespace_controller.go:185] Namespace has been deleted nslifetest-47-8573
I0521 16:29:12.393030 1 namespace_controller.go:185] Namespace has been deleted nslifetest-75-614
I0521 16:29:12.393047 1 namespace_controller.go:185] Namespace has been deleted nslifetest-96-2432
I0521 16:29:12.393061 1 namespace_controller.go:185] Namespace has been deleted nslifetest-34-2509
I0521 16:29:12.393097 1 namespace_controller.go:185] Namespace has been deleted nslifetest-46-3519
I0521 16:29:12.393122 1 namespace_controller.go:185] Namespace has been deleted nslifetest-94-4323
I0521 16:29:12.393138 1 namespace_controller.go:185] Namespace has been deleted nslifetest-12-6201
I0521 16:29:12.393153 1 namespace_controller.go:185] Namespace has been deleted nslifetest-62-2918
I0521 16:29:12.393187 1 namespace_controller.go:185] Namespace has been deleted nslifetest-3-9049
I0521 16:29:12.393212 1 namespace_controller.go:185] Namespace has been deleted nslifetest-10-3562
I0521 16:29:12.393228 1 namespace_controller.go:185] Namespace has been deleted nslifetest-74-9500
I0521 16:29:12.393252 1 namespace_controller.go:185] Namespace has been deleted nslifetest-76-7022
I0521 16:29:12.393277 1 namespace_controller.go:185] Namespace has been deleted nslifetest-44-6793
I0521 16:29:12.393295 1 namespace_controller.go:185] Namespace has been deleted nslifetest-73-2556
I0521 16:29:12.393310 1 namespace_controller.go:185] Namespace has been deleted nslifetest-23-9589
I0521 16:29:12.393326 1 namespace_controller.go:185] Namespace has been deleted nslifetest-61-5567
I0521 16:29:12.393361 1 namespace_controller.go:185] Namespace has been deleted nslifetest-97-9763
I0521 16:29:12.393384 1 namespace_controller.go:185] Namespace has been deleted nslifetest-1-9842
I0521 16:29:12.393399 1 namespace_controller.go:185] Namespace has been deleted nslifetest-45-8751
I0521 16:29:12.393414 1 namespace_controller.go:185] Namespace has been deleted nslifetest-22-2741
I0521 16:29:12.393436 1 namespace_controller.go:185] Namespace has been deleted nslifetest-6-9377
I0521 16:29:12.393458 1 namespace_controller.go:185] Namespace has been deleted nslifetest-93-7768
I0521 16:29:12.393474 1 namespace_controller.go:185] Namespace has been deleted nslifetest-0-911
I0521 16:29:12.393489 1 namespace_controller.go:185] Namespace has been deleted nslifetest-18-6092
I0521 16:29:12.393506 1 namespace_controller.go:185] Namespace has been deleted nslifetest-48-7800
I0521 16:29:12.393538 1 namespace_controller.go:185] Namespace has been deleted nslifetest-35-7304
I0521 16:29:12.393564 1 namespace_controller.go:185] Namespace has been deleted nslifetest-89-19
I0521 16:29:12.393579 1 namespace_controller.go:185] Namespace has been deleted nslifetest-13-5935
I0521 16:29:12.393594 1 namespace_controller.go:185] Namespace has been deleted nslifetest-30-9848
I0521 16:29:12.393629 1 namespace_controller.go:185] Namespace has been deleted nslifetest-80-9466
I0521 16:29:12.393651 1 namespace_controller.go:185] Namespace has been deleted nslifetest-36-2298
I0521 16:29:12.393666 1 namespace_controller.go:185] Namespace has been deleted nslifetest-81-2307
I0521 16:29:12.393681 1 namespace_controller.go:185] Namespace has been deleted nslifetest-64-3362
I0521 16:29:12.393708 1 namespace_controller.go:185] Namespace has been deleted nslifetest-52-8634
I0521 16:29:12.393735 1 namespace_controller.go:185] Namespace has been deleted nslifetest-87-3475
I0521 16:29:12.428315 1 namespace_controller.go:185] Namespace has been deleted nslifetest-90-6769
I0521 16:29:12.504774 1 namespace_controller.go:185] Namespace has been deleted nslifetest-31-6081
E0521 16:29:12.525454 1 tokens_controller.go:261] error synchronizing serviceaccount nslifetest-69-3072/default: secrets "default-token-lzvb6" is forbidden: unable to create new content in namespace nslifetest-69-3072 because it is being terminated
I0521 16:29:12.549837 1 namespace_controller.go:185] Namespace has been deleted nslifetest-53-9606
I0521 16:29:12.561814 1 namespace_controller.go:185] Namespace has been deleted nslifetest-37-5276
I0521 16:29:12.581433 1 namespace_controller.go:185] Namespace has been deleted nslifetest-84-3592
I0521 16:29:12.637907 1 namespace_controller.go:185] Namespace has been deleted nslifetest-65-4580
I0521 16:29:12.667979 1 namespace_controller.go:185] Namespace has been deleted nslifetest-9-5568
I0521 16:29:12.694321 1 namespace_controller.go:185] Namespace has been deleted nslifetest-32-9154
I0521 16:29:12.703876 1 namespace_controller.go:185] Namespace has been deleted nslifetest-26-5512
I0521 16:29:12.718741 1 namespace_controller.go:185] Namespace has been deleted nslifetest-88-6490
I0521 16:29:14.076031 1 namespace_controller.go:185] Namespace has been deleted nslifetest-54-9777
I0521 16:29:14.158153 1 namespace_controller.go:185] Namespace has been deleted nslifetest-38-6885
E0521 16:29:14.185679 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0521 16:29:14.194689 1 namespace_controller.go:185] Namespace has been deleted nslifetest-40-3697
I0521 16:29:14.209503 1 namespace_controller.go:185] Namespace has been deleted nslifetest-66-6392
I0521 16:29:14.232195 1 namespace_controller.go:185] Namespace has been deleted nslifetest-49-1118
I0521 16:29:14.297379 1 namespace_controller.go:185] Namespace has been deleted nslifetest-29-4522
I0521 16:29:14.317130 1 namespace_controller.go:185] Namespace has been deleted nslifetest-33-1154
I0521 16:29:14.341521 1 namespace_controller.go:185] Namespace has been deleted nslifetest-27-7991
I0521 16:29:14.347223 1 namespace_controller.go:185] Namespace has been deleted nslifetest-55-2514
I0521 16:29:14.366100 1 namespace_controller.go:185] Namespace has been deleted nslifetest-39-3278
I0521 16:29:15.729543 1 namespace_controller.go:185] Namespace has been deleted nslifetest-41-785
I0521 16:29:15.802461 1 namespace_controller.go:185] Namespace has been deleted nslifetest-91-8787
I0521 16:29:15.839104 1 namespace_controller.go:185] Namespace has been deleted nslifetest-11-4814
I0521 16:29:15.855605 1 namespace_controller.go:185] Namespace has been deleted nslifetest-78-4885
I0521 16:29:15.884453 1 namespace_controller.go:185] Namespace has been deleted nslifetest-5-3598
I0521 16:29:15.944982 1 namespace_controller.go:185] Namespace has been deleted nslifetest-67-9133
E0521 16:29:15.969323 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0521 16:29:15.981946 1 namespace_controller.go:185] Namespace has been deleted nslifetest-4-3461
I0521 16:29:15.989900 1 namespace_controller.go:185] Namespace has been deleted nslifetest-56-9269
I0521 16:29:16.007626 1 namespace_controller.go:185] Namespace has been deleted nslifetest-79-5289
I0521 16:29:16.013638 1 namespace_controller.go:185] Namespace has been deleted nslifetest-68-8507
E0521 16:29:16.200364 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0521 16:29:17.380005 1 namespace_controller.go:185] Namespace has been deleted nslifetest-42-3904
I0521 16:29:17.457568 1 namespace_controller.go:185] Namespace has been deleted nslifetest-51-1434
I0521 16:29:17.487098 1 namespace_controller.go:185] Namespace has been deleted nslifetest-83-2653
I0521 16:29:17.495306 1 namespace_controller.go:185] Namespace has been deleted nslifetest-77-4890
I0521 16:29:17.515075 1 namespace_controller.go:185] Namespace has been deleted nslifetest-7-7696
I0521 16:29:17.554990 1 namespace_controller.go:185] Namespace has been deleted nslifetest-69-3072
I0521 16:29:17.563891 1 namespace_controller.go:185] Namespace has been deleted nslifetest-8-8634
I0521 16:29:17.569363 1 namespace_controller.go:185] Namespace has been deleted nslifetest-28-8154
I0521 16:29:17.573726 1 namespace_controller.go:185] Namespace has been deleted nslifetest-82-1845
I0521 16:29:17.576193 1 namespace_controller.go:185] Namespace has been deleted nslifetest-50-7987
I0521 16:29:23.285607 1 namespace_controller.go:185] Namespace has been deleted namespaces-3598
E0521 16:29:30.427672 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0521 16:29:35.552612 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0521 16:29:44.599586 1 tokens_controller.go:261] error synchronizing serviceaccount nslifetest-4-5638/default: secrets "default-token-tnrlv" is forbidden: unable to create new content in namespace nslifetest-4-5638 because it is being terminated
E0521 16:29:44.611521 1 tokens_controller.go:261] error synchronizing serviceaccount nslifetest-13-2856/default: secrets "default-token-tz7q4" is forbidden: unable to create new content in namespace nslifetest-13-2856 because it is being terminated
E0521 16:29:44.622251 1 tokens_controller.go:261] error synchronizing serviceaccount nslifetest-10-3072/default: secrets "default-token-hf6b5" is forbidden: unable to create new content in namespace nslifetest-10-3072 because it is being terminated
E0521 16:29:44.774217 1 tokens_controller.go:261] error synchronizing serviceaccount nslifetest-43-7700/default: secrets "default-token-67hbv" is forbidden: unable to create new content in namespace nslifetest-43-7700 because it is being terminated
E0521 16:29:44.798587 1 tokens_controller.go:261] error synchronizing serviceaccount nslifetest-15-3026/default: secrets "default-token-d4776" is forbidden: unable to create new content in namespace nslifetest-15-3026 because it is being terminated
E0521 16:29:44.873307 1 tokens_controller.go:261] error synchronizing serviceaccount nslifetest-56-7103/default: serviceaccounts "default" not found
E0521 16:29:44.973453 1 tokens_controller.go:261] error synchronizing serviceaccount nslifetest-6-7042/default: serviceaccounts "default" not found
E0521 16:29:45.371643 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0521 16:29:46.574117 1 tokens_controller.go:261] error synchronizing serviceaccount nslifetest-83-9242/default: serviceaccounts "default" not found
E0521 16:29:46.624330 1 tokens_controller.go:261] error synchronizing serviceaccount nslifetest-31-7204/default: serviceaccounts "default" not found
E0521 16:29:47.110192 1 tokens_controller.go:261] error synchronizing serviceaccount nslifetest-86-1073/default: secrets "default-token-6ndg6" is forbidden: unable to create new content in namespace nslifetest-86-1073 because it is being terminated
E0521 16:29:47.399552 1 tokens_controller.go:261] error synchronizing serviceaccount nslifetest-71-9388/default: secrets "default-token-tmcg7" is forbidden: unable to create new content in namespace nslifetest-71-9388 because it is being terminated
E0521 16:29:47.731787 1 tokens_controller.go:261] error synchronizing serviceaccount nslifetest-41-2943/default: secrets "default-token-58dm8" is forbidden: unable to create new content in namespace nslifetest-41-2943 because it is being terminated
E0521 16:29:48.100193 1 tokens_controller.go:261] error synchronizing serviceaccount nslifetest-74-4728/default: secrets "default-token-z76wd" is forbidden: unable to create new content in namespace nslifetest-74-4728 because it is being terminated
E0521 16:29:48.605788 1 tokens_controller.go:261] error synchronizing serviceaccount nslifetest-46-7994/default: secrets "default-token-hg2qm" is forbidden: unable to create new content in namespace nslifetest-46-7994 because it is being terminated
E0521 16:29:48.665574 1 tokens_controller.go:261] error synchronizing serviceaccount nslifetest-68-4929/default: secrets "default-token-p7hjz" is forbidden: unable to create new content in namespace nslifetest-68-4929 because it is being terminated
E0521 16:29:49.071841 1 tokens_controller.go:261] error synchronizing serviceaccount nslifetest-72-691/default: secrets "default-token-zcb9r" is forbidden: unable to create new content in namespace nslifetest-72-691 because it is being terminated
E0521 16:29:49.075927 1 tokens_controller.go:261] error synchronizing serviceaccount nslifetest-76-1891/default: secrets "default-token-sz89v" is forbidden: unable to create new content in namespace nslifetest-76-1891 because it is being terminated
E0521 16:29:49.594399 1 tokens_controller.go:261] error synchronizing serviceaccount nslifetest-92-7359/default: secrets "default-token-snr89" is forbidden: unable to create new content in namespace nslifetest-92-7359 because it is being terminated
E0521 16:29:50.378975 1 tokens_controller.go:261] error synchronizing serviceaccount nslifetest-8-8089/default: secrets "default-token-wzh68" is forbidden: unable to create new content in namespace nslifetest-8-8089 because it is being terminated
E0521 16:29:50.433162 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0521 16:29:50.536623 1 tokens_controller.go:261] error synchronizing serviceaccount nslifetest-90-2556/default: secrets "default-token-s5sdx" is forbidden: unable to create new content in namespace nslifetest-90-2556 because it is being terminated
E0521 16:29:50.917674 1 tokens_controller.go:261] error synchronizing serviceaccount nslifetest-88-5383/default: secrets "default-token-frbvr" is forbidden: unable to create new content in namespace nslifetest-88-5383 because it is being terminated
E0521 16:29:50.991748 1 tokens_controller.go:261] error synchronizing serviceaccount nslifetest-89-2581/default: secrets "default-token-7vswd" is forbidden: unable to create new content in namespace nslifetest-89-2581 because it is being terminated
E0521 16:29:51.131648 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0521 16:29:51.336221 1 tokens_controller.go:261] error synchronizing serviceaccount nslifetest-96-8533/default: secrets "default-token-kgrf5" is forbidden: unable to create new content in namespace nslifetest-96-8533 because it is being terminated
E0521 16:29:51.773133 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0521 16:29:52.099588 1 tokens_controller.go:261] error synchronizing serviceaccount nslifetest-95-5195/default: secrets "default-token-8zxlv" is forbidden: unable to create new content in namespace nslifetest-95-5195 because it is being terminated
E0521 16:29:52.706896 1 tokens_controller.go:261] error synchronizing serviceaccount nslifetest-98-8490/default: secrets "default-token-k4fqf" is forbidden: unable to create new content in namespace nslifetest-98-8490 because it is being terminated
E0521 16:29:53.223981 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0521 16:29:53.449312 1 namespace_controller.go:185] Namespace has been deleted nslifetest-12-2560
I0521 16:29:53.449354 1 namespace_controller.go:185] Namespace has been deleted nslifetest-23-8733
I0521 16:29:53.449373 1 namespace_controller.go:185] Namespace has been deleted nslifetest-1-344
I0521 16:29:53.449388 1 namespace_controller.go:185] Namespace has been deleted nslifetest-0-335
I0521 16:29:53.449402 1 namespace_controller.go:185] Namespace has been deleted nslifetest-13-2856
I0521 16:29:53.449416 1 namespace_controller.go:185] Namespace has been deleted nslifetest-10-3072
I0521 16:29:53.449428 1 namespace_controller.go:185] Namespace has been deleted nslifetest-16-8070
I0521 16:29:53.449443 1 namespace_controller.go:185] Namespace has been deleted nslifetest-36-6754
I0521 16:29:53.449456 1 namespace_controller.go:185] Namespace has been deleted nslifetest-4-5638
I0521 16:29:53.449470 1 namespace_controller.go:185] Namespace has been deleted nslifetest-18-2251
I0521 16:29:53.449486 1 namespace_controller.go:185] Namespace has been deleted nslifetest-15-3026
I0521 16:29:53.449503 1 namespace_controller.go:185] Namespace has been deleted nslifetest-65-1475
I0521 16:29:53.449531 1 namespace_controller.go:185] Namespace has been deleted nslifetest-62-9893
I0521 16:29:53.449545 1 namespace_controller.go:185] Namespace has been deleted nslifetest-43-7700
I0521 16:29:53.449567 1 namespace_controller.go:185] Namespace has been deleted nslifetest-81-3420
I0521 16:29:53.449581 1 namespace_controller.go:185] Namespace has been deleted nslifetest-66-9068
I0521 16:29:53.449604 1 namespace_controller.go:185] Namespace has been deleted nslifetest-6-7042
I0521 16:29:53.449619 1 namespace_controller.go:185] Namespace has been deleted nslifetest-40-5743
I0521 16:29:53.449638 1 namespace_controller.go:185] Namespace has been deleted nslifetest-35-6127
I0521 16:29:53.449651 1 namespace_controller.go:185] Namespace has been deleted nslifetest-56-7103
I0521 16:29:53.449666 1 namespace_controller.go:185] Namespace has been deleted nslifetest-51-8159
I0521 16:29:53.449689 1 namespace_controller.go:185] Namespace has been deleted nslifetest-44-9090
I0521 16:29:53.449710 1 namespace_controller.go:185] Namespace has been deleted nslifetest-50-3641
I0521 16:29:53.449723 1 namespace_controller.go:185] Namespace has been deleted nslifetest-22-8633
I0521 16:29:53.449737 1 namespace_controller.go:185] Namespace has been deleted nslifetest-82-1431
I0521 16:29:53.449750 1 namespace_controller.go:185] Namespace has been deleted nslifetest-60-1345
I0521 16:29:53.449764 1 namespace_controller.go:185] Namespace has been deleted nslifetest-57-9905
I0521 16:29:53.449776 1 namespace_controller.go:185] Namespace has been deleted nslifetest-20-1360
I0521 16:29:53.449790 1 namespace_controller.go:185] Namespace has been deleted nslifetest-54-5154
I0521 16:29:53.449997 1 namespace_controller.go:185] Namespace has been deleted nslifetest-55-8792
I0521 16:29:53.450090 1 namespace_controller.go:185] Namespace has been deleted nslifetest-17-2534
I0521 16:29:53.450104 1 namespace_controller.go:185] Namespace has been deleted nslifetest-49-1011
I0521 16:29:53.450116 1 namespace_controller.go:185] Namespace has been deleted nslifetest-64-8104
I0521 16:29:53.450129 1 namespace_controller.go:185] Namespace has been deleted nslifetest-11-7040
I0521 16:29:53.450144 1 namespace_controller.go:185] Namespace has been deleted nslifetest-48-7384
I0521 16:29:53.450161 1 namespace_controller.go:185] Namespace has been deleted nslifetest-5-3959
I0521 16:29:53.450175 1 namespace_controller.go:185] Namespace has been deleted nslifetest-14-4885
I0521 16:29:53.450199 1 namespace_controller.go:185] Namespace has been deleted nslifetest-52-3664
I0521 16:29:53.450212 1 namespace_controller.go:185] Namespace has been deleted nslifetest-26-2650
I0521 16:29:53.450227 1 namespace_controller.go:185] Namespace has been deleted nslifetest-63-1483
I0521 16:29:53.450240 1 namespace_controller.go:185] Namespace has been deleted nslifetest-2-6791
I0521 16:29:53.450255 1 namespace_controller.go:185] Namespace has been deleted nslifetest-34-6068
I0521 16:29:53.450277 1 namespace_controller.go:185] Namespace has been deleted nslifetest-19-2103
I0521 16:29:53.450290 1 namespace_controller.go:185] Namespace has been deleted nslifetest-24-4001
I0521 16:29:53.450304 1 namespace_controller.go:185] Namespace has been deleted nslifetest-21-3440
I0521 16:29:53.450317 1 namespace_controller.go:185] Namespace has been deleted nslifetest-25-9169
I0521 16:29:53.450331 1 namespace_controller.go:185] Namespace has been deleted nslifetest-61-734
I0521 16:29:53.450345 1 namespace_controller.go:185] Namespace has been deleted nslifetest-58-1363
I0521 16:29:53.450367 1 namespace_controller.go:185] Namespace has been deleted nslifetest-99-5884
I0521 16:29:53.450379 1 namespace_controller.go:185] Namespace has been deleted nslifetest-30-2185
I0521 16:29:53.450401 1 namespace_controller.go:185] Namespace has been deleted nslifetest-83-9242
I0521 16:29:53.450413 1 namespace_controller.go:185] Namespace has been deleted nslifetest-45-3432
I0521 16:29:53.450433 1 namespace_controller.go:185] Namespace has been deleted nslifetest-27-8074
I0521 16:29:53.450446 1 namespace_controller.go:185] Namespace has been deleted nslifetest-84-7004
I0521 16:29:53.450467 1 namespace_controller.go:185] Namespace has been deleted nslifetest-73-5748
I0521 16:29:53.450480 1 namespace_controller.go:185] Namespace has been deleted nslifetest-59-9237
I0521 16:29:53.450495 1 namespace_controller.go:185] Namespace has been deleted nslifetest-53-1098
I0521 16:29:53.450514 1 namespace_controller.go:185] Namespace has been deleted nslifetest-28-6111
I0521 16:29:53.450529 1 namespace_controller.go:185] Namespace has been deleted nslifetest-31-7204
I0521 16:29:53.450541 1 namespace_controller.go:185] Namespace has been deleted nslifetest-85-1742
I0521 16:29:53.505227 1 namespace_controller.go:185] Namespace has been deleted nslifetest-67-6173
I0521 16:29:53.538724 1 namespace_controller.go:185] Namespace has been deleted nslifetest-41-2943
I0521 16:29:53.562937 1 namespace_controller.go:185] Namespace has been deleted nslifetest-74-4728
I0521 16:29:53.567043 1 namespace_controller.go:185] Namespace has been deleted nslifetest-7-653
I0521 16:29:53.569015 1 namespace_controller.go:185] Namespace has been deleted nslifetest-86-1073
I0521 16:29:53.594398 1 namespace_controller.go:185] Namespace has been deleted nslifetest-32-4107
I0521 16:29:53.594458 1 namespace_controller.go:185] Namespace has been deleted nslifetest-71-9388
I0521 16:29:53.618414 1 namespace_controller.go:185] Namespace has been deleted nslifetest-42-2746
I0521 16:29:53.624407 1 namespace_controller.go:185] Namespace has been deleted nslifetest-47-2329
I0521 16:29:53.628832 1 namespace_controller.go:185] Namespace has been deleted nslifetest-75-8001
I0521 16:29:55.150459 1 namespace_controller.go:185] Namespace has been deleted nslifetest-46-7994
I0521 16:29:55.189211 1 namespace_controller.go:185] Namespace has been deleted nslifetest-72-691
I0521 16:29:55.209116 1 namespace_controller.go:185] Namespace has been deleted nslifetest-87-710
I0521 16:29:55.218998 1 namespace_controller.go:185] Namespace has been deleted nslifetest-29-6766
I0521 16:29:55.223531 1 namespace_controller.go:185] Namespace has been deleted nslifetest-68-4929
I0521 16:29:55.244726 1 namespace_controller.go:185] Namespace has been deleted nslifetest-92-7359
I0521 16:29:55.245861 1 namespace_controller.go:185] Namespace has been deleted nslifetest-69-5234
I0521 16:29:55.269638 1 namespace_controller.go:185] Namespace has been deleted nslifetest-76-1891
I0521 16:29:55.272358 1 namespace_controller.go:185] Namespace has been deleted nslifetest-93-3687
I0521 16:29:55.278664 1 namespace_controller.go:185] Namespace has been deleted nslifetest-33-2408
E0521 16:29:55.982428 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0521 16:29:56.797214 1 namespace_controller.go:185] Namespace has been deleted nslifetest-90-2556
I0521 16:29:56.834198 1 namespace_controller.go:185] Namespace has been deleted nslifetest-70-7863
I0521 16:29:56.859900 1 namespace_controller.go:185] Namespace has been deleted nslifetest-9-6392
I0521 16:29:56.873957 1 namespace_controller.go:185] Namespace has been deleted nslifetest-88-5383
I0521 16:29:56.884789 1 namespace_controller.go:185] Namespace has been deleted nslifetest-8-8089
I0521 16:29:56.893528 1 namespace_controller.go:185] Namespace has been deleted nslifetest-89-2581
I0521 16:29:56.900990 1 namespace_controller.go:185] Namespace has been deleted nslifetest-38-2929
I0521 16:29:56.914500 1 namespace_controller.go:185] Namespace has been deleted nslifetest-94-2531
I0521 16:29:56.930069 1 namespace_controller.go:185] Namespace has been deleted nslifetest-91-3195
I0521 16:29:56.933337 1 namespace_controller.go:185] Namespace has been deleted nslifetest-96-8533
E0521 16:29:57.323423 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0521 16:29:58.437086 1 namespace_controller.go:185] Namespace has been deleted nslifetest-37-9910
I0521 16:29:58.466693 1 namespace_controller.go:185] Namespace has been deleted nslifetest-97-3394
I0521 16:29:58.499930 1 namespace_controller.go:185] Namespace has been deleted nslifetest-80-7672
I0521 16:29:58.511100 1 namespace_controller.go:185] Namespace has been deleted nslifetest-98-8490
I0521 16:29:58.523750 1 namespace_controller.go:185] Namespace has been deleted nslifetest-78-9912
I0521 16:29:58.529063 1 namespace_controller.go:185] Namespace has been deleted nslifetest-79-8035
I0521 16:29:58.539057 1 namespace_controller.go:185] Namespace has been deleted nslifetest-77-8594
I0521 16:29:58.545997 1 namespace_controller.go:185] Namespace has been deleted nslifetest-39-1941
I0521 16:29:58.548466 1 namespace_controller.go:185] Namespace has been deleted nslifetest-3-8843
I0521 16:29:58.551392 1 namespace_controller.go:185] Namespace has been deleted nslifetest-95-5195
I0521 16:30:02.242855 1 namespace_controller.go:185] Namespace has been deleted namespaces-4055
I0521 16:30:05.500005 1 event.go:291] "Event occurred" object="disruption-595/rs" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: rs-nks6h"
I0521 16:30:05.502485 1 event.go:291] "Event occurred" object="disruption-595/rs" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: rs-k9hgt"
I0521 16:30:05.503722 1 event.go:291] "Event occurred" object="disruption-595/rs" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: rs-8vcjs"
I0521 16:30:05.505140 1 event.go:291] "Event occurred" object="disruption-645/rs" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: rs-tpdqz"
I0521 16:30:05.507866 1 event.go:291] "Event occurred" object="disruption-595/rs" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: rs-wjhxr"
I0521 16:30:05.507977 1 event.go:291] "Event occurred" object="disruption-595/rs" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: rs-dms2p"
I0521 16:30:05.508006 1 event.go:291] "Event occurred" object="disruption-595/rs" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: rs-28jr4"
I0521 16:30:05.508266 1 event.go:291] "Event occurred" object="disruption-595/rs" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: rs-95bb8"
I0521 16:30:05.508830 1 event.go:291] "Event occurred" object="disruption-645/rs" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: rs-2kkwk"
I0521 16:30:05.509503 1 event.go:291] "Event occurred" object="disruption-645/rs" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: rs-mhn56"
I0521 16:30:05.512307 1 event.go:291] "Event occurred" object="disruption-595/rs" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate"
message=\"Created pod: rs-bnxv5\"\nI0521 16:30:05.512490 1 event.go:291] \"Event occurred\" object=\"disruption-595/rs\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: rs-7f5fn\"\nI0521 16:30:05.512586 1 event.go:291] \"Event occurred\" object=\"disruption-645/rs\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: rs-br5xc\"\nI0521 16:30:05.512614 1 event.go:291] \"Event occurred\" object=\"disruption-595/rs\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: rs-sq2lj\"\nI0521 16:30:05.513233 1 event.go:291] \"Event occurred\" object=\"disruption-645/rs\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: rs-nc97n\"\nI0521 16:30:05.513798 1 event.go:291] \"Event occurred\" object=\"disruption-645/rs\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: rs-sgjms\"\nI0521 16:30:05.513864 1 event.go:291] \"Event occurred\" object=\"disruption-645/rs\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: rs-cgmvq\"\nI0521 16:30:05.521917 1 event.go:291] \"Event occurred\" object=\"disruption-645/rs\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: rs-jmd7g\"\nI0521 16:30:05.522044 1 event.go:291] \"Event occurred\" object=\"disruption-645/rs\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: rs-rljck\"\nI0521 16:30:05.522389 1 event.go:291] \"Event occurred\" object=\"disruption-645/rs\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: rs-lbdwm\"\nI0521 16:30:05.565872 1 event.go:291] \"Event occurred\" 
object=\"deployment-3747/test-orphan-deployment\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled up replica set test-orphan-deployment-dd94f59b7 to 1\"\nI0521 16:30:05.611594 1 event.go:291] \"Event occurred\" object=\"statefulset-7786/ss2\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod ss2-0 in StatefulSet ss2 successful\"\nI0521 16:30:05.699626 1 event.go:291] \"Event occurred\" object=\"deployment-3747/test-orphan-deployment-dd94f59b7\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: test-orphan-deployment-dd94f59b7-9qdk9\"\nI0521 16:30:05.730993 1 event.go:291] \"Event occurred\" object=\"job-2929/all-succeed\" kind=\"Job\" apiVersion=\"batch/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: all-succeed-ng9s4\"\nI0521 16:30:05.734523 1 event.go:291] \"Event occurred\" object=\"job-2929/all-succeed\" kind=\"Job\" apiVersion=\"batch/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: all-succeed-j9tx9\"\nE0521 16:30:07.541548 1 disruption.go:505] Error syncing PodDisruptionBudget disruption-8076/foo, requeuing: Operation cannot be fulfilled on poddisruptionbudgets.policy \"foo\": the object has been modified; please apply your changes to the latest version and try again\nI0521 16:30:07.597238 1 event.go:291] \"Event occurred\" object=\"disruption-69/rs\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: rs-br4tf\"\nI0521 16:30:07.603343 1 event.go:291] \"Event occurred\" object=\"disruption-69/rs\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: rs-vb9p4\"\nI0521 16:30:07.603760 1 event.go:291] \"Event occurred\" object=\"disruption-69/rs\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" 
reason=\"SuccessfulCreate\" message=\"Created pod: rs-646rd\"\nE0521 16:30:11.043655 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI0521 16:30:12.228436 1 event.go:291] \"Event occurred\" object=\"statefulset-5276/datadir-ss-0\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"WaitForFirstConsumer\" message=\"waiting for first consumer to be created before binding\"\nI0521 16:30:12.228455 1 event.go:291] \"Event occurred\" object=\"statefulset-5276/ss\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Claim datadir-ss-0 Pod ss-0 in StatefulSet ss success\"\nI0521 16:30:12.236153 1 event.go:291] \"Event occurred\" object=\"statefulset-5276/ss\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod ss-0 in StatefulSet ss successful\"\nI0521 16:30:12.244089 1 event.go:291] \"Event occurred\" object=\"statefulset-5276/datadir-ss-0\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"rancher.io/local-path\\\" or manually created by system administrator\"\nI0521 16:30:12.721116 1 event.go:291] \"Event occurred\" object=\"statefulset-7786/ss2\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod ss2-1 in StatefulSet ss2 successful\"\nE0521 16:30:13.282577 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0521 16:30:14.000305 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch 
*v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI0521 16:30:14.520862 1 event.go:291] \"Event occurred\" object=\"job-2929/all-succeed\" kind=\"Job\" apiVersion=\"batch/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: all-succeed-6t4kd\"\nI0521 16:30:15.540371 1 event.go:291] \"Event occurred\" object=\"disruption-595/rs\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: rs-bp6wb\"\nI0521 16:30:16.275208 1 event.go:291] \"Event occurred\" object=\"statefulset-6758/datadir-ss-0\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"WaitForFirstConsumer\" message=\"waiting for first consumer to be created before binding\"\nI0521 16:30:16.275286 1 event.go:291] \"Event occurred\" object=\"statefulset-6758/ss\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Claim datadir-ss-0 Pod ss-0 in StatefulSet ss success\"\nI0521 16:30:16.279383 1 event.go:291] \"Event occurred\" object=\"statefulset-6758/ss\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod ss-0 in StatefulSet ss successful\"\nI0521 16:30:16.288058 1 event.go:291] \"Event occurred\" object=\"statefulset-6758/datadir-ss-0\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"rancher.io/local-path\\\" or manually created by system administrator\"\nI0521 16:30:16.720293 1 event.go:291] \"Event occurred\" object=\"job-2929/all-succeed\" kind=\"Job\" apiVersion=\"batch/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: all-succeed-gxwkg\"\nE0521 16:30:16.761560 1 tokens_controller.go:261] error synchronizing serviceaccount disruption-2232/default: secrets \"default-token-shfjn\" is 
forbidden: unable to create new content in namespace disruption-2232 because it is being terminated\nI0521 16:30:17.544491 1 event.go:291] \"Event occurred\" object=\"disruption-645/rs\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: rs-k7r4m\"\nE0521 16:30:17.550483 1 disruption.go:505] Error syncing PodDisruptionBudget disruption-645/foo, requeuing: Operation cannot be fulfilled on poddisruptionbudgets.policy \"foo\": the object has been modified; please apply your changes to the latest version and try again\nI0521 16:30:17.646943 1 event.go:291] \"Event occurred\" object=\"disruption-69/rs\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: rs-gjflz\"\nE0521 16:30:17.663622 1 disruption.go:505] Error syncing PodDisruptionBudget disruption-69/foo, requeuing: Operation cannot be fulfilled on poddisruptionbudgets.policy \"foo\": the object has been modified; please apply your changes to the latest version and try again\nI0521 16:30:18.319945 1 event.go:291] \"Event occurred\" object=\"job-241/fail-once-non-local\" kind=\"Job\" apiVersion=\"batch/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: fail-once-non-local-m4cmq\"\nI0521 16:30:18.323565 1 event.go:291] \"Event occurred\" object=\"job-241/fail-once-non-local\" kind=\"Job\" apiVersion=\"batch/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: fail-once-non-local-sp96n\"\nI0521 16:30:18.922425 1 event.go:291] \"Event occurred\" object=\"statefulset-7786/ss2\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod ss2-2 in StatefulSet ss2 successful\"\nI0521 16:30:20.130450 1 event.go:291] \"Event occurred\" object=\"statefulset-6758/datadir-ss-0\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either 
by external provisioner \\\"rancher.io/local-path\\\" or manually created by system administrator\"\nI0521 16:30:21.118681 1 event.go:291] \"Event occurred\" object=\"job-241/fail-once-non-local\" kind=\"Job\" apiVersion=\"batch/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: fail-once-non-local-g6t6g\"\nE0521 16:30:21.125581 1 job_controller.go:402] Error syncing job: failed pod(s) detected for job key \"job-241/fail-once-non-local\"\nI0521 16:30:21.190130 1 event.go:291] \"Event occurred\" object=\"replicaset-6283/condition-test\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: condition-test-ppjps\"\nI0521 16:30:21.194687 1 event.go:291] \"Event occurred\" object=\"replicaset-6283/condition-test\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Warning\" reason=\"FailedCreate\" message=\"Error creating: pods \\\"condition-test-tks6w\\\" is forbidden: exceeded quota: condition-test, requested: pods=1, used: pods=2, limited: pods=2\"\nI0521 16:30:21.195836 1 event.go:291] \"Event occurred\" object=\"replicaset-6283/condition-test\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: condition-test-ndgqb\"\nE0521 16:30:21.199723 1 replica_set.go:532] sync \"replicaset-6283/condition-test\" failed with pods \"condition-test-tks6w\" is forbidden: exceeded quota: condition-test, requested: pods=1, used: pods=2, limited: pods=2\nI0521 16:30:21.201666 1 event.go:291] \"Event occurred\" object=\"replicaset-6283/condition-test\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Warning\" reason=\"FailedCreate\" message=\"Error creating: pods \\\"condition-test-4l6qs\\\" is forbidden: exceeded quota: condition-test, requested: pods=1, used: pods=2, limited: pods=2\"\nE0521 16:30:21.205671 1 replica_set.go:532] sync \"replicaset-6283/condition-test\" failed with pods \"condition-test-4l6qs\" is forbidden: exceeded quota: 
condition-test, requested: pods=1, used: pods=2, limited: pods=2\nE0521 16:30:21.207175 1 replica_set.go:532] sync \"replicaset-6283/condition-test\" failed with pods \"condition-test-v782g\" is forbidden: exceeded quota: condition-test, requested: pods=1, used: pods=2, limited: pods=2\nI0521 16:30:21.207194 1 event.go:291] \"Event occurred\" object=\"replicaset-6283/condition-test\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Warning\" reason=\"FailedCreate\" message=\"Error creating: pods \\\"condition-test-v782g\\\" is forbidden: exceeded quota: condition-test, requested: pods=1, used: pods=2, limited: pods=2\"\nE0521 16:30:21.217427 1 replica_set.go:532] sync \"replicaset-6283/condition-test\" failed with pods \"condition-test-2bjs9\" is forbidden: exceeded quota: condition-test, requested: pods=1, used: pods=2, limited: pods=2\nI0521 16:30:21.217469 1 event.go:291] \"Event occurred\" object=\"replicaset-6283/condition-test\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Warning\" reason=\"FailedCreate\" message=\"Error creating: pods \\\"condition-test-2bjs9\\\" is forbidden: exceeded quota: condition-test, requested: pods=1, used: pods=2, limited: pods=2\"\nE0521 16:30:21.259806 1 replica_set.go:532] sync \"replicaset-6283/condition-test\" failed with pods \"condition-test-ssqxn\" is forbidden: exceeded quota: condition-test, requested: pods=1, used: pods=2, limited: pods=2\nI0521 16:30:21.259833 1 event.go:291] \"Event occurred\" object=\"replicaset-6283/condition-test\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Warning\" reason=\"FailedCreate\" message=\"Error creating: pods \\\"condition-test-ssqxn\\\" is forbidden: exceeded quota: condition-test, requested: pods=1, used: pods=2, limited: pods=2\"\nE0521 16:30:21.342427 1 replica_set.go:532] sync \"replicaset-6283/condition-test\" failed with pods \"condition-test-5nrkk\" is forbidden: exceeded quota: condition-test, requested: pods=1, used: pods=2, limited: pods=2\nI0521 16:30:21.342513 
1 event.go:291] \"Event occurred\" object=\"replicaset-6283/condition-test\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Warning\" reason=\"FailedCreate\" message=\"Error creating: pods \\\"condition-test-5nrkk\\\" is forbidden: exceeded quota: condition-test, requested: pods=1, used: pods=2, limited: pods=2\"\nE0521 16:30:21.505494 1 replica_set.go:532] sync \"replicaset-6283/condition-test\" failed with pods \"condition-test-29rq4\" is forbidden: exceeded quota: condition-test, requested: pods=1, used: pods=2, limited: pods=2\nI0521 16:30:21.505595 1 event.go:291] \"Event occurred\" object=\"replicaset-6283/condition-test\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Warning\" reason=\"FailedCreate\" message=\"Error creating: pods \\\"condition-test-29rq4\\\" is forbidden: exceeded quota: condition-test, requested: pods=1, used: pods=2, limited: pods=2\"\nE0521 16:30:21.829244 1 replica_set.go:532] sync \"replicaset-6283/condition-test\" failed with pods \"condition-test-gl4mg\" is forbidden: exceeded quota: condition-test, requested: pods=1, used: pods=2, limited: pods=2\nI0521 16:30:21.829264 1 event.go:291] \"Event occurred\" object=\"replicaset-6283/condition-test\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Warning\" reason=\"FailedCreate\" message=\"Error creating: pods \\\"condition-test-gl4mg\\\" is forbidden: exceeded quota: condition-test, requested: pods=1, used: pods=2, limited: pods=2\"\nI0521 16:30:21.832257 1 event.go:291] \"Event occurred\" object=\"replicaset-6283/condition-test\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Warning\" reason=\"FailedCreate\" message=\"Error creating: pods \\\"condition-test-9vmg2\\\" is forbidden: exceeded quota: condition-test, requested: pods=1, used: pods=2, limited: pods=2\"\nE0521 16:30:21.832259 1 replica_set.go:532] sync \"replicaset-6283/condition-test\" failed with pods \"condition-test-9vmg2\" is forbidden: exceeded quota: condition-test, requested: pods=1, used: pods=2, limited: 
pods=2\nE0521 16:30:21.867423 1 replica_set.go:532] sync \"replicaset-6283/condition-test\" failed with pods \"condition-test-wbwgh\" is forbidden: exceeded quota: condition-test, requested: pods=1, used: pods=2, limited: pods=2\nI0521 16:30:21.867513 1 event.go:291] \"Event occurred\" object=\"replicaset-6283/condition-test\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Warning\" reason=\"FailedCreate\" message=\"Error creating: pods \\\"condition-test-wbwgh\\\" is forbidden: exceeded quota: condition-test, requested: pods=1, used: pods=2, limited: pods=2\"\nI0521 16:30:21.879867 1 namespace_controller.go:185] Namespace has been deleted disruption-2-9203\nI0521 16:30:21.888789 1 namespace_controller.go:185] Namespace has been deleted disruption-2232\nE0521 16:30:22.578616 1 disruption.go:552] Failed to sync pdb disruption-645/foo: found no controllers for pod \"rs-lbdwm\"\nE0521 16:30:22.580285 1 event.go:264] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"foo.168121e855603bc5\", GenerateName:\"\", Namespace:\"disruption-645\", SelfLink:\"\", UID:\"\", ResourceVersion:\"\", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:\"\", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:\"PodDisruptionBudget\", Namespace:\"disruption-645\", Name:\"foo\", UID:\"70e1106e-b23a-4f88-b95c-9e19c7e2bbb8\", APIVersion:\"policy/v1beta1\", ResourceVersion:\"41750\", FieldPath:\"\"}, Reason:\"NoControllers\", Message:\"found no controllers for pod \\\"rs-lbdwm\\\"\", Source:v1.EventSource{Component:\"controllermanager\", Host:\"\"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc02215e7a27c8fc5, 
ext:4634659997587, loc:(*time.Location)(0x6a53ca0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc02215e7a27c8fc5, ext:4634659997587, loc:(*time.Location)(0x6a53ca0)}}, Count:1, Type:\"Warning\", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:\"\", Related:(*v1.ObjectReference)(nil), ReportingController:\"\", ReportingInstance:\"\"}': 'events \"foo.168121e855603bc5\" is forbidden: unable to create new content in namespace disruption-645 because it is being terminated' (will not retry!)\nE0521 16:30:22.582033 1 event.go:264] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"foo.168121e855606a9f\", GenerateName:\"\", Namespace:\"disruption-645\", SelfLink:\"\", UID:\"\", ResourceVersion:\"\", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:\"\", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:\"PodDisruptionBudget\", Namespace:\"disruption-645\", Name:\"foo\", UID:\"70e1106e-b23a-4f88-b95c-9e19c7e2bbb8\", APIVersion:\"policy/v1beta1\", ResourceVersion:\"41750\", FieldPath:\"\"}, Reason:\"CalculateExpectedPodCountFailed\", Message:\"Failed to calculate the number of expected pods: found no controllers for pod \\\"rs-lbdwm\\\"\", Source:v1.EventSource{Component:\"controllermanager\", Host:\"\"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc02215e7a27cbe9f, ext:4634660009582, loc:(*time.Location)(0x6a53ca0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc02215e7a27cbe9f, ext:4634660009582, loc:(*time.Location)(0x6a53ca0)}}, Count:1, Type:\"Warning\", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, 
loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:\"\", Related:(*v1.ObjectReference)(nil), ReportingController:\"\", ReportingInstance:\"\"}': 'events \"foo.168121e855606a9f\" is forbidden: unable to create new content in namespace disruption-645 because it is being terminated' (will not retry!)\nE0521 16:30:22.584324 1 disruption.go:552] Failed to sync pdb disruption-645/foo: found no controllers for pod \"rs-br5xc\"\nE0521 16:30:22.585609 1 event.go:264] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"foo.168121e855b7344a\", GenerateName:\"\", Namespace:\"disruption-645\", SelfLink:\"\", UID:\"\", ResourceVersion:\"\", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:\"\", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:\"PodDisruptionBudget\", Namespace:\"disruption-645\", Name:\"foo\", UID:\"70e1106e-b23a-4f88-b95c-9e19c7e2bbb8\", APIVersion:\"policy/v1beta1\", ResourceVersion:\"41750\", FieldPath:\"\"}, Reason:\"NoControllers\", Message:\"found no controllers for pod \\\"rs-br5xc\\\"\", Source:v1.EventSource{Component:\"controllermanager\", Host:\"\"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc02215e7a2d3884a, ext:4634665697295, loc:(*time.Location)(0x6a53ca0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc02215e7a2d3884a, ext:4634665697295, loc:(*time.Location)(0x6a53ca0)}}, Count:1, Type:\"Warning\", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:\"\", Related:(*v1.ObjectReference)(nil), ReportingController:\"\", ReportingInstance:\"\"}': 'events \"foo.168121e855b7344a\" is forbidden: 
unable to create new content in namespace disruption-645 because it is being terminated' (will not retry!)\nE0521 16:30:22.586893 1 event.go:264] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"foo.168121e855b77529\", GenerateName:\"\", Namespace:\"disruption-645\", SelfLink:\"\", UID:\"\", ResourceVersion:\"\", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:\"\", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:\"PodDisruptionBudget\", Namespace:\"disruption-645\", Name:\"foo\", UID:\"70e1106e-b23a-4f88-b95c-9e19c7e2bbb8\", APIVersion:\"policy/v1beta1\", ResourceVersion:\"41750\", FieldPath:\"\"}, Reason:\"CalculateExpectedPodCountFailed\", Message:\"Failed to calculate the number of expected pods: found no controllers for pod \\\"rs-br5xc\\\"\", Source:v1.EventSource{Component:\"controllermanager\", Host:\"\"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc02215e7a2d3c929, ext:4634665713908, loc:(*time.Location)(0x6a53ca0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc02215e7a2d3c929, ext:4634665713908, loc:(*time.Location)(0x6a53ca0)}}, Count:1, Type:\"Warning\", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:\"\", Related:(*v1.ObjectReference)(nil), ReportingController:\"\", ReportingInstance:\"\"}': 'events \"foo.168121e855b77529\" is forbidden: unable to create new content in namespace disruption-645 because it is being terminated' (will not retry!)\nE0521 16:30:22.671992 1 tokens_controller.go:261] error synchronizing serviceaccount disruption-645/default: secrets \"default-token-79mzm\" is forbidden: unable 
to create new content in namespace disruption-645 because it is being terminated\nE0521 16:30:22.713323 1 disruption.go:552] Failed to sync pdb disruption-645/foo: found no controllers for pod \"rs-2kkwk\"\nE0521 16:30:22.714822 1 event.go:264] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"foo.168121e85d678789\", GenerateName:\"\", Namespace:\"disruption-645\", SelfLink:\"\", UID:\"\", ResourceVersion:\"\", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:\"\", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:\"PodDisruptionBudget\", Namespace:\"disruption-645\", Name:\"foo\", UID:\"70e1106e-b23a-4f88-b95c-9e19c7e2bbb8\", APIVersion:\"policy/v1beta1\", ResourceVersion:\"41750\", FieldPath:\"\"}, Reason:\"NoControllers\", Message:\"found no controllers for pod \\\"rs-2kkwk\\\"\", Source:v1.EventSource{Component:\"controllermanager\", Host:\"\"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc02215e7aa83db89, ext:4634794693458, loc:(*time.Location)(0x6a53ca0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc02215e7aa83db89, ext:4634794693458, loc:(*time.Location)(0x6a53ca0)}}, Count:1, Type:\"Warning\", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:\"\", Related:(*v1.ObjectReference)(nil), ReportingController:\"\", ReportingInstance:\"\"}': 'events \"foo.168121e85d678789\" is forbidden: unable to create new content in namespace disruption-645 because it is being terminated' (will not retry!)\nE0521 16:30:22.716578 1 event.go:264] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, 
ObjectMeta:v1.ObjectMeta{Name:"foo.168121e85d67d507", GenerateName:"", Namespace:"disruption-645", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"PodDisruptionBudget", Namespace:"disruption-645", Name:"foo", UID:"70e1106e-b23a-4f88-b95c-9e19c7e2bbb8", APIVersion:"policy/v1beta1", ResourceVersion:"41750", FieldPath:""}, Reason:"CalculateExpectedPodCountFailed", Message:"Failed to calculate the number of expected pods: found no controllers for pod \"rs-2kkwk\"", Source:v1.EventSource{Component:"controllermanager", Host:""}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc02215e7aa842907, ext:4634794713296, loc:(*time.Location)(0x6a53ca0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc02215e7aa842907, ext:4634794713296, loc:(*time.Location)(0x6a53ca0)}}, Count:1, Type:"Warning", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "foo.168121e85d67d507" is forbidden: unable to create new content in namespace disruption-645 because it is being terminated' (will not retry!)
I0521 16:30:23.713731 1 event.go:291] "Event occurred" object="job-2929/all-succeed" kind="Job" apiVersion="batch/v1" type="Normal" reason="Completed" message="Job completed"
I0521 16:30:23.714525 1 event.go:291] "Event occurred" object="disruption-69/rs" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: rs-tqgzx"
E0521 16:30:23.718396 1 event.go:264] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"rs.168121e89913ccc9", GenerateName:"", Namespace:"disruption-69", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"ReplicaSet", Namespace:"disruption-69", Name:"rs", UID:"6b8810e4-87d1-474c-90f3-7e430adccb41", APIVersion:"apps/v1", ResourceVersion:"42022", FieldPath:""}, Reason:"SuccessfulCreate", Message:"Created pod: rs-tqgzx", Source:v1.EventSource{Component:"replicaset-controller", Host:""}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc02215e7ea9556c9, ext:4635795839135, loc:(*time.Location)(0x6a53ca0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc02215e7ea9556c9, ext:4635795839135, loc:(*time.Location)(0x6a53ca0)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "rs.168121e89913ccc9" is forbidden: unable to create new content in namespace disruption-69 because it is being terminated' (will not retry!)
I0521 16:30:24.209380 1 event.go:291] "Event occurred" object="deployment-1588/test-new-deployment" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set test-new-deployment-dd94f59b7 to 1"
I0521 16:30:24.215616 1 event.go:291] "Event occurred" object="deployment-1588/test-new-deployment-dd94f59b7" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: test-new-deployment-dd94f59b7-z7gk7"
E0521 16:30:24.690142 1 tokens_controller.go:261] error synchronizing serviceaccount deployment-3747/default: secrets "default-token-22dst" is forbidden: unable to create new content in namespace deployment-3747 because it is being terminated
I0521 16:30:24.716134 1 event.go:291] "Event occurred" object="job-241/fail-once-non-local" kind="Job" apiVersion="batch/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: fail-once-non-local-qxqch"
E0521 16:30:24.722866 1 job_controller.go:402] Error syncing job: failed pod(s) detected for job key "job-241/fail-once-non-local"
I0521 16:30:26.356128 1 namespace_controller.go:185] Namespace has been deleted disruption-6894
I0521 16:30:26.660570 1 event.go:291] "Event occurred" object="job-9468/all-pods-removed" kind="Job" apiVersion="batch/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: all-pods-removed-tsm6m"
I0521 16:30:26.665017 1 event.go:291] "Event occurred" object="job-9468/all-pods-removed" kind="Job" apiVersion="batch/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: all-pods-removed-rsdj5"
E0521 16:30:28.254852 1 tokens_controller.go:261] error synchronizing serviceaccount replicaset-6283/default: secrets "default-token-zgsfv" is forbidden: unable to create new content in namespace replicaset-6283 because it is being terminated
I0521 16:30:28.327607 1 resource_quota_controller.go:306] Resource quota has been deleted replicaset-6283/condition-test
E0521 16:30:28.756667 1 tokens_controller.go:261] error synchronizing serviceaccount disruption-69/default: secrets "default-token-br699" is forbidden: unable to create new content in namespace disruption-69 because it is being terminated
E0521 16:30:28.891242 1 tokens_controller.go:261] error synchronizing serviceaccount deployment-2619/default: secrets "default-token-ws6gl" is forbidden: unable to create new content in namespace deployment-2619 because it is being terminated
I0521 16:30:29.843796 1 namespace_controller.go:185] Namespace has been deleted deployment-3747
I0521 16:30:30.320173 1 event.go:291] "Event occurred" object="job-241/fail-once-non-local" kind="Job" apiVersion="batch/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: fail-once-non-local-hj955"
E0521 16:30:31.221894 1 tokens_controller.go:261] error synchronizing serviceaccount disruption-4534/default: secrets "default-token-g7qq2" is forbidden: unable to create new content in namespace disruption-4534 because it is being terminated
I0521 16:30:32.923160 1 event.go:291] "Event occurred" object="statefulset-6758/ss" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Claim datadir-ss-1 Pod ss-1 in StatefulSet ss success"
I0521 16:30:32.923314 1 event.go:291] "Event occurred" object="statefulset-6758/datadir-ss-1" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="WaitForFirstConsumer" message="waiting for first consumer to be created before binding"
I0521 16:30:32.927590 1 event.go:291] "Event occurred" object="statefulset-6758/ss" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod ss-1 in StatefulSet ss successful"
I0521 16:30:32.939730 1 event.go:291] "Event occurred" object="statefulset-6758/datadir-ss-1" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"rancher.io/local-path\" or manually created by system administrator"
I0521 16:30:32.941326 1 event.go:291] "Event occurred" object="statefulset-6758/datadir-ss-1" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"rancher.io/local-path\" or manually created by system administrator"
I0521 16:30:33.318258 1 event.go:291] "Event occurred" object="job-241/fail-once-non-local" kind="Job" apiVersion="batch/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: fail-once-non-local-jpqfh"
I0521 16:30:33.479485 1 namespace_controller.go:185] Namespace has been deleted replicaset-6283
I0521 16:30:33.832315 1 namespace_controller.go:185] Namespace has been deleted replicaset-1835
I0521 16:30:33.956576 1 namespace_controller.go:185] Namespace has been deleted job-2929
I0521 16:30:33.981587 1 namespace_controller.go:185] Namespace has been deleted deployment-2619
E0521 16:30:34.142746 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0521 16:30:34.824583 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0521 16:30:35.130688 1 event.go:291] "Event occurred" object="statefulset-6758/datadir-ss-1" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"rancher.io/local-path\" or manually created by system administrator"
W0521 16:30:35.626466 1 endpointslice_controller.go:284] Error syncing endpoint slices for service "statefulset-7786/test", retrying. Error: EndpointSlice informer cache is out of date
I0521 16:30:35.642066 1 event.go:291] "Event occurred" object="statefulset-7786/ss2" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod ss2-0 in StatefulSet ss2 successful"
I0521 16:30:35.643582 1 event.go:291] "Event occurred" object="statefulset-7786/test" kind="Endpoints" apiVersion="v1" type="Warning" reason="FailedToUpdateEndpoint" message="Failed to update endpoint statefulset-7786/test: Operation cannot be fulfilled on endpoints \"test\": the object has been modified; please apply your changes to the latest version and try again"
E0521 16:30:36.057673 1 stateful_set.go:392] error syncing StatefulSet statefulset-5276/ss, requeuing: The POST operation against Pod could not be completed at this time, please try again.
I0521 16:30:36.057918 1 event.go:291] "Event occurred" object="statefulset-5276/ss" kind="StatefulSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="create Pod ss-0 in StatefulSet ss failed error: The POST operation against Pod could not be completed at this time, please try again."
E0521 16:30:36.063813 1 stateful_set.go:392] error syncing StatefulSet statefulset-5276/ss, requeuing: The POST operation against Pod could not be completed at this time, please try again.
I0521 16:30:36.064033 1 event.go:291] "Event occurred" object="statefulset-5276/ss" kind="StatefulSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="create Pod ss-0 in StatefulSet ss failed error: The POST operation against Pod could not be completed at this time, please try again."
E0521 16:30:36.070049 1 stateful_set.go:392] error syncing StatefulSet statefulset-5276/ss, requeuing: The POST operation against Pod could not be completed at this time, please try again.
I0521 16:30:36.070113 1 event.go:291] "Event occurred" object="statefulset-5276/ss" kind="StatefulSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="create Pod ss-0 in StatefulSet ss failed error: The POST operation against Pod could not be completed at this time, please try again."
E0521 16:30:36.080142 1 stateful_set.go:392] error syncing StatefulSet statefulset-5276/ss, requeuing: The POST operation against Pod could not be completed at this time, please try again.
I0521 16:30:36.080237 1 event.go:291] "Event occurred" object="statefulset-5276/ss" kind="StatefulSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="create Pod ss-0 in StatefulSet ss failed error: The POST operation against Pod could not be completed at this time, please try again."
E0521 16:30:36.126775 1 stateful_set.go:392] error syncing StatefulSet statefulset-5276/ss, requeuing: The POST operation against Pod could not be completed at this time, please try again.
I0521 16:30:36.126891 1 event.go:291] "Event occurred" object="statefulset-5276/ss" kind="StatefulSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="create Pod ss-0 in StatefulSet ss failed error: The POST operation against Pod could not be completed at this time, please try again."
E0521 16:30:36.214655 1 stateful_set.go:392] error syncing StatefulSet statefulset-5276/ss, requeuing: The POST operation against Pod could not be completed at this time, please try again.
I0521 16:30:36.214791 1 event.go:291] "Event occurred" object="statefulset-5276/ss" kind="StatefulSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="create Pod ss-0 in StatefulSet ss failed error: The POST operation against Pod could not be completed at this time, please try again."
I0521 16:30:36.314736 1 event.go:291] "Event occurred" object="job-241/fail-once-non-local" kind="Job" apiVersion="batch/v1" type="Normal" reason="Completed" message="Job completed"
I0521 16:30:36.338750 1 namespace_controller.go:185] Namespace has been deleted disruption-595
E0521 16:30:36.383227 1 stateful_set.go:392] error syncing StatefulSet statefulset-5276/ss, requeuing: The POST operation against Pod could not be completed at this time, please try again.
I0521 16:30:36.383353 1 event.go:291] "Event occurred" object="statefulset-5276/ss" kind="StatefulSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="create Pod ss-0 in StatefulSet ss failed error: The POST operation against Pod could not be completed at this time, please try again."
E0521 16:30:36.712557 1 stateful_set.go:392] error syncing StatefulSet statefulset-5276/ss, requeuing: The POST operation against Pod could not be completed at this time, please try again.
I0521 16:30:36.712688 1 event.go:291] "Event occurred" object="statefulset-5276/ss" kind="StatefulSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="create Pod ss-0 in StatefulSet ss failed error: The POST operation against Pod could not be completed at this time, please try again."
E0521 16:30:37.360441 1 stateful_set.go:392] error syncing StatefulSet statefulset-5276/ss, requeuing: The POST operation against Pod could not be completed at this time, please try again.
I0521 16:30:37.360581 1 event.go:291] "Event occurred" object="statefulset-5276/ss" kind="StatefulSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="create Pod ss-0 in StatefulSet ss failed error: The POST operation against Pod could not be completed at this time, please try again."
I0521 16:30:38.050123 1 event.go:291] "Event occurred" object="statefulset-7786/ss2" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod ss2-1 in StatefulSet ss2 successful"
I0521 16:30:39.153727 1 namespace_controller.go:185] Namespace has been deleted disruption-69
I0521 16:30:40.586017 1 event.go:291] "Event occurred" object="statefulset-5276/ss" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="delete Pod ss-0 in StatefulSet ss successful"
I0521 16:30:40.922606 1 event.go:291] "Event occurred" object="statefulset-7786/ss2" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod ss2-2 in StatefulSet ss2 successful"
E0521 16:30:42.974297 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0521 16:30:45.140093 1 namespace_controller.go:185] Namespace has been deleted disruption-8076
E0521 16:30:45.177363 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0521 16:30:46.329326 1 event.go:291] "Event occurred" object="statefulset-6758/datadir-ss-2" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="WaitForFirstConsumer" message="waiting for first consumer to be created before binding"
I0521 16:30:46.329597 1 event.go:291] "Event occurred" object="statefulset-6758/ss" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Claim datadir-ss-2 Pod ss-2 in StatefulSet ss success"
I0521 16:30:46.335348 1 event.go:291] "Event occurred" object="statefulset-6758/ss" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod ss-2 in StatefulSet ss successful"
I0521 16:30:46.344195 1 event.go:291] "Event occurred" object="statefulset-6758/datadir-ss-2" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"rancher.io/local-path\" or manually created by system administrator"
I0521 16:30:46.631470 1 namespace_controller.go:185] Namespace has been deleted deployment-1588
I0521 16:30:46.773727 1 namespace_controller.go:185] Namespace has been deleted disruption-4534
E0521 16:30:47.263275 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0521 16:30:48.661792 1 namespace_controller.go:185] Namespace has been deleted job-241
E0521 16:30:49.182105 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0521 16:30:49.301065 1 namespace_controller.go:185] Namespace has been deleted disruption-645
I0521 16:30:50.002019 1 event.go:291] "Event occurred" object="statefulset-1467/ss" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Claim datadir-ss-0 Pod ss-0 in StatefulSet ss success"
I0521 16:30:50.002078 1 event.go:291] "Event occurred" object="statefulset-1467/datadir-ss-0" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="WaitForFirstConsumer" message="waiting for first consumer to be created before binding"
I0521 16:30:50.007296 1 event.go:291] "Event occurred" object="statefulset-1467/ss" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod ss-0 in StatefulSet ss successful"
I0521 16:30:50.014622 1 event.go:291] "Event occurred" object="statefulset-1467/datadir-ss-0" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"rancher.io/local-path\" or manually created by system administrator"
I0521 16:30:50.014711 1 event.go:291] "Event occurred" object="statefulset-1467/datadir-ss-0" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"rancher.io/local-path\" or manually created by system administrator"
I0521 16:30:50.130865 1 event.go:291] "Event occurred" object="statefulset-1467/datadir-ss-0" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"rancher.io/local-path\" or manually created by system administrator"
E0521 16:30:50.410606 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0521 16:30:50.594485 1 stateful_set.go:419] StatefulSet has been deleted statefulset-5276/ss
E0521 16:30:51.471219 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0521 16:30:51.542636 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0521 16:30:51.813427 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0521 16:30:55.925041 1 namespace_controller.go:185] Namespace has been deleted disruption-5297
I0521 16:30:58.683115 1 namespace_controller.go:185] Namespace has been deleted disruption-8601
I0521 16:31:00.525860 1 event.go:291] "Event occurred" object="statefulset-1467/datadir-ss-1" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="WaitForFirstConsumer" message="waiting for first consumer to be created before binding"
I0521 16:31:00.525929 1 event.go:291] "Event occurred" object="statefulset-1467/ss" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Claim datadir-ss-1 Pod ss-1 in StatefulSet ss success"
I0521 16:31:00.530542 1 event.go:291] "Event occurred" object="statefulset-1467/ss" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod ss-1 in StatefulSet ss successful"
I0521 16:31:00.539481 1 event.go:291] "Event occurred" object="statefulset-1467/datadir-ss-1" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"rancher.io/local-path\" or manually created by system administrator"
I0521 16:31:03.118008 1 event.go:291] "Event occurred" object="cronjob-2274/replace" kind="CronJob" apiVersion="batch/v1beta1" type="Normal" reason="SuccessfulCreate" message="Created job replace-1621614660"
I0521 16:31:03.125184 1 event.go:291] "Event occurred" object="cronjob-2274/replace-1621614660" kind="Job" apiVersion="batch/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: replace-1621614660-bhwkg"
I0521 16:31:03.128778 1 cronjob_controller.go:190] Unable to update status for cronjob-2274/replace (rv = 42630): Operation cannot be fulfilled on cronjobs.batch "replace": the object has been modified; please apply your changes to the latest version and try again
I0521 16:31:03.146921 1 event.go:291] "Event occurred" object="cronjob-8018/failed-jobs-history-limit" kind="CronJob" apiVersion="batch/v1beta1" type="Normal" reason="SuccessfulCreate" message="Created job failed-jobs-history-limit-1621614660"
I0521 16:31:03.158767 1 event.go:291] "Event occurred" object="cronjob-8018/failed-jobs-history-limit-1621614660" kind="Job" apiVersion="batch/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: failed-jobs-history-limit-1621614660-mwbfl"
I0521 16:31:03.162509 1 cronjob_controller.go:190] Unable to update status for cronjob-8018/failed-jobs-history-limit (rv = 41104): Operation cannot be fulfilled on cronjobs.batch "failed-jobs-history-limit": the object has been modified; please apply your changes to the latest version and try again
I0521 16:31:03.171761 1 event.go:291] "Event occurred" object="cronjob-8340/successful-jobs-history-limit" kind="CronJob" apiVersion="batch/v1beta1" type="Normal" reason="SuccessfulCreate" message="Created job successful-jobs-history-limit-1621614660"
I0521 16:31:03.176957 1 event.go:291] "Event occurred" object="cronjob-8340/successful-jobs-history-limit-1621614660" kind="Job" apiVersion="batch/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: successful-jobs-history-limit-1621614660-5qfmb"
I0521 16:31:03.180141 1 cronjob_controller.go:190] Unable to update status for cronjob-8340/successful-jobs-history-limit (rv = 41099): Operation cannot be fulfilled on cronjobs.batch "successful-jobs-history-limit": the object has been modified; please apply your changes to the latest version and try again
I0521 16:31:03.197297 1 event.go:291] "Event occurred" object="cronjob-991/concurrent" kind="CronJob" apiVersion="batch/v1beta1" type="Normal" reason="SuccessfulCreate" message="Created job concurrent-1621614660"
I0521 16:31:03.202427 1 event.go:291] "Event occurred" object="cronjob-991/concurrent-1621614660" kind="Job" apiVersion="batch/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: concurrent-1621614660-ks2mr"
I0521 16:31:03.205680 1 cronjob_controller.go:190] Unable to update status for cronjob-991/concurrent (rv = 42160): Operation cannot be fulfilled on cronjobs.batch "concurrent": the object has been modified; please apply your changes to the latest version and try again
I0521 16:31:04.514053 1 event.go:291] "Event occurred" object="cronjob-991/concurrent-1621614660" kind="Job" apiVersion="batch/v1" type="Normal" reason="Completed" message="Job completed"
I0521 16:31:04.974236 1 event.go:291] "Event occurred" object="cronjob-8340/successful-jobs-history-limit-1621614660" kind="Job" apiVersion="batch/v1" type="Normal" reason="Completed" message="Job completed"
I0521 16:31:05.090265 1 namespace_controller.go:185] Namespace has been deleted disruption-7538
I0521 16:31:05.702052 1 event.go:291] "Event occurred" object="statefulset-7786/test" kind="Endpoints" apiVersion="v1" type="Warning" reason="FailedToUpdateEndpoint" message="Failed to update endpoint statefulset-7786/test: Operation cannot be fulfilled on endpoints \"test\": the object has been modified; please apply your changes to the latest version and try again"
I0521 16:31:05.712176 1 event.go:291] "Event occurred" object="statefulset-7786/ss2" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod ss2-0 in StatefulSet ss2 successful"
E0521 16:31:05.739749 1 tokens_controller.go:261] error synchronizing serviceaccount statefulset-5276/default: secrets "default-token-bttwb" is forbidden: unable to create new content in namespace statefulset-5276 because it is being terminated
I0521 16:31:06.533673 1 event.go:291] "Event occurred" object="job-2108/exceed-active-deadline" kind="Job" apiVersion="batch/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: exceed-active-deadline-k2qsc"
I0521 16:31:06.538045 1 event.go:291] "Event occurred" object="job-2108/exceed-active-deadline" kind="Job" apiVersion="batch/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: exceed-active-deadline-bjbcd"
I0521 16:31:07.060551 1 event.go:291] "Event occurred" object="job-2108/exceed-active-deadline" kind="Job" apiVersion="batch/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: exceed-active-deadline-bjbcd"
I0521 16:31:07.060986 1 event.go:291] "Event occurred" object="job-2108/exceed-active-deadline" kind="Job" apiVersion="batch/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: exceed-active-deadline-k2qsc"
I0521 16:31:07.061056 1 event.go:291] "Event occurred" object="job-2108/exceed-active-deadline" kind="Job" apiVersion="batch/v1" type="Warning" reason="DeadlineExceeded" message="Job was active longer than specified deadline"
I0521 16:31:07.576441 1 event.go:291] "Event occurred" object="cronjob-8018/failed-jobs-history-limit-1621614660" kind="Job" apiVersion="batch/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: failed-jobs-history-limit-1621614660-mwbfl"
I0521 16:31:07.576503 1 event.go:291] "Event occurred" object="cronjob-8018/failed-jobs-history-limit-1621614660" kind="Job" apiVersion="batch/v1" type="Warning" reason="BackoffLimitExceeded" message="Job has reached the specified backoff limit"
I0521 16:31:08.079549 1 event.go:291] "Event occurred" object="statefulset-7786/ss2" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod ss2-1 in StatefulSet ss2 successful"
I0521 16:31:09.129686 1 event.go:291] "Event occurred" object="statefulset-2590/datadir-ss-0" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="WaitForFirstConsumer" message="waiting for first consumer to be created before binding"
I0521 16:31:09.129693 1 event.go:291] "Event occurred" object="statefulset-2590/ss" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Claim datadir-ss-0 Pod ss-0 in StatefulSet ss success"
I0521 16:31:09.134239 1 event.go:291] "Event occurred" object="statefulset-2590/ss" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod ss-0 in StatefulSet ss successful"
I0521 16:31:09.144114 1 event.go:291] "Event occurred" object="statefulset-2590/datadir-ss-0" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"rancher.io/local-path\" or manually created by system administrator"
I0521 16:31:09.144273 1 event.go:291] "Event occurred" object="statefulset-2590/datadir-ss-0" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"rancher.io/local-path\" or manually created by system administrator"
I0521 16:31:10.284581 1 event.go:291] "Event occurred" object="statefulset-1467/ss" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod ss-0 in StatefulSet ss successful"
I0521 16:31:10.860285 1 namespace_controller.go:185] Namespace has been deleted statefulset-5276
I0521 16:31:11.380523 1 event.go:291] "Event occurred" object="statefulset-7786/ss2" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod ss2-2 in StatefulSet ss2 successful"
E0521 16:31:11.517550 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0521 16:31:13.234043 1 event.go:291] "Event occurred" object="cronjob-8018/failed-jobs-history-limit" kind="CronJob" apiVersion="batch/v1beta1" type="Normal" reason="SawCompletedJob" message="Saw completed job: failed-jobs-history-limit-1621614660, status: Failed"
I0521 16:31:13.243982 1 event.go:291] "Event occurred" object="cronjob-8340/successful-jobs-history-limit" kind="CronJob" apiVersion="batch/v1beta1" type="Normal" reason="SawCompletedJob" message="Saw completed job: successful-jobs-history-limit-1621614660, status: Complete"
I0521 16:31:13.262728 1 event.go:291] "Event occurred" object="cronjob-991/concurrent" kind="CronJob" apiVersion="batch/v1beta1" type="Normal" reason="SawCompletedJob" message="Saw completed job: concurrent-1621614660, status: Complete"
E0521 16:31:14.548519 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0521 16:31:15.743458 1 event.go:291] "Event occurred" object="statefulset-7786/ss2" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="delete Pod ss2-2 in StatefulSet ss2 successful"
I0521 16:31:16.668388 1 namespace_controller.go:185] Namespace has been deleted job-9468
I0521 16:31:17.645923 1 event.go:291] "Event occurred" object="statefulset-6758/ss" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="delete Pod ss-2 in StatefulSet ss successful"
I0521 16:31:19.963763 1 event.go:291] "Event occurred" object="statefulset-2590/datadir-ss-1" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="WaitForFirstConsumer" message="waiting for first consumer to be created before binding"
I0521 16:31:19.963973 1 event.go:291] "Event occurred" object="statefulset-2590/ss" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Claim datadir-ss-1 Pod ss-1 in StatefulSet ss success"
I0521 16:31:19.968579 1 event.go:291] "Event occurred" object="statefulset-2590/ss" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod ss-1 in StatefulSet ss successful"
I0521 16:31:19.977062 1 event.go:291] "Event occurred" object="statefulset-2590/datadir-ss-1" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"rancher.io/local-path\" or manually created by system administrator"
I0521 16:31:20.131694 1 event.go:291] "Event occurred" object="statefulset-2590/datadir-ss-1" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"rancher.io/local-path\" or manually created by system administrator"
E0521 16:31:21.707511 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0521 16:31:27.104810 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0521 16:31:30.024432 1 event.go:291] "Event occurred" object="statefulset-2590/datadir-ss-2" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="WaitForFirstConsumer" message="waiting for first consumer to be created before binding"
I0521 16:31:30.024591 1 event.go:291] "Event occurred" object="statefulset-2590/ss" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Claim datadir-ss-2 Pod ss-2 in StatefulSet ss success"
I0521 16:31:30.034734 1 event.go:291] "Event occurred" object="statefulset-2590/ss" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod ss-2 in StatefulSet ss successful"
I0521 16:31:30.042444 1 event.go:291] "Event occurred" object="statefulset-2590/datadir-ss-2" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"rancher.io/local-path\" or manually created by system administrator"
I0521 16:31:30.216723 1 event.go:291] "Event occurred" object="statefulset-6758/ss" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod ss-2 in StatefulSet ss successful"
I0521 16:31:30.220787 1 event.go:291] "Event occurred" object="statefulset-7786/ss2" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="delete Pod ss2-1 in StatefulSet ss2 successful"
I0521 16:31:30.585136 1 event.go:291] "Event occurred" object="statefulset-1467/ss" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="delete Pod ss-1 in StatefulSet ss successful"
I0521 16:31:33.021527 1 event.go:291] "Event occurred" object="statefulset-6758/ss" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="delete Pod ss-1 in StatefulSet ss successful"
E0521 16:31:35.902838 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0521 16:31:38.547803 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0521 16:31:40.046313 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0521 16:31:40.210728 1 event.go:291] "Event occurred" object="statefulset-1467/ss" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="delete Pod ss-0 in StatefulSet ss successful"
I0521 16:31:40.218936 1 event.go:291] "Event occurred" object="statefulset-7786/ss2" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="delete Pod ss2-0 in StatefulSet ss2 successful"
I0521 16:31:40.231097 1 event.go:291] "Event occurred" object="statefulset-6758/ss" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod ss-1 in StatefulSet ss successful"
E0521 16:31:41.479300 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0521 16:31:42.807629 1 event.go:291] "Event occurred" object="statefulset-6758/ss" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="delete Pod ss-0 in StatefulSet ss successful"
I0521 16:31:43.733298 1 event.go:291] "Event occurred" object="statefulset-2590/ss" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="delete Pod ss-2 in StatefulSet ss successful"
E0521 16:31:46.154982 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0521 16:31:49.967327 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0521 16:31:50.451824 1 event.go:291] "Event occurred" object="statefulset-2590/ss" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="delete Pod ss-1 in StatefulSet ss successful"
I0521 16:31:50.463294 1 event.go:291] "Event occurred" object="statefulset-6758/ss" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod ss-0 in StatefulSet ss successful"
I0521 16:31:50.593883 1 stateful_set.go:419] StatefulSet has been deleted statefulset-1467/ss
E0521 16:31:51.311757 1 namespace_controller.go:162] deletion of namespace job-2108 failed: unexpected items still remain in namespace: job-2108 for gvr: /v1, Resource=pods
E0521 16:31:51.490815 1 namespace_controller.go:162] deletion of namespace job-2108 failed: unexpected items still remain in namespace: job-2108 for gvr: /v1, Resource=pods
E0521 16:31:51.680223 1 namespace_controller.go:162] deletion of namespace job-2108 failed: unexpected items still remain in namespace: job-2108 for gvr: /v1, Resource=pods
E0521 16:31:51.871465 1 namespace_controller.go:162] deletion of namespace job-2108 failed: unexpected items still remain in namespace: job-2108 for gvr: /v1, Resource=pods
E0521 16:31:52.085903 1 namespace_controller.go:162] deletion of namespace job-2108 failed: unexpected items still remain in namespace: job-2108 for gvr: /v1, Resource=pods
E0521 16:31:52.127532 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0521 16:31:52.346189 1 namespace_controller.go:162] deletion of namespace job-2108 failed: unexpected items still remain in namespace: job-2108 for gvr: /v1, Resource=pods
E0521 16:31:52.667572 1 namespace_controller.go:162] deletion of namespace
job-2108 failed: unexpected items still remain in namespace: job-2108 for gvr: /v1, Resource=pods\nE0521 16:31:53.164501 1 namespace_controller.go:162] deletion of namespace job-2108 failed: unexpected items still remain in namespace: job-2108 for gvr: /v1, Resource=pods\nE0521 16:31:53.983661 1 namespace_controller.go:162] deletion of namespace job-2108 failed: unexpected items still remain in namespace: job-2108 for gvr: /v1, Resource=pods\nE0521 16:31:55.439896 1 namespace_controller.go:162] deletion of namespace job-2108 failed: unexpected items still remain in namespace: job-2108 for gvr: /v1, Resource=pods\nI0521 16:31:55.753324 1 stateful_set.go:419] StatefulSet has been deleted statefulset-7786/ss2\nI0521 16:31:56.307487 1 event.go:291] \"Event occurred\" object=\"deployment-4203/webserver\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled up replica set webserver-dd94f59b7 to 6\"\nI0521 16:31:56.314176 1 event.go:291] \"Event occurred\" object=\"deployment-4203/webserver-dd94f59b7\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-dd94f59b7-rthqt\"\nI0521 16:31:56.317706 1 event.go:291] \"Event occurred\" object=\"deployment-4203/webserver-dd94f59b7\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-dd94f59b7-sf2ws\"\nI0521 16:31:56.318508 1 event.go:291] \"Event occurred\" object=\"deployment-4203/webserver-dd94f59b7\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-dd94f59b7-v47gc\"\nI0521 16:31:56.322157 1 event.go:291] \"Event occurred\" object=\"deployment-4203/webserver-dd94f59b7\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-dd94f59b7-4xdls\"\nI0521 16:31:56.322940 1 event.go:291] \"Event occurred\" 
object=\"deployment-4203/webserver-dd94f59b7\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-dd94f59b7-lvr5c\"\nI0521 16:31:56.323043 1 event.go:291] \"Event occurred\" object=\"deployment-4203/webserver-dd94f59b7\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-dd94f59b7-vwv4v\"\nI0521 16:31:56.323879 1 event.go:291] \"Event occurred\" object=\"deployment-4203/webserver\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled up replica set webserver-dd94f59b7 to 7\"\nI0521 16:31:56.343409 1 event.go:291] \"Event occurred\" object=\"deployment-4203/webserver-dd94f59b7\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-dd94f59b7-4nrvs\"\nI0521 16:31:56.815906 1 event.go:291] \"Event occurred\" object=\"deployment-4203/webserver\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled up replica set webserver-b94d4cf79 to 2\"\nI0521 16:31:56.820502 1 event.go:291] \"Event occurred\" object=\"deployment-4203/webserver-b94d4cf79\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-b94d4cf79-7n5k5\"\nI0521 16:31:56.824248 1 event.go:291] \"Event occurred\" object=\"deployment-4203/webserver-b94d4cf79\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-b94d4cf79-2v8ln\"\nI0521 16:31:56.832212 1 event.go:291] \"Event occurred\" object=\"deployment-4203/webserver\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled down replica set webserver-dd94f59b7 to 6\"\nI0521 16:31:56.851525 1 event.go:291] \"Event occurred\" object=\"deployment-4203/webserver-dd94f59b7\" 
kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"Deleted pod: webserver-dd94f59b7-rthqt\"\nI0521 16:31:56.857632 1 event.go:291] \"Event occurred\" object=\"deployment-4203/webserver\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled up replica set webserver-b94d4cf79 to 3\"\nI0521 16:31:56.860141 1 event.go:291] \"Event occurred\" object=\"deployment-4203/webserver-b94d4cf79\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-b94d4cf79-xzdj6\"\nI0521 16:31:57.771278 1 event.go:291] \"Event occurred\" object=\"deployment-4203/webserver\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"DeploymentRollback\" message=\"Rolled back deployment \\\"webserver\\\" to revision 1\"\nI0521 16:31:58.065917 1 event.go:291] \"Event occurred\" object=\"deployment-4203/webserver\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled down replica set webserver-b94d4cf79 to 2\"\nI0521 16:31:58.071614 1 event.go:291] \"Event occurred\" object=\"deployment-4203/webserver-b94d4cf79\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"Deleted pod: webserver-b94d4cf79-2v8ln\"\nI0521 16:31:58.077007 1 event.go:291] \"Event occurred\" object=\"deployment-4203/webserver\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled up replica set webserver-dd94f59b7 to 7\"\nI0521 16:31:58.079884 1 event.go:291] \"Event occurred\" object=\"deployment-4203/webserver-dd94f59b7\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-dd94f59b7-f92c9\"\nE0521 16:31:58.140134 1 namespace_controller.go:162] deletion of namespace job-2108 failed: unexpected items still remain in namespace: job-2108 for 
gvr: /v1, Resource=pods\nI0521 16:31:58.426732 1 event.go:291] \"Event occurred\" object=\"deployment-4203/webserver\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled down replica set webserver-b94d4cf79 to 1\"\nI0521 16:31:58.432479 1 event.go:291] \"Event occurred\" object=\"deployment-4203/webserver-b94d4cf79\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"Deleted pod: webserver-b94d4cf79-7n5k5\"\nI0521 16:31:58.741106 1 event.go:291] \"Event occurred\" object=\"deployment-4203/webserver\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled down replica set webserver-b94d4cf79 to 0\"\nI0521 16:31:58.748350 1 event.go:291] \"Event occurred\" object=\"deployment-4203/webserver-b94d4cf79\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"Deleted pod: webserver-b94d4cf79-xzdj6\"\nI0521 16:32:00.339534 1 event.go:291] \"Event occurred\" object=\"statefulset-2590/ss\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"delete Pod ss-0 in StatefulSet ss successful\"\nE0521 16:32:00.845732 1 tokens_controller.go:261] error synchronizing serviceaccount statefulset-7786/default: secrets \"default-token-b7bgj\" is forbidden: unable to create new content in namespace statefulset-7786 because it is being terminated\nE0521 16:32:01.155114 1 tokens_controller.go:261] error synchronizing serviceaccount replication-controller-3800/default: secrets \"default-token-5htxm\" is forbidden: unable to create new content in namespace replication-controller-3800 because it is being terminated\nI0521 16:32:01.236966 1 event.go:291] \"Event occurred\" object=\"deployment-4203/webserver-dd94f59b7\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-dd94f59b7-fs9dx\"\nI0521 
16:32:01.251240 1 event.go:291] \"Event occurred\" object=\"deployment-4203/webserver-dd94f59b7\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-dd94f59b7-hdv4s\"\nI0521 16:32:01.257769 1 event.go:291] \"Event occurred\" object=\"deployment-4203/webserver-dd94f59b7\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-dd94f59b7-8wcq4\"\nI0521 16:32:01.264943 1 event.go:291] \"Event occurred\" object=\"deployment-4203/webserver-dd94f59b7\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-dd94f59b7-7xhkc\"\nE0521 16:32:01.510225 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0521 16:32:03.077497 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI0521 16:32:03.285980 1 event.go:291] \"Event occurred\" object=\"deployment-4203/webserver\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled up replica set webserver-dd94f59b7 to 8\"\nI0521 16:32:03.290551 1 event.go:291] \"Event occurred\" object=\"deployment-4203/webserver-dd94f59b7\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-dd94f59b7-tfmwt\"\nI0521 16:32:03.297238 1 event.go:291] \"Event occurred\" object=\"deployment-4203/webserver-dd94f59b7\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-dd94f59b7-2mm9g\"\nI0521 16:32:03.309586 1 event.go:291] \"Event occurred\" 
object=\"deployment-4203/webserver-dd94f59b7\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-dd94f59b7-s48lw\"\nI0521 16:32:03.313090 1 event.go:291] \"Event occurred\" object=\"deployment-4203/webserver-dd94f59b7\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-dd94f59b7-gw9mq\"\nI0521 16:32:03.313304 1 event.go:291] \"Event occurred\" object=\"deployment-4203/webserver-dd94f59b7\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-dd94f59b7-mw8m4\"\nI0521 16:32:03.320550 1 event.go:291] \"Event occurred\" object=\"deployment-4203/webserver-dd94f59b7\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-dd94f59b7-hzptk\"\nI0521 16:32:03.333238 1 event.go:291] \"Event occurred\" object=\"deployment-4203/webserver\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled down replica set webserver-dd94f59b7 to 7\"\nI0521 16:32:03.338613 1 event.go:291] \"Event occurred\" object=\"deployment-4203/webserver-dd94f59b7\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"Deleted pod: webserver-dd94f59b7-s48lw\"\nE0521 16:32:03.419365 1 namespace_controller.go:162] deletion of namespace job-2108 failed: unexpected items still remain in namespace: job-2108 for gvr: /v1, Resource=pods\nI0521 16:32:03.554485 1 event.go:291] \"Event occurred\" object=\"cronjob-2274/replace\" kind=\"CronJob\" apiVersion=\"batch/v1beta1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"Deleted job replace-1621614660\"\nI0521 16:32:03.559798 1 event.go:291] \"Event occurred\" object=\"cronjob-2274/replace\" kind=\"CronJob\" apiVersion=\"batch/v1beta1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created job 
replace-1621614720\"\nI0521 16:32:03.565572 1 event.go:291] \"Event occurred\" object=\"cronjob-2274/replace-1621614720\" kind=\"Job\" apiVersion=\"batch/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: replace-1621614720-hnqjx\"\nI0521 16:32:03.568624 1 cronjob_controller.go:190] Unable to update status for cronjob-2274/replace (rv = 43196): Operation cannot be fulfilled on cronjobs.batch \"replace\": the object has been modified; please apply your changes to the latest version and try again\nI0521 16:32:03.576757 1 event.go:291] \"Event occurred\" object=\"cronjob-4832/concurrent\" kind=\"CronJob\" apiVersion=\"batch/v1beta1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created job concurrent-1621614720\"\nI0521 16:32:03.585908 1 event.go:291] \"Event occurred\" object=\"cronjob-4832/concurrent-1621614720\" kind=\"Job\" apiVersion=\"batch/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: concurrent-1621614720-bhxm5\"\nI0521 16:32:03.588320 1 cronjob_controller.go:190] Unable to update status for cronjob-4832/concurrent (rv = 43169): Operation cannot be fulfilled on cronjobs.batch \"concurrent\": the object has been modified; please apply your changes to the latest version and try again\nI0521 16:32:03.596877 1 event.go:291] \"Event occurred\" object=\"cronjob-8018/failed-jobs-history-limit\" kind=\"CronJob\" apiVersion=\"batch/v1beta1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created job failed-jobs-history-limit-1621614720\"\nI0521 16:32:03.602146 1 event.go:291] \"Event occurred\" object=\"cronjob-8018/failed-jobs-history-limit-1621614720\" kind=\"Job\" apiVersion=\"batch/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: failed-jobs-history-limit-1621614720-z77s5\"\nI0521 16:32:03.606711 1 cronjob_controller.go:190] Unable to update status for cronjob-8018/failed-jobs-history-limit (rv = 43560): Operation cannot be fulfilled on cronjobs.batch 
\"failed-jobs-history-limit\": the object has been modified; please apply your changes to the latest version and try again\nI0521 16:32:03.619329 1 event.go:291] \"Event occurred\" object=\"cronjob-8340/successful-jobs-history-limit\" kind=\"CronJob\" apiVersion=\"batch/v1beta1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created job successful-jobs-history-limit-1621614720\"\nI0521 16:32:03.625364 1 event.go:291] \"Event occurred\" object=\"cronjob-8340/successful-jobs-history-limit-1621614720\" kind=\"Job\" apiVersion=\"batch/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: successful-jobs-history-limit-1621614720-59n29\"\nI0521 16:32:03.630920 1 cronjob_controller.go:190] Unable to update status for cronjob-8340/successful-jobs-history-limit (rv = 43562): Operation cannot be fulfilled on cronjobs.batch \"successful-jobs-history-limit\": the object has been modified; please apply your changes to the latest version and try again\nI0521 16:32:03.653683 1 event.go:291] \"Event occurred\" object=\"cronjob-991/concurrent\" kind=\"CronJob\" apiVersion=\"batch/v1beta1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created job concurrent-1621614720\"\nI0521 16:32:03.658787 1 event.go:291] \"Event occurred\" object=\"cronjob-991/concurrent-1621614720\" kind=\"Job\" apiVersion=\"batch/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: concurrent-1621614720-zslq2\"\nI0521 16:32:03.744908 1 cronjob_controller.go:190] Unable to update status for cronjob-991/concurrent (rv = 43565): Operation cannot be fulfilled on cronjobs.batch \"concurrent\": the object has been modified; please apply your changes to the latest version and try again\nI0521 16:32:04.721060 1 event.go:291] \"Event occurred\" object=\"cronjob-991/concurrent-1621614720\" kind=\"Job\" apiVersion=\"batch/v1\" type=\"Normal\" reason=\"Completed\" message=\"Job completed\"\nI0521 16:32:05.078666 1 event.go:291] \"Event occurred\" 
object=\"job-5351/backofflimit\" kind=\"Job\" apiVersion=\"batch/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: backofflimit-7dlxn\"\nE0521 16:32:05.883018 1 tokens_controller.go:261] error synchronizing serviceaccount statefulset-1467/default: secrets \"default-token-77ktj\" is forbidden: unable to create new content in namespace statefulset-1467 because it is being terminated\nI0521 16:32:06.178476 1 namespace_controller.go:185] Namespace has been deleted statefulset-7786\nI0521 16:32:06.289716 1 namespace_controller.go:185] Namespace has been deleted replication-controller-3800\nI0521 16:32:06.731744 1 event.go:291] \"Event occurred\" object=\"job-5351/backofflimit\" kind=\"Job\" apiVersion=\"batch/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: backofflimit-6rsrr\"\nE0521 16:32:06.739580 1 job_controller.go:402] Error syncing job: failed pod(s) detected for job key \"job-5351/backofflimit\"\nI0521 16:32:09.301054 1 event.go:291] \"Event occurred\" object=\"deployment-4203/webserver\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"DeploymentRollback\" message=\"Rolled back deployment \\\"webserver\\\" to revision 2\"\nI0521 16:32:09.325044 1 event.go:291] \"Event occurred\" object=\"deployment-4203/webserver\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled up replica set webserver-b94d4cf79 to 2\"\nI0521 16:32:09.328989 1 event.go:291] \"Event occurred\" object=\"deployment-4203/webserver-b94d4cf79\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-b94d4cf79-9r6gx\"\nI0521 16:32:09.332339 1 event.go:291] \"Event occurred\" object=\"deployment-4203/webserver-b94d4cf79\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-b94d4cf79-mfkg7\"\nI0521 16:32:09.338072 1 event.go:291] \"Event occurred\" 
object=\"deployment-4203/webserver\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled down replica set webserver-dd94f59b7 to 6\"\nI0521 16:32:09.351959 1 event.go:291] \"Event occurred\" object=\"deployment-4203/webserver-dd94f59b7\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"Deleted pod: webserver-dd94f59b7-mw8m4\"\nI0521 16:32:09.354098 1 event.go:291] \"Event occurred\" object=\"deployment-4203/webserver\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled up replica set webserver-b94d4cf79 to 3\"\nI0521 16:32:09.357331 1 event.go:291] \"Event occurred\" object=\"deployment-4203/webserver-b94d4cf79\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-b94d4cf79-jwxtc\"\nI0521 16:32:10.759606 1 event.go:291] \"Event occurred\" object=\"deployment-4203/webserver\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled down replica set webserver-dd94f59b7 to 5\"\nI0521 16:32:10.768269 1 event.go:291] \"Event occurred\" object=\"deployment-4203/webserver-dd94f59b7\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"Deleted pod: webserver-dd94f59b7-tfmwt\"\nI0521 16:32:10.770485 1 event.go:291] \"Event occurred\" object=\"deployment-4203/webserver\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled up replica set webserver-b94d4cf79 to 4\"\nI0521 16:32:10.773406 1 event.go:291] \"Event occurred\" object=\"deployment-4203/webserver-b94d4cf79\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-b94d4cf79-9xvjp\"\nI0521 16:32:10.784768 1 event.go:291] \"Event occurred\" object=\"deployment-4203/webserver\" 
kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled down replica set webserver-dd94f59b7 to 4\"\nI0521 16:32:10.790491 1 event.go:291] \"Event occurred\" object=\"deployment-4203/webserver-dd94f59b7\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"Deleted pod: webserver-dd94f59b7-hzptk\"\nI0521 16:32:10.795691 1 event.go:291] \"Event occurred\" object=\"deployment-4203/webserver\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled up replica set webserver-b94d4cf79 to 5\"\nI0521 16:32:10.799202 1 event.go:291] \"Event occurred\" object=\"deployment-4203/webserver-b94d4cf79\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-b94d4cf79-zrqvl\"\nI0521 16:32:10.929724 1 namespace_controller.go:185] Namespace has been deleted statefulset-1467\nE0521 16:32:10.949090 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI0521 16:32:11.067855 1 event.go:291] \"Event occurred\" object=\"deployment-4203/webserver-b94d4cf79\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-b94d4cf79-8dgp9\"\nI0521 16:32:11.074989 1 event.go:291] \"Event occurred\" object=\"deployment-4203/webserver-dd94f59b7\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-dd94f59b7-28b9f\"\nI0521 16:32:11.083737 1 event.go:291] \"Event occurred\" object=\"deployment-4203/webserver-dd94f59b7\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-dd94f59b7-l847h\"\nI0521 16:32:11.097196 1 event.go:291] \"Event occurred\" 
object=\"deployment-4203/webserver-dd94f59b7\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-dd94f59b7-hgjk4\"\nI0521 16:32:11.127260 1 event.go:291] \"Event occurred\" object=\"cronjob-8340/successful-jobs-history-limit-1621614720\" kind=\"Job\" apiVersion=\"batch/v1\" type=\"Normal\" reason=\"Completed\" message=\"Job completed\"\nI0521 16:32:11.536186 1 event.go:291] \"Event occurred\" object=\"cronjob-8018/failed-jobs-history-limit-1621614720\" kind=\"Job\" apiVersion=\"batch/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"Deleted pod: failed-jobs-history-limit-1621614720-z77s5\"\nI0521 16:32:11.536299 1 event.go:291] \"Event occurred\" object=\"cronjob-8018/failed-jobs-history-limit-1621614720\" kind=\"Job\" apiVersion=\"batch/v1\" type=\"Warning\" reason=\"BackoffLimitExceeded\" message=\"Job has reached the specified backoff limit\"\nI0521 16:32:13.018306 1 event.go:291] \"Event occurred\" object=\"deployment-4203/webserver\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled down replica set webserver-dd94f59b7 to 3\"\nI0521 16:32:13.025742 1 event.go:291] \"Event occurred\" object=\"deployment-4203/webserver-dd94f59b7\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"Deleted pod: webserver-dd94f59b7-28b9f\"\nI0521 16:32:13.030078 1 event.go:291] \"Event occurred\" object=\"deployment-4203/webserver\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled up replica set webserver-b94d4cf79 to 6\"\nI0521 16:32:13.032992 1 event.go:291] \"Event occurred\" object=\"deployment-4203/webserver-b94d4cf79\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-b94d4cf79-2q4cf\"\nI0521 16:32:13.115353 1 event.go:291] \"Event occurred\" 
object=\"deployment-4203/webserver\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled up replica set webserver-b94d4cf79 to 7\"\nI0521 16:32:13.120808 1 event.go:291] \"Event occurred\" object=\"deployment-4203/webserver-b94d4cf79\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-b94d4cf79-jhsqs\"\nI0521 16:32:13.541959 1 event.go:291] \"Event occurred\" object=\"deployment-4203/webserver\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled down replica set webserver-dd94f59b7 to 2\"\nI0521 16:32:13.549484 1 event.go:291] \"Event occurred\" object=\"deployment-4203/webserver-dd94f59b7\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"Deleted pod: webserver-dd94f59b7-hgjk4\"\nI0521 16:32:13.554635 1 event.go:291] \"Event occurred\" object=\"deployment-4203/webserver\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled up replica set webserver-b94d4cf79 to 8\"\nI0521 16:32:13.559040 1 event.go:291] \"Event occurred\" object=\"deployment-4203/webserver-b94d4cf79\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-b94d4cf79-247zk\"\nI0521 16:32:13.754582 1 event.go:291] \"Event occurred\" object=\"statefulset-2590/ss\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod ss-0 in StatefulSet ss successful\"\nI0521 16:32:13.760829 1 event.go:291] \"Event occurred\" object=\"cronjob-8018/failed-jobs-history-limit\" kind=\"CronJob\" apiVersion=\"batch/v1beta1\" type=\"Normal\" reason=\"SawCompletedJob\" message=\"Saw completed job: failed-jobs-history-limit-1621614720, status: Failed\"\nI0521 16:32:13.768373 1 event.go:291] \"Event occurred\" 
object="cronjob-8018/failed-jobs-history-limit" kind="CronJob" apiVersion="batch/v1beta1" type="Normal" reason="SuccessfulDelete" message="Deleted job failed-jobs-history-limit-1621614660"
I0521 16:32:13.772667 1 event.go:291] "Event occurred" object="cronjob-8340/successful-jobs-history-limit" kind="CronJob" apiVersion="batch/v1beta1" type="Normal" reason="SawCompletedJob" message="Saw completed job: successful-jobs-history-limit-1621614720, status: Complete"
I0521 16:32:13.781916 1 event.go:291] "Event occurred" object="cronjob-8340/successful-jobs-history-limit" kind="CronJob" apiVersion="batch/v1beta1" type="Normal" reason="SuccessfulDelete" message="Deleted job successful-jobs-history-limit-1621614660"
I0521 16:32:13.801155 1 event.go:291] "Event occurred" object="cronjob-991/concurrent" kind="CronJob" apiVersion="batch/v1beta1" type="Normal" reason="SawCompletedJob" message="Saw completed job: concurrent-1621614720, status: Complete"
E0521 16:32:13.826075 1 namespace_controller.go:162] deletion of namespace job-2108 failed: unexpected items still remain in namespace: job-2108 for gvr: /v1, Resource=pods
I0521 16:32:14.142366 1 event.go:291] "Event occurred" object="deployment-4203/webserver" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set webserver-dd94f59b7 to 1"
I0521 16:32:14.148974 1 event.go:291] "Event occurred" object="deployment-4203/webserver-dd94f59b7" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: webserver-dd94f59b7-l847h"
I0521 16:32:14.818337 1 event.go:291] "Event occurred" object="deployment-4203/webserver" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set webserver-dd94f59b7 to 0"
I0521 16:32:14.825319 1 event.go:291] "Event occurred" object="deployment-4203/webserver-dd94f59b7" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: webserver-dd94f59b7-gw9mq"
I0521 16:32:15.991532 1 event.go:291] "Event occurred" object="deployment-4203/webserver-b94d4cf79" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: webserver-b94d4cf79-pqbrn"
I0521 16:32:16.000104 1 event.go:291] "Event occurred" object="deployment-4203/webserver-b94d4cf79" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: webserver-b94d4cf79-h62kc"
I0521 16:32:16.008334 1 event.go:291] "Event occurred" object="deployment-4203/webserver-b94d4cf79" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: webserver-b94d4cf79-tpr9t"
I0521 16:32:16.016997 1 event.go:291] "Event occurred" object="deployment-4203/webserver-b94d4cf79" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: webserver-b94d4cf79-hmkcb"
I0521 16:32:16.024329 1 event.go:291] "Event occurred" object="deployment-4203/webserver-b94d4cf79" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: webserver-b94d4cf79-mccg6"
I0521 16:32:16.031120 1 event.go:291] "Event occurred" object="deployment-4203/webserver-b94d4cf79" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: webserver-b94d4cf79-bgd28"
I0521 16:32:16.740266 1 event.go:291] "Event occurred" object="job-5351/backofflimit" kind="Job" apiVersion="batch/v1" type="Warning" reason="BackoffLimitExceeded" message="Job has reached the specified backoff limit"
I0521 16:32:18.814732 1 event.go:291] "Event occurred" object="statefulset-2590/ss" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod ss-1 in StatefulSet ss successful"
I0521 16:32:18.936119 1 event.go:291] "Event occurred" object="statefulset-6758/ss" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="delete Pod ss-2 in StatefulSet ss successful"
E0521 16:32:20.028385 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0521 16:32:20.435171 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0521 16:32:21.287256 1 event.go:291] "Event occurred" object="deployment-4203/webserver" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set webserver-566c87d976 to 2"
I0521 16:32:21.291437 1 event.go:291] "Event occurred" object="deployment-4203/webserver-566c87d976" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: webserver-566c87d976-hww4q"
I0521 16:32:21.294879 1 event.go:291] "Event occurred" object="deployment-4203/webserver-566c87d976" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: webserver-566c87d976-77gfz"
I0521 16:32:21.296303 1 event.go:291] "Event occurred" object="deployment-4203/webserver" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set webserver-b94d4cf79 to 6"
I0521 16:32:21.304225 1 event.go:291] "Event occurred" object="deployment-4203/webserver-b94d4cf79" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: webserver-b94d4cf79-tpr9t"
I0521 16:32:21.304783 1 event.go:291] "Event occurred" object="deployment-4203/webserver-b94d4cf79" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: webserver-b94d4cf79-pqbrn"
I0521 16:32:21.309759 1 event.go:291] "Event occurred" object="deployment-4203/webserver" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set webserver-566c87d976 to 4"
I0521 16:32:21.312786 1 event.go:291] "Event occurred" object="deployment-4203/webserver-566c87d976" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: webserver-566c87d976-k6rv5"
I0521 16:32:21.315564 1 event.go:291] "Event occurred" object="deployment-4203/webserver-566c87d976" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: webserver-566c87d976-qgmxm"
E0521 16:32:22.527107 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0521 16:32:23.832637 1 event.go:291] "Event occurred" object="deployment-4203/webserver" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set webserver-b94d4cf79 to 5"
I0521 16:32:23.840057 1 event.go:291] "Event occurred" object="deployment-4203/webserver-b94d4cf79" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: webserver-b94d4cf79-hmkcb"
I0521 16:32:23.843798 1 event.go:291] "Event occurred" object="deployment-4203/webserver" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set webserver-566c87d976 to 5"
I0521 16:32:23.846736 1 event.go:291] "Event occurred" object="deployment-4203/webserver-566c87d976" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: webserver-566c87d976-rpv7c"
I0521 16:32:23.855345 1 event.go:291] "Event occurred" object="deployment-4203/webserver" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set webserver-b94d4cf79 to 4"
I0521 16:32:23.861838 1 event.go:291] "Event occurred" object="deployment-4203/webserver-b94d4cf79" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: webserver-b94d4cf79-mccg6"
I0521 16:32:23.862566 1 event.go:291] "Event occurred" object="deployment-4203/webserver" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set webserver-566c87d976 to 6"
I0521 16:32:23.865749 1 event.go:291] "Event occurred" object="deployment-4203/webserver-566c87d976" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: webserver-566c87d976-l8d6v"
E0521 16:32:23.975524 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0521 16:32:24.137521 1 event.go:291] "Event occurred" object="statefulset-2590/ss" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod ss-2 in StatefulSet ss successful"
I0521 16:32:24.537388 1 event.go:291] "Event occurred" object="deployment-4203/webserver" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set webserver-b94d4cf79 to 3"
I0521 16:32:24.544337 1 event.go:291] "Event occurred" object="deployment-4203/webserver" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set webserver-566c87d976 to 7"
I0521 16:32:24.544650 1 event.go:291] "Event occurred" object="deployment-4203/webserver-b94d4cf79" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: webserver-b94d4cf79-bgd28"
I0521 16:32:24.547183 1 event.go:291] "Event occurred" object="deployment-4203/webserver-566c87d976" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: webserver-566c87d976-fzhcv"
I0521 16:32:24.935786 1 event.go:291] "Event occurred" object="deployment-4203/webserver" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set webserver-b94d4cf79 to 2"
I0521 16:32:24.941942 1 event.go:291] "Event occurred" object="deployment-4203/webserver-b94d4cf79" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: webserver-b94d4cf79-h62kc"
I0521 16:32:24.942570 1 event.go:291] "Event occurred" object="deployment-4203/webserver" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set webserver-566c87d976 to 8"
I0521 16:32:24.946567 1 event.go:291] "Event occurred" object="deployment-4203/webserver-566c87d976" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: webserver-566c87d976-wnvsb"
E0521 16:32:25.271790 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0521 16:32:25.851016 1 event.go:291] "Event occurred" object="deployment-4203/webserver" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set webserver-b94d4cf79 to 1"
I0521 16:32:25.857957 1 event.go:291] "Event occurred" object="deployment-4203/webserver-b94d4cf79" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: webserver-b94d4cf79-247zk"
I0521 16:32:25.873374 1 event.go:291] "Event occurred" object="deployment-4203/webserver" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set webserver-b94d4cf79 to 0"
I0521 16:32:25.880725 1 event.go:291] "Event occurred" object="deployment-4203/webserver-b94d4cf79" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: webserver-b94d4cf79-9r6gx"
I0521 16:32:27.303645 1 namespace_controller.go:185] Namespace has been deleted job-5351
E0521 16:32:27.528636 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0521 16:32:27.723547 1 namespace_controller.go:185] Namespace has been deleted cronjob-8340
I0521 16:32:27.752062 1 namespace_controller.go:185] Namespace has been deleted cronjob-8018
I0521 16:32:29.105449 1 event.go:291] "Event occurred" object="deployment-4203/webserver" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set webserver-566c87d976 to 9"
I0521 16:32:29.115662 1 event.go:291] "Event occurred" object="deployment-4203/webserver-566c87d976" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: webserver-566c87d976-n67cl"
I0521 16:32:30.215562 1 event.go:291] "Event occurred" object="statefulset-6758/ss" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod ss-2 in StatefulSet ss successful"
I0521 16:32:31.607970 1 event.go:291] "Event occurred" object="statefulset-6758/ss" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="delete Pod ss-1 in StatefulSet ss successful"
E0521 16:32:34.471231 1 namespace_controller.go:162] deletion of namespace job-2108 failed: unexpected items still remain in namespace: job-2108 for gvr: /v1, Resource=pods
I0521 16:32:36.655489 1 event.go:291] "Event occurred" object="statefulset-2590/ss" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="delete Pod ss-2 in StatefulSet ss successful"
I0521 16:32:38.025012 1 event.go:291] "Event occurred" object="deployment-4203/webserver" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set webserver-566c87d976 to 8"
I0521 16:32:38.032631 1 event.go:291] "Event occurred" object="deployment-4203/webserver-566c87d976" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: webserver-566c87d976-n67cl"
E0521 16:32:38.817012 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0521 16:32:38.912447 1 event.go:291] "Event occurred" object="statefulset-2590/ss" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="delete Pod ss-1 in StatefulSet ss successful"
E0521 16:32:39.213201 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0521 16:32:40.211516 1 event.go:291] "Event occurred" object="statefulset-6758/ss" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod ss-1 in StatefulSet ss successful"
I0521 16:32:41.008447 1 event.go:291] "Event occurred" object="deployment-4203/webserver" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set webserver-566c87d976 to 7"
I0521 16:32:41.019833 1 event.go:291] "Event occurred" object="deployment-4203/webserver" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set webserver-7cd6db5c9d to 2"
I0521 16:32:41.022362 1 event.go:291] "Event occurred" object="deployment-4203/webserver-566c87d976" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: webserver-566c87d976-wnvsb"
I0521 16:32:41.023495 1 event.go:291] "Event occurred" object="deployment-4203/webserver-7cd6db5c9d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: webserver-7cd6db5c9d-s285m"
I0521 16:32:41.026703 1 event.go:291] "Event occurred" object="deployment-4203/webserver-7cd6db5c9d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: webserver-7cd6db5c9d-jpcs5"
I0521 16:32:41.028948 1 event.go:291] "Event occurred" object="deployment-4203/webserver-566c87d976" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: webserver-566c87d976-22dhb"
I0521 16:32:41.035299 1 event.go:291] "Event occurred" object="deployment-4203/webserver-566c87d976" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: webserver-566c87d976-xrw65"
I0521 16:32:41.035890 1 event.go:291] "Event occurred" object="deployment-4203/webserver" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set webserver-566c87d976 to 6"
I0521 16:32:41.042603 1 event.go:291] "Event occurred" object="deployment-4203/webserver" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set webserver-566c87d976 to 8"
E0521 16:32:41.043049 1 replica_set.go:532] sync "deployment-4203/webserver-566c87d976" failed with Operation cannot be fulfilled on replicasets.apps "webserver-566c87d976": the object has been modified; please apply your changes to the latest version and try again
I0521 16:32:41.045641 1 event.go:291] "Event occurred" object="deployment-4203/webserver-566c87d976" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: webserver-566c87d976-8bnk9"
I0521 16:32:41.055301 1 event.go:291] "Event occurred" object="deployment-4203/webserver" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set webserver-566c87d976 to 6"
I0521 16:32:41.060639 1 event.go:291] "Event occurred" object="deployment-4203/webserver" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set webserver-7cd6db5c9d to 4"
I0521 16:32:41.061615 1 event.go:291] "Event occurred" object="deployment-4203/webserver-566c87d976" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: webserver-566c87d976-22dhb"
I0521 16:32:41.061972 1 event.go:291] "Event occurred" object="deployment-4203/webserver-566c87d976" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: webserver-566c87d976-8bnk9"
I0521 16:32:41.063182 1 event.go:291] "Event occurred" object="deployment-4203/webserver-7cd6db5c9d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: webserver-7cd6db5c9d-gff2h"
I0521 16:32:41.065537 1 event.go:291] "Event occurred" object="deployment-4203/webserver-7cd6db5c9d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: webserver-7cd6db5c9d-qtgjn"
I0521 16:32:42.922671 1 event.go:291] "Event occurred" object="deployment-4203/webserver" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set webserver-566c87d976 to 5"
I0521 16:32:42.929427 1 event.go:291] "Event occurred" object="deployment-4203/webserver-566c87d976" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: webserver-566c87d976-xrw65"
I0521 16:32:42.930391 1 event.go:291] "Event occurred" object="deployment-4203/webserver" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set webserver-7cd6db5c9d to 5"
I0521 16:32:42.933964 1 event.go:291] "Event occurred" object="deployment-4203/webserver-7cd6db5c9d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: webserver-7cd6db5c9d-g7nn7"
I0521 16:32:42.944415 1 event.go:291] "Event occurred" object="deployment-4203/webserver" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set webserver-566c87d976 to 4"
I0521 16:32:42.951359 1 event.go:291] "Event occurred" object="deployment-4203/webserver-566c87d976" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: webserver-566c87d976-fzhcv"
I0521 16:32:42.951418 1 event.go:291] "Event occurred" object="deployment-4203/webserver" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set webserver-7cd6db5c9d to 6"
I0521 16:32:42.955320 1 event.go:291] "Event occurred" object="deployment-4203/webserver-7cd6db5c9d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: webserver-7cd6db5c9d-8bhtg"
I0521 16:32:44.609524 1 event.go:291] "Event occurred" object="statefulset-6758/ss" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="delete Pod ss-0 in StatefulSet ss successful"
I0521 16:32:44.941003 1 event.go:291] "Event occurred" object="deployment-4203/webserver" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set webserver-566c87d976 to 3"
I0521 16:32:44.947457 1 event.go:291] "Event occurred" object="deployment-4203/webserver-566c87d976" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: webserver-566c87d976-rpv7c"
I0521 16:32:44.948601 1 event.go:291] "Event occurred" object="deployment-4203/webserver" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set webserver-7cd6db5c9d to 7"
I0521 16:32:44.951120 1 event.go:291] "Event occurred" object="deployment-4203/webserver-7cd6db5c9d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: webserver-7cd6db5c9d-n2vn4"
I0521 16:32:45.204119 1 event.go:291] "Event occurred" object="deployment-4203/webserver" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set webserver-566c87d976 to 2"
I0521 16:32:45.211703 1 event.go:291] "Event occurred" object="deployment-4203/webserver-566c87d976" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: webserver-566c87d976-77gfz"
I0521 16:32:45.212647 1 event.go:291] "Event occurred" object="deployment-4203/webserver" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set webserver-7cd6db5c9d to 8"
I0521 16:32:45.215730 1 event.go:291] "Event occurred" object="deployment-4203/webserver-7cd6db5c9d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: webserver-7cd6db5c9d-9kqdh"
I0521 16:32:45.603934 1 event.go:291] "Event occurred" object="deployment-4203/webserver" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set webserver-566c87d976 to 1"
I0521 16:32:45.610807 1 event.go:291] "Event occurred" object="deployment-4203/webserver-566c87d976" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: webserver-566c87d976-l8d6v"
I0521 16:32:46.208549 1 namespace_controller.go:185] Namespace has been deleted cronjob-991
I0521 16:32:46.604392 1 event.go:291] "Event occurred" object="deployment-4203/webserver" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set webserver-566c87d976 to 0"
I0521 16:32:46.610326 1 event.go:291] "Event occurred" object="deployment-4203/webserver-566c87d976" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: webserver-566c87d976-qgmxm"
E0521 16:32:47.582976 1 namespace_controller.go:162] deletion of namespace cronjob-2274 failed: unexpected items still remain in namespace: cronjob-2274 for gvr: /v1, Resource=pods
E0521 16:32:47.762352 1 namespace_controller.go:162] deletion of namespace cronjob-2274 failed: unexpected items still remain in namespace: cronjob-2274 for gvr: /v1, Resource=pods
E0521 16:32:47.948170 1 namespace_controller.go:162] deletion of namespace cronjob-2274 failed: unexpected items still remain in namespace: cronjob-2274 for gvr: /v1, Resource=pods
E0521 16:32:48.145233 1 namespace_controller.go:162] deletion of namespace cronjob-2274 failed: unexpected items still remain in namespace: cronjob-2274 for gvr: /v1, Resource=pods
E0521 16:32:48.369618 1 namespace_controller.go:162] deletion of namespace cronjob-2274 failed: unexpected items still remain in namespace: cronjob-2274 for gvr: /v1, Resource=pods
E0521 16:32:48.406121 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0521 16:32:48.622211 1 namespace_controller.go:162] deletion of namespace cronjob-2274 failed: unexpected items still remain in namespace: cronjob-2274 for gvr: /v1, Resource=pods
E0521 16:32:48.951358 1 namespace_controller.go:162] deletion of namespace cronjob-2274 failed: unexpected items still remain in namespace: cronjob-2274 for gvr: /v1, Resource=pods
E0521 16:32:49.443420 1 namespace_controller.go:162] deletion of namespace cronjob-2274 failed: unexpected items still remain in namespace: cronjob-2274 for gvr: /v1, Resource=pods
I0521 16:32:50.209094 1 event.go:291] "Event occurred" object="statefulset-2590/ss" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="delete Pod ss-0 in StatefulSet ss successful"
E0521 16:32:50.261850 1 namespace_controller.go:162] deletion of namespace cronjob-2274 failed: unexpected items still remain in namespace: cronjob-2274 for gvr: /v1, Resource=pods
I0521 16:32:50.426038 1 event.go:291] "Event occurred" object="statefulset-6758/ss" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod ss-0 in StatefulSet ss successful"
E0521 16:32:51.715838 1 namespace_controller.go:162] deletion of namespace cronjob-2274 failed: unexpected items still remain in namespace: cronjob-2274 for gvr: /v1, Resource=pods
E0521 16:32:52.127592 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0521 16:32:54.414356 1 namespace_controller.go:162] deletion of namespace cronjob-2274 failed: unexpected items still remain in namespace: cronjob-2274 for gvr: /v1, Resource=pods
E0521 16:32:57.045079 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0521 16:32:57.448095 1 event.go:291] "Event occurred" object="statefulset-6758/ss" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="delete Pod ss-2 in StatefulSet ss successful"
E0521 16:32:59.712525 1 namespace_controller.go:162] deletion of namespace cronjob-2274 failed: unexpected items still remain in namespace: cronjob-2274 for gvr: /v1, Resource=pods
I0521 16:33:00.667808 1 namespace_controller.go:185] Namespace has been deleted deployment-4203
E0521 16:33:00.999693 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0521 16:33:04.051895 1 event.go:291] "Event occurred" object="cronjob-4832/concurrent" kind="CronJob" apiVersion="batch/v1beta1" type="Normal" reason="SuccessfulCreate" message="Created job concurrent-1621614780"
I0521 16:33:04.064331 1 event.go:291] "Event occurred" object="cronjob-4832/concurrent-1621614780" kind="Job" apiVersion="batch/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: concurrent-1621614780-vjzq9"
I0521 16:33:04.085869 1 cronjob_controller.go:190] Unable to update status for cronjob-4832/concurrent (rv = 44562): Operation cannot be fulfilled on cronjobs.batch "concurrent": the object has been modified; please apply your changes to the latest version and try again
I0521 16:33:04.093429 1 event.go:291] "Event occurred" object="cronjob-9237/forbid" kind="CronJob" apiVersion="batch/v1beta1" type="Normal" reason="SuccessfulCreate" message="Created job forbid-1621614780"
I0521 16:33:04.097777 1 event.go:291] "Event occurred" object="cronjob-9237/forbid-1621614780" kind="Job" apiVersion="batch/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: forbid-1621614780-jkr4z"
I0521 16:33:04.100974 1 cronjob_controller.go:190] Unable to update status for cronjob-9237/forbid (rv = 44319): Operation cannot be fulfilled on cronjobs.batch "forbid": the object has been modified; please apply your changes to the latest version and try again
I0521 16:33:04.113308 1 event.go:291] "Event occurred" object="cronjob-9496/forbid" kind="CronJob" apiVersion="batch/v1beta1" type="Normal" reason="SuccessfulCreate" message="Created job forbid-1621614780"
I0521 16:33:04.119379 1 event.go:291] "Event occurred" object="cronjob-9496/forbid-1621614780" kind="Job" apiVersion="batch/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: forbid-1621614780-2xsh7"
I0521 16:33:04.123155 1 cronjob_controller.go:190] Unable to update status for cronjob-9496/forbid (rv = 45242): Operation cannot be fulfilled on cronjobs.batch "forbid": the object has been modified; please apply your changes to the latest version and try again
I0521 16:33:06.664997 1 stateful_set.go:419] StatefulSet has been deleted statefulset-2590/ss
E0521 16:33:07.546997 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0521 16:33:09.814564 1 tokens_controller.go:261] error synchronizing serviceaccount cronjob-4832/default: secrets "default-token-wqsf6" is forbidden: unable to create new content in namespace cronjob-4832 because it is being terminated
E0521 16:33:10.121084 1 namespace_controller.go:162] deletion of namespace cronjob-2274 failed: unexpected items still remain in namespace: cronjob-2274 for gvr: /v1, Resource=pods
I0521 16:33:10.209902 1 event.go:291] "Event occurred" object="statefulset-6758/ss" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="delete Pod ss-1 in StatefulSet ss successful"
E0521 16:33:12.068542 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0521 16:33:12.469260 1 event.go:291] "Event occurred" object="statefulset-6758/ss" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="delete Pod ss-0 in StatefulSet ss successful"
I0521 16:33:14.148834 1 event.go:291] "Event occurred" object="cronjob-9496/forbid" kind="CronJob" apiVersion="batch/v1beta1" type="Normal" reason="MissingJob" message="Active job went missing: forbid-1621614780"
E0521 16:33:16.373483 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0521 16:33:18.056160 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0521 16:33:20.331800 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0521 16:33:20.609559 1 namespace_controller.go:185] Namespace has been deleted job-2108
I0521 16:33:27.126656 1 namespace_controller.go:185] Namespace has been deleted statefulset-2590
I0521 16:33:27.457771 1 stateful_set.go:419] StatefulSet has been deleted statefulset-6758/ss
E0521 16:33:30.058423 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0521 16:33:30.778743 1 namespace_controller.go:162] deletion of namespace cronjob-2274 failed: unexpected items still remain in namespace: cronjob-2274 for gvr: /v1, Resource=pods
E0521 16:33:33.089235 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0521 16:33:39.265758 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0521 16:33:40.182409 1 namespace_controller.go:162] deletion of namespace cronjob-2274 failed: unexpected items still remain in namespace: cronjob-2274 for gvr: /v1, Resource=pods
E0521 16:33:40.191239 1 namespace_controller.go:162] deletion of namespace cronjob-4832 failed: unexpected items still remain in namespace: cronjob-4832 for gvr: /v1, Resource=pods
E0521 16:33:40.373768 1 namespace_controller.go:162] deletion of namespace cronjob-4832 failed: unexpected items still remain in namespace: cronjob-4832 for gvr: /v1, Resource=pods
E0521 16:33:40.559654 1 namespace_controller.go:162] deletion of namespace cronjob-4832 failed: unexpected items still remain in namespace: cronjob-4832 for gvr: /v1, Resource=pods
E0521 16:33:40.754791 1 namespace_controller.go:162] deletion of namespace cronjob-4832 failed: unexpected items still remain in namespace: cronjob-4832 for gvr: /v1, Resource=pods
E0521 16:33:40.972418 1 namespace_controller.go:162] deletion of namespace cronjob-4832 failed: unexpected items still remain in namespace: cronjob-4832 for gvr: /v1, Resource=pods
E0521 16:33:41.225179 1 namespace_controller.go:162] deletion of namespace cronjob-4832 failed: unexpected items still remain in namespace: cronjob-4832 for gvr: /v1, Resource=pods
E0521 16:33:41.561367 1 namespace_controller.go:162] deletion of namespace cronjob-4832 failed: unexpected items still remain in namespace: cronjob-4832 for gvr: /v1, Resource=pods
E0521 16:33:41.735364 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0521 16:33:42.056690 1 namespace_controller.go:162] deletion of namespace cronjob-4832 failed: unexpected items still remain in namespace: cronjob-4832 for gvr: /v1, Resource=pods
E0521 16:33:42.846588 1 namespace_controller.go:162] deletion of namespace cronjob-4832 failed: unexpected items still remain in namespace: cronjob-4832 for gvr: /v1, Resource=pods
E0521 16:33:44.301781 1 namespace_controller.go:162] deletion of namespace cronjob-4832 failed: unexpected items still remain in namespace: cronjob-4832 for gvr: /v1, Resource=pods
E0521 16:33:47.040820 1 namespace_controller.go:162] deletion of namespace cronjob-4832 failed: unexpected items still remain in namespace: cronjob-4832 for gvr: /v1, Resource=pods
I0521 16:33:47.957110 1 namespace_controller.go:185] Namespace has been deleted statefulset-6758
E0521 16:33:51.050703 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0521 16:33:52.339806 1 namespace_controller.go:162] deletion of namespace cronjob-4832 failed: unexpected items still remain in namespace: cronjob-4832 for gvr: /v1, Resource=pods
E0521 16:33:53.130199 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0521 16:33:55.890936 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0521 16:34:02.760949 1 namespace_controller.go:162] deletion of namespace cronjob-4832 failed: unexpected items still remain in namespace: cronjob-4832 for gvr: /v1, Resource=pods
E0521 16:34:03.523648 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0521 16:34:04.339251 1 event.go:291] "Event occurred" object="cronjob-9496/forbid" kind="CronJob" apiVersion="batch/v1beta1" type="Normal" reason="SuccessfulCreate" message="Created job forbid-1621614840"
I0521 16:34:04.346302 1 event.go:291] "Event occurred" object="cronjob-9496/forbid-1621614840" kind="Job" apiVersion="batch/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: forbid-1621614840-v54ch"
I0521 16:34:04.349008 1 cronjob_controller.go:190] Unable to update status for cronjob-9496/forbid (rv = 46866): Operation cannot be fulfilled on cronjobs.batch "forbid": the object has been modified; please apply your changes to the latest version and try again
E0521 16:34:04.644843 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0521 16:34:08.387751 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0521 16:34:14.243501 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0521 16:34:21.996639 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0521 16:34:23.424220 1 namespace_controller.go:162] deletion of namespace cronjob-4832 failed: unexpected items still remain in namespace: cronjob-4832 for gvr: /v1, Resource=pods
E0521 16:34:23.812995 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0521 16:34:25.120792 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0521 16:34:27.480898 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0521 16:34:39.077675 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not
find the requested resource\nE0521 16:34:42.459133 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI0521 16:34:45.359329 1 namespace_controller.go:185] Namespace has been deleted cronjob-2274\nE0521 16:34:49.643016 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0521 16:34:54.539815 1 cronjob_controller.go:253] Unable to update status for cronjob-9496/forbid (rv = 47266): Operation cannot be fulfilled on cronjobs.batch \"forbid\": StorageError: invalid object, Code: 4, Key: /registry/cronjobs/cronjob-9496/forbid, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: 50931699-5d7e-41fa-b7fc-bb1070dff8b2, UID in object meta: \nI0521 16:34:54.542381 1 cronjob_controller.go:190] Unable to update status for cronjob-9496/forbid (rv = 47266): Operation cannot be fulfilled on cronjobs.batch \"forbid\": StorageError: invalid object, Code: 4, Key: /registry/cronjobs/cronjob-9496/forbid, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: 50931699-5d7e-41fa-b7fc-bb1070dff8b2, UID in object meta: \nE0521 16:34:55.460092 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0521 16:35:01.851537 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0521 16:35:06.949108 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch 
*v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0521 16:35:07.045763 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI0521 16:35:09.569316 1 namespace_controller.go:185] Namespace has been deleted cronjob-4832\nE0521 16:35:10.245646 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0521 16:35:10.815080 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0521 16:35:12.920818 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0521 16:35:20.404089 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0521 16:35:28.991957 1 tokens_controller.go:261] error synchronizing serviceaccount cronjob-9671/default: secrets \"default-token-lqp88\" is forbidden: unable to create new content in namespace cronjob-9671 because it is being terminated\nE0521 16:35:29.159917 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI0521 16:35:34.097974 1 namespace_controller.go:185] Namespace has been deleted cronjob-9671\nE0521 16:35:36.251893 1 
reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0521 16:35:42.269041 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI0521 16:35:42.326932 1 namespace_controller.go:185] Namespace has been deleted cronjob-9496\nE0521 16:35:45.123207 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0521 16:35:50.016879 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0521 16:35:51.410918 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0521 16:35:56.146933 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0521 16:36:03.953449 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0521 16:36:06.317679 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0521 16:36:06.372354 1 reflector.go:127] 
k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0521 16:36:17.594678 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0521 16:36:25.003444 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0521 16:36:27.968264 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0521 16:36:30.341950 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0521 16:36:41.787709 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0521 16:36:42.308390 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0521 16:36:44.245707 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0521 16:36:44.287354 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to 
list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0521 16:36:50.221126 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0521 16:36:52.396139 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0521 16:37:03.232171 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0521 16:37:08.523972 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0521 16:37:13.035890 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0521 16:37:16.069628 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0521 16:37:29.784023 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0521 16:37:30.652030 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0521 16:37:31.894658 1 
reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0521 16:37:34.005124 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0521 16:37:35.353237 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0521 16:37:42.342681 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0521 16:37:47.686925 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0521 16:37:57.934038 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0521 16:38:04.205076 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0521 16:38:11.351098 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0521 16:38:11.821377 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch 
*v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI0521 16:38:14.933669 1 namespace_controller.go:185] Namespace has been deleted cronjob-9237\nI0521 16:38:17.433513 1 event.go:291] \"Event occurred\" object=\"disruption-4102/rs\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: rs-52wx2\"\nI0521 16:38:17.437150 1 event.go:291] \"Event occurred\" object=\"disruption-4102/rs\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: rs-cdx8j\"\nI0521 16:38:17.439006 1 event.go:291] \"Event occurred\" object=\"disruption-4102/rs\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: rs-z7fmb\"\nI0521 16:38:17.443175 1 event.go:291] \"Event occurred\" object=\"disruption-4102/rs\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: rs-bb72h\"\nI0521 16:38:17.444289 1 event.go:291] \"Event occurred\" object=\"disruption-4102/rs\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: rs-h4rmq\"\nI0521 16:38:17.444467 1 event.go:291] \"Event occurred\" object=\"disruption-4102/rs\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: rs-phfhk\"\nI0521 16:38:17.444632 1 event.go:291] \"Event occurred\" object=\"disruption-4102/rs\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: rs-hkpgl\"\nI0521 16:38:17.447785 1 event.go:291] \"Event occurred\" object=\"disruption-4102/rs\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: rs-l58ns\"\nI0521 16:38:17.447911 1 event.go:291] \"Event occurred\" object=\"disruption-4102/rs\" kind=\"ReplicaSet\" 
apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: rs-zsxk4\"\nI0521 16:38:17.447979 1 event.go:291] \"Event occurred\" object=\"disruption-4102/rs\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: rs-q4c6k\"\nI0521 16:38:19.542735 1 event.go:291] \"Event occurred\" object=\"daemonsets-271/daemon-set\" kind=\"DaemonSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: daemon-set-pkw7j\"\nI0521 16:38:21.562607 1 event.go:291] \"Event occurred\" object=\"daemonsets-271/daemon-set\" kind=\"DaemonSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"Deleted pod: daemon-set-pkw7j\"\nE0521 16:38:22.647342 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0521 16:38:24.505502 1 disruption.go:505] Error syncing PodDisruptionBudget disruption-4102/foo, requeuing: Operation cannot be fulfilled on poddisruptionbudgets.policy \"foo\": the object has been modified; please apply your changes to the latest version and try again\nE0521 16:38:24.876379 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0521 16:38:25.187087 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI0521 16:38:25.203630 1 event.go:291] \"Event occurred\" object=\"disruption-3936/rs\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: rs-46x9b\"\nI0521 16:38:25.208736 1 event.go:291] \"Event 
occurred\" object=\"disruption-3936/rs\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: rs-jtc6v\"\nI0521 16:38:25.208774 1 event.go:291] \"Event occurred\" object=\"disruption-3936/rs\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: rs-zf52p\"\nI0521 16:38:25.213058 1 event.go:291] \"Event occurred\" object=\"disruption-3936/rs\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: rs-jpcl2\"\nI0521 16:38:25.214171 1 event.go:291] \"Event occurred\" object=\"disruption-3936/rs\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: rs-l8vvz\"\nI0521 16:38:25.214270 1 event.go:291] \"Event occurred\" object=\"disruption-3936/rs\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: rs-45sdc\"\nI0521 16:38:25.214320 1 event.go:291] \"Event occurred\" object=\"disruption-3936/rs\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: rs-4xwjv\"\nI0521 16:38:25.218864 1 event.go:291] \"Event occurred\" object=\"disruption-3936/rs\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: rs-nckrj\"\nI0521 16:38:25.218896 1 event.go:291] \"Event occurred\" object=\"disruption-3936/rs\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: rs-52rz4\"\nI0521 16:38:25.218972 1 event.go:291] \"Event occurred\" object=\"disruption-3936/rs\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: rs-vp26b\"\nE0521 16:38:25.887790 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list 
*v1.PartialObjectMetadata: the server could not find the requested resource\nE0521 16:38:26.597648 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI0521 16:38:27.298430 1 event.go:291] \"Event occurred\" object=\"daemonsets-5872/daemon-set\" kind=\"DaemonSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: daemon-set-vbm46\"\nI0521 16:38:27.303245 1 event.go:291] \"Event occurred\" object=\"daemonsets-5872/daemon-set\" kind=\"DaemonSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: daemon-set-295hm\"\nE0521 16:38:30.298707 1 tokens_controller.go:261] error synchronizing serviceaccount daemonsets-271/default: secrets \"default-token-pqlwz\" is forbidden: unable to create new content in namespace daemonsets-271 because it is being terminated\nE0521 16:38:32.263305 1 disruption.go:552] Failed to sync pdb disruption-3936/foo: found no controllers for pod \"rs-vp26b\"\nE0521 16:38:32.264813 1 event.go:264] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"foo.1681225a58db5f96\", GenerateName:\"\", Namespace:\"disruption-3936\", SelfLink:\"\", UID:\"\", ResourceVersion:\"\", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:\"\", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:\"PodDisruptionBudget\", Namespace:\"disruption-3936\", Name:\"foo\", UID:\"49430079-8802-4a28-ac5a-4c26b873ade2\", APIVersion:\"policy/v1beta1\", ResourceVersion:\"48493\", 
FieldPath:\"\"}, Reason:\"NoControllers\", Message:\"found no controllers for pod \\\"rs-vp26b\\\"\", Source:v1.EventSource{Component:\"controllermanager\", Host:\"\"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc02216620fb10f96, ext:5124344671076, loc:(*time.Location)(0x6a53ca0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc02216620fb10f96, ext:5124344671076, loc:(*time.Location)(0x6a53ca0)}}, Count:1, Type:\"Warning\", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:\"\", Related:(*v1.ObjectReference)(nil), ReportingController:\"\", ReportingInstance:\"\"}': 'events \"foo.1681225a58db5f96\" is forbidden: unable to create new content in namespace disruption-3936 because it is being terminated' (will not retry!)\nE0521 16:38:32.266592 1 event.go:264] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"foo.1681225a58dbc315\", GenerateName:\"\", Namespace:\"disruption-3936\", SelfLink:\"\", UID:\"\", ResourceVersion:\"\", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:\"\", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:\"PodDisruptionBudget\", Namespace:\"disruption-3936\", Name:\"foo\", UID:\"49430079-8802-4a28-ac5a-4c26b873ade2\", APIVersion:\"policy/v1beta1\", ResourceVersion:\"48493\", FieldPath:\"\"}, Reason:\"CalculateExpectedPodCountFailed\", Message:\"Failed to calculate the number of expected pods: found no controllers for pod \\\"rs-vp26b\\\"\", Source:v1.EventSource{Component:\"controllermanager\", Host:\"\"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc02216620fb17315, ext:5124344696555, 
loc:(*time.Location)(0x6a53ca0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc02216620fb17315, ext:5124344696555, loc:(*time.Location)(0x6a53ca0)}}, Count:1, Type:\"Warning\", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:\"\", Related:(*v1.ObjectReference)(nil), ReportingController:\"\", ReportingInstance:\"\"}': 'events \"foo.1681225a58dbc315\" is forbidden: unable to create new content in namespace disruption-3936 because it is being terminated' (will not retry!)\nE0521 16:38:32.268851 1 disruption.go:552] Failed to sync pdb disruption-3936/foo: found no controllers for pod \"rs-46x9b\"\nE0521 16:38:32.270223 1 event.go:264] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"foo.1681225a59305cf0\", GenerateName:\"\", Namespace:\"disruption-3936\", SelfLink:\"\", UID:\"\", ResourceVersion:\"\", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:\"\", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:\"PodDisruptionBudget\", Namespace:\"disruption-3936\", Name:\"foo\", UID:\"49430079-8802-4a28-ac5a-4c26b873ade2\", APIVersion:\"policy/v1beta1\", ResourceVersion:\"48493\", FieldPath:\"\"}, Reason:\"NoControllers\", Message:\"found no controllers for pod \\\"rs-46x9b\\\"\", Source:v1.EventSource{Component:\"controllermanager\", Host:\"\"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc022166210060cf0, ext:5124350240942, loc:(*time.Location)(0x6a53ca0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc022166210060cf0, ext:5124350240942, loc:(*time.Location)(0x6a53ca0)}}, Count:1, Type:\"Warning\", 
EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:\"\", Related:(*v1.ObjectReference)(nil), ReportingController:\"\", ReportingInstance:\"\"}': 'events \"foo.1681225a59305cf0\" is forbidden: unable to create new content in namespace disruption-3936 because it is being terminated' (will not retry!)\nE0521 16:38:32.271800 1 event.go:264] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"foo.1681225a59308089\", GenerateName:\"\", Namespace:\"disruption-3936\", SelfLink:\"\", UID:\"\", ResourceVersion:\"\", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:\"\", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:\"PodDisruptionBudget\", Namespace:\"disruption-3936\", Name:\"foo\", UID:\"49430079-8802-4a28-ac5a-4c26b873ade2\", APIVersion:\"policy/v1beta1\", ResourceVersion:\"48493\", FieldPath:\"\"}, Reason:\"CalculateExpectedPodCountFailed\", Message:\"Failed to calculate the number of expected pods: found no controllers for pod \\\"rs-46x9b\\\"\", Source:v1.EventSource{Component:\"controllermanager\", Host:\"\"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc022166210063089, ext:5124350250055, loc:(*time.Location)(0x6a53ca0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc022166210063089, ext:5124350250055, loc:(*time.Location)(0x6a53ca0)}}, Count:1, Type:\"Warning\", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:\"\", Related:(*v1.ObjectReference)(nil), ReportingController:\"\", ReportingInstance:\"\"}': 'events \"foo.1681225a59308089\" is forbidden: 
unable to create new content in namespace disruption-3936 because it is being terminated' (will not retry!)\nE0521 16:38:32.274108 1 disruption.go:552] Failed to sync pdb disruption-3936/foo: found no controllers for pod \"rs-46x9b\"\nE0521 16:38:32.277578 1 event.go:264] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"foo.1681225a59305cf0\", GenerateName:\"\", Namespace:\"disruption-3936\", SelfLink:\"\", UID:\"\", ResourceVersion:\"\", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:\"\", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:\"PodDisruptionBudget\", Namespace:\"disruption-3936\", Name:\"foo\", UID:\"49430079-8802-4a28-ac5a-4c26b873ade2\", APIVersion:\"policy/v1beta1\", ResourceVersion:\"48493\", FieldPath:\"\"}, Reason:\"NoControllers\", Message:\"found no controllers for pod \\\"rs-46x9b\\\"\", Source:v1.EventSource{Component:\"controllermanager\", Host:\"\"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc022166210060cf0, ext:5124350240942, loc:(*time.Location)(0x6a53ca0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc02216621055db8c, ext:5124355471221, loc:(*time.Location)(0x6a53ca0)}}, Count:2, Type:\"Warning\", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:\"\", Related:(*v1.ObjectReference)(nil), ReportingController:\"\", ReportingInstance:\"\"}': 'events \"foo.1681225a59305cf0\" is forbidden: unable to create new content in namespace disruption-3936 because it is being terminated' (will not retry!)\nE0521 16:38:32.281186 1 event.go:264] Server rejected event 
'&v1.Event{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"foo.1681225a59308089\", GenerateName:\"\", Namespace:\"disruption-3936\", SelfLink:\"\", UID:\"\", ResourceVersion:\"\", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:\"\", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:\"PodDisruptionBudget\", Namespace:\"disruption-3936\", Name:\"foo\", UID:\"49430079-8802-4a28-ac5a-4c26b873ade2\", APIVersion:\"policy/v1beta1\", ResourceVersion:\"48493\", FieldPath:\"\"}, Reason:\"CalculateExpectedPodCountFailed\", Message:\"Failed to calculate the number of expected pods: found no controllers for pod \\\"rs-46x9b\\\"\", Source:v1.EventSource{Component:\"controllermanager\", Host:\"\"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc022166210063089, ext:5124350250055, loc:(*time.Location)(0x6a53ca0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc022166210564e41, ext:5124355500577, loc:(*time.Location)(0x6a53ca0)}}, Count:2, Type:\"Warning\", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:\"\", Related:(*v1.ObjectReference)(nil), ReportingController:\"\", ReportingInstance:\"\"}': 'events \"foo.1681225a59308089\" is forbidden: unable to create new content in namespace disruption-3936 because it is being terminated' (will not retry!)\nE0521 16:38:32.331137 1 tokens_controller.go:261] error synchronizing serviceaccount disruption-3936/default: secrets \"default-token-gqjld\" is forbidden: unable to create new content in namespace disruption-3936 because it is being terminated\nI0521 16:38:35.378546 1 namespace_controller.go:185] Namespace has been deleted 
daemonsets-271\nI0521 16:38:43.387423 1 namespace_controller.go:185] Namespace has been deleted daemonsets-5872\nI0521 16:38:45.184499 1 namespace_controller.go:185] Namespace has been deleted disruption-4102\nE0521 16:38:45.283444 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0521 16:38:45.784763 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0521 16:38:47.876571 1 tokens_controller.go:261] error synchronizing serviceaccount node-authz-2646/default: secrets \"default-token-mhvvv\" is forbidden: unable to create new content in namespace node-authz-2646 because it is being terminated\nE0521 16:38:48.515965 1 tokens_controller.go:261] error synchronizing serviceaccount node-authz-9608/default: secrets \"default-token-gc58f\" is forbidden: unable to create new content in namespace node-authz-9608 because it is being terminated\nE0521 16:38:48.591253 1 tokens_controller.go:261] error synchronizing serviceaccount node-authz-755/default: secrets \"default-token-tzwpq\" is forbidden: unable to create new content in namespace node-authz-755 because it is being terminated\nE0521 16:38:51.895787 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI0521 16:38:52.939051 1 namespace_controller.go:185] Namespace has been deleted node-authz-4669\nI0521 16:38:52.963180 1 namespace_controller.go:185] Namespace has been deleted node-authz-2646\nI0521 16:38:53.252562 1 namespace_controller.go:185] Namespace has been deleted metadata-concealment-7221\nI0521 16:38:53.263483 1 
namespace_controller.go:185] Namespace has been deleted node-authz-2177\nI0521 16:38:53.515447 1 namespace_controller.go:185] Namespace has been deleted node-authz-1335\nI0521 16:38:53.564660 1 namespace_controller.go:185] Namespace has been deleted node-authz-9608\nI0521 16:38:53.734457 1 namespace_controller.go:185] Namespace has been deleted node-authz-755\nI0521 16:38:55.549418 1 namespace_controller.go:185] Namespace has been deleted node-authn-1265\nE0521 16:38:55.734576 1 tokens_controller.go:261] error synchronizing serviceaccount svcaccounts-6367/default: secrets \"default-token-rjqc6\" is forbidden: unable to create new content in namespace svcaccounts-6367 because it is being terminated\nI0521 16:38:55.898724 1 namespace_controller.go:185] Namespace has been deleted node-authn-406\nE0521 16:38:57.078202 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0521 16:38:57.922971 1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0521 16:38:58.195814 1 tokens_controller.go:261] error synchronizing serviceaccount certificates-5180/default: secrets \"default-token-c5mhr\" is forbidden: unable to create new content in namespace certificates-5180 because it is being terminated\nI0521 16:38:58.875949 1 namespace_controller.go:185] Namespace has been deleted disruption-3936\nI0521 16:39:00.850706 1 namespace_controller.go:185] Namespace has been deleted svcaccounts-6367\nI0521 16:39:03.343538 1 namespace_controller.go:185] Namespace has been deleted certificates-5180\n==== END logs for container kube-controller-manager of pod kube-system/kube-controller-manager-kali-control-plane ====\n==== START logs for container kube-multus of pod 
kube-system/kube-multus-ds-f4mr9 ====\n2021-05-21T15:16:40+0000 Generating Multus configuration file using files in /host/etc/cni/net.d...\n2021-05-21T15:16:41+0000 Nested capabilities string: \"capabilities\": {\"portMappings\": true},\n2021-05-21T15:16:41+0000 Using /host/etc/cni/net.d/10-kindnet.conflist as a source to generate the Multus configuration\n2021-05-21T15:16:41+0000 Config file created @ /host/etc/cni/net.d/00-multus.conf\n{ \"cniVersion\": \"0.3.1\", \"name\": \"multus-cni-network\", \"type\": \"multus\", \"capabilities\": {\"portMappings\": true}, \"kubeconfig\": \"/etc/cni/net.d/multus.d/multus.kubeconfig\", \"delegates\": [ { \"cniVersion\": \"0.3.1\", \"name\": \"kindnet\", \"plugins\": [ { \"type\": \"ptp\", \"ipMasq\": false, \"ipam\": { \"type\": \"host-local\", \"dataDir\": \"/run/cni-ipam-state\", \"routes\": [ { \"dst\": \"0.0.0.0/0\" } ], \"ranges\": [ [ { \"subnet\": \"10.244.1.0/24\" } ] ] } , \"mtu\": 1500 }, { \"type\": \"portmap\", \"capabilities\": { \"portMappings\": true } } ] } ] }\n2021-05-21T15:16:41+0000 Entering sleep (success)...\n==== END logs for container kube-multus of pod kube-system/kube-multus-ds-f4mr9 ====\n==== START logs for container kube-multus of pod kube-system/kube-multus-ds-xtw9p ====\n2021-05-21T15:16:42+0000 Generating Multus configuration file using files in /host/etc/cni/net.d...\n2021-05-21T15:16:43+0000 Nested capabilities string: \"capabilities\": {\"portMappings\": true},\n2021-05-21T15:16:43+0000 Using /host/etc/cni/net.d/10-kindnet.conflist as a source to generate the Multus configuration\n2021-05-21T15:16:43+0000 Config file created @ /host/etc/cni/net.d/00-multus.conf\n{ \"cniVersion\": \"0.3.1\", \"name\": \"multus-cni-network\", \"type\": \"multus\", \"capabilities\": {\"portMappings\": true}, \"kubeconfig\": \"/etc/cni/net.d/multus.d/multus.kubeconfig\", \"delegates\": [ { \"cniVersion\": \"0.3.1\", \"name\": \"kindnet\", \"plugins\": [ { \"type\": \"ptp\", \"ipMasq\": false, \"ipam\": { 
\"type\": \"host-local\", \"dataDir\": \"/run/cni-ipam-state\", \"routes\": [ { \"dst\": \"0.0.0.0/0\" } ], \"ranges\": [ [ { \"subnet\": \"10.244.0.0/24\" } ] ] } , \"mtu\": 1500 }, { \"type\": \"portmap\", \"capabilities\": { \"portMappings\": true } } ] } ] }\n2021-05-21T15:16:43+0000 Entering sleep (success)...\n==== END logs for container kube-multus of pod kube-system/kube-multus-ds-xtw9p ====\n==== START logs for container kube-multus of pod kube-system/kube-multus-ds-zr9pd ====\n2021-05-21T15:16:21+0000 Generating Multus configuration file using files in /host/etc/cni/net.d...\n2021-05-21T15:16:22+0000 Nested capabilities string: \"capabilities\": {\"portMappings\": true},\n2021-05-21T15:16:22+0000 Using /host/etc/cni/net.d/10-kindnet.conflist as a source to generate the Multus configuration\n2021-05-21T15:16:22+0000 Config file created @ /host/etc/cni/net.d/00-multus.conf\n{ \"cniVersion\": \"0.3.1\", \"name\": \"multus-cni-network\", \"type\": \"multus\", \"capabilities\": {\"portMappings\": true}, \"kubeconfig\": \"/etc/cni/net.d/multus.d/multus.kubeconfig\", \"delegates\": [ { \"cniVersion\": \"0.3.1\", \"name\": \"kindnet\", \"plugins\": [ { \"type\": \"ptp\", \"ipMasq\": false, \"ipam\": { \"type\": \"host-local\", \"dataDir\": \"/run/cni-ipam-state\", \"routes\": [ { \"dst\": \"0.0.0.0/0\" } ], \"ranges\": [ [ { \"subnet\": \"10.244.2.0/24\" } ] ] } , \"mtu\": 1500 }, { \"type\": \"portmap\", \"capabilities\": { \"portMappings\": true } } ] } ] }\n2021-05-21T15:16:22+0000 Entering sleep (success)...\n==== END logs for container kube-multus of pod kube-system/kube-multus-ds-zr9pd ====\n==== START logs for container kube-proxy of pod kube-system/kube-proxy-87457 ====\nI0521 15:13:52.981767 1 node.go:136] Successfully retrieved node IP: 172.18.0.4\nI0521 15:13:52.981867 1 server_others.go:142] kube-proxy node IP is an IPv4 address (172.18.0.4), assume IPv4 operation\nI0521 15:13:53.004999 1 server_others.go:185] Using iptables Proxier.\nI0521 
15:13:53.005563 1 server.go:650] Version: v1.19.11\nI0521 15:13:53.006467 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400\nI0521 15:13:53.006637 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600\nI0521 15:13:53.006899 1 config.go:315] Starting service config controller\nI0521 15:13:53.006918 1 shared_informer.go:240] Waiting for caches to sync for service config\nI0521 15:13:53.006944 1 config.go:224] Starting endpoint slice config controller\nI0521 15:13:53.006969 1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config\nI0521 15:13:53.107108 1 shared_informer.go:247] Caches are synced for endpoint slice config \nI0521 15:13:53.107098 1 shared_informer.go:247] Caches are synced for service config \nE0521 16:02:23.425134 1 proxier.go:814] Failed to get local addresses during proxy sync: route ip+net: no such network interface, assuming external IPs are not local\n==== END logs for container kube-proxy of pod kube-system/kube-proxy-87457 ====\n==== START logs for container kube-proxy of pod kube-system/kube-proxy-c6n8g ====\nI0521 15:13:37.305598 1 node.go:136] Successfully retrieved node IP: 172.18.0.3\nI0521 15:13:37.305721 1 server_others.go:142] kube-proxy node IP is an IPv4 address (172.18.0.3), assume IPv4 operation\nI0521 15:13:37.329396 1 server_others.go:185] Using iptables Proxier.\nI0521 15:13:37.330063 1 server.go:650] Version: v1.19.11\nI0521 15:13:37.331061 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400\nI0521 15:13:37.331251 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600\nI0521 15:13:37.331569 1 config.go:315] Starting service config controller\nI0521 15:13:37.331589 1 shared_informer.go:240] Waiting for caches to sync for service config\nI0521 15:13:37.331644 1 config.go:224] Starting endpoint slice config controller\nI0521 15:13:37.331672 1 
shared_informer.go:240] Waiting for caches to sync for endpoint slice config\nI0521 15:13:37.431904 1 shared_informer.go:247] Caches are synced for endpoint slice config \nI0521 15:13:37.431990 1 shared_informer.go:247] Caches are synced for service config \n==== END logs for container kube-proxy of pod kube-system/kube-proxy-c6n8g ====\n==== START logs for container kube-proxy of pod kube-system/kube-proxy-ggwmf ====\nI0521 15:13:52.975559 1 node.go:136] Successfully retrieved node IP: 172.18.0.2\nI0521 15:13:52.975736 1 server_others.go:142] kube-proxy node IP is an IPv4 address (172.18.0.2), assume IPv4 operation\nI0521 15:13:52.998024 1 server_others.go:185] Using iptables Proxier.\nI0521 15:13:52.998589 1 server.go:650] Version: v1.19.11\nI0521 15:13:52.999465 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400\nI0521 15:13:52.999586 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600\nI0521 15:13:52.999904 1 config.go:315] Starting service config controller\nI0521 15:13:52.999936 1 shared_informer.go:240] Waiting for caches to sync for service config\nI0521 15:13:52.999979 1 config.go:224] Starting endpoint slice config controller\nI0521 15:13:53.000010 1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config\nI0521 15:13:53.100154 1 shared_informer.go:247] Caches are synced for service config \nI0521 15:13:53.100177 1 shared_informer.go:247] Caches are synced for endpoint slice config \nE0521 16:01:07.452584 1 proxier.go:814] Failed to get local addresses during proxy sync: route ip+net: no such network interface, assuming external IPs are not local\n==== END logs for container kube-proxy of pod kube-system/kube-proxy-ggwmf ====\n==== START logs for container kube-scheduler of pod kube-system/kube-scheduler-kali-control-plane ====\nI0521 15:13:07.977915 1 registry.go:173] Registering SelectorSpread plugin\nI0521 15:13:07.977971 1 registry.go:173] Registering 
SelectorSpread plugin\nI0521 15:13:08.331436 1 serving.go:331] Generated self-signed cert in-memory\nW0521 15:13:14.772678 1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'\nW0521 15:13:14.772759 1 authentication.go:294] Error looking up in-cluster authentication configuration: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot get resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"\nW0521 15:13:14.772787 1 authentication.go:295] Continuing without authentication configuration. This may treat all requests as anonymous.\nW0521 15:13:14.772799 1 authentication.go:296] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false\nI0521 15:13:14.792632 1 registry.go:173] Registering SelectorSpread plugin\nI0521 15:13:14.792652 1 registry.go:173] Registering SelectorSpread plugin\nI0521 15:13:14.796183 1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file\nI0521 15:13:14.796227 1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\nI0521 15:13:14.796693 1 secure_serving.go:197] Serving securely on 127.0.0.1:10259\nI0521 15:13:14.797006 1 tlsconfig.go:240] Starting DynamicServingCertificateController\nE0521 15:13:14.799863 1 reflector.go:127] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"\nE0521 15:13:14.799907 1 
reflector.go:127] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:188: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope\nE0521 15:13:14.800320 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope\nE0521 15:13:14.800823 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope\nE0521 15:13:14.801472 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope\nE0521 15:13:14.801631 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope\nE0521 15:13:14.801793 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope\nE0521 15:13:14.801898 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User 
\"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope\nE0521 15:13:14.801899 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope\nE0521 15:13:14.801967 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope\nE0521 15:13:14.801985 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope\nE0521 15:13:14.802099 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope\nE0521 15:13:14.802128 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope\nE0521 15:13:15.632073 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope\nE0521 15:13:15.634607 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: 
poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope\nE0521 15:13:15.637458 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope\nE0521 15:13:15.796583 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope\nE0521 15:13:15.806000 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope\nE0521 15:13:15.885009 1 reflector.go:127] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:188: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope\nE0521 15:13:15.915745 1 reflector.go:127] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"\nI0521 15:13:18.897158 1 leaderelection.go:243] attempting to acquire leader lease kube-system/kube-scheduler...\nI0521 15:13:18.914005 1 leaderelection.go:253] successfully acquired lease kube-system/kube-scheduler\nI0521 15:13:19.096513 1 shared_informer.go:247] Caches are synced for 
client-ca::kube-system::extension-apiserver-authentication::client-ca-file \nE0521 16:00:41.330889 1 event.go:264] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"webserver-deployment-795d758f88-7n5tt.168120499a04ca7c\", GenerateName:\"\", Namespace:\"deployment-6688\", SelfLink:\"\", UID:\"\", ResourceVersion:\"\", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:\"\", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:\"Pod\", Namespace:\"deployment-6688\", Name:\"webserver-deployment-795d758f88-7n5tt\", UID:\"c1d33c83-5bd6-40aa-889a-b9c49c99e3cd\", APIVersion:\"v1\", ResourceVersion:\"18465\", FieldPath:\"\"}, Reason:\"Scheduled\", Message:\"Successfully assigned deployment-6688/webserver-deployment-795d758f88-7n5tt to kali-worker\", Source:v1.EventSource{Component:\"default-scheduler\", Host:\"\"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc022142a5300707c, ext:2853403517786, loc:(*time.Location)(0x2cfdb20)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc022142a5300707c, ext:2853403517786, loc:(*time.Location)(0x2cfdb20)}}, Count:1, Type:\"Normal\", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:\"\", Related:(*v1.ObjectReference)(nil), ReportingController:\"\", ReportingInstance:\"\"}': 'events \"webserver-deployment-795d758f88-7n5tt.168120499a04ca7c\" is forbidden: unable to create new content in namespace deployment-6688 because it is being terminated' (will not retry!)\nE0521 16:00:41.332734 1 event.go:264] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, 
ObjectMeta:v1.ObjectMeta{Name:\"webserver-deployment-dd94f59b7-bg684.168120499a1720bd\", GenerateName:\"\", Namespace:\"deployment-6688\", SelfLink:\"\", UID:\"\", ResourceVersion:\"\", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:\"\", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:\"Pod\", Namespace:\"deployment-6688\", Name:\"webserver-deployment-dd94f59b7-bg684\", UID:\"6cca0ca3-b89e-4a57-872e-ade2a71a78c3\", APIVersion:\"v1\", ResourceVersion:\"18466\", FieldPath:\"\"}, Reason:\"Scheduled\", Message:\"Successfully assigned deployment-6688/webserver-deployment-dd94f59b7-bg684 to kali-worker\", Source:v1.EventSource{Component:\"default-scheduler\", Host:\"\"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc022142a5312c6bd, ext:2853404719505, loc:(*time.Location)(0x2cfdb20)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc022142a5312c6bd, ext:2853404719505, loc:(*time.Location)(0x2cfdb20)}}, Count:1, Type:\"Normal\", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:\"\", Related:(*v1.ObjectReference)(nil), ReportingController:\"\", ReportingInstance:\"\"}': 'events \"webserver-deployment-dd94f59b7-bg684.168120499a1720bd\" is forbidden: unable to create new content in namespace deployment-6688 because it is being terminated' (will not retry!)\nE0521 16:00:41.334625 1 event.go:264] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"webserver-deployment-dd94f59b7-bhnsx.168120499a1d186c\", GenerateName:\"\", Namespace:\"deployment-6688\", SelfLink:\"\", UID:\"\", ResourceVersion:\"\", Generation:0, 
CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:\"\", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:\"Pod\", Namespace:\"deployment-6688\", Name:\"webserver-deployment-dd94f59b7-bhnsx\", UID:\"f79cf742-2be3-44d9-a21e-d7863d35e3b6\", APIVersion:\"v1\", ResourceVersion:\"18467\", FieldPath:\"\"}, Reason:\"Scheduled\", Message:\"Successfully assigned deployment-6688/webserver-deployment-dd94f59b7-bhnsx to kali-worker2\", Source:v1.EventSource{Component:\"default-scheduler\", Host:\"\"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc022142a5318be6c, ext:2853405110594, loc:(*time.Location)(0x2cfdb20)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc022142a5318be6c, ext:2853405110594, loc:(*time.Location)(0x2cfdb20)}}, Count:1, Type:\"Normal\", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:\"\", Related:(*v1.ObjectReference)(nil), ReportingController:\"\", ReportingInstance:\"\"}': 'events \"webserver-deployment-dd94f59b7-bhnsx.168120499a1d186c\" is forbidden: unable to create new content in namespace deployment-6688 because it is being terminated' (will not retry!)\nE0521 16:00:41.335981 1 event.go:264] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"webserver-deployment-795d758f88-jjqgn.168120499a20bb69\", GenerateName:\"\", Namespace:\"deployment-6688\", SelfLink:\"\", UID:\"\", ResourceVersion:\"\", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), 
Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:\"\", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:\"Pod\", Namespace:\"deployment-6688\", Name:\"webserver-deployment-795d758f88-jjqgn\", UID:\"85271775-8da7-4d4c-bf2d-fdc7acb7ab51\", APIVersion:\"v1\", ResourceVersion:\"18474\", FieldPath:\"\"}, Reason:\"Scheduled\", Message:\"Successfully assigned deployment-6688/webserver-deployment-795d758f88-jjqgn to kali-worker\", Source:v1.EventSource{Component:\"default-scheduler\", Host:\"\"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc022142a531c6169, ext:2853405348930, loc:(*time.Location)(0x2cfdb20)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc022142a531c6169, ext:2853405348930, loc:(*time.Location)(0x2cfdb20)}}, Count:1, Type:\"Normal\", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:\"\", Related:(*v1.ObjectReference)(nil), ReportingController:\"\", ReportingInstance:\"\"}': 'events \"webserver-deployment-795d758f88-jjqgn.168120499a20bb69\" is forbidden: unable to create new content in namespace deployment-6688 because it is being terminated' (will not retry!)\nE0521 16:00:41.337239 1 event.go:264] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"webserver-deployment-dd94f59b7-7tk9n.168120499a20f4de\", GenerateName:\"\", Namespace:\"deployment-6688\", SelfLink:\"\", UID:\"\", ResourceVersion:\"\", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:\"\", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:\"Pod\", 
Namespace:\"deployment-6688\", Name:\"webserver-deployment-dd94f59b7-7tk9n\", UID:\"acfed715-c129-4403-9aab-9dea85a55cfc\", APIVersion:\"v1\", ResourceVersion:\"18468\", FieldPath:\"\"}, Reason:\"Scheduled\", Message:\"Successfully assigned deployment-6688/webserver-deployment-dd94f59b7-7tk9n to kali-worker\", Source:v1.EventSource{Component:\"default-scheduler\", Host:\"\"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc022142a531c9ade, ext:2853405363660, loc:(*time.Location)(0x2cfdb20)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc022142a531c9ade, ext:2853405363660, loc:(*time.Location)(0x2cfdb20)}}, Count:1, Type:\"Normal\", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:\"\", Related:(*v1.ObjectReference)(nil), ReportingController:\"\", ReportingInstance:\"\"}': 'events \"webserver-deployment-dd94f59b7-7tk9n.168120499a20f4de\" is forbidden: unable to create new content in namespace deployment-6688 because it is being terminated' (will not retry!)\nE0521 16:00:41.338489 1 event.go:264] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"webserver-deployment-795d758f88-cqmvn.168120499a2b9266\", GenerateName:\"\", Namespace:\"deployment-6688\", SelfLink:\"\", UID:\"\", ResourceVersion:\"\", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:\"\", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:\"Pod\", Namespace:\"deployment-6688\", Name:\"webserver-deployment-795d758f88-cqmvn\", UID:\"a45bdffd-6052-4d70-a588-7b60310ce7d9\", APIVersion:\"v1\", ResourceVersion:\"18475\", FieldPath:\"\"}, Reason:\"Scheduled\", 
Message:\"Successfully assigned deployment-6688/webserver-deployment-795d758f88-cqmvn to kali-worker2\", Source:v1.EventSource{Component:\"default-scheduler\", Host:\"\"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc022142a53273866, ext:2853406059322, loc:(*time.Location)(0x2cfdb20)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc022142a53273866, ext:2853406059322, loc:(*time.Location)(0x2cfdb20)}}, Count:1, Type:\"Normal\", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:\"\", Related:(*v1.ObjectReference)(nil), ReportingController:\"\", ReportingInstance:\"\"}': 'events \"webserver-deployment-795d758f88-cqmvn.168120499a2b9266\" is forbidden: unable to create new content in namespace deployment-6688 because it is being terminated' (will not retry!)\nE0521 16:00:41.339697 1 event.go:264] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"webserver-deployment-dd94f59b7-xwpmc.168120499a3555f9\", GenerateName:\"\", Namespace:\"deployment-6688\", SelfLink:\"\", UID:\"\", ResourceVersion:\"\", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:\"\", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:\"Pod\", Namespace:\"deployment-6688\", Name:\"webserver-deployment-dd94f59b7-xwpmc\", UID:\"8f6bb1c5-217f-440f-a31b-5208c6ccddda\", APIVersion:\"v1\", ResourceVersion:\"18476\", FieldPath:\"\"}, Reason:\"Scheduled\", Message:\"Successfully assigned deployment-6688/webserver-deployment-dd94f59b7-xwpmc to kali-worker2\", Source:v1.EventSource{Component:\"default-scheduler\", Host:\"\"}, 
FirstTimestamp:v1.Time{Time:time.Time{wall:0xc022142a5330fbf9, ext:2853406699235, loc:(*time.Location)(0x2cfdb20)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc022142a5330fbf9, ext:2853406699235, loc:(*time.Location)(0x2cfdb20)}}, Count:1, Type:\"Normal\", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:\"\", Related:(*v1.ObjectReference)(nil), ReportingController:\"\", ReportingInstance:\"\"}': 'events \"webserver-deployment-dd94f59b7-xwpmc.168120499a3555f9\" is forbidden: unable to create new content in namespace deployment-6688 because it is being terminated' (will not retry!)\nE0521 16:00:41.341079 1 event.go:264] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"webserver-deployment-dd94f59b7-ksff8.168120499a3f7d7a\", GenerateName:\"\", Namespace:\"deployment-6688\", SelfLink:\"\", UID:\"\", ResourceVersion:\"\", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:\"\", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:\"Pod\", Namespace:\"deployment-6688\", Name:\"webserver-deployment-dd94f59b7-ksff8\", UID:\"48cbbce3-d5fb-4f16-b574-a7d69ed19095\", APIVersion:\"v1\", ResourceVersion:\"18477\", FieldPath:\"\"}, Reason:\"Scheduled\", Message:\"Successfully assigned deployment-6688/webserver-deployment-dd94f59b7-ksff8 to kali-worker\", Source:v1.EventSource{Component:\"default-scheduler\", Host:\"\"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc022142a533b237a, ext:2853407364688, loc:(*time.Location)(0x2cfdb20)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc022142a533b237a, ext:2853407364688, 
loc:(*time.Location)(0x2cfdb20)}}, Count:1, Type:\"Normal\", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:\"\", Related:(*v1.ObjectReference)(nil), ReportingController:\"\", ReportingInstance:\"\"}': 'events \"webserver-deployment-dd94f59b7-ksff8.168120499a3f7d7a\" is forbidden: unable to create new content in namespace deployment-6688 because it is being terminated' (will not retry!)\nE0521 16:00:41.342279 1 event.go:264] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"webserver-deployment-dd94f59b7-v67rv.168120499a49e36e\", GenerateName:\"\", Namespace:\"deployment-6688\", SelfLink:\"\", UID:\"\", ResourceVersion:\"\", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:\"\", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:\"Pod\", Namespace:\"deployment-6688\", Name:\"webserver-deployment-dd94f59b7-v67rv\", UID:\"102d38c7-c12c-46b5-ae4b-e22d10e22030\", APIVersion:\"v1\", ResourceVersion:\"18478\", FieldPath:\"\"}, Reason:\"Scheduled\", Message:\"Successfully assigned deployment-6688/webserver-deployment-dd94f59b7-v67rv to kali-worker2\", Source:v1.EventSource{Component:\"default-scheduler\", Host:\"\"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc022142a5345896e, ext:2853408046150, loc:(*time.Location)(0x2cfdb20)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc022142a5345896e, ext:2853408046150, loc:(*time.Location)(0x2cfdb20)}}, Count:1, Type:\"Normal\", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:\"\", Related:(*v1.ObjectReference)(nil), 
ReportingController:\"\", ReportingInstance:\"\"}': 'events \"webserver-deployment-dd94f59b7-v67rv.168120499a49e36e\" is forbidden: unable to create new content in namespace deployment-6688 because it is being terminated' (will not retry!)\nE0521 16:00:41.348030 1 event.go:264] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"webserver-deployment-795d758f88-cbfsj.168120499a4adaa4\", GenerateName:\"\", Namespace:\"deployment-6688\", SelfLink:\"\", UID:\"\", ResourceVersion:\"\", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:\"\", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:\"Pod\", Namespace:\"deployment-6688\", Name:\"webserver-deployment-795d758f88-cbfsj\", UID:\"25ddeb96-4dd9-4a37-a9e2-431847725f8f\", APIVersion:\"v1\", ResourceVersion:\"18482\", FieldPath:\"\"}, Reason:\"Scheduled\", Message:\"Successfully assigned deployment-6688/webserver-deployment-795d758f88-cbfsj to kali-worker\", Source:v1.EventSource{Component:\"default-scheduler\", Host:\"\"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc022142a534680a4, ext:2853408109432, loc:(*time.Location)(0x2cfdb20)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc022142a534680a4, ext:2853408109432, loc:(*time.Location)(0x2cfdb20)}}, Count:1, Type:\"Normal\", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:\"\", Related:(*v1.ObjectReference)(nil), ReportingController:\"\", ReportingInstance:\"\"}': 'events \"webserver-deployment-795d758f88-cbfsj.168120499a4adaa4\" is forbidden: unable to create new content in namespace deployment-6688 because it is being terminated' 
(will not retry!)\nE0521 16:00:41.351294 1 event.go:264] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"webserver-deployment-795d758f88-jzdxk.168120499a4d6bf2\", GenerateName:\"\", Namespace:\"deployment-6688\", SelfLink:\"\", UID:\"\", ResourceVersion:\"\", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:\"\", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:\"Pod\", Namespace:\"deployment-6688\", Name:\"webserver-deployment-795d758f88-jzdxk\", UID:\"c6b60220-f50d-48af-9de1-7634135975b4\", APIVersion:\"v1\", ResourceVersion:\"18481\", FieldPath:\"\"}, Reason:\"Scheduled\", Message:\"Successfully assigned deployment-6688/webserver-deployment-795d758f88-jzdxk to kali-worker2\", Source:v1.EventSource{Component:\"default-scheduler\", Host:\"\"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc022142a534911f2, ext:2853408277702, loc:(*time.Location)(0x2cfdb20)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc022142a534911f2, ext:2853408277702, loc:(*time.Location)(0x2cfdb20)}}, Count:1, Type:\"Normal\", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:\"\", Related:(*v1.ObjectReference)(nil), ReportingController:\"\", ReportingInstance:\"\"}': 'events \"webserver-deployment-795d758f88-jzdxk.168120499a4d6bf2\" is forbidden: unable to create new content in namespace deployment-6688 because it is being terminated' (will not retry!)\nE0521 16:00:41.354964 1 event.go:264] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, 
ObjectMeta:v1.ObjectMeta{Name:\"webserver-deployment-dd94f59b7-qkj95.168120499a4df762\", GenerateName:\"\", Namespace:\"deployment-6688\", SelfLink:\"\", UID:\"\", ResourceVersion:\"\", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:\"\", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:\"Pod\", Namespace:\"deployment-6688\", Name:\"webserver-deployment-dd94f59b7-qkj95\", UID:\"70e6910b-8991-4dcb-8f32-0d2d918406a7\", APIVersion:\"v1\", ResourceVersion:\"18480\", FieldPath:\"\"}, Reason:\"Scheduled\", Message:\"Successfully assigned deployment-6688/webserver-deployment-dd94f59b7-qkj95 to kali-worker\", Source:v1.EventSource{Component:\"default-scheduler\", Host:\"\"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc022142a53499d62, ext:2853408313402, loc:(*time.Location)(0x2cfdb20)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc022142a53499d62, ext:2853408313402, loc:(*time.Location)(0x2cfdb20)}}, Count:1, Type:\"Normal\", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:\"\", Related:(*v1.ObjectReference)(nil), ReportingController:\"\", ReportingInstance:\"\"}': 'events \"webserver-deployment-dd94f59b7-qkj95.168120499a4df762\" is forbidden: unable to create new content in namespace deployment-6688 because it is being terminated' (will not retry!)\nE0521 16:00:41.357204 1 event.go:264] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"webserver-deployment-dd94f59b7-v76kz.168120499a5ce93d\", GenerateName:\"\", Namespace:\"deployment-6688\", SelfLink:\"\", UID:\"\", ResourceVersion:\"\", Generation:0, 
CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:\"\", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:\"Pod\", Namespace:\"deployment-6688\", Name:\"webserver-deployment-dd94f59b7-v76kz\", UID:\"40d4fea4-430a-4d32-a643-8349518ae80c\", APIVersion:\"v1\", ResourceVersion:\"18483\", FieldPath:\"\"}, Reason:\"Scheduled\", Message:\"Successfully assigned deployment-6688/webserver-deployment-dd94f59b7-v76kz to kali-worker2\", Source:v1.EventSource{Component:\"default-scheduler\", Host:\"\"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc022142a53588f3d, ext:2853409292826, loc:(*time.Location)(0x2cfdb20)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc022142a53588f3d, ext:2853409292826, loc:(*time.Location)(0x2cfdb20)}}, Count:1, Type:\"Normal\", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:\"\", Related:(*v1.ObjectReference)(nil), ReportingController:\"\", ReportingInstance:\"\"}': 'events \"webserver-deployment-dd94f59b7-v76kz.168120499a5ce93d\" is forbidden: unable to create new content in namespace deployment-6688 because it is being terminated' (will not retry!)\nE0521 16:00:41.360448 1 event.go:264] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"webserver-deployment-795d758f88-mtcq7.168120499a5f5de9\", GenerateName:\"\", Namespace:\"deployment-6688\", SelfLink:\"\", UID:\"\", ResourceVersion:\"\", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), 
Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:\"\", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:\"Pod\", Namespace:\"deployment-6688\", Name:\"webserver-deployment-795d758f88-mtcq7\", UID:\"4625c734-ba33-47c5-9bea-800c53b77da2\", APIVersion:\"v1\", ResourceVersion:\"18493\", FieldPath:\"\"}, Reason:\"Scheduled\", Message:\"Successfully assigned deployment-6688/webserver-deployment-795d758f88-mtcq7 to kali-worker\", Source:v1.EventSource{Component:\"default-scheduler\", Host:\"\"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc022142a535b03e9, ext:2853409453768, loc:(*time.Location)(0x2cfdb20)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc022142a535b03e9, ext:2853409453768, loc:(*time.Location)(0x2cfdb20)}}, Count:1, Type:\"Normal\", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:\"\", Related:(*v1.ObjectReference)(nil), ReportingController:\"\", ReportingInstance:\"\"}': 'events \"webserver-deployment-795d758f88-mtcq7.168120499a5f5de9\" is forbidden: unable to create new content in namespace deployment-6688 because it is being terminated' (will not retry!)\nE0521 16:01:21.392261 1 framework.go:747] plugin \"DefaultBinder\" failed to bind pod \"kubelet-test-1393/bin-false9d7ba5d1-5e71-499e-b176-ebb1fe718f1e\": Operation cannot be fulfilled on pods/binding \"bin-false9d7ba5d1-5e71-499e-b176-ebb1fe718f1e\": pod bin-false9d7ba5d1-5e71-499e-b176-ebb1fe718f1e is being deleted, cannot be assigned to a host\nE0521 16:01:21.392397 1 factory.go:465] \"Error scheduling pod; retrying\" err=\"Binding rejected: plugin \\\"DefaultBinder\\\" failed to bind pod \\\"kubelet-test-1393/bin-false9d7ba5d1-5e71-499e-b176-ebb1fe718f1e\\\": Operation cannot be fulfilled on pods/binding \\\"bin-false9d7ba5d1-5e71-499e-b176-ebb1fe718f1e\\\": pod bin-false9d7ba5d1-5e71-499e-b176-ebb1fe718f1e is being deleted, 
cannot be assigned to a host\" pod=\"kubelet-test-1393/bin-false9d7ba5d1-5e71-499e-b176-ebb1fe718f1e\"\nE0521 16:01:21.395559 1 scheduler.go:344] Error updating pod kubelet-test-1393/bin-false9d7ba5d1-5e71-499e-b176-ebb1fe718f1e: pods \"bin-false9d7ba5d1-5e71-499e-b176-ebb1fe718f1e\" not found\nE0521 16:04:47.558460 1 event.go:264] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"pfpod2.16812082eef35ca7\", GenerateName:\"\", Namespace:\"limitrange-6044\", SelfLink:\"\", UID:\"\", ResourceVersion:\"\", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:\"\", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:\"Pod\", Namespace:\"limitrange-6044\", Name:\"pfpod2\", UID:\"b418f8b2-3b48-48fd-841d-801f2bdda9b2\", APIVersion:\"v1\", ResourceVersion:\"28493\", FieldPath:\"\"}, Reason:\"FailedScheduling\", Message:\"skip schedule deleting pod: limitrange-6044/pfpod2\", Source:v1.EventSource{Component:\"default-scheduler\", Host:\"\"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc0221467e130e6a7, ext:3099641574810, loc:(*time.Location)(0x2cfdb20)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc0221467e130e6a7, ext:3099641574810, loc:(*time.Location)(0x2cfdb20)}}, Count:1, Type:\"Warning\", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:\"\", Related:(*v1.ObjectReference)(nil), ReportingController:\"\", ReportingInstance:\"\"}': 'events \"pfpod2.16812082eef35ca7\" is forbidden: unable to create new content in namespace limitrange-6044 because it is being terminated' (will not retry!)\nE0521 16:08:23.156872 1 event.go:264] Server 
rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"additional-pod.168120b52195f50f\", GenerateName:\"\", Namespace:\"sched-pred-5005\", SelfLink:\"\", UID:\"\", ResourceVersion:\"\", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:\"\", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:\"Pod\", Namespace:\"sched-pred-5005\", Name:\"additional-pod\", UID:\"27c4b298-20e0-48aa-ab18-fb993de0a5a7\", APIVersion:\"v1\", ResourceVersion:\"30678\", FieldPath:\"\"}, Reason:\"FailedScheduling\", Message:\"skip schedule deleting pod: sched-pred-5005/additional-pod\", Source:v1.EventSource{Component:\"default-scheduler\", Host:\"\"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc022149dc9390f0f, ext:3315239456253, loc:(*time.Location)(0x2cfdb20)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc022149dc9390f0f, ext:3315239456253, loc:(*time.Location)(0x2cfdb20)}}, Count:1, Type:\"Warning\", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:\"\", Related:(*v1.ObjectReference)(nil), ReportingController:\"\", ReportingInstance:\"\"}': 'events \"additional-pod.168120b52195f50f\" is forbidden: unable to create new content in namespace sched-pred-5005 because it is being terminated' (will not retry!)\nE0521 16:08:30.260095 1 event.go:264] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"restricted-pod.168120b6c8fc1b3a\", GenerateName:\"\", Namespace:\"sched-pred-9610\", SelfLink:\"\", UID:\"\", ResourceVersion:\"\", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, 
loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:\"\", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:\"Pod\", Namespace:\"sched-pred-9610\", Name:\"restricted-pod\", UID:\"b96f11d2-3419-4ace-bdf4-ac8d20f91448\", APIVersion:\"v1\", ResourceVersion:\"30823\", FieldPath:\"\"}, Reason:\"FailedScheduling\", Message:\"skip schedule deleting pod: sched-pred-9610/restricted-pod\", Source:v1.EventSource{Component:\"default-scheduler\", Host:\"\"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc022149f8f63af3a, ext:3322342913067, loc:(*time.Location)(0x2cfdb20)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc022149f8f63af3a, ext:3322342913067, loc:(*time.Location)(0x2cfdb20)}}, Count:1, Type:\"Warning\", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:\"\", Related:(*v1.ObjectReference)(nil), ReportingController:\"\", ReportingInstance:\"\"}': 'events \"restricted-pod.168120b6c8fc1b3a\" is forbidden: unable to create new content in namespace sched-pred-9610 because it is being terminated' (will not retry!)\nE0521 16:16:58.637187 1 event.go:264] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"pod5.1681212d269ed333\", GenerateName:\"\", Namespace:\"sched-pred-9957\", SelfLink:\"\", UID:\"\", ResourceVersion:\"\", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:\"\", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, 
InvolvedObject:v1.ObjectReference{Kind:\"Pod\", Namespace:\"sched-pred-9957\", Name:\"pod5\", UID:\"0815fa07-359c-401d-8663-7dd6b7967662\", APIVersion:\"v1\", ResourceVersion:\"33893\", FieldPath:\"\"}, Reason:\"FailedScheduling\", Message:\"skip schedule deleting pod: sched-pred-9957/pod5\", Source:v1.EventSource{Component:\"default-scheduler\", Host:\"\"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc022151ea5dd8f33, ext:3830719999036, loc:(*time.Location)(0x2cfdb20)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc022151ea5dd8f33, ext:3830719999036, loc:(*time.Location)(0x2cfdb20)}}, Count:1, Type:\"Warning\", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:\"\", Related:(*v1.ObjectReference)(nil), ReportingController:\"\", ReportingInstance:\"\"}': 'events \"pod5.1681212d269ed333\" is forbidden: unable to create new content in namespace sched-pred-9957 because it is being terminated' (will not retry!)\nE0521 16:18:46.972238 1 event.go:264] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"rs-pod1-q7zrn.168121465fd85c0b\", GenerateName:\"\", Namespace:\"sched-preemption-path-1776\", SelfLink:\"\", UID:\"\", ResourceVersion:\"\", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:\"\", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:\"Pod\", Namespace:\"sched-preemption-path-1776\", Name:\"rs-pod1-q7zrn\", UID:\"27f5d831-5865-4c4e-9c93-159f1eab4d00\", APIVersion:\"v1\", ResourceVersion:\"34574\", FieldPath:\"\"}, Reason:\"FailedScheduling\", Message:\"skip schedule deleting pod: sched-preemption-path-1776/rs-pod1-q7zrn\", 
Source:v1.EventSource{Component:\"default-scheduler\", Host:\"\"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc0221539b9c9e00b, ext:3939054253335, loc:(*time.Location)(0x2cfdb20)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc0221539b9c9e00b, ext:3939054253335, loc:(*time.Location)(0x2cfdb20)}}, Count:1, Type:\"Warning\", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:\"\", Related:(*v1.ObjectReference)(nil), ReportingController:\"\", ReportingInstance:\"\"}': 'events \"rs-pod1-q7zrn.168121465fd85c0b\" is forbidden: unable to create new content in namespace sched-preemption-path-1776 because it is being terminated' (will not retry!)\nE0521 16:18:46.976903 1 event.go:264] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"rs-pod2-dn8qt.16812146602bc9a7\", GenerateName:\"\", Namespace:\"sched-preemption-path-1776\", SelfLink:\"\", UID:\"\", ResourceVersion:\"\", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:\"\", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:\"Pod\", Namespace:\"sched-preemption-path-1776\", Name:\"rs-pod2-dn8qt\", UID:\"8ba9ad90-6d1b-4181-9cdb-6fcafec4b4fc\", APIVersion:\"v1\", ResourceVersion:\"34576\", FieldPath:\"\"}, Reason:\"FailedScheduling\", Message:\"skip schedule deleting pod: sched-preemption-path-1776/rs-pod2-dn8qt\", Source:v1.EventSource{Component:\"default-scheduler\", Host:\"\"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc0221539ba1d4da7, ext:3939059720852, loc:(*time.Location)(0x2cfdb20)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc0221539ba1d4da7, ext:3939059720852, 
loc:(*time.Location)(0x2cfdb20)}}, Count:1, Type:\"Warning\", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:\"\", Related:(*v1.ObjectReference)(nil), ReportingController:\"\", ReportingInstance:\"\"}': 'events \"rs-pod2-dn8qt.16812146602bc9a7\" is forbidden: unable to create new content in namespace sched-preemption-path-1776 because it is being terminated' (will not retry!)\nE0521 16:30:15.544903 1 framework.go:747] plugin \"DefaultBinder\" failed to bind pod \"disruption-595/rs-bp6wb\": pods \"rs-bp6wb\" is forbidden: unable to create new content in namespace disruption-595 because it is being terminated\nE0521 16:30:15.545023 1 factory.go:465] \"Error scheduling pod; retrying\" err=\"Binding rejected: plugin \\\"DefaultBinder\\\" failed to bind pod \\\"disruption-595/rs-bp6wb\\\": pods \\\"rs-bp6wb\\\" is forbidden: unable to create new content in namespace disruption-595 because it is being terminated\" pod=\"disruption-595/rs-bp6wb\"\nE0521 16:30:15.547122 1 event.go:264] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"rs-bp6wb.168121e6b22546e6\", GenerateName:\"\", Namespace:\"disruption-595\", SelfLink:\"\", UID:\"\", ResourceVersion:\"\", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:\"\", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:\"Pod\", Namespace:\"disruption-595\", Name:\"rs-bp6wb\", UID:\"b4cf8833-d110-48d7-9da4-65ec1c038450\", APIVersion:\"v1\", ResourceVersion:\"41638\", FieldPath:\"\"}, Reason:\"FailedScheduling\", Message:\"Binding rejected: plugin \\\"DefaultBinder\\\" failed to bind pod 
\\\"disruption-595/rs-bp6wb\\\": pods \\\"rs-bp6wb\\\" is forbidden: unable to create new content in namespace disruption-595 because it is being terminated\", Source:v1.EventSource{Component:\"default-scheduler\", Host:\"\"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc02215e5e07d20e6, ext:4627629793236, loc:(*time.Location)(0x2cfdb20)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc02215e5e07d20e6, ext:4627629793236, loc:(*time.Location)(0x2cfdb20)}}, Count:1, Type:\"Warning\", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:\"\", Related:(*v1.ObjectReference)(nil), ReportingController:\"\", ReportingInstance:\"\"}': 'events \"rs-bp6wb.168121e6b22546e6\" is forbidden: unable to create new content in namespace disruption-595 because it is being terminated' (will not retry!)\nE0521 16:30:15.552694 1 framework.go:747] plugin \"DefaultBinder\" failed to bind pod \"disruption-595/rs-bp6wb\": pods \"rs-bp6wb\" is forbidden: unable to create new content in namespace disruption-595 because it is being terminated\nE0521 16:30:15.552835 1 factory.go:465] \"Error scheduling pod; retrying\" err=\"Binding rejected: plugin \\\"DefaultBinder\\\" failed to bind pod \\\"disruption-595/rs-bp6wb\\\": pods \\\"rs-bp6wb\\\" is forbidden: unable to create new content in namespace disruption-595 because it is being terminated\" pod=\"disruption-595/rs-bp6wb\"\nE0521 16:30:15.557366 1 event.go:264] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"rs-bp6wb.168121e6b22546e6\", GenerateName:\"\", Namespace:\"disruption-595\", SelfLink:\"\", UID:\"\", ResourceVersion:\"\", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:\"\", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:\"Pod\", Namespace:\"disruption-595\", Name:\"rs-bp6wb\", UID:\"b4cf8833-d110-48d7-9da4-65ec1c038450\", APIVersion:\"v1\", ResourceVersion:\"41643\", FieldPath:\"\"}, Reason:\"FailedScheduling\", Message:\"Binding rejected: plugin \\\"DefaultBinder\\\" failed to bind pod \\\"disruption-595/rs-bp6wb\\\": pods \\\"rs-bp6wb\\\" is forbidden: unable to create new content in namespace disruption-595 because it is being terminated\", Source:v1.EventSource{Component:\"default-scheduler\", Host:\"\"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc02215e5e07d20e6, ext:4627629793236, loc:(*time.Location)(0x2cfdb20)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc02215e5e0f4fed1, ext:4627637648865, loc:(*time.Location)(0x2cfdb20)}}, Count:2, Type:\"Warning\", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:\"\", Related:(*v1.ObjectReference)(nil), ReportingController:\"\", ReportingInstance:\"\"}': 'events \"rs-bp6wb.168121e6b22546e6\" is forbidden: unable to create new content in namespace disruption-595 because it is being terminated' (will not retry!)\nE0521 16:30:17.550064 1 event.go:264] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"rs-k7r4m.168121e7298ec43b\", GenerateName:\"\", Namespace:\"disruption-645\", SelfLink:\"\", UID:\"\", ResourceVersion:\"\", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:\"\", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, 
InvolvedObject:v1.ObjectReference{Kind:\"Pod\", Namespace:\"disruption-645\", Name:\"rs-k7r4m\", UID:\"ec172176-c91d-488a-9ef1-99430d6ba006\", APIVersion:\"v1\", ResourceVersion:\"41749\", FieldPath:\"\"}, Reason:\"Scheduled\", Message:\"Successfully assigned disruption-645/rs-k7r4m to kali-worker2\", Source:v1.EventSource{Component:\"default-scheduler\", Host:\"\"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc02215e660b10a3b, ext:4629633195325, loc:(*time.Location)(0x2cfdb20)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc02215e660b10a3b, ext:4629633195325, loc:(*time.Location)(0x2cfdb20)}}, Count:1, Type:\"Normal\", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:\"\", Related:(*v1.ObjectReference)(nil), ReportingController:\"\", ReportingInstance:\"\"}': 'events \"rs-k7r4m.168121e7298ec43b\" is forbidden: unable to create new content in namespace disruption-645 because it is being terminated' (will not retry!)\nE0521 16:30:17.867860 1 framework.go:747] plugin \"DefaultBinder\" failed to bind pod \"disruption-595/rs-bp6wb\": pods \"rs-bp6wb\" is forbidden: unable to create new content in namespace disruption-595 because it is being terminated\nE0521 16:30:17.868014 1 factory.go:465] \"Error scheduling pod; retrying\" err=\"Binding rejected: plugin \\\"DefaultBinder\\\" failed to bind pod \\\"disruption-595/rs-bp6wb\\\": pods \\\"rs-bp6wb\\\" is forbidden: unable to create new content in namespace disruption-595 because it is being terminated\" pod=\"disruption-595/rs-bp6wb\"\nE0521 16:30:17.873281 1 event.go:264] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"rs-bp6wb.168121e6b22546e6\", GenerateName:\"\", Namespace:\"disruption-595\", SelfLink:\"\", UID:\"\", ResourceVersion:\"\", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"disruption-595", Name:"rs-bp6wb", UID:"b4cf8833-d110-48d7-9da4-65ec1c038450", APIVersion:"v1", ResourceVersion:"41643", FieldPath:""}, Reason:"FailedScheduling", Message:"Binding rejected: plugin \"DefaultBinder\" failed to bind pod \"disruption-595/rs-bp6wb\": pods \"rs-bp6wb\" is forbidden: unable to create new content in namespace disruption-595 because it is being terminated", Source:v1.EventSource{Component:"default-scheduler", Host:""}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc02215e5e07d20e6, ext:4627629793236, loc:(*time.Location)(0x2cfdb20)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc02215e673be104b, ext:4629952815929, loc:(*time.Location)(0x2cfdb20)}}, Count:3, Type:"Warning", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "rs-bp6wb.168121e6b22546e6" is forbidden: unable to create new content in namespace disruption-595 because it is being terminated' (will not retry!)
E0521 16:30:20.944452 1 event.go:264] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"rs-bp6wb.168121e7f3de04f8", GenerateName:"", Namespace:"disruption-595", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"disruption-595", Name:"rs-bp6wb", UID:"b4cf8833-d110-48d7-9da4-65ec1c038450", APIVersion:"v1", ResourceVersion:"41954", FieldPath:""}, Reason:"FailedScheduling", Message:"skip schedule deleting pod: disruption-595/rs-bp6wb", Source:v1.EventSource{Component:"default-scheduler", Host:""}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc02215e7382fecf8, ext:4633027386830, loc:(*time.Location)(0x2cfdb20)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc02215e7382fecf8, ext:4633027386830, loc:(*time.Location)(0x2cfdb20)}}, Count:1, Type:"Warning", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "rs-bp6wb.168121e7f3de04f8" is forbidden: unable to create new content in namespace disruption-595 because it is being terminated' (will not retry!)
E0521 16:30:23.718742 1 framework.go:747] plugin "DefaultBinder" failed to bind pod "disruption-69/rs-tqgzx": pods "rs-tqgzx" is forbidden: unable to create new content in namespace disruption-69 because it is being terminated
E0521 16:30:23.718847 1 factory.go:465] "Error scheduling pod; retrying" err="Binding rejected: plugin \"DefaultBinder\" failed to bind pod \"disruption-69/rs-tqgzx\": pods \"rs-tqgzx\" is forbidden: unable to create new content in namespace disruption-69 because it is being terminated" pod="disruption-69/rs-tqgzx"
E0521 16:30:23.721243 1 event.go:264] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"rs-tqgzx.168121e89957dca3", GenerateName:"", Namespace:"disruption-69", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"disruption-69", Name:"rs-tqgzx", UID:"05591d3e-8066-4948-a599-26d4152d52af", APIVersion:"v1", ResourceVersion:"42136", FieldPath:""}, Reason:"FailedScheduling", Message:"Binding rejected: plugin \"DefaultBinder\" failed to bind pod \"disruption-69/rs-tqgzx\": pods \"rs-tqgzx\" is forbidden: unable to create new content in namespace disruption-69 because it is being terminated", Source:v1.EventSource{Component:"default-scheduler", Host:""}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc02215e7ead966a3, ext:4635803612561, loc:(*time.Location)(0x2cfdb20)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc02215e7ead966a3, ext:4635803612561, loc:(*time.Location)(0x2cfdb20)}}, Count:1, Type:"Warning", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "rs-tqgzx.168121e89957dca3" is forbidden: unable to create new content in namespace disruption-69 because it is being terminated' (will not retry!)
E0521 16:30:23.728956 1 framework.go:747] plugin "DefaultBinder" failed to bind pod "disruption-69/rs-tqgzx": pods "rs-tqgzx" is forbidden: unable to create new content in namespace disruption-69 because it is being terminated
E0521 16:30:23.729029 1 factory.go:465] "Error scheduling pod; retrying" err="Binding rejected: plugin \"DefaultBinder\" failed to bind pod \"disruption-69/rs-tqgzx\": pods \"rs-tqgzx\" is forbidden: unable to create new content in namespace disruption-69 because it is being terminated" pod="disruption-69/rs-tqgzx"
E0521 16:30:23.732793 1 event.go:264] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"rs-tqgzx.168121e89957dca3", GenerateName:"", Namespace:"disruption-69", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"disruption-69", Name:"rs-tqgzx", UID:"05591d3e-8066-4948-a599-26d4152d52af", APIVersion:"v1", ResourceVersion:"42143", FieldPath:""}, Reason:"FailedScheduling", Message:"Binding rejected: plugin \"DefaultBinder\" failed to bind pod \"disruption-69/rs-tqgzx\": pods \"rs-tqgzx\" is forbidden: unable to create new content in namespace disruption-69 because it is being terminated", Source:v1.EventSource{Component:"default-scheduler", Host:""}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc02215e7ead966a3, ext:4635803612561, loc:(*time.Location)(0x2cfdb20)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc02215e7eb750b0d, ext:4635813812731, loc:(*time.Location)(0x2cfdb20)}}, Count:2, Type:"Warning", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "rs-tqgzx.168121e89957dca3" is forbidden: unable to create new content in namespace disruption-69 because it is being terminated' (will not retry!)
E0521 16:30:25.867095 1 framework.go:747] plugin "DefaultBinder" failed to bind pod "disruption-69/rs-tqgzx": pods "rs-tqgzx" is forbidden: unable to create new content in namespace disruption-69 because it is being terminated
E0521 16:30:25.867236 1 factory.go:465] "Error scheduling pod; retrying" err="Binding rejected: plugin \"DefaultBinder\" failed to bind pod \"disruption-69/rs-tqgzx\": pods \"rs-tqgzx\" is forbidden: unable to create new content in namespace disruption-69 because it is being terminated" pod="disruption-69/rs-tqgzx"
E0521 16:30:25.872811 1 event.go:264] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"rs-tqgzx.168121e89957dca3", GenerateName:"", Namespace:"disruption-69", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"disruption-69", Name:"rs-tqgzx", UID:"05591d3e-8066-4948-a599-26d4152d52af", APIVersion:"v1", ResourceVersion:"42143", FieldPath:""}, Reason:"FailedScheduling", Message:"Binding rejected: plugin \"DefaultBinder\" failed to bind pod \"disruption-69/rs-tqgzx\": pods \"rs-tqgzx\" is forbidden: unable to create new content in namespace disruption-69 because it is being terminated", Source:v1.EventSource{Component:"default-scheduler", Host:""}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc02215e7ead966a3, ext:4635803612561, loc:(*time.Location)(0x2cfdb20)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc02215e873b261ec, ext:4637952050412, loc:(*time.Location)(0x2cfdb20)}}, Count:3, Type:"Warning", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "rs-tqgzx.168121e89957dca3" is forbidden: unable to create new content in namespace disruption-69 because it is being terminated' (will not retry!)
E0521 16:30:28.779643 1 event.go:264] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"rs-tqgzx.168121e9c6e676c0", GenerateName:"", Namespace:"disruption-69", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"disruption-69", Name:"rs-tqgzx", UID:"05591d3e-8066-4948-a599-26d4152d52af", APIVersion:"v1", ResourceVersion:"42338", FieldPath:""}, Reason:"FailedScheduling", Message:"skip schedule deleting pod: disruption-69/rs-tqgzx", Source:v1.EventSource{Component:"default-scheduler", Host:""}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc02215e92e620ec0, ext:4640862900143, loc:(*time.Location)(0x2cfdb20)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc02215e92e620ec0, ext:4640862900143, loc:(*time.Location)(0x2cfdb20)}}, Count:1, Type:"Warning", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "rs-tqgzx.168121e9c6e676c0" is forbidden: unable to create new content in namespace disruption-69 because it is being terminated' (will not retry!)
E0521 16:38:24.501770 1 event.go:264] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"rs-bb72h.168122588a1d984e", GenerateName:"", Namespace:"disruption-4102", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"disruption-4102", Name:"rs-bb72h", UID:"fd461751-b390-4edd-b6dd-3d32d634ba2c", APIVersion:"v1", ResourceVersion:"48350", FieldPath:""}, Reason:"FailedScheduling", Message:"skip schedule deleting pod: disruption-4102/rs-bb72h", Source:v1.EventSource{Component:"default-scheduler", Host:""}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc02216601dc9984e, ext:5116584472892, loc:(*time.Location)(0x2cfdb20)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc02216601dc9984e, ext:5116584472892, loc:(*time.Location)(0x2cfdb20)}}, Count:1, Type:"Warning", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "rs-bb72h.168122588a1d984e" is forbidden: unable to create new content in namespace disruption-4102 because it is being terminated' (will not retry!)
E0521 16:38:24.514077 1 event.go:264] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"rs-h4rmq.168122588adf19d3", GenerateName:"", Namespace:"disruption-4102", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"disruption-4102", Name:"rs-h4rmq", UID:"2cd66e20-fe67-46bf-9853-098b2d43061c", APIVersion:"v1", ResourceVersion:"48358", FieldPath:""}, Reason:"FailedScheduling", Message:"skip schedule deleting pod: disruption-4102/rs-h4rmq", Source:v1.EventSource{Component:"default-scheduler", Host:""}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc02216601e8b19d3, ext:5116597154514, loc:(*time.Location)(0x2cfdb20)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc02216601e8b19d3, ext:5116597154514, loc:(*time.Location)(0x2cfdb20)}}, Count:1, Type:"Warning", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "rs-h4rmq.168122588adf19d3" is forbidden: unable to create new content in namespace disruption-4102 because it is being terminated' (will not retry!)
E0521 16:38:24.521851 1 event.go:264] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"rs-hkpgl.168122588b53b212", GenerateName:"", Namespace:"disruption-4102", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"disruption-4102", Name:"rs-hkpgl", UID:"b7af4a5b-2aec-40be-8a12-d3b0542f2283", APIVersion:"v1", ResourceVersion:"48362", FieldPath:""}, Reason:"FailedScheduling", Message:"skip schedule deleting pod: disruption-4102/rs-hkpgl", Source:v1.EventSource{Component:"default-scheduler", Host:""}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc02216601effb212, ext:5116604795648, loc:(*time.Location)(0x2cfdb20)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc02216601effb212, ext:5116604795648, loc:(*time.Location)(0x2cfdb20)}}, Count:1, Type:"Warning", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "rs-hkpgl.168122588b53b212" is forbidden: unable to create new content in namespace disruption-4102 because it is being terminated' (will not retry!)
E0521 16:38:24.526492 1 event.go:264] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"rs-l58ns.168122588b9f61f0", GenerateName:"", Namespace:"disruption-4102", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"disruption-4102", Name:"rs-l58ns", UID:"6ccd9083-22c2-417c-85db-6f19b99151a9", APIVersion:"v1", ResourceVersion:"48365", FieldPath:""}, Reason:"FailedScheduling", Message:"skip schedule deleting pod: disruption-4102/rs-l58ns", Source:v1.EventSource{Component:"default-scheduler", Host:""}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc02216601f4b61f0, ext:5116609755864, loc:(*time.Location)(0x2cfdb20)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc02216601f4b61f0, ext:5116609755864, loc:(*time.Location)(0x2cfdb20)}}, Count:1, Type:"Warning", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "rs-l58ns.168122588b9f61f0" is forbidden: unable to create new content in namespace disruption-4102 because it is being terminated' (will not retry!)
E0521 16:38:24.531476 1 event.go:264] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"rs-phfhk.168122588bea6f11", GenerateName:"", Namespace:"disruption-4102", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"disruption-4102", Name:"rs-phfhk", UID:"d79b91a4-5294-43c1-901e-e5724e233f9f", APIVersion:"v1", ResourceVersion:"48368", FieldPath:""}, Reason:"FailedScheduling", Message:"skip schedule deleting pod: disruption-4102/rs-phfhk", Source:v1.EventSource{Component:"default-scheduler", Host:""}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc02216601f966f11, ext:5116614674423, loc:(*time.Location)(0x2cfdb20)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc02216601f966f11, ext:5116614674423, loc:(*time.Location)(0x2cfdb20)}}, Count:1, Type:"Warning", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "rs-phfhk.168122588bea6f11" is forbidden: unable to create new content in namespace disruption-4102 because it is being terminated' (will not retry!)
E0521 16:38:24.536108 1 event.go:264] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"rs-q4c6k.168122588c2c434e", GenerateName:"", Namespace:"disruption-4102", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"disruption-4102", Name:"rs-q4c6k", UID:"30978822-b138-4deb-a3a8-441ffb214dc2", APIVersion:"v1", ResourceVersion:"48371", FieldPath:""}, Reason:"FailedScheduling", Message:"skip schedule deleting pod: disruption-4102/rs-q4c6k", Source:v1.EventSource{Component:"default-scheduler", Host:""}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc02216601fd8434e, ext:5116618988591, loc:(*time.Location)(0x2cfdb20)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc02216601fd8434e, ext:5116618988591, loc:(*time.Location)(0x2cfdb20)}}, Count:1, Type:"Warning", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "rs-q4c6k.168122588c2c434e" is forbidden: unable to create new content in namespace disruption-4102 because it is being terminated' (will not retry!)
E0521 16:38:24.540388 1 event.go:264] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"rs-z7fmb.168122588c6f9c3e", GenerateName:"", Namespace:"disruption-4102", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"disruption-4102", Name:"rs-z7fmb", UID:"1a77657f-d53d-42c2-8ac2-52b1a47f7d5f", APIVersion:"v1", ResourceVersion:"48374", FieldPath:""}, Reason:"FailedScheduling", Message:"skip schedule deleting pod: disruption-4102/rs-z7fmb", Source:v1.EventSource{Component:"default-scheduler", Host:""}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc0221660201b9c3e, ext:5116623402283, loc:(*time.Location)(0x2cfdb20)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc0221660201b9c3e, ext:5116623402283, loc:(*time.Location)(0x2cfdb20)}}, Count:1, Type:"Warning", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "rs-z7fmb.168122588c6f9c3e" is forbidden: unable to create new content in namespace disruption-4102 because it is being terminated' (will not retry!)
E0521 16:38:24.545366 1 event.go:264] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"rs-zsxk4.168122588cbdc210", GenerateName:"", Namespace:"disruption-4102", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"disruption-4102", Name:"rs-zsxk4", UID:"5416ef48-a451-47ea-9569-2b3db3afc85c", APIVersion:"v1", ResourceVersion:"48376", FieldPath:""}, Reason:"FailedScheduling", Message:"skip schedule deleting pod: disruption-4102/rs-zsxk4", Source:v1.EventSource{Component:"default-scheduler", Host:""}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc02216602069c210, ext:5116628523798, loc:(*time.Location)(0x2cfdb20)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc02216602069c210, ext:5116628523798, loc:(*time.Location)(0x2cfdb20)}}, Count:1, Type:"Warning", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "rs-zsxk4.168122588cbdc210" is forbidden: unable to create new content in namespace disruption-4102 because it is being terminated' (will not retry!)
E0521 16:38:30.202712 1 framework.go:747] plugin "DefaultBinder" failed to bind pod "disruption-3936/rs-jtc6v": pods "rs-jtc6v" is forbidden: unable to create new content in namespace disruption-3936 because it is being terminated
E0521 16:38:30.202855 1 factory.go:465] "Error scheduling pod; retrying" err="Binding rejected: plugin \"DefaultBinder\" failed to bind pod \"disruption-3936/rs-jtc6v\": pods \"rs-jtc6v\" is forbidden: unable to create new content in namespace disruption-3936 because it is being terminated" pod="disruption-3936/rs-jtc6v"
E0521 16:38:30.205266 1 event.go:264] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"rs-jtc6v.16812259de0ce286", GenerateName:"", Namespace:"disruption-3936", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"disruption-3936", Name:"rs-jtc6v", UID:"2e7121e7-8dfc-41d2-851b-891fa9616dbd", APIVersion:"v1", ResourceVersion:"48445", FieldPath:""}, Reason:"FailedScheduling", Message:"Binding rejected: plugin \"DefaultBinder\" failed to bind pod \"disruption-3936/rs-jtc6v\": pods \"rs-jtc6v\" is forbidden: unable to create new content in namespace disruption-3936 because it is being terminated", Source:v1.EventSource{Component:"default-scheduler", Host:""}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc02216618c182686, ext:5122287631220, loc:(*time.Location)(0x2cfdb20)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc02216618c182686, ext:5122287631220, loc:(*time.Location)(0x2cfdb20)}}, Count:1, Type:"Warning", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "rs-jtc6v.16812259de0ce286" is forbidden: unable to create new content in namespace disruption-3936 because it is being terminated' (will not retry!)
E0521 16:38:30.206116 1 framework.go:747] plugin "DefaultBinder" failed to bind pod "disruption-3936/rs-jpcl2": pods "rs-jpcl2" is forbidden: unable to create new content in namespace disruption-3936 because it is being terminated
E0521 16:38:30.206223 1 factory.go:465] "Error scheduling pod; retrying" err="Binding rejected: plugin \"DefaultBinder\" failed to bind pod \"disruption-3936/rs-jpcl2\": pods \"rs-jpcl2\" is forbidden: unable to create new content in namespace disruption-3936 because it is being terminated" pod="disruption-3936/rs-jpcl2"
E0521 16:38:30.209633 1 framework.go:747] plugin "DefaultBinder" failed to bind pod "disruption-3936/rs-l8vvz": pods "rs-l8vvz" is forbidden: unable to create new content in namespace disruption-3936 because it is being terminated
E0521 16:38:30.209753 1 factory.go:465] "Error scheduling pod; retrying" err="Binding rejected: plugin \"DefaultBinder\" failed to bind pod \"disruption-3936/rs-l8vvz\": pods \"rs-l8vvz\" is forbidden: unable to create new content in namespace disruption-3936 because it is being terminated" pod="disruption-3936/rs-l8vvz"
E0521 16:38:30.211654 1 event.go:264] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"rs-jpcl2.16812259de4060d3", GenerateName:"", Namespace:"disruption-3936", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"disruption-3936", Name:"rs-jpcl2", UID:"3d8b9253-f53a-462d-9369-e3a83f2ac1eb", APIVersion:"v1", ResourceVersion:"48458", FieldPath:""}, Reason:"FailedScheduling", Message:"Binding rejected: plugin \"DefaultBinder\" failed to bind pod \"disruption-3936/rs-jpcl2\": pods \"rs-jpcl2\" is forbidden: unable to create new content in namespace disruption-3936 because it is being terminated", Source:v1.EventSource{Component:"default-scheduler", Host:""}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc02216618c4ba4d3, ext:5122291005892, loc:(*time.Location)(0x2cfdb20)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc02216618c4ba4d3, ext:5122291005892, loc:(*time.Location)(0x2cfdb20)}}, Count:1, Type:"Warning", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "rs-jpcl2.16812259de4060d3" is forbidden: unable to create new content in namespace disruption-3936 because it is being terminated' (will not retry!)
E0521 16:38:30.213377 1 framework.go:747] plugin "DefaultBinder" failed to bind pod "disruption-3936/rs-nckrj": pods "rs-nckrj" is forbidden: unable to create new content in namespace disruption-3936 because it is being terminated
E0521 16:38:30.213514 1 factory.go:465] "Error scheduling pod; retrying" err="Binding rejected: plugin \"DefaultBinder\" failed to bind pod \"disruption-3936/rs-nckrj\": pods \"rs-nckrj\" is forbidden: unable to create new content in namespace disruption-3936 because it is being terminated" pod="disruption-3936/rs-nckrj"
E0521 16:38:30.217399 1 framework.go:747] plugin "DefaultBinder" failed to bind pod "disruption-3936/rs-52rz4": pods "rs-52rz4" is forbidden: unable to create new content in namespace disruption-3936 because it is being terminated
E0521 16:38:30.217496 1 factory.go:465] "Error scheduling pod; retrying" err="Binding rejected: plugin \"DefaultBinder\" failed to bind pod \"disruption-3936/rs-52rz4\": pods \"rs-52rz4\" is forbidden: unable to create new content in namespace disruption-3936 because it is being terminated" pod="disruption-3936/rs-52rz4"
E0521 16:38:30.217516 1 event.go:264] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"rs-l8vvz.16812259de773b61", GenerateName:"", Namespace:"disruption-3936", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"disruption-3936", Name:"rs-l8vvz", UID:"cc2c0adc-b0a2-479c-a785-687b995c7009", APIVersion:"v1", ResourceVersion:"48463", FieldPath:""}, Reason:"FailedScheduling", Message:"Binding rejected: plugin \"DefaultBinder\" failed to bind pod \"disruption-3936/rs-l8vvz\": pods \"rs-l8vvz\" is forbidden: unable to create new content in namespace disruption-3936 because it is being terminated", Source:v1.EventSource{Component:"default-scheduler", Host:""}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc02216618c827f61, ext:5122294600787, loc:(*time.Location)(0x2cfdb20)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc02216618c827f61, ext:5122294600787, loc:(*time.Location)(0x2cfdb20)}}, Count:1, Type:"Warning", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "rs-l8vvz.16812259de773b61" is forbidden: unable to create new content in namespace disruption-3936 because it is being terminated' (will not retry!)
E0521 16:38:30.223619 1 event.go:264] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"rs-nckrj.16812259deaf6e2c", GenerateName:"", Namespace:"disruption-3936", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"disruption-3936", Name:"rs-nckrj", UID:"8985902b-60e8-494e-b1ca-c9207b306630", APIVersion:"v1", ResourceVersion:"48467", FieldPath:""}, Reason:"FailedScheduling", Message:"Binding rejected: plugin \"DefaultBinder\" failed to bind pod \"disruption-3936/rs-nckrj\": pods \"rs-nckrj\" is forbidden: unable to create new content in namespace disruption-3936 because it is being terminated", Source:v1.EventSource{Component:"default-scheduler", Host:""}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc02216618cbab22c, ext:5122298283796, loc:(*time.Location)(0x2cfdb20)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc02216618cbab22c, ext:5122298283796, loc:(*time.Location)(0x2cfdb20)}}, Count:1, Type:"Warning", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "rs-nckrj.16812259deaf6e2c" is forbidden: unable to create new content in namespace disruption-3936 because it is being terminated' (will not retry!)
E0521 16:38:30.229341 1 event.go:264] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"rs-52rz4.16812259deec7a89", GenerateName:"", Namespace:"disruption-3936", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"disruption-3936", Name:"rs-52rz4", UID:"3a03c1a9-b635-45ac-bb09-a5fc09089de7", APIVersion:"v1", ResourceVersion:"48471", FieldPath:""}, Reason:"FailedScheduling", Message:"Binding rejected: plugin \"DefaultBinder\" failed to bind pod \"disruption-3936/rs-52rz4\": pods \"rs-52rz4\" is forbidden: unable to create new content in namespace disruption-3936 because it is being terminated", Source:v1.EventSource{Component:"default-scheduler", Host:""}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc02216618cf7be89, ext:5122302284700, loc:(*time.Location)(0x2cfdb20)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc02216618cf7be89, ext:5122302284700, loc:(*time.Location)(0x2cfdb20)}}, Count:1, Type:"Warning", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "rs-52rz4.16812259deec7a89" is forbidden: unable to create new content in namespace disruption-3936 because it is being terminated' (will not retry!)
E0521 16:38:32.262385 1 event.go:264] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"rs-jtc6v.1681225a58abee2d", GenerateName:"", Namespace:"disruption-3936", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"disruption-3936", Name:"rs-jtc6v", UID:"2e7121e7-8dfc-41d2-851b-891fa9616dbd", APIVersion:"v1", ResourceVersion:"48584", FieldPath:""}, Reason:"FailedScheduling", Message:"skip schedule deleting pod: disruption-3936/rs-jtc6v", Source:v1.EventSource{Component:"default-scheduler", Host:""}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc02216620f819e2d, ext:5124344874839, loc:(*time.Location)(0x2cfdb20)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc02216620f819e2d, ext:5124344874839, loc:(*time.Location)(0x2cfdb20)}}, Count:1, Type:"Warning", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "rs-jtc6v.1681225a58abee2d" is forbidden: unable to create new content in namespace disruption-3936 because it is being terminated' (will not retry!)
E0521 16:38:32.264400 1 event.go:264] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"rs-vp26b.1681225a58b3fdd4", GenerateName:"", Namespace:"disruption-3936", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"disruption-3936", Name:"rs-vp26b", UID:"6f3e4660-c15b-4534-828b-de913ce2a8ce", APIVersion:"v1", ResourceVersion:"48585", FieldPath:""}, Reason:"FailedScheduling", Message:"skip schedule deleting pod: disruption-3936/rs-vp26b", Source:v1.EventSource{Component:"default-scheduler", Host:""}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc02216620f89add4, ext:5124345403097, loc:(*time.Location)(0x2cfdb20)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc02216620f89add4, ext:5124345403097, loc:(*time.Location)(0x2cfdb20)}}, Count:1, Type:"Warning", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "rs-vp26b.1681225a58b3fdd4" is forbidden: unable to create new content in namespace disruption-3936 because it is being terminated' (will not retry!)
E0521 16:38:32.266270 1 event.go:264] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"rs-zf52p.1681225a58c4b504", GenerateName:"", Namespace:"disruption-3936", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"disruption-3936", Name:"rs-zf52p", UID:"bd6a1a17-a648-4191-bd91-8ac7471e8957", APIVersion:"v1", ResourceVersion:"48586", FieldPath:""}, Reason:"FailedScheduling", Message:"skip schedule deleting pod: disruption-3936/rs-zf52p", Source:v1.EventSource{Component:"default-scheduler", Host:""}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc02216620f9a6504, ext:5124346498569, loc:(*time.Location)(0x2cfdb20)}},
LastTimestamp:v1.Time{Time:time.Time{wall:0xc02216620f9a6504, ext:5124346498569, loc:(*time.Location)(0x2cfdb20)}}, Count:1, Type:\"Warning\", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:\"\", Related:(*v1.ObjectReference)(nil), ReportingController:\"\", ReportingInstance:\"\"}': 'events \"rs-zf52p.1681225a58c4b504\" is forbidden: unable to create new content in namespace disruption-3936 because it is being terminated' (will not retry!)\nE0521 16:38:32.268146 1 event.go:264] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"rs-45sdc.1681225a58c6529c\", GenerateName:\"\", Namespace:\"disruption-3936\", SelfLink:\"\", UID:\"\", ResourceVersion:\"\", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:\"\", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:\"Pod\", Namespace:\"disruption-3936\", Name:\"rs-45sdc\", UID:\"2b1eae64-6d07-44f0-b01a-993dc5402e15\", APIVersion:\"v1\", ResourceVersion:\"48587\", FieldPath:\"\"}, Reason:\"FailedScheduling\", Message:\"skip schedule deleting pod: disruption-3936/rs-45sdc\", Source:v1.EventSource{Component:\"default-scheduler\", Host:\"\"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc02216620f9c029c, ext:5124346604466, loc:(*time.Location)(0x2cfdb20)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc02216620f9c029c, ext:5124346604466, loc:(*time.Location)(0x2cfdb20)}}, Count:1, Type:\"Warning\", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:\"\", Related:(*v1.ObjectReference)(nil), ReportingController:\"\", 
ReportingInstance:\"\"}': 'events \"rs-45sdc.1681225a58c6529c\" is forbidden: unable to create new content in namespace disruption-3936 because it is being terminated' (will not retry!)\nE0521 16:38:32.269511 1 event.go:264] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"rs-52rz4.1681225a58d1607b\", GenerateName:\"\", Namespace:\"disruption-3936\", SelfLink:\"\", UID:\"\", ResourceVersion:\"\", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:\"\", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:\"Pod\", Namespace:\"disruption-3936\", Name:\"rs-52rz4\", UID:\"3a03c1a9-b635-45ac-bb09-a5fc09089de7\", APIVersion:\"v1\", ResourceVersion:\"48588\", FieldPath:\"\"}, Reason:\"FailedScheduling\", Message:\"skip schedule deleting pod: disruption-3936/rs-52rz4\", Source:v1.EventSource{Component:\"default-scheduler\", Host:\"\"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc02216620fa7107b, ext:5124347328873, loc:(*time.Location)(0x2cfdb20)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc02216620fa7107b, ext:5124347328873, loc:(*time.Location)(0x2cfdb20)}}, Count:1, Type:\"Warning\", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:\"\", Related:(*v1.ObjectReference)(nil), ReportingController:\"\", ReportingInstance:\"\"}': 'events \"rs-52rz4.1681225a58d1607b\" is forbidden: unable to create new content in namespace disruption-3936 because it is being terminated' (will not retry!)\nE0521 16:38:32.270984 1 event.go:264] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, 
ObjectMeta:v1.ObjectMeta{Name:\"rs-nckrj.1681225a58d65a68\", GenerateName:\"\", Namespace:\"disruption-3936\", SelfLink:\"\", UID:\"\", ResourceVersion:\"\", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:\"\", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:\"Pod\", Namespace:\"disruption-3936\", Name:\"rs-nckrj\", UID:\"8985902b-60e8-494e-b1ca-c9207b306630\", APIVersion:\"v1\", ResourceVersion:\"48589\", FieldPath:\"\"}, Reason:\"FailedScheduling\", Message:\"skip schedule deleting pod: disruption-3936/rs-nckrj\", Source:v1.EventSource{Component:\"default-scheduler\", Host:\"\"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc02216620fac0a68, ext:5124347655005, loc:(*time.Location)(0x2cfdb20)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc02216620fac0a68, ext:5124347655005, loc:(*time.Location)(0x2cfdb20)}}, Count:1, Type:\"Warning\", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:\"\", Related:(*v1.ObjectReference)(nil), ReportingController:\"\", ReportingInstance:\"\"}': 'events \"rs-nckrj.1681225a58d65a68\" is forbidden: unable to create new content in namespace disruption-3936 because it is being terminated' (will not retry!)\nE0521 16:38:32.272286 1 event.go:264] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"rs-jpcl2.1681225a58edb219\", GenerateName:\"\", Namespace:\"disruption-3936\", SelfLink:\"\", UID:\"\", ResourceVersion:\"\", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:\"\", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:\"Pod\", Namespace:\"disruption-3936\", Name:\"rs-jpcl2\", UID:\"3d8b9253-f53a-462d-9369-e3a83f2ac1eb\", APIVersion:\"v1\", ResourceVersion:\"48590\", FieldPath:\"\"}, Reason:\"FailedScheduling\", Message:\"skip schedule deleting pod: disruption-3936/rs-jpcl2\", Source:v1.EventSource{Component:\"default-scheduler\", Host:\"\"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc02216620fc36219, ext:5124349184799, loc:(*time.Location)(0x2cfdb20)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc02216620fc36219, ext:5124349184799, loc:(*time.Location)(0x2cfdb20)}}, Count:1, Type:\"Warning\", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:\"\", Related:(*v1.ObjectReference)(nil), ReportingController:\"\", ReportingInstance:\"\"}': 'events \"rs-jpcl2.1681225a58edb219\" is forbidden: unable to create new content in namespace disruption-3936 because it is being terminated' (will not retry!)\nE0521 16:38:32.273636 1 event.go:264] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"rs-l8vvz.1681225a58f6f5bc\", GenerateName:\"\", Namespace:\"disruption-3936\", SelfLink:\"\", UID:\"\", ResourceVersion:\"\", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:\"\", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:\"Pod\", Namespace:\"disruption-3936\", Name:\"rs-l8vvz\", 
UID:\"cc2c0adc-b0a2-479c-a785-687b995c7009\", APIVersion:\"v1\", ResourceVersion:\"48591\", FieldPath:\"\"}, Reason:\"FailedScheduling\", Message:\"skip schedule deleting pod: disruption-3936/rs-l8vvz\", Source:v1.EventSource{Component:\"default-scheduler\", Host:\"\"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc02216620fcca5bc, ext:5124349791926, loc:(*time.Location)(0x2cfdb20)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc02216620fcca5bc, ext:5124349791926, loc:(*time.Location)(0x2cfdb20)}}, Count:1, Type:\"Warning\", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:\"\", Related:(*v1.ObjectReference)(nil), ReportingController:\"\", ReportingInstance:\"\"}': 'events \"rs-l8vvz.1681225a58f6f5bc\" is forbidden: unable to create new content in namespace disruption-3936 because it is being terminated' (will not retry!)\nE0521 16:38:32.275197 1 event.go:264] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"rs-4xwjv.1681225a58ff3efd\", GenerateName:\"\", Namespace:\"disruption-3936\", SelfLink:\"\", UID:\"\", ResourceVersion:\"\", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:\"\", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:\"Pod\", Namespace:\"disruption-3936\", Name:\"rs-4xwjv\", UID:\"b1373eaf-99c7-4265-82c4-85d2b0ac0e64\", APIVersion:\"v1\", ResourceVersion:\"48592\", FieldPath:\"\"}, Reason:\"FailedScheduling\", Message:\"skip schedule deleting pod: disruption-3936/rs-4xwjv\", Source:v1.EventSource{Component:\"default-scheduler\", Host:\"\"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc02216620fd4eefd, 
ext:5124350334949, loc:(*time.Location)(0x2cfdb20)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc02216620fd4eefd, ext:5124350334949, loc:(*time.Location)(0x2cfdb20)}}, Count:1, Type:\"Warning\", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:\"\", Related:(*v1.ObjectReference)(nil), ReportingController:\"\", ReportingInstance:\"\"}': 'events \"rs-4xwjv.1681225a58ff3efd\" is forbidden: unable to create new content in namespace disruption-3936 because it is being terminated' (will not retry!)\n==== END logs for container kube-scheduler of pod kube-system/kube-scheduler-kali-control-plane ====\n==== START logs for container setsysctls of pod kube-system/tune-sysctls-8m4jc ====\nfs.inotify.max_user_watches = 524288\n==== END logs for container setsysctls of pod kube-system/tune-sysctls-8m4jc ====\n==== 
START logs for container setsysctls of pod kube-system/tune-sysctls-m54ts ====\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 
524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 
524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 
524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 
524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 
524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 
524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 
524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 
524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 
524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\n==== END logs for container setsysctls of pod kube-system/tune-sysctls-m54ts ====\n==== START logs for container setsysctls of pod kube-system/tune-sysctls-zzq45 ====\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 
524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 
524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 
524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 
524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 
524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 524288\nfs.inotify.max_user_watches = 
524288\n==== END logs for container setsysctls of pod kube-system/tune-sysctls-zzq45 ====\n{\n \"kind\": \"EventList\",\n \"apiVersion\": \"v1\",\n \"metadata\": {\n \"selfLink\": \"/api/v1/namespaces/kubectl-4212/events\",\n \"resourceVersion\": \"49046\"\n },\n \"items\": []\n}\n{\n \"kind\": \"ReplicationControllerList\",\n \"apiVersion\": \"v1\",\n \"metadata\": {\n \"selfLink\": \"/api/v1/namespaces/kubectl-4212/replicationcontrollers\",\n \"resourceVersion\": \"49046\"\n },\n \"items\": []\n}\n{\n \"kind\": \"ServiceList\",\n \"apiVersion\": \"v1\",\n \"metadata\": {\n \"selfLink\": \"/api/v1/namespaces/kubectl-4212/services\",\n \"resourceVersion\": \"49046\"\n },\n \"items\": []\n}\n{\n \"kind\": \"DaemonSetList\",\n \"apiVersion\": \"apps/v1\",\n \"metadata\": {\n \"selfLink\": \"/apis/apps/v1/namespaces/kubectl-4212/daemonsets\",\n \"resourceVersion\": \"49046\"\n },\n \"items\": []\n}\n{\n \"kind\": \"DeploymentList\",\n \"apiVersion\": \"apps/v1\",\n \"metadata\": {\n \"selfLink\": \"/apis/apps/v1/namespaces/kubectl-4212/deployments\",\n \"resourceVersion\": \"49046\"\n },\n \"items\": []\n}\n{\n \"kind\": \"ReplicaSetList\",\n \"apiVersion\": \"apps/v1\",\n \"metadata\": {\n \"selfLink\": \"/apis/apps/v1/namespaces/kubectl-4212/replicasets\",\n \"resourceVersion\": \"49046\"\n },\n \"items\": []\n}\n{\n \"kind\": \"PodList\",\n \"apiVersion\": \"v1\",\n \"metadata\": {\n \"selfLink\": \"/api/v1/namespaces/kubectl-4212/pods\",\n 
\"resourceVersion\": \"49046\"\n },\n \"items\": []\n}\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 16:39:06.301: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4212" for this suite. •S ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info dump should check if cluster-info dump succeeds","total":-1,"completed":1,"skipped":133,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 16:39:06.481: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247 [It] should reject quota with invalid scopes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1812 STEP: calling kubectl quota May 21 16:39:06.506: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=kubectl-226 create quota scopes --hard=hard=pods=1000000 --scopes=Foo' May 21 16:39:06.602: INFO: rc: 1 [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 16:39:06.603: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"kubectl-226" for this suite. •SS ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl create quota should reject quota with invalid scopes","total":-1,"completed":2,"skipped":195,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 16:39:06.984: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247 [It] should create a quota with scopes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1785 STEP: calling kubectl quota May 21 16:39:07.013: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=kubectl-4794 create quota scopes --hard=pods=1000000 --scopes=BestEffort,NotTerminating' May 21 16:39:07.121: INFO: stderr: "" May 21 16:39:07.121: INFO: stdout: "resourcequota/scopes created\n" STEP: verifying that the quota was created [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 16:39:07.124: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4794" for this suite. 
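The two quota specs above call `kubectl create quota` directly: the first passes an unknown scope (`--scopes=Foo`) and expects `rc: 1`, the second creates a quota scoped to `BestEffort,NotTerminating`. A minimal sketch of assembling that same invocation, with a client-side pre-check of the scope names; `build_quota_cmd` is a hypothetical helper, and `VALID_SCOPES` is an assumption based on the documented ResourceQuota scopes, not an exhaustive list for every Kubernetes version:

```python
# Hypothetical helper: build the `kubectl create quota` argv used by the spec.
# VALID_SCOPES is an assumption (documented ResourceQuota scopes); the real
# validation happens server-side, which is why the e2e test sees rc: 1.
VALID_SCOPES = {"Terminating", "NotTerminating", "BestEffort", "NotBestEffort", "PriorityClass"}

def build_quota_cmd(namespace, name, hard, scopes):
    unknown = [s for s in scopes if s not in VALID_SCOPES]
    if unknown:
        # Mirrors the rejection seen in the "invalid scopes" spec above.
        raise ValueError(f"invalid quota scopes: {unknown}")
    return [
        "kubectl", "--namespace", namespace,
        "create", "quota", name,
        f"--hard={hard}",
        "--scopes=" + ",".join(scopes),
    ]

cmd = build_quota_cmd("kubectl-4794", "scopes", "pods=1000000",
                      ["BestEffort", "NotTerminating"])
print(" ".join(cmd))
```

Running the printed command against a live cluster should yield the same `resourcequota/scopes created` stdout recorded in the log.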
• ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl create quota should create a quota with scopes","total":-1,"completed":3,"skipped":462,"failed":0} SSSSSSSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 16:39:05.928: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl May 21 16:39:05.953: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled May 21 16:39:05.956: INFO: No PSP annotation exists on dry run pod; assuming PodSecurityPolicy is disabled STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247 [BeforeEach] Kubectl copy /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1346 STEP: creating the pod May 21 16:39:05.959: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=kubectl-6321 create -f -' May 21 16:39:06.244: INFO: stderr: "" May 21 16:39:06.244: INFO: stdout: "pod/busybox1 created\n" May 21 16:39:06.244: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [busybox1] May 21 16:39:06.244: INFO: Waiting up to 5m0s for pod "busybox1" in namespace "kubectl-6321" to be "running and ready" May 21 16:39:06.247: INFO: Pod "busybox1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.503836ms May 21 16:39:08.251: INFO: Pod "busybox1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00662916s May 21 16:39:10.255: INFO: Pod "busybox1": Phase="Running", Reason="", readiness=true. 
Elapsed: 4.010288158s May 21 16:39:10.255: INFO: Pod "busybox1" satisfied condition "running and ready" May 21 16:39:10.255: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [busybox1] [It] should copy a file from a running Pod /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1361 STEP: specifying a remote filepath busybox1:/root/foo/bar/foo.bar on the pod May 21 16:39:10.255: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=kubectl-6321 cp busybox1:/root/foo/bar/foo.bar /tmp/copy-foobar537868518' May 21 16:39:10.533: INFO: stderr: "" May 21 16:39:10.533: INFO: stdout: "tar: removing leading '/' from member names\n" STEP: verifying that the contents of the remote file busybox1:/root/foo/bar/foo.bar have been copied to a local file /tmp/copy-foobar537868518 [AfterEach] Kubectl copy /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1352 STEP: using delete to clean up resources May 21 16:39:10.533: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=kubectl-6321 delete --grace-period=0 --force -f -' May 21 16:39:10.665: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" May 21 16:39:10.665: INFO: stdout: "pod \"busybox1\" force deleted\n" May 21 16:39:10.666: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=kubectl-6321 get rc,svc -l app=busybox1 --no-headers' May 21 16:39:10.790: INFO: stderr: "No resources found in kubectl-6321 namespace.\n" May 21 16:39:10.790: INFO: stdout: "" May 21 16:39:10.790: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=kubectl-6321 get pods -l app=busybox1 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 21 16:39:10.903: INFO: stderr: "" May 21 16:39:10.903: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 16:39:10.903: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6321" for this suite. 
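The copy spec above shells out to `kubectl cp` with a `pod:path` source and a local destination; the `tar: removing leading '/' from member names` line in its stdout is tar's normal behavior when the remote path is absolute. A minimal sketch (hypothetical helper, assuming only the documented `kubectl cp SRC DEST` form) of the same invocation:

```python
# Hypothetical helper: build the `kubectl cp` argv for copying a file
# out of a running pod, as the spec above does with busybox1.
def build_cp_cmd(namespace, pod, remote_path, local_path):
    # The source is written as POD:PATH; kubectl streams it via tar,
    # which strips the leading '/' from member names.
    return [
        "kubectl", "--namespace", namespace,
        "cp", f"{pod}:{remote_path}", local_path,
    ]

cmd = build_cp_cmd("kubectl-6321", "busybox1",
                   "/root/foo/bar/foo.bar", "/tmp/copy-foobar537868518")
print(" ".join(cmd))
```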
• ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl copy should copy a file from a running Pod","total":-1,"completed":1,"skipped":518,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 16:39:11.391: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247 [It] should check if kubectl describe prints relevant information for cronjob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1186 STEP: creating a cronjob May 21 16:39:11.436: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=kubectl-917 create -f -' May 21 16:39:11.735: INFO: stderr: "" May 21 16:39:11.735: INFO: stdout: "cronjob.batch/cronjob-test created\n" STEP: waiting for cronjob to start. 
STEP: verifying kubectl describe prints May 21 16:39:11.738: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=kubectl-917 describe cronjob cronjob-test' May 21 16:39:11.873: INFO: stderr: "" May 21 16:39:11.873: INFO: stdout: "Name: cronjob-test\nNamespace: kubectl-917\nLabels: \nAnnotations: \nSchedule: */1 * * * *\nConcurrency Policy: Allow\nSuspend: False\nSuccessful Job History Limit: 3\nFailed Job History Limit: 1\nStarting Deadline Seconds: 30s\nSelector: \nParallelism: \nCompletions: \nPod Template:\n Labels: \n Containers:\n test:\n Image: busybox\n Port: \n Host Port: \n Args:\n /bin/true\n Environment: \n Mounts: \n Volumes: \nLast Schedule Time: \nActive Jobs: \nEvents: \n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 16:39:11.873: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-917" for this suite. 
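Each finished spec in this run emits a one-line JSON summary record (`{"msg":"PASSED …","total":-1,"completed":…,"skipped":…,"failed":0}`). A minimal sketch of tallying those records from a captured log; the sample lines are taken from this run, and the function name is hypothetical:

```python
import json

def tally_specs(lines):
    """Count PASSED/FAILED ginkgo spec summary records in a log capture."""
    passed = failed = 0
    for line in lines:
        line = line.strip()
        if not line.startswith('{"msg":'):
            continue  # skip ordinary log lines
        msg = json.loads(line)["msg"]
        if msg.startswith("PASSED"):
            passed += 1
        elif msg.startswith("FAILED"):
            failed += 1
    return passed, failed

log = [
    '{"msg":"PASSED [sig-cli] Kubectl client Kubectl create quota should create a quota with scopes","total":-1,"completed":3,"skipped":462,"failed":0}',
    '{"msg":"PASSED [sig-cli] Kubectl client Kubectl copy should copy a file from a running Pod","total":-1,"completed":1,"skipped":518,"failed":0}',
]
print(tally_specs(log))  # -> (2, 0)
```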
• ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for cronjob","total":-1,"completed":2,"skipped":796,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 16:39:05.834: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl May 21 16:39:05.855: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled May 21 16:39:05.859: INFO: No PSP annotation exists on dry run pod; assuming PodSecurityPolicy is disabled STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247 [BeforeEach] Simple pod /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:384 STEP: creating the pod from May 21 16:39:05.862: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=kubectl-2538 create -f -' May 21 16:39:06.133: INFO: stderr: "" May 21 16:39:06.133: INFO: stdout: "pod/httpd created\n" May 21 16:39:06.133: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [httpd] May 21 16:39:06.134: INFO: Waiting up to 5m0s for pod "httpd" in namespace "kubectl-2538" to be "running and ready" May 21 16:39:06.143: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 9.141632ms May 21 16:39:08.147: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 2.013245542s May 21 16:39:10.151: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. 
Elapsed: 4.017645607s May 21 16:39:12.154: INFO: Pod "httpd": Phase="Running", Reason="", readiness=true. Elapsed: 6.020682317s May 21 16:39:12.154: INFO: Pod "httpd" satisfied condition "running and ready" May 21 16:39:12.154: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [httpd] [It] should support exec /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:394 STEP: executing a command in the container May 21 16:39:12.154: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=kubectl-2538 exec httpd echo running in container' May 21 16:39:12.393: INFO: stderr: "kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.\n" May 21 16:39:12.394: INFO: stdout: "running in container\n" STEP: executing a very long command in the container May 21 16:39:12.394: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=kubectl-2538 exec httpd echo 
aaaaaaaaaaaaaaaa… [remainder of the very long 'a' argument omitted]
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
' May 21 16:39:12.664: INFO: stderr: "kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.\n" May 21 16:39:12.665: INFO: stdout: "aaaa [... several thousand repeated 'a' payload characters elided ...] aaaa\n" STEP: executing a command in the container with noninteractive stdin May 21 16:39:12.666: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=kubectl-2538 exec -i httpd cat' May 21 16:39:12.897: INFO: stderr: "kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.\n" May 21 16:39:12.897: INFO: stdout: "abcd1234" STEP: executing a command in the container with pseudo-interactive stdin May 21 16:39:12.897: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=kubectl-2538 exec -i httpd sh' May 21 16:39:13.154: INFO: stderr: "kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.\n" May 21 16:39:13.154: INFO: stdout: "hi\n" [AfterEach] Simple pod /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:390 STEP: using delete to clean up resources May 21 16:39:13.154: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=kubectl-2538 delete --grace-period=0 --force -f -' May 21 16:39:13.281: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
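The "noninteractive stdin" step above pipes the string `abcd1234` into `kubectl exec -i httpd cat` and checks that the container echoes it back unchanged on stdout. A minimal local sketch of that same stdin round-trip, using plain `cat` as a stand-in for the remote exec (no cluster or kubectl assumed here; the helper name is illustrative):

```python
import subprocess

# Mirrors the e2e test's noninteractive-stdin check: the suite pipes
# "abcd1234" into `kubectl exec -i httpd cat` and expects it echoed back.
# Plain local `cat` stands in for the remote command.
def stdin_round_trip(payload: str) -> str:
    result = subprocess.run(
        ["cat"],                 # stand-in for: kubectl exec -i httpd -- cat
        input=payload,
        capture_output=True,
        text=True,
        check=True,
    )
    return result.stdout

print(stdin_round_trip("abcd1234"))  # abcd1234
```

The deprecation warning in the log also notes the preferred modern form, which separates the remote command with `--` (e.g. `kubectl exec -i httpd -- cat`).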
The resource may continue to run on the cluster indefinitely.\n" May 21 16:39:13.281: INFO: stdout: "pod \"httpd\" force deleted\n" May 21 16:39:13.281: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=kubectl-2538 get rc,svc -l name=httpd --no-headers' May 21 16:39:13.440: INFO: stderr: "No resources found in kubectl-2538 namespace.\n" May 21 16:39:13.441: INFO: stdout: "" May 21 16:39:13.441: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=kubectl-2538 get pods -l name=httpd -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 21 16:39:13.558: INFO: stderr: "" May 21 16:39:13.558: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 16:39:13.558: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2538" for this suite. 
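The cleanup step above lists pods with a go-template that prints only the names of pods whose metadata lacks a `deletionTimestamp`; empty stdout confirms the force-deleted pod is gone. A sketch of the same filter logic in Python over a mock pod list (the mock data is illustrative, not from a live cluster):

```python
# Reimplements the cleanup check's go-template:
#   {{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}...
# i.e. keep only pods that are not marked for deletion.
def undeleted_pod_names(pod_list: dict) -> list:
    return [
        item["metadata"]["name"]
        for item in pod_list.get("items", [])
        if not item["metadata"].get("deletionTimestamp")
    ]

# Mock API response: one pod mid-deletion, one untouched.
mock = {
    "items": [
        {"metadata": {"name": "httpd",
                      "deletionTimestamp": "2021-05-21T16:39:13Z"}},
        {"metadata": {"name": "other"}},
    ]
}
print(undeleted_pod_names(mock))  # ['other']
```

In the log's run the filtered list is empty, which is why the final `stdout: ""` counts as success.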
• [SLOW TEST:7.733 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Simple pod /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:382 should support exec /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:394 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Simple pod should support exec","total":-1,"completed":1,"skipped":421,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl Port forwarding /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 16:39:05.586: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename port-forwarding May 21 16:39:05.609: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled May 21 16:39:05.612: INFO: No PSP annotation exists on dry run pod; assuming PodSecurityPolicy is disabled STEP: Waiting for a default service account to be provisioned in namespace [It] should support a client that connects, sends NO DATA, and disconnects /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:454 STEP: Creating the target pod May 21 16:39:05.633: INFO: The status of Pod pfpod is Pending, waiting for it to be Running (with Ready = true) May 21 16:39:07.637: INFO: The status of Pod pfpod is Running (Ready = false) May 21 16:39:09.637: INFO: The status of Pod pfpod is Running (Ready = false) May 21 16:39:11.636: INFO: The status of Pod pfpod is Running (Ready = true) STEP: Running 'kubectl port-forward' May 21 16:39:11.636: INFO: starting port-forward command and streaming output May 21 16:39:11.636: INFO: 
Asynchronously running '/usr/local/bin/kubectl kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=port-forwarding-4752 port-forward --namespace=port-forwarding-4752 pfpod :80' May 21 16:39:11.637: INFO: reading from `kubectl port-forward` command's stdout STEP: Dialing the local port STEP: Closing the connection to the local port STEP: Waiting for the target pod to stop running May 21 16:39:11.799: INFO: Waiting up to 5m0s for pod "pfpod" in namespace "port-forwarding-4752" to be "container terminated" May 21 16:39:11.802: INFO: Pod "pfpod": Phase="Running", Reason="", readiness=true. Elapsed: 3.065112ms May 21 16:39:13.806: INFO: Pod "pfpod": Phase="Running", Reason="", readiness=false. Elapsed: 2.006508036s May 21 16:39:13.806: INFO: Pod "pfpod" satisfied condition "container terminated" STEP: Verifying logs [AfterEach] [sig-cli] Kubectl Port forwarding /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 16:39:13.819: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "port-forwarding-4752" for this suite. 
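The port-forward test above passes `:80`, which tells kubectl to pick a free local port and report it on stdout. A minimal sketch of that flow follows; the kubectl invocation needs a live cluster, so it is left as a comment, and the sample output line is only illustrative of kubectl's reporting format (the actual port number varies per run):

```shell
# Requires a live cluster (namespace taken from the log above):
#   kubectl port-forward --namespace=port-forwarding-4752 pfpod :80
# With `:80`, kubectl chooses a free local port and prints it on stdout.
# The assigned port can be parsed from that first line, e.g.:
line='Forwarding from 127.0.0.1:43210 -> 80'   # illustrative sample of the output format
port="${line##*:}"    # strip through the last ':'   -> "43210 -> 80"
port="${port%% *}"    # strip from the first space   -> "43210"
echo "$port"
```

The test itself does the equivalent in Go, reading the command's stdout stream to learn which local port to dial.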
• [SLOW TEST:8.241 seconds] [sig-cli] Kubectl Port forwarding /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 With a server listening on 0.0.0.0 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:452 that expects a client request /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:453 should support a client that connects, sends NO DATA, and disconnects /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:454 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl Port forwarding With a server listening on 0.0.0.0 that expects a client request should support a client that connects, sends NO DATA, and disconnects","total":-1,"completed":1,"skipped":237,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 16:39:13.704: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247 [It] apply set/view last-applied /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:827 STEP: deployment replicas number is 2 May 21 16:39:13.737: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=kubectl-8993 apply -f -' May 21 16:39:14.016: INFO: stderr: "" May 21 16:39:14.016: INFO: stdout: "deployment.apps/httpd-deployment created\n" STEP: 
check the last-applied matches expectations annotations May 21 16:39:14.016: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=kubectl-8993 apply view-last-applied -f - -o json' May 21 16:39:14.135: INFO: stderr: "" May 21 16:39:14.135: INFO: stdout: "{\n \"apiVersion\": \"apps/v1\",\n \"kind\": \"Deployment\",\n \"metadata\": {\n \"annotations\": {},\n \"name\": \"httpd-deployment\",\n \"namespace\": \"kubectl-8993\"\n },\n \"spec\": {\n \"replicas\": 2,\n \"selector\": {\n \"matchLabels\": {\n \"app\": \"httpd\"\n }\n },\n \"template\": {\n \"metadata\": {\n \"labels\": {\n \"app\": \"httpd\"\n }\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/httpd:2.4.39-alpine\",\n \"name\": \"httpd\",\n \"ports\": [\n {\n \"containerPort\": 80\n }\n ]\n }\n ]\n }\n }\n }\n}\n" STEP: apply file doesn't have replicas May 21 16:39:14.135: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=kubectl-8993 apply set-last-applied -f -' May 21 16:39:14.260: INFO: stderr: "" May 21 16:39:14.260: INFO: stdout: "deployment.apps/httpd-deployment configured\n" STEP: check last-applied has been updated, annotations doesn't have replicas May 21 16:39:14.260: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=kubectl-8993 apply view-last-applied -f - -o json' May 21 16:39:14.375: INFO: stderr: "" May 21 16:39:14.375: INFO: stdout: "{\n \"apiVersion\": \"apps/v1\",\n \"kind\": \"Deployment\",\n \"metadata\": {\n \"name\": \"httpd-deployment\",\n \"namespace\": \"kubectl-8993\"\n },\n \"spec\": {\n \"selector\": {\n \"matchLabels\": {\n \"app\": \"httpd\"\n }\n },\n \"template\": {\n \"metadata\": {\n \"labels\": {\n \"app\": \"httpd\"\n }\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/httpd:2.4.39-alpine\",\n \"name\": \"httpd\",\n 
\"ports\": [\n {\n \"containerPort\": 80\n }\n ]\n }\n ]\n }\n }\n }\n}\n" STEP: scale set replicas to 3 May 21 16:39:14.379: INFO: scanned /root for discovery docs: May 21 16:39:14.379: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=kubectl-8993 scale deployment httpd-deployment --replicas=3' May 21 16:39:14.519: INFO: stderr: "" May 21 16:39:14.519: INFO: stdout: "deployment.apps/httpd-deployment scaled\n" STEP: apply file doesn't have replicas but image changed May 21 16:39:14.519: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=kubectl-8993 apply -f -' May 21 16:39:14.804: INFO: stderr: "" May 21 16:39:14.804: INFO: stdout: "deployment.apps/httpd-deployment configured\n" STEP: verify replicas still is 3 and image has been updated May 21 16:39:14.804: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=kubectl-8993 get -f - -o json' May 21 16:39:14.920: INFO: stderr: "" May 21 16:39:14.920: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"items\": [\n {\n \"apiVersion\": \"apps/v1\",\n \"kind\": \"Deployment\",\n \"metadata\": {\n \"annotations\": {\n \"deployment.kubernetes.io/revision\": \"2\",\n \"kubectl.kubernetes.io/last-applied-configuration\": \"{\\\"apiVersion\\\":\\\"apps/v1\\\",\\\"kind\\\":\\\"Deployment\\\",\\\"metadata\\\":{\\\"annotations\\\":{},\\\"name\\\":\\\"httpd-deployment\\\",\\\"namespace\\\":\\\"kubectl-8993\\\"},\\\"spec\\\":{\\\"selector\\\":{\\\"matchLabels\\\":{\\\"app\\\":\\\"httpd\\\"}},\\\"template\\\":{\\\"metadata\\\":{\\\"labels\\\":{\\\"app\\\":\\\"httpd\\\"}},\\\"spec\\\":{\\\"containers\\\":[{\\\"image\\\":\\\"docker.io/library/httpd:2.4.38-alpine\\\",\\\"name\\\":\\\"httpd\\\",\\\"ports\\\":[{\\\"containerPort\\\":80}]}]}}}}\\n\"\n },\n \"creationTimestamp\": \"2021-05-21T16:39:14Z\",\n \"generation\": 4,\n 
\"managedFields\": [\n {\n \"apiVersion\": \"apps/v1\",\n \"fieldsType\": \"FieldsV1\",\n \"fieldsV1\": {\n \"f:metadata\": {\n \"f:annotations\": {\n \"f:deployment.kubernetes.io/revision\": {}\n }\n },\n \"f:status\": {\n \"f:conditions\": {\n \".\": {},\n \"k:{\\\"type\\\":\\\"Available\\\"}\": {\n \".\": {},\n \"f:lastTransitionTime\": {},\n \"f:lastUpdateTime\": {},\n \"f:message\": {},\n \"f:reason\": {},\n \"f:status\": {},\n \"f:type\": {}\n },\n \"k:{\\\"type\\\":\\\"Progressing\\\"}\": {\n \".\": {},\n \"f:lastTransitionTime\": {},\n \"f:lastUpdateTime\": {},\n \"f:message\": {},\n \"f:reason\": {},\n \"f:status\": {},\n \"f:type\": {}\n }\n },\n \"f:observedGeneration\": {},\n \"f:replicas\": {},\n \"f:unavailableReplicas\": {},\n \"f:updatedReplicas\": {}\n }\n },\n \"manager\": \"kube-controller-manager\",\n \"operation\": \"Update\",\n \"time\": \"2021-05-21T16:39:14Z\"\n },\n {\n \"apiVersion\": \"apps/v1\",\n \"fieldsType\": \"FieldsV1\",\n \"fieldsV1\": {\n \"f:metadata\": {\n \"f:annotations\": {\n \".\": {},\n \"f:kubectl.kubernetes.io/last-applied-configuration\": {}\n }\n },\n \"f:spec\": {\n \"f:progressDeadlineSeconds\": {},\n \"f:replicas\": {},\n \"f:revisionHistoryLimit\": {},\n \"f:selector\": {\n \"f:matchLabels\": {\n \".\": {},\n \"f:app\": {}\n }\n },\n \"f:strategy\": {\n \"f:rollingUpdate\": {\n \".\": {},\n \"f:maxSurge\": {},\n \"f:maxUnavailable\": {}\n },\n \"f:type\": {}\n },\n \"f:template\": {\n \"f:metadata\": {\n \"f:labels\": {\n \".\": {},\n \"f:app\": {}\n }\n },\n \"f:spec\": {\n \"f:containers\": {\n \"k:{\\\"name\\\":\\\"httpd\\\"}\": {\n \".\": {},\n \"f:image\": {},\n \"f:imagePullPolicy\": {},\n \"f:name\": {},\n \"f:ports\": {\n \".\": {},\n \"k:{\\\"containerPort\\\":80,\\\"protocol\\\":\\\"TCP\\\"}\": {\n \".\": {},\n \"f:containerPort\": {},\n \"f:protocol\": {}\n }\n },\n \"f:resources\": {},\n \"f:terminationMessagePath\": {},\n \"f:terminationMessagePolicy\": {}\n }\n },\n \"f:dnsPolicy\": {},\n 
\"f:restartPolicy\": {},\n \"f:schedulerName\": {},\n \"f:securityContext\": {},\n \"f:terminationGracePeriodSeconds\": {}\n }\n }\n }\n },\n \"manager\": \"kubectl-client-side-apply\",\n \"operation\": \"Update\",\n \"time\": \"2021-05-21T16:39:14Z\"\n }\n ],\n \"name\": \"httpd-deployment\",\n \"namespace\": \"kubectl-8993\",\n \"resourceVersion\": \"49301\",\n \"selfLink\": \"/apis/apps/v1/namespaces/kubectl-8993/deployments/httpd-deployment\",\n \"uid\": \"78c68c7f-69db-4ee6-a3f0-4a5547aabd31\"\n },\n \"spec\": {\n \"progressDeadlineSeconds\": 600,\n \"replicas\": 3,\n \"revisionHistoryLimit\": 10,\n \"selector\": {\n \"matchLabels\": {\n \"app\": \"httpd\"\n }\n },\n \"strategy\": {\n \"rollingUpdate\": {\n \"maxSurge\": \"25%\",\n \"maxUnavailable\": \"25%\"\n },\n \"type\": \"RollingUpdate\"\n },\n \"template\": {\n \"metadata\": {\n \"creationTimestamp\": null,\n \"labels\": {\n \"app\": \"httpd\"\n }\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"httpd\",\n \"ports\": [\n {\n \"containerPort\": 80,\n \"protocol\": \"TCP\"\n }\n ],\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\"\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"terminationGracePeriodSeconds\": 30\n }\n }\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastTransitionTime\": \"2021-05-21T16:39:14Z\",\n \"lastUpdateTime\": \"2021-05-21T16:39:14Z\",\n \"message\": \"Deployment does not have minimum availability.\",\n \"reason\": \"MinimumReplicasUnavailable\",\n \"status\": \"False\",\n \"type\": \"Available\"\n },\n {\n \"lastTransitionTime\": \"2021-05-21T16:39:14Z\",\n \"lastUpdateTime\": \"2021-05-21T16:39:14Z\",\n \"message\": \"ReplicaSet \\\"httpd-deployment-86bff9b6d7\\\" is progressing.\",\n \"reason\": 
\"ReplicaSetUpdated\",\n \"status\": \"True\",\n \"type\": \"Progressing\"\n }\n ],\n \"observedGeneration\": 4,\n \"replicas\": 4,\n \"unavailableReplicas\": 4,\n \"updatedReplicas\": 1\n }\n }\n ],\n \"kind\": \"List\",\n \"metadata\": {\n \"resourceVersion\": \"\",\n \"selfLink\": \"\"\n }\n}\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 16:39:14.920: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8993" for this suite. • ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl apply apply set/view last-applied","total":-1,"completed":2,"skipped":500,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 16:39:14.958: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247 [It] should apply a new configuration to an existing RC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:793 STEP: creating Agnhost RC May 21 16:39:14.984: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=kubectl-537 create -f -' May 21 16:39:15.268: INFO: stderr: "" May 21 16:39:15.268: INFO: stdout: "replicationcontroller/agnhost-primary created\n" STEP: applying a modified configuration May 21 16:39:15.273: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=kubectl-537 
apply -f -' May 21 16:39:15.543: INFO: stderr: "Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply\n" May 21 16:39:15.543: INFO: stdout: "replicationcontroller/agnhost-primary configured\n" STEP: checking the result [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 16:39:15.547: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-537" for this suite. • ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl apply should apply a new configuration to an existing RC","total":-1,"completed":3,"skipped":515,"failed":0} S ------------------------------ [BeforeEach] [sig-cli] Kubectl Port forwarding /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 16:39:05.809: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename port-forwarding May 21 16:39:05.833: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled May 21 16:39:05.836: INFO: No PSP annotation exists on dry run pod; assuming PodSecurityPolicy is disabled STEP: Waiting for a default service account to be provisioned in namespace [It] should support a client that connects, sends DATA, and disconnects /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:457 STEP: Creating the target pod May 21 16:39:05.846: INFO: The status of Pod pfpod is Pending, waiting for it to be Running (with Ready = true) May 21 16:39:07.850: INFO: The status of Pod pfpod is Running (Ready = false) May 21 16:39:09.850: INFO: The status of Pod pfpod is Running (Ready = false) May 21 16:39:11.850: INFO: The status of Pod pfpod is Running (Ready = true) STEP: Running 'kubectl 
port-forward' May 21 16:39:11.850: INFO: starting port-forward command and streaming output May 21 16:39:11.850: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=port-forwarding-5457 port-forward --namespace=port-forwarding-5457 pfpod :80' May 21 16:39:11.851: INFO: reading from `kubectl port-forward` command's stdout STEP: Dialing the local port STEP: Sending the expected data to the local port STEP: Reading data from the local port STEP: Closing the write half of the client's connection STEP: Waiting for the target pod to stop running May 21 16:39:13.927: INFO: Waiting up to 5m0s for pod "pfpod" in namespace "port-forwarding-5457" to be "container terminated" May 21 16:39:13.931: INFO: Pod "pfpod": Phase="Running", Reason="", readiness=true. Elapsed: 3.481035ms May 21 16:39:15.934: INFO: Pod "pfpod": Phase="Running", Reason="", readiness=false. Elapsed: 2.006767518s May 21 16:39:15.934: INFO: Pod "pfpod" satisfied condition "container terminated" STEP: Verifying logs STEP: Closing the connection to the local port [AfterEach] [sig-cli] Kubectl Port forwarding /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 16:39:15.947: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "port-forwarding-5457" for this suite. 
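The "Verifying logs" step in the test above boils down to fetching the target pod's log and checking it for expected markers. A hedged sketch, with the cluster-dependent commands left as comments and illustrative marker strings (not the agnhost server's exact wording):

```shell
# Requires a live cluster (namespace taken from the log above):
#   kubectl port-forward --namespace=port-forwarding-5457 pfpod :80
#   kubectl logs pfpod
# "Verifying logs" amounts to checking the pod's output for expected markers.
# The log content below is illustrative only:
logs='Accepted client connection
Received expected client data
Connection closed'
if printf '%s\n' "$logs" | grep -q 'Received expected client data'; then
  echo 'client data verified'
fi
```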
• [SLOW TEST:10.150 seconds] [sig-cli] Kubectl Port forwarding /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 With a server listening on 0.0.0.0 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:452 that expects a client request /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:453 should support a client that connects, sends DATA, and disconnects /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:457 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl Port forwarding With a server listening on 0.0.0.0 that expects a client request should support a client that connects, sends DATA, and disconnects","total":-1,"completed":1,"skipped":397,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 16:39:05.298: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl May 21 16:39:05.318: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled May 21 16:39:05.322: INFO: No PSP annotation exists on dry run pod; assuming PodSecurityPolicy is 
disabled STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247 [It] should create/apply a valid CR for CRD with validation schema /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1000 STEP: prepare CRD with validation schema May 21 16:39:05.324: INFO: >>> kubeConfig: /root/.kube/config STEP: sleep for 10s to wait for potential crd openapi publishing alpha feature STEP: successfully create CR May 21 16:39:15.839: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=kubectl-1834 create --validate=true -f -' May 21 16:39:16.119: INFO: stderr: "" May 21 16:39:16.119: INFO: stdout: "e2e-test-kubectl-3325-crd.kubectl.example.com/test-cr created\n" May 21 16:39:16.119: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=kubectl-1834 delete e2e-test-kubectl-3325-crds test-cr' May 21 16:39:16.251: INFO: stderr: "" May 21 16:39:16.251: INFO: stdout: "e2e-test-kubectl-3325-crd.kubectl.example.com \"test-cr\" deleted\n" STEP: successfully apply CR May 21 16:39:16.251: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=kubectl-1834 apply --validate=true -f -' May 21 16:39:16.521: INFO: stderr: "" May 21 16:39:16.521: INFO: stdout: "e2e-test-kubectl-3325-crd.kubectl.example.com/test-cr created\n" May 21 16:39:16.521: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=kubectl-1834 delete e2e-test-kubectl-3325-crds test-cr' May 21 16:39:16.647: INFO: stderr: "" May 21 16:39:16.647: INFO: stdout: "e2e-test-kubectl-3325-crd.kubectl.example.com \"test-cr\" deleted\n" [AfterEach] [sig-cli] Kubectl client 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 16:39:17.156: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1834" for this suite. • [SLOW TEST:11.866 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl client-side validation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:981 should create/apply a valid CR for CRD with validation schema /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1000 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl client-side validation should create/apply a valid CR for CRD with validation schema","total":-1,"completed":1,"skipped":26,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 16:39:17.394: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247 [It] should get componentstatuses /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:780 STEP: getting list of componentstatuses May 21 16:39:17.423: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=kubectl-2209 get componentstatuses -o 
jsonpath={.items[*].metadata.name}' May 21 16:39:17.589: INFO: stderr: "Warning: v1 ComponentStatus is deprecated in v1.19+\n" May 21 16:39:17.589: INFO: stdout: "scheduler controller-manager etcd-0" STEP: getting details of componentstatuses STEP: getting status of scheduler May 21 16:39:17.590: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=kubectl-2209 get componentstatuses scheduler' May 21 16:39:17.708: INFO: stderr: "Warning: v1 ComponentStatus is deprecated in v1.19+\n" May 21 16:39:17.708: INFO: stdout: "NAME STATUS MESSAGE ERROR\nscheduler Unhealthy Get \"http://127.0.0.1:10251/healthz\": dial tcp 127.0.0.1:10251: connect: connection refused \n" STEP: getting status of controller-manager May 21 16:39:17.708: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=kubectl-2209 get componentstatuses controller-manager' May 21 16:39:17.830: INFO: stderr: "Warning: v1 ComponentStatus is deprecated in v1.19+\n" May 21 16:39:17.830: INFO: stdout: "NAME STATUS MESSAGE ERROR\ncontroller-manager Unhealthy Get \"http://127.0.0.1:10252/healthz\": dial tcp 127.0.0.1:10252: connect: connection refused \n" STEP: getting status of etcd-0 May 21 16:39:17.830: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=kubectl-2209 get componentstatuses etcd-0' May 21 16:39:17.960: INFO: stderr: "Warning: v1 ComponentStatus is deprecated in v1.19+\n" May 21 16:39:17.960: INFO: stdout: "NAME STATUS MESSAGE ERROR\netcd-0 Healthy {\"health\":\"true\"} \n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 16:39:17.960: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2209" for this suite. 
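The componentstatuses test above first lists the component names with a jsonpath query, then queries each one individually. A sketch of that loop, using the three names actually captured in this run (the kubectl lines need a live cluster and are left as comments):

```shell
# Requires a live cluster:
#   kubectl get componentstatuses -o jsonpath='{.items[*].metadata.name}'
# In this run that printed three space-separated names, which the test then
# queries one at a time:
names='scheduler controller-manager etcd-0'   # stdout captured in the log above
for name in $names; do
  echo "kubectl get componentstatuses $name"  # the per-component query the test issues
done
```

Note the `Warning: v1 ComponentStatus is deprecated in v1.19+` on stderr: the test still exercises the API, but it is deprecated in the release under test.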
• ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl get componentstatuses should get componentstatuses","total":-1,"completed":2,"skipped":163,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 16:39:05.379: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl May 21 16:39:05.402: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled May 21 16:39:05.407: INFO: No PSP annotation exists on dry run pod; assuming PodSecurityPolicy is disabled STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247 [BeforeEach] Simple pod /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:384 STEP: creating the pod from May 21 16:39:05.410: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=kubectl-3300 create -f -' May 21 16:39:05.760: INFO: stderr: "" May 21 16:39:05.760: INFO: stdout: "pod/httpd created\n" May 21 16:39:05.760: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [httpd] May 21 16:39:05.760: INFO: Waiting up to 5m0s for pod "httpd" in namespace "kubectl-3300" to be "running and ready" May 21 16:39:05.763: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.333892ms May 21 16:39:07.766: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 2.005955736s May 21 16:39:09.770: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. 
Elapsed: 4.009570422s May 21 16:39:11.773: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 6.012843702s May 21 16:39:13.776: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 8.016009279s May 21 16:39:15.780: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 10.019912026s May 21 16:39:17.784: INFO: Pod "httpd": Phase="Running", Reason="", readiness=true. Elapsed: 12.023404968s May 21 16:39:17.784: INFO: Pod "httpd" satisfied condition "running and ready" May 21 16:39:17.784: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [httpd] [It] should support port-forward /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:618 STEP: forwarding the container port to a local port May 21 16:39:17.784: INFO: starting port-forward command and streaming output May 21 16:39:17.784: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=kubectl-3300 port-forward --namespace=kubectl-3300 httpd :80' May 21 16:39:17.785: INFO: reading from `kubectl port-forward` command's stdout STEP: curling local port output May 21 16:39:17.957: INFO: got:
It works!
[AfterEach] Simple pod /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:390 STEP: using delete to clean up resources May 21 16:39:17.962: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=kubectl-3300 delete --grace-period=0 --force -f -' May 21 16:39:18.085: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 21 16:39:18.085: INFO: stdout: "pod \"httpd\" force deleted\n" May 21 16:39:18.085: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=kubectl-3300 get rc,svc -l name=httpd --no-headers' May 21 16:39:18.236: INFO: stderr: "No resources found in kubectl-3300 namespace.\n" May 21 16:39:18.236: INFO: stdout: "" May 21 16:39:18.236: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=kubectl-3300 get pods -l name=httpd -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 21 16:39:18.343: INFO: stderr: "" May 21 16:39:18.343: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 16:39:18.343: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3300" for this suite. 
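The port-forward check above succeeds when curling the forwarded local port returns Apache httpd's default page. A hedged sketch of the assertion; the kubectl and curl lines need a live cluster and are left as comments, and the response body shown is the stock httpd index page, included here only as an illustration:

```shell
# Requires a live cluster (namespace taken from the log above):
#   kubectl port-forward --namespace=kubectl-3300 httpd :80
#   curl http://127.0.0.1:<local-port>/
# The test's check boils down to finding httpd's default greeting in the body:
body='<html><body><h1>It works!</h1></body></html>'   # illustrative response body
case "$body" in
  *'It works!'*) echo 'port-forward verified' ;;
  *)             echo 'unexpected body' ;;
esac
```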
• [SLOW TEST:12.973 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Simple pod /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:382 should support port-forward /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:618 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Simple pod should support port-forward","total":-1,"completed":1,"skipped":71,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 16:39:05.283: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl May 21 16:39:05.311: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled May 21 16:39:05.318: INFO: No PSP annotation exists on dry run pod; assuming PodSecurityPolicy is disabled STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247 [BeforeEach] Simple pod /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:384 STEP: creating the pod from May 21 16:39:05.320: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=kubectl-7700 create -f -' May 21 16:39:05.757: INFO: stderr: "" May 21 16:39:05.757: INFO: stdout: "pod/httpd created\n" May 21 16:39:05.757: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [httpd] May 
21 16:39:05.757: INFO: Waiting up to 5m0s for pod "httpd" in namespace "kubectl-7700" to be "running and ready" May 21 16:39:05.760: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.250487ms May 21 16:39:07.763: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005914421s May 21 16:39:09.767: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 4.009651343s May 21 16:39:11.770: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 6.012864757s May 21 16:39:13.774: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 8.016429545s May 21 16:39:15.778: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 10.02011143s May 21 16:39:17.781: INFO: Pod "httpd": Phase="Running", Reason="", readiness=true. Elapsed: 12.023708555s May 21 16:39:17.781: INFO: Pod "httpd" satisfied condition "running and ready" May 21 16:39:17.781: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [httpd] [It] should support exec through an HTTP proxy /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:442 STEP: Starting goproxy STEP: Running kubectl via an HTTP proxy using https_proxy May 21 16:39:17.782: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=kubectl-7700 --namespace=kubectl-7700 exec httpd echo running in container' May 21 16:39:17.991: INFO: stderr: "kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. 
Use kubectl exec [POD] -- [COMMAND] instead.\n" May 21 16:39:17.991: INFO: stdout: "running in container\n" STEP: Running kubectl via an HTTP proxy using HTTPS_PROXY May 21 16:39:17.992: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=kubectl-7700 --namespace=kubectl-7700 exec httpd echo running in container' May 21 16:39:18.228: INFO: stderr: "kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.\n" May 21 16:39:18.228: INFO: stdout: "running in container\n" [AfterEach] Simple pod /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:390 STEP: using delete to clean up resources May 21 16:39:18.228: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=kubectl-7700 delete --grace-period=0 --force -f -' May 21 16:39:18.356: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" May 21 16:39:18.356: INFO: stdout: "pod \"httpd\" force deleted\n" May 21 16:39:18.357: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=kubectl-7700 get rc,svc -l name=httpd --no-headers' May 21 16:39:18.488: INFO: stderr: "No resources found in kubectl-7700 namespace.\n" May 21 16:39:18.488: INFO: stdout: "" May 21 16:39:18.488: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=kubectl-7700 get pods -l name=httpd -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 21 16:39:18.611: INFO: stderr: "" May 21 16:39:18.611: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 16:39:18.611: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7700" for this suite. 
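The test above drives `kubectl exec` twice, once with `https_proxy` set and once with `HTTPS_PROXY`, because HTTP clients accept either spelling of the proxy variable. A minimal Python sketch of that environment lookup follows; the lowercase-first precedence used here is an assumption for illustration (real clients, including Go's and Python's HTTP stacks, differ on which spelling wins):

```python
import os

def proxy_from_env(environ=os.environ):
    """Return the HTTPS proxy URL from the environment, or None.

    Checks the lowercase form first, then the uppercase form.
    Treat this ordering as an assumption of the sketch, not a
    rule shared by every HTTP client.
    """
    for name in ("https_proxy", "HTTPS_PROXY"):
        value = environ.get(name)
        if value:
            return value
    return None
```

For example, `proxy_from_env({"HTTPS_PROXY": "http://127.0.0.1:8080"})` yields the goproxy endpoint the test would route through.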
• [SLOW TEST:13.335 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Simple pod /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:382 should support exec through an HTTP proxy /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:442 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Simple pod should support exec through an HTTP proxy","total":-1,"completed":1,"skipped":23,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl Port forwarding /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 16:39:12.000: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename port-forwarding STEP: Waiting for a default service account to be provisioned in namespace [It] should support forwarding over websockets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:468 May 21 16:39:12.029: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating the pod May 21 16:39:12.040: INFO: The status of Pod pfpod is Pending, waiting for it to be Running (with Ready = true) May 21 16:39:14.042: INFO: The status of Pod pfpod is Running (Ready = false) May 21 16:39:16.045: INFO: The status of Pod pfpod is Running (Ready = false) May 21 16:39:18.043: INFO: The status of Pod pfpod is Running (Ready = false) May 21 16:39:20.043: INFO: The status of Pod pfpod is Running (Ready = true) STEP: Sending the expected 
data to the local port STEP: Reading data from the local port STEP: Verifying logs [AfterEach] [sig-cli] Kubectl Port forwarding /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 16:39:20.080: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "port-forwarding-7902" for this suite. • [SLOW TEST:8.089 seconds] [sig-cli] Kubectl Port forwarding /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 With a server listening on 0.0.0.0 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:452 should support forwarding over websockets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:468 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl Port forwarding With a server listening on 0.0.0.0 should support forwarding over websockets","total":-1,"completed":3,"skipped":867,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 16:39:07.152: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247 [BeforeEach] Simple pod /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:384 STEP: creating the pod from May 21 16:39:07.182: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=kubectl-4543 create -f -' May 21 
16:39:07.547: INFO: stderr: "" May 21 16:39:07.547: INFO: stdout: "pod/httpd created\n" May 21 16:39:07.547: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [httpd] May 21 16:39:07.547: INFO: Waiting up to 5m0s for pod "httpd" in namespace "kubectl-4543" to be "running and ready" May 21 16:39:07.551: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 3.423789ms May 21 16:39:09.556: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 2.008585027s May 21 16:39:11.560: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 4.012447955s May 21 16:39:13.563: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 6.01532557s May 21 16:39:15.566: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 8.018817967s May 21 16:39:17.572: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 10.024203766s May 21 16:39:19.575: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 12.027596784s May 21 16:39:21.580: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 14.032194626s May 21 16:39:23.584: INFO: Pod "httpd": Phase="Running", Reason="", readiness=true. Elapsed: 16.036149229s May 21 16:39:23.584: INFO: Pod "httpd" satisfied condition "running and ready" May 21 16:39:23.584: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [httpd] [It] should support exec using resource/name /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:434 STEP: executing a command in the container May 21 16:39:23.584: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=kubectl-4543 exec pod/httpd echo running in container' May 21 16:39:23.839: INFO: stderr: "kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. 
Use kubectl exec [POD] -- [COMMAND] instead.\n" May 21 16:39:23.839: INFO: stdout: "running in container\n" [AfterEach] Simple pod /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:390 STEP: using delete to clean up resources May 21 16:39:23.840: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=kubectl-4543 delete --grace-period=0 --force -f -' May 21 16:39:23.968: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 21 16:39:23.969: INFO: stdout: "pod \"httpd\" force deleted\n" May 21 16:39:23.969: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=kubectl-4543 get rc,svc -l name=httpd --no-headers' May 21 16:39:24.099: INFO: stderr: "No resources found in kubectl-4543 namespace.\n" May 21 16:39:24.099: INFO: stdout: "" May 21 16:39:24.099: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=kubectl-4543 get pods -l name=httpd -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 21 16:39:24.232: INFO: stderr: "" May 21 16:39:24.232: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 16:39:24.232: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4543" for this suite. 
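The cleanup steps above use a go-template (`{{ if not .metadata.deletionTimestamp }}`) to list only pods that are not yet marked for deletion, which is how the framework confirms the force-deleted pod is really gone (empty stdout). The same filter, sketched in Python over a decoded pod list:

```python
def live_pod_names(pod_list):
    """Names of pods not marked for deletion, mirroring the
    go-template filter the e2e cleanup runs: a pod with a
    deletionTimestamp set is treated as already on its way out."""
    return [
        item["metadata"]["name"]
        for item in pod_list.get("items", [])
        if not item["metadata"].get("deletionTimestamp")
    ]
```

An empty result corresponds to the empty `stdout: ""` the log records after deletion.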
• [SLOW TEST:17.089 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Simple pod /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:382 should support exec using resource/name /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:434 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Simple pod should support exec using resource/name","total":-1,"completed":4,"skipped":472,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 16:39:24.605: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247 [It] should create a quota without scopes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1757 STEP: calling kubectl quota May 21 16:39:24.634: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=kubectl-5400 create quota million --hard=pods=1000000,services=1000000' May 21 16:39:24.762: INFO: stderr: "" May 21 16:39:24.762: INFO: stdout: "resourcequota/million created\n" STEP: verifying that the quota was created [AfterEach] [sig-cli] Kubectl client 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 16:39:24.765: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5400" for this suite. • ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl create quota should create a quota without scopes","total":-1,"completed":5,"skipped":697,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 16:39:13.924: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247 [BeforeEach] Simple pod /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:384 STEP: creating the pod from May 21 16:39:13.953: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config 
--namespace=kubectl-284 create -f -' May 21 16:39:14.227: INFO: stderr: "" May 21 16:39:14.227: INFO: stdout: "pod/httpd created\n" May 21 16:39:14.227: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [httpd] May 21 16:39:14.227: INFO: Waiting up to 5m0s for pod "httpd" in namespace "kubectl-284" to be "running and ready" May 21 16:39:14.231: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 3.424037ms May 21 16:39:16.234: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 2.006816112s May 21 16:39:18.238: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 4.010210505s May 21 16:39:20.241: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 6.013286001s May 21 16:39:22.244: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 8.016693659s May 21 16:39:24.246: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 10.019138619s May 21 16:39:26.250: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 12.022423578s May 21 16:39:28.253: INFO: Pod "httpd": Phase="Running", Reason="", readiness=true. Elapsed: 14.026082505s May 21 16:39:28.253: INFO: Pod "httpd" satisfied condition "running and ready" May 21 16:39:28.253: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [httpd] [It] should support exec through kubectl proxy /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:476 STEP: Starting kubectl proxy May 21 16:39:28.254: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=kubectl-284 proxy -p 0 --disable-filter' STEP: Running kubectl via kubectl proxy using --server=http://127.0.0.1:43347 May 21 16:39:28.344: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=kubectl-284 --server=http://127.0.0.1:43347 --namespace=kubectl-284 exec httpd echo running in container' May 21 16:39:28.650: INFO: stderr: "kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.\n" May 21 16:39:28.650: INFO: stdout: "running in container\n" [AfterEach] Simple pod /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:390 STEP: using delete to clean up resources May 21 16:39:28.650: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=kubectl-284 delete --grace-period=0 --force -f -' May 21 16:39:28.781: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" May 21 16:39:28.781: INFO: stdout: "pod \"httpd\" force deleted\n" May 21 16:39:28.781: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=kubectl-284 get rc,svc -l name=httpd --no-headers' May 21 16:39:28.928: INFO: stderr: "No resources found in kubectl-284 namespace.\n" May 21 16:39:28.928: INFO: stdout: "" May 21 16:39:28.928: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=kubectl-284 get pods -l name=httpd -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 21 16:39:29.046: INFO: stderr: "" May 21 16:39:29.046: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 16:39:29.047: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-284" for this suite. 
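Note that the exec command above passes `--server` twice: the cluster endpoint from the shared defaults, then `--server=http://127.0.0.1:43347` pointing at the local `kubectl proxy`. For single-valued flags the last occurrence typically wins, which is what routes the request through the proxy. A toy illustration of that last-wins resolution (not kubectl's actual pflag parser):

```python
def effective_flags(argv):
    """Resolve repeated --flag=value occurrences the way most
    single-valued CLI flags behave: later occurrences override
    earlier ones. Illustrative only; kubectl's real parsing is
    done by the pflag library."""
    flags = {}
    for arg in argv:
        if arg.startswith("--") and "=" in arg:
            name, _, value = arg.partition("=")
            flags[name] = value
    return flags
```

Applied to the command line in the log, the effective `--server` is the local proxy, and `--namespace` stays `kubectl-284` even though it is also repeated.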
• [SLOW TEST:15.130 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Simple pod /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:382 should support exec through kubectl proxy /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:476 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Simple pod should support exec through kubectl proxy","total":-1,"completed":2,"skipped":295,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl Port forwarding /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 16:39:19.044: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename port-forwarding STEP: Waiting for a default service account to be provisioned in namespace [It] should support a client that connects, sends NO DATA, and disconnects /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:476 STEP: Creating the target pod May 21 16:39:19.081: INFO: The status of Pod pfpod is Pending, waiting for it to be Running (with Ready = true) May 21 16:39:21.084: INFO: The status of Pod pfpod is Running (Ready = false) May 21 16:39:23.085: INFO: The status of Pod pfpod is Running (Ready = false) May 21 16:39:25.085: INFO: The status of Pod pfpod is Running (Ready = false) May 21 16:39:27.085: INFO: The status of Pod pfpod is Running (Ready = true) STEP: Running 'kubectl port-forward' May 21 16:39:27.085: INFO: starting port-forward command and streaming output May 21 16:39:27.085: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --server=https://172.30.13.89:46681 
--kubeconfig=/root/.kube/config --namespace=port-forwarding-1347 port-forward --namespace=port-forwarding-1347 pfpod :80' May 21 16:39:27.086: INFO: reading from `kubectl port-forward` command's stdout STEP: Dialing the local port STEP: Closing the connection to the local port STEP: Waiting for the target pod to stop running May 21 16:39:27.250: INFO: Waiting up to 5m0s for pod "pfpod" in namespace "port-forwarding-1347" to be "container terminated" May 21 16:39:27.253: INFO: Pod "pfpod": Phase="Running", Reason="", readiness=true. Elapsed: 3.04626ms May 21 16:39:29.256: INFO: Pod "pfpod": Phase="Running", Reason="", readiness=false. Elapsed: 2.006044941s May 21 16:39:29.256: INFO: Pod "pfpod" satisfied condition "container terminated" STEP: Verifying logs [AfterEach] [sig-cli] Kubectl Port forwarding /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 16:39:29.269: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "port-forwarding-1347" for this suite. 
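Passing `pfpod :80` asks `kubectl port-forward` for a random local port mapped to container port 80, so the test has to read the chosen port back from the command's stdout ("reading from `kubectl port-forward` command's stdout" above). A sketch of extracting it; the `Forwarding from …` line format matches current kubectl output but is not a stable API:

```python
import re

# Matches lines like "Forwarding from 127.0.0.1:39219 -> 80",
# which kubectl port-forward prints when asked for ":80"
# (random local port forwarded to container port 80).
_FORWARD_RE = re.compile(r"Forwarding from 127\.0\.0\.1:(\d+) -> (\d+)")

def local_port(line):
    """Extract the randomly chosen local port, or None if the
    line is not a forwarding announcement."""
    m = _FORWARD_RE.search(line)
    return int(m.group(1)) if m else None
```

The test dials that port, closes the connection, and then waits for the target container to exit, as the log shows next.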
• [SLOW TEST:10.234 seconds] [sig-cli] Kubectl Port forwarding /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 With a server listening on localhost /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:474 that expects a client request /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:475 should support a client that connects, sends NO DATA, and disconnects /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:476 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl Port forwarding With a server listening on localhost that expects a client request should support a client that connects, sends NO DATA, and disconnects","total":-1,"completed":2,"skipped":287,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 16:39:18.135: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247 [It] should create/apply a valid CR with arbitrary-extra properties for CRD with partially-specified validation schema /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1026 STEP: prepare CRD with partially-specified validation schema May 21 16:39:18.161: INFO: >>> kubeConfig: /root/.kube/config STEP: sleep for 10s to wait for potential crd openapi publishing alpha feature STEP: successfully create CR May 21 16:39:28.857: INFO: Running 
'/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=kubectl-1245 create --validate=true -f -' May 21 16:39:29.319: INFO: stderr: "" May 21 16:39:29.319: INFO: stdout: "e2e-test-kubectl-5269-crd.kubectl.example.com/test-cr created\n" May 21 16:39:29.319: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=kubectl-1245 delete e2e-test-kubectl-5269-crds test-cr' May 21 16:39:29.450: INFO: stderr: "" May 21 16:39:29.450: INFO: stdout: "e2e-test-kubectl-5269-crd.kubectl.example.com \"test-cr\" deleted\n" STEP: successfully apply CR May 21 16:39:29.450: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=kubectl-1245 apply --validate=true -f -' May 21 16:39:29.733: INFO: stderr: "" May 21 16:39:29.733: INFO: stdout: "e2e-test-kubectl-5269-crd.kubectl.example.com/test-cr created\n" May 21 16:39:29.733: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=kubectl-1245 delete e2e-test-kubectl-5269-crds test-cr' May 21 16:39:29.866: INFO: stderr: "" May 21 16:39:29.866: INFO: stdout: "e2e-test-kubectl-5269-crd.kubectl.example.com \"test-cr\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 16:39:30.375: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1245" for this suite. 
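The CR above is created and applied with `--validate=true` even though it carries arbitrary extra properties, because the CRD's partially-specified schema preserves unknown fields. A deliberately tiny sketch of that idea, using a hypothetical schema dict rather than the real OpenAPI validator:

```python
def validate_fields(obj, schema):
    """Return the unknown top-level field names in obj, or an
    empty list when the (hypothetical) schema dict allows extras
    via x-kubernetes-preserve-unknown-fields. A loose echo of why
    a partially-specified CRD schema accepts arbitrary extra
    properties; not the actual client-side validation code."""
    if schema.get("x-kubernetes-preserve-unknown-fields"):
        return []
    allowed = set(schema.get("properties", {}))
    return [key for key in obj if key not in allowed]
```

With the preserve flag set, any field passes; without it, fields outside `properties` would be flagged.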
• [SLOW TEST:12.249 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl client-side validation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:981 should create/apply a valid CR with arbitrary-extra properties for CRD with partially-specified validation schema /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1026 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl client-side validation should create/apply a valid CR with arbitrary-extra properties for CRD with partially-specified validation schema","total":-1,"completed":3,"skipped":274,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ May 21 16:39:30.480: INFO: Running AfterSuite actions on all nodes [BeforeEach] [sig-cli] Kubectl Port forwarding /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 16:39:18.542: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename port-forwarding STEP: Waiting for a default service account to be provisioned in namespace [It] should support a client that connects, sends DATA, and disconnects /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:463 STEP: Creating the target pod May 21 16:39:18.580: INFO: The status of Pod pfpod is Pending, waiting for it to be Running (with Ready = true) May 21 16:39:20.583: INFO: The status of Pod pfpod is Pending, waiting for it to be Running (with Ready = true) May 21 16:39:22.584: INFO: The status of Pod pfpod is Running (Ready = false) May 21 16:39:24.584: INFO: The status of Pod pfpod is Running (Ready = false) May 21 16:39:26.584: INFO: The status of Pod pfpod is 
Running (Ready = true) STEP: Running 'kubectl port-forward' May 21 16:39:26.584: INFO: starting port-forward command and streaming output May 21 16:39:26.584: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=port-forwarding-391 port-forward --namespace=port-forwarding-391 pfpod :80' May 21 16:39:26.590: INFO: reading from `kubectl port-forward` command's stdout STEP: Dialing the local port STEP: Reading data from the local port STEP: Waiting for the target pod to stop running May 21 16:39:28.672: INFO: Waiting up to 5m0s for pod "pfpod" in namespace "port-forwarding-391" to be "container terminated" May 21 16:39:28.675: INFO: Pod "pfpod": Phase="Running", Reason="", readiness=true. Elapsed: 3.514832ms May 21 16:39:30.679: INFO: Pod "pfpod": Phase="Running", Reason="", readiness=false. Elapsed: 2.007067678s May 21 16:39:30.679: INFO: Pod "pfpod" satisfied condition "container terminated" STEP: Verifying logs STEP: Closing the connection to the local port [AfterEach] [sig-cli] Kubectl Port forwarding /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 16:39:30.693: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "port-forwarding-391" for this suite. 
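The "container terminated" wait above is the framework's generic poll loop: re-read the pod status roughly every 2 seconds (visible in the log timestamps) until a predicate holds, reporting the elapsed time. Sketched in Python with the live API reads stubbed out as an iterable of statuses:

```python
def wait_for(condition, poll_results, interval=2.0):
    """Sketch of the e2e framework's poll loop: evaluate
    `condition` against each successive status until it reports
    done, returning (elapsed_seconds, final_status).
    `poll_results` stands in for repeated API reads; `interval`
    mirrors the ~2s spacing seen in the log."""
    elapsed = 0.0
    for status in poll_results:
        if condition(status):
            return elapsed, status
        elapsed += interval
    raise TimeoutError("condition not met before polling ended")
```

Here the predicate would be "container terminated" (readiness dropping to false once the pod's container exits), matching the two-sample wait recorded above.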
• [SLOW TEST:12.160 seconds] [sig-cli] Kubectl Port forwarding /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 With a server listening on 0.0.0.0 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:452 that expects NO client request /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:462 should support a client that connects, sends DATA, and disconnects /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:463 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl Port forwarding With a server listening on 0.0.0.0 that expects NO client request should support a client that connects, sends DATA, and disconnects","total":-1,"completed":2,"skipped":194,"failed":0} May 21 16:39:30.704: INFO: Running AfterSuite actions on all nodes [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 16:39:20.142: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247 [It] should create/apply a CR with unknown fields for CRD with no validation schema /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:982 STEP: create CRD with no validation schema May 21 16:39:20.168: INFO: >>> kubeConfig: /root/.kube/config STEP: sleep for 10s to wait for potential crd openapi publishing alpha feature STEP: successfully create CR May 21 16:39:30.684: INFO: Running '/usr/local/bin/kubectl 
--server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=kubectl-3809 create --validate=true -f -' May 21 16:39:30.998: INFO: stderr: "" May 21 16:39:30.998: INFO: stdout: "e2e-test-kubectl-2684-crd.kubectl.example.com/test-cr created\n" May 21 16:39:30.998: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=kubectl-3809 delete e2e-test-kubectl-2684-crds test-cr' May 21 16:39:31.121: INFO: stderr: "" May 21 16:39:31.121: INFO: stdout: "e2e-test-kubectl-2684-crd.kubectl.example.com \"test-cr\" deleted\n" STEP: successfully apply CR May 21 16:39:31.121: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=kubectl-3809 apply --validate=true -f -' May 21 16:39:31.391: INFO: stderr: "" May 21 16:39:31.391: INFO: stdout: "e2e-test-kubectl-2684-crd.kubectl.example.com/test-cr created\n" May 21 16:39:31.391: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=kubectl-3809 delete e2e-test-kubectl-2684-crds test-cr' May 21 16:39:31.515: INFO: stderr: "" May 21 16:39:31.515: INFO: stdout: "e2e-test-kubectl-2684-crd.kubectl.example.com \"test-cr\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 16:39:32.025: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3809" for this suite. 
• [SLOW TEST:11.892 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl client-side validation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:981 should create/apply a CR with unknown fields for CRD with no validation schema /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:982 ------------------------------ [BeforeEach] [sig-cli] Kubectl Port forwarding /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 16:39:29.193: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename port-forwarding STEP: Waiting for a default service account to be provisioned in namespace [It] should support forwarding over websockets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:490 May 21 16:39:29.222: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating the pod May 21 16:39:29.234: INFO: The status of Pod pfpod is Pending, waiting for it to be Running (with Ready = true) May 21 16:39:31.237: INFO: The status of Pod pfpod is Pending, waiting for it to be Running (with Ready = true) May 21 16:39:33.238: INFO: The status of Pod pfpod is Running (Ready = false) May 21 16:39:35.238: INFO: The status of Pod pfpod is Running (Ready = false) May 21 16:39:37.239: INFO: The status of Pod pfpod is Running (Ready = true) STEP: Sending the expected data to the local port STEP: Reading data from the local port STEP: Verifying logs [AfterEach] [sig-cli] Kubectl Port forwarding /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 16:39:37.279: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"port-forwarding-4899" for this suite. • [SLOW TEST:8.098 seconds] [sig-cli] Kubectl Port forwarding /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 With a server listening on localhost /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:474 should support forwarding over websockets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:490 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl Port forwarding With a server listening on localhost should support forwarding over websockets","total":-1,"completed":3,"skipped":377,"failed":0} May 21 16:39:37.293: INFO: Running AfterSuite actions on all nodes [BeforeEach] [sig-cli] Kubectl Port forwarding /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 16:39:25.956: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename port-forwarding STEP: Waiting for a default service account to be provisioned in namespace [It] should support a client that connects, sends DATA, and disconnects /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:485 STEP: Creating the target pod May 21 16:39:25.994: INFO: The status of Pod pfpod is Pending, waiting for it to be Running (with Ready = true) May 21 16:39:27.999: INFO: The status of Pod pfpod is Pending, waiting for it to be Running (with Ready = true) May 21 16:39:29.998: INFO: The status of Pod pfpod is Running (Ready = false) May 21 16:39:31.998: INFO: The status of Pod pfpod is Running (Ready = false) May 21 16:39:33.998: INFO: The status of Pod pfpod is Running (Ready = true) STEP: Running 'kubectl port-forward' May 21 16:39:33.998: INFO: starting port-forward command and streaming output May 21 16:39:33.998: 
INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=port-forwarding-8136 port-forward --namespace=port-forwarding-8136 pfpod :80' May 21 16:39:33.998: INFO: reading from `kubectl port-forward` command's stdout STEP: Dialing the local port STEP: Reading data from the local port STEP: Waiting for the target pod to stop running May 21 16:39:36.072: INFO: Waiting up to 5m0s for pod "pfpod" in namespace "port-forwarding-8136" to be "container terminated" May 21 16:39:36.075: INFO: Pod "pfpod": Phase="Running", Reason="", readiness=true. Elapsed: 3.27941ms May 21 16:39:38.079: INFO: Pod "pfpod": Phase="Running", Reason="", readiness=false. Elapsed: 2.007036885s May 21 16:39:38.079: INFO: Pod "pfpod" satisfied condition "container terminated" STEP: Verifying logs STEP: Closing the connection to the local port [AfterEach] [sig-cli] Kubectl Port forwarding /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 16:39:38.092: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "port-forwarding-8136" for this suite. 
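The readiness polling visible in this test's log (a bounded wait that re-checks pod phase and readiness roughly every two seconds, logging "The status of Pod pfpod is …" each pass) can be sketched outside the e2e framework. This is a minimal illustration, not the framework's actual API; the `get_status` callback and its `(phase, ready)` return shape are assumptions:

```python
import time

def wait_for_running_and_ready(get_status, timeout_s=300, interval_s=2.0):
    """Poll until get_status() reports ("Running", True) or timeout_s elapses.

    get_status is an illustrative stand-in for an API call returning the
    pod's (phase, ready) pair, e.g. ("Pending", False) or ("Running", True).
    Returns the elapsed seconds on success; raises TimeoutError otherwise.
    """
    start = time.monotonic()
    while True:
        phase, ready = get_status()
        elapsed = time.monotonic() - start
        if phase == "Running" and ready:
            return elapsed
        if elapsed >= timeout_s:
            raise TimeoutError(f"pod not running and ready after {timeout_s}s")
        time.sleep(interval_s)
```

Each iteration of an equivalent loop corresponds to one status line in the log above, which is why a pod that needs ~8 seconds to become Ready produces four or five "Running (Ready = false)" entries.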
• [SLOW TEST:12.144 seconds] [sig-cli] Kubectl Port forwarding /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 With a server listening on localhost /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:474 that expects NO client request /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:484 should support a client that connects, sends DATA, and disconnects /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:485 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl Port forwarding With a server listening on localhost that expects NO client request should support a client that connects, sends DATA, and disconnects","total":-1,"completed":6,"skipped":1407,"failed":0} May 21 16:39:38.102: INFO: Running AfterSuite actions on all nodes [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 16:39:06.746: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247 [BeforeEach] Simple pod /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:384 STEP: creating the pod from May 21 16:39:06.777: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=kubectl-3388 create -f -' May 21 16:39:07.056: INFO: stderr: "" May 21 16:39:07.056: INFO: stdout: "pod/httpd created\n" May 21 16:39:07.056: INFO: Waiting up to 5m0s for 1 pods to be running 
and ready: [httpd] May 21 16:39:07.056: INFO: Waiting up to 5m0s for pod "httpd" in namespace "kubectl-3388" to be "running and ready" May 21 16:39:07.059: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 3.081557ms May 21 16:39:09.063: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 2.007160998s May 21 16:39:11.068: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 4.012235355s May 21 16:39:13.072: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 6.01600279s May 21 16:39:15.075: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 8.019611727s May 21 16:39:17.079: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 10.022908634s May 21 16:39:19.082: INFO: Pod "httpd": Phase="Running", Reason="", readiness=true. Elapsed: 12.026288561s May 21 16:39:19.082: INFO: Pod "httpd" satisfied condition "running and ready" May 21 16:39:19.082: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [httpd] [It] should support inline execution and attach /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:551 STEP: executing a command with run and attach with stdin May 21 16:39:19.082: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=kubectl-3388 run run-test --image=docker.io/library/busybox:1.29 --restart=OnFailure --attach=true --stdin -- sh -c while [ -z "$s" ]; do read s; sleep 1; done; echo read:$s && cat && echo 'stdin closed'' May 21 16:39:25.048: INFO: stderr: "If you don't see a command prompt, try pressing enter.\n" May 21 16:39:25.048: INFO: stdout: "read:value\nabcd1234stdin closed\n" STEP: executing a command with run and attach without stdin May 21 16:39:25.056: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=kubectl-3388 run run-test-2 --image=docker.io/library/busybox:1.29 --restart=OnFailure --attach=true --leave-stdin-open=true -- sh -c sleep 10; cat && echo 'stdin closed'' May 21 16:39:36.022: INFO: stderr: "If you don't see a command prompt, try pressing enter.\n" May 21 16:39:36.022: INFO: stdout: "stdin closed\n" STEP: executing a command with run and attach with stdin with open stdin should remain running May 21 16:39:36.028: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=kubectl-3388 run run-test-3 --image=docker.io/library/busybox:1.29 --restart=OnFailure --attach=true --leave-stdin-open=true --stdin -- sh -c cat && echo 'stdin closed'' May 21 16:39:38.004: INFO: stderr: "If you don't see a command prompt, try pressing enter.\n" May 21 16:39:38.004: INFO: stdout: "" May 21 16:39:38.008: INFO: Waiting up to 1m0s for 1 pods to be running and ready: [run-test-3] May 21 16:39:38.008: INFO: Waiting up to 1m0s for pod "run-test-3" in namespace "kubectl-3388" to be "running and 
ready" May 21 16:39:38.012: INFO: Pod "run-test-3": Phase="Running", Reason="", readiness=true. Elapsed: 3.377081ms May 21 16:39:38.012: INFO: Pod "run-test-3" satisfied condition "running and ready" May 21 16:39:38.012: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [run-test-3] May 21 16:39:38.012: INFO: Waiting up to 1s for 1 pods to be running and ready: [run-test-3] May 21 16:39:38.012: INFO: Waiting up to 1s for pod "run-test-3" in namespace "kubectl-3388" to be "running and ready" May 21 16:39:38.015: INFO: Pod "run-test-3": Phase="Running", Reason="", readiness=true. Elapsed: 3.349063ms May 21 16:39:38.015: INFO: Pod "run-test-3" satisfied condition "running and ready" May 21 16:39:38.015: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [run-test-3] May 21 16:39:38.015: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=kubectl-3388 logs run-test-3' May 21 16:39:38.164: INFO: stderr: "" May 21 16:39:38.164: INFO: stdout: "abcd1234\n" [AfterEach] Simple pod /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:390 STEP: using delete to clean up resources May 21 16:39:38.169: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=kubectl-3388 delete --grace-period=0 --force -f -' May 21 16:39:38.298: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" May 21 16:39:38.298: INFO: stdout: "pod \"httpd\" force deleted\n" May 21 16:39:38.298: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=kubectl-3388 get rc,svc -l name=httpd --no-headers' May 21 16:39:38.426: INFO: stderr: "No resources found in kubectl-3388 namespace.\n" May 21 16:39:38.426: INFO: stdout: "" May 21 16:39:38.426: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=kubectl-3388 get pods -l name=httpd -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 21 16:39:38.542: INFO: stderr: "" May 21 16:39:38.542: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 16:39:38.542: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3388" for this suite. 
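The cleanup step above verifies deletion with a go-template that prints only pods whose metadata lacks a `deletionTimestamp`. The same filter is straightforward to express against the JSON that a `kubectl get pods -o json` call returns; the sample pod objects in the usage below are assumptions for illustration:

```python
def pods_not_terminating(pod_list):
    """Return names of pods with no metadata.deletionTimestamp set,
    mirroring the go-template filter used in the log:
    {{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ end }}
    """
    return [
        item["metadata"]["name"]
        for item in pod_list.get("items", [])
        if not item["metadata"].get("deletionTimestamp")
    ]
```

An empty result, matching the empty stdout in the log, means every pod selected by the label is either already terminating or gone.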
• [SLOW TEST:31.805 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Simple pod /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:382 should support inline execution and attach /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:551 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Simple pod should support inline execution and attach","total":-1,"completed":2,"skipped":442,"failed":0} May 21 16:39:38.553: INFO: Running AfterSuite actions on all nodes [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 16:39:16.865: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247 [BeforeEach] Simple pod /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:384 STEP: creating the pod from May 21 16:39:16.895: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=kubectl-1290 create -f -' May 21 16:39:17.168: INFO: stderr: "" May 21 16:39:17.168: INFO: stdout: "pod/httpd created\n" May 21 16:39:17.168: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [httpd] May 21 16:39:17.168: INFO: Waiting up to 5m0s for pod "httpd" in namespace "kubectl-1290" to be "running and ready" May 21 16:39:17.171: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.709669ms May 21 16:39:19.174: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005122572s May 21 16:39:21.177: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 4.008350676s May 21 16:39:23.180: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 6.01189941s May 21 16:39:25.183: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 8.014819885s May 21 16:39:27.188: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 10.019079337s May 21 16:39:29.191: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 12.022621148s May 21 16:39:31.195: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 14.026019936s May 21 16:39:33.198: INFO: Pod "httpd": Phase="Running", Reason="", readiness=true. Elapsed: 16.029614917s May 21 16:39:33.198: INFO: Pod "httpd" satisfied condition "running and ready" May 21 16:39:33.198: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [httpd] [It] should handle in-cluster config /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:635 STEP: adding rbac permissions May 21 16:39:33.207: INFO: Found ClusterRoles; assuming RBAC is enabled. 
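The in-cluster config steps that follow drive kubectl from inside the pod, where the API server address comes from the `KUBERNETES_SERVICE_HOST` / `KUBERNETES_SERVICE_PORT` environment variables injected into every pod (here `10.96.0.1` and `443`) rather than from a kubeconfig. A minimal sketch of that address derivation, assuming only those two variables:

```python
import os

def in_cluster_server(env=os.environ):
    """Build the API server URL the way an in-cluster client does: from the
    KUBERNETES_SERVICE_HOST / KUBERNETES_SERVICE_PORT environment variables
    that the kubelet injects into every pod."""
    host = env.get("KUBERNETES_SERVICE_HOST")
    port = env.get("KUBERNETES_SERVICE_PORT")
    if not host or not port:
        raise RuntimeError("not running inside a cluster")
    return f"https://{host}:{port}"
```

With the values printed by the test's `printenv` calls, this yields `https://10.96.0.1:443`, which is exactly the server seen in the subsequent in-cluster GET log lines.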
STEP: overriding icc with values provided by flags May 21 16:39:33.316: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=kubectl-1290 exec httpd -- /bin/sh -x -c printenv KUBERNETES_SERVICE_HOST' May 21 16:39:33.531: INFO: stderr: "+ printenv KUBERNETES_SERVICE_HOST\n" May 21 16:39:33.531: INFO: stdout: "10.96.0.1\n" May 21 16:39:33.531: INFO: stdout: 10.96.0.1 May 21 16:39:33.531: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=kubectl-1290 exec httpd -- /bin/sh -x -c printenv KUBERNETES_SERVICE_PORT' May 21 16:39:33.766: INFO: stderr: "+ printenv KUBERNETES_SERVICE_PORT\n" May 21 16:39:33.766: INFO: stdout: "443\n" May 21 16:39:33.766: INFO: stdout: 443 May 21 16:39:33.766: INFO: copying /usr/local/bin/kubectl to the httpd pod May 21 16:39:33.766: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=kubectl-1290 cp /usr/local/bin/kubectl kubectl-1290/httpd:/tmp/' May 21 16:39:34.201: INFO: stderr: "" May 21 16:39:34.201: INFO: stdout: "" May 21 16:39:34.202: INFO: copying override kubeconfig to the httpd pod May 21 16:39:34.202: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=kubectl-1290 cp /tmp/icc-override170884081/icc-override.kubeconfig kubectl-1290/httpd:/tmp/' May 21 16:39:34.565: INFO: stderr: "" May 21 16:39:34.565: INFO: stdout: "" May 21 16:39:34.565: INFO: copying configmap manifests to the httpd pod May 21 16:39:34.566: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=kubectl-1290 cp /tmp/icc-override170884081/invalid-configmap-with-namespace.yaml kubectl-1290/httpd:/tmp/' May 21 16:39:34.884: INFO: stderr: "" May 21 16:39:34.885: INFO: stdout: "" May 21 16:39:34.885: INFO: Running '/usr/local/bin/kubectl 
--server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=kubectl-1290 cp /tmp/icc-override170884081/invalid-configmap-without-namespace.yaml kubectl-1290/httpd:/tmp/' May 21 16:39:35.205: INFO: stderr: "" May 21 16:39:35.205: INFO: stdout: "" STEP: getting pods with in-cluster configs May 21 16:39:35.205: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=kubectl-1290 exec httpd -- /bin/sh -x -c /tmp/kubectl get pods --v=6 2>&1' May 21 16:39:35.641: INFO: stderr: "+ /tmp/kubectl get pods '--v=6'\n" May 21 16:39:35.641: INFO: stdout: "I0521 16:39:35.531806 153 merged_client_builder.go:163] Using in-cluster namespace\nI0521 16:39:35.532011 153 merged_client_builder.go:121] Using in-cluster configuration\nI0521 16:39:35.541886 153 round_trippers.go:444] GET https://10.96.0.1:443/api?timeout=32s 200 OK in 9 milliseconds\nI0521 16:39:35.545292 153 round_trippers.go:444] GET https://10.96.0.1:443/apis?timeout=32s 200 OK in 1 milliseconds\nI0521 16:39:35.550444 153 round_trippers.go:444] GET https://10.96.0.1:443/apis/extensions/v1beta1?timeout=32s 200 OK in 1 milliseconds\nI0521 16:39:35.550463 153 round_trippers.go:444] GET https://10.96.0.1:443/apis/events.k8s.io/v1beta1?timeout=32s 200 OK in 1 milliseconds\nI0521 16:39:35.550481 153 round_trippers.go:444] GET https://10.96.0.1:443/apis/authentication.k8s.io/v1beta1?timeout=32s 200 OK in 1 milliseconds\nI0521 16:39:35.551438 153 round_trippers.go:444] GET https://10.96.0.1:443/apis/admissionregistration.k8s.io/v1?timeout=32s 200 OK in 1 milliseconds\nI0521 16:39:35.551455 153 round_trippers.go:444] GET https://10.96.0.1:443/apis/apiregistration.k8s.io/v1?timeout=32s 200 OK in 2 milliseconds\nI0521 16:39:35.551770 153 round_trippers.go:444] GET https://10.96.0.1:443/apis/projectcontour.io/v1?timeout=32s 200 OK in 1 milliseconds\nI0521 16:39:35.551782 153 round_trippers.go:444] GET 
https://10.96.0.1:443/apis/rbac.authorization.k8s.io/v1beta1?timeout=32s 200 OK in 2 milliseconds\nI0521 16:39:35.551802 153 round_trippers.go:444] GET https://10.96.0.1:443/apis/storage.k8s.io/v1?timeout=32s 200 OK in 2 milliseconds\nI0521 16:39:35.551799 153 round_trippers.go:444] GET https://10.96.0.1:443/apis/rbac.authorization.k8s.io/v1?timeout=32s 200 OK in 1 milliseconds\nI0521 16:39:35.552016 153 round_trippers.go:444] GET https://10.96.0.1:443/apis/policy/v1beta1?timeout=32s 200 OK in 2 milliseconds\nI0521 16:39:35.552045 153 round_trippers.go:444] GET https://10.96.0.1:443/apis/coordination.k8s.io/v1?timeout=32s 200 OK in 2 milliseconds\nI0521 16:39:35.552519 153 round_trippers.go:444] GET https://10.96.0.1:443/api/v1?timeout=32s 200 OK in 3 milliseconds\nI0521 16:39:35.555355 153 round_trippers.go:444] GET https://10.96.0.1:443/apis/autoscaling/v1?timeout=32s 200 OK in 5 milliseconds\nI0521 16:39:35.555477 153 round_trippers.go:444] GET https://10.96.0.1:443/apis/admissionregistration.k8s.io/v1beta1?timeout=32s 200 OK in 5 milliseconds\nI0521 16:39:35.555593 153 round_trippers.go:444] GET https://10.96.0.1:443/apis/scheduling.k8s.io/v1?timeout=32s 200 OK in 6 milliseconds\nI0521 16:39:35.555939 153 round_trippers.go:444] GET https://10.96.0.1:443/apis/scheduling.k8s.io/v1beta1?timeout=32s 200 OK in 6 milliseconds\nI0521 16:39:35.555943 153 round_trippers.go:444] GET https://10.96.0.1:443/apis/discovery.k8s.io/v1beta1?timeout=32s 200 OK in 6 milliseconds\nI0521 16:39:35.555947 153 round_trippers.go:444] GET https://10.96.0.1:443/apis/events.k8s.io/v1?timeout=32s 200 OK in 6 milliseconds\nI0521 16:39:35.556165 153 round_trippers.go:444] GET https://10.96.0.1:443/apis/projectcontour.io/v1alpha1?timeout=32s 200 OK in 7 milliseconds\nI0521 16:39:35.556212 153 round_trippers.go:444] GET https://10.96.0.1:443/apis/apps/v1?timeout=32s 200 OK in 6 milliseconds\nI0521 16:39:35.556202 153 round_trippers.go:444] GET 
https://10.96.0.1:443/apis/k8s.cni.cncf.io/v1?timeout=32s 200 OK in 6 milliseconds\nI0521 16:39:35.556221 153 round_trippers.go:444] GET https://10.96.0.1:443/apis/apiregistration.k8s.io/v1beta1?timeout=32s 200 OK in 6 milliseconds\nI0521 16:39:35.556887 153 round_trippers.go:444] GET https://10.96.0.1:443/apis/storage.k8s.io/v1beta1?timeout=32s 200 OK in 7 milliseconds\nI0521 16:39:35.557065 153 round_trippers.go:444] GET https://10.96.0.1:443/apis/batch/v1beta1?timeout=32s 200 OK in 7 milliseconds\nI0521 16:39:35.557231 153 round_trippers.go:444] GET https://10.96.0.1:443/apis/authentication.k8s.io/v1?timeout=32s 200 OK in 7 milliseconds\nI0521 16:39:35.557708 153 round_trippers.go:444] GET https://10.96.0.1:443/apis/node.k8s.io/v1beta1?timeout=32s 200 OK in 8 milliseconds\nI0521 16:39:35.558014 153 round_trippers.go:444] GET https://10.96.0.1:443/apis/apiextensions.k8s.io/v1beta1?timeout=32s 200 OK in 8 milliseconds\nI0521 16:39:35.558490 153 round_trippers.go:444] GET https://10.96.0.1:443/apis/networking.k8s.io/v1?timeout=32s 200 OK in 8 milliseconds\nI0521 16:39:35.558802 153 round_trippers.go:444] GET https://10.96.0.1:443/apis/authorization.k8s.io/v1?timeout=32s 200 OK in 9 milliseconds\nI0521 16:39:35.558832 153 round_trippers.go:444] GET https://10.96.0.1:443/apis/certificates.k8s.io/v1?timeout=32s 200 OK in 9 milliseconds\nI0521 16:39:35.559348 153 round_trippers.go:444] GET https://10.96.0.1:443/apis/autoscaling/v2beta1?timeout=32s 200 OK in 9 milliseconds\nI0521 16:39:35.559515 153 round_trippers.go:444] GET https://10.96.0.1:443/apis/networking.k8s.io/v1beta1?timeout=32s 200 OK in 9 milliseconds\nI0521 16:39:35.559620 153 round_trippers.go:444] GET https://10.96.0.1:443/apis/authorization.k8s.io/v1beta1?timeout=32s 200 OK in 9 milliseconds\nI0521 16:39:35.559818 153 round_trippers.go:444] GET https://10.96.0.1:443/apis/autoscaling/v2beta2?timeout=32s 200 OK in 10 milliseconds\nI0521 16:39:35.559914 153 round_trippers.go:444] GET 
https://10.96.0.1:443/apis/apiextensions.k8s.io/v1?timeout=32s 200 OK in 9 milliseconds\nI0521 16:39:35.559943 153 round_trippers.go:444] GET https://10.96.0.1:443/apis/batch/v1?timeout=32s 200 OK in 10 milliseconds\nI0521 16:39:35.560237 153 round_trippers.go:444] GET https://10.96.0.1:443/apis/coordination.k8s.io/v1beta1?timeout=32s 200 OK in 10 milliseconds\nI0521 16:39:35.560393 153 round_trippers.go:444] GET https://10.96.0.1:443/apis/certificates.k8s.io/v1beta1?timeout=32s 200 OK in 10 milliseconds\nI0521 16:39:35.614829 153 merged_client_builder.go:121] Using in-cluster configuration\nI0521 16:39:35.619310 153 merged_client_builder.go:121] Using in-cluster configuration\nI0521 16:39:35.622938 153 round_trippers.go:444] GET https://10.96.0.1:443/api/v1/namespaces/kubectl-1290/pods?limit=500 200 OK in 3 milliseconds\nNAME READY STATUS RESTARTS AGE\nhttpd 1/1 Running 0 18s\n" May 21 16:39:35.642: INFO: stdout: I0521 16:39:35.531806 153 merged_client_builder.go:163] Using in-cluster namespace I0521 16:39:35.532011 153 merged_client_builder.go:121] Using in-cluster configuration I0521 16:39:35.541886 153 round_trippers.go:444] GET https://10.96.0.1:443/api?timeout=32s 200 OK in 9 milliseconds I0521 16:39:35.545292 153 round_trippers.go:444] GET https://10.96.0.1:443/apis?timeout=32s 200 OK in 1 milliseconds I0521 16:39:35.550444 153 round_trippers.go:444] GET https://10.96.0.1:443/apis/extensions/v1beta1?timeout=32s 200 OK in 1 milliseconds I0521 16:39:35.550463 153 round_trippers.go:444] GET https://10.96.0.1:443/apis/events.k8s.io/v1beta1?timeout=32s 200 OK in 1 milliseconds I0521 16:39:35.550481 153 round_trippers.go:444] GET https://10.96.0.1:443/apis/authentication.k8s.io/v1beta1?timeout=32s 200 OK in 1 milliseconds I0521 16:39:35.551438 153 round_trippers.go:444] GET https://10.96.0.1:443/apis/admissionregistration.k8s.io/v1?timeout=32s 200 OK in 1 milliseconds I0521 16:39:35.551455 153 round_trippers.go:444] GET 
https://10.96.0.1:443/apis/apiregistration.k8s.io/v1?timeout=32s 200 OK in 2 milliseconds I0521 16:39:35.551770 153 round_trippers.go:444] GET https://10.96.0.1:443/apis/projectcontour.io/v1?timeout=32s 200 OK in 1 milliseconds I0521 16:39:35.551782 153 round_trippers.go:444] GET https://10.96.0.1:443/apis/rbac.authorization.k8s.io/v1beta1?timeout=32s 200 OK in 2 milliseconds I0521 16:39:35.551802 153 round_trippers.go:444] GET https://10.96.0.1:443/apis/storage.k8s.io/v1?timeout=32s 200 OK in 2 milliseconds I0521 16:39:35.551799 153 round_trippers.go:444] GET https://10.96.0.1:443/apis/rbac.authorization.k8s.io/v1?timeout=32s 200 OK in 1 milliseconds I0521 16:39:35.552016 153 round_trippers.go:444] GET https://10.96.0.1:443/apis/policy/v1beta1?timeout=32s 200 OK in 2 milliseconds I0521 16:39:35.552045 153 round_trippers.go:444] GET https://10.96.0.1:443/apis/coordination.k8s.io/v1?timeout=32s 200 OK in 2 milliseconds I0521 16:39:35.552519 153 round_trippers.go:444] GET https://10.96.0.1:443/api/v1?timeout=32s 200 OK in 3 milliseconds I0521 16:39:35.555355 153 round_trippers.go:444] GET https://10.96.0.1:443/apis/autoscaling/v1?timeout=32s 200 OK in 5 milliseconds I0521 16:39:35.555477 153 round_trippers.go:444] GET https://10.96.0.1:443/apis/admissionregistration.k8s.io/v1beta1?timeout=32s 200 OK in 5 milliseconds I0521 16:39:35.555593 153 round_trippers.go:444] GET https://10.96.0.1:443/apis/scheduling.k8s.io/v1?timeout=32s 200 OK in 6 milliseconds I0521 16:39:35.555939 153 round_trippers.go:444] GET https://10.96.0.1:443/apis/scheduling.k8s.io/v1beta1?timeout=32s 200 OK in 6 milliseconds I0521 16:39:35.555943 153 round_trippers.go:444] GET https://10.96.0.1:443/apis/discovery.k8s.io/v1beta1?timeout=32s 200 OK in 6 milliseconds I0521 16:39:35.555947 153 round_trippers.go:444] GET https://10.96.0.1:443/apis/events.k8s.io/v1?timeout=32s 200 OK in 6 milliseconds I0521 16:39:35.556165 153 round_trippers.go:444] GET 
https://10.96.0.1:443/apis/projectcontour.io/v1alpha1?timeout=32s 200 OK in 7 milliseconds I0521 16:39:35.556212 153 round_trippers.go:444] GET https://10.96.0.1:443/apis/apps/v1?timeout=32s 200 OK in 6 milliseconds I0521 16:39:35.556202 153 round_trippers.go:444] GET https://10.96.0.1:443/apis/k8s.cni.cncf.io/v1?timeout=32s 200 OK in 6 milliseconds I0521 16:39:35.556221 153 round_trippers.go:444] GET https://10.96.0.1:443/apis/apiregistration.k8s.io/v1beta1?timeout=32s 200 OK in 6 milliseconds I0521 16:39:35.556887 153 round_trippers.go:444] GET https://10.96.0.1:443/apis/storage.k8s.io/v1beta1?timeout=32s 200 OK in 7 milliseconds I0521 16:39:35.557065 153 round_trippers.go:444] GET https://10.96.0.1:443/apis/batch/v1beta1?timeout=32s 200 OK in 7 milliseconds I0521 16:39:35.557231 153 round_trippers.go:444] GET https://10.96.0.1:443/apis/authentication.k8s.io/v1?timeout=32s 200 OK in 7 milliseconds I0521 16:39:35.557708 153 round_trippers.go:444] GET https://10.96.0.1:443/apis/node.k8s.io/v1beta1?timeout=32s 200 OK in 8 milliseconds I0521 16:39:35.558014 153 round_trippers.go:444] GET https://10.96.0.1:443/apis/apiextensions.k8s.io/v1beta1?timeout=32s 200 OK in 8 milliseconds I0521 16:39:35.558490 153 round_trippers.go:444] GET https://10.96.0.1:443/apis/networking.k8s.io/v1?timeout=32s 200 OK in 8 milliseconds I0521 16:39:35.558802 153 round_trippers.go:444] GET https://10.96.0.1:443/apis/authorization.k8s.io/v1?timeout=32s 200 OK in 9 milliseconds I0521 16:39:35.558832 153 round_trippers.go:444] GET https://10.96.0.1:443/apis/certificates.k8s.io/v1?timeout=32s 200 OK in 9 milliseconds I0521 16:39:35.559348 153 round_trippers.go:444] GET https://10.96.0.1:443/apis/autoscaling/v2beta1?timeout=32s 200 OK in 9 milliseconds I0521 16:39:35.559515 153 round_trippers.go:444] GET https://10.96.0.1:443/apis/networking.k8s.io/v1beta1?timeout=32s 200 OK in 9 milliseconds I0521 16:39:35.559620 153 round_trippers.go:444] GET 
https://10.96.0.1:443/apis/authorization.k8s.io/v1beta1?timeout=32s 200 OK in 9 milliseconds I0521 16:39:35.559818 153 round_trippers.go:444] GET https://10.96.0.1:443/apis/autoscaling/v2beta2?timeout=32s 200 OK in 10 milliseconds I0521 16:39:35.559914 153 round_trippers.go:444] GET https://10.96.0.1:443/apis/apiextensions.k8s.io/v1?timeout=32s 200 OK in 9 milliseconds I0521 16:39:35.559943 153 round_trippers.go:444] GET https://10.96.0.1:443/apis/batch/v1?timeout=32s 200 OK in 10 milliseconds I0521 16:39:35.560237 153 round_trippers.go:444] GET https://10.96.0.1:443/apis/coordination.k8s.io/v1beta1?timeout=32s 200 OK in 10 milliseconds I0521 16:39:35.560393 153 round_trippers.go:444] GET https://10.96.0.1:443/apis/certificates.k8s.io/v1beta1?timeout=32s 200 OK in 10 milliseconds I0521 16:39:35.614829 153 merged_client_builder.go:121] Using in-cluster configuration I0521 16:39:35.619310 153 merged_client_builder.go:121] Using in-cluster configuration I0521 16:39:35.622938 153 round_trippers.go:444] GET https://10.96.0.1:443/api/v1/namespaces/kubectl-1290/pods?limit=500 200 OK in 3 milliseconds NAME READY STATUS RESTARTS AGE httpd 1/1 Running 0 18s STEP: creating an object containing a namespace with in-cluster config May 21 16:39:35.642: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=kubectl-1290 exec httpd -- /bin/sh -x -c /tmp/kubectl create -f /tmp/invalid-configmap-with-namespace.yaml --v=6 2>&1' May 21 16:39:36.203: INFO: rc: 255 STEP: creating an object not containing a namespace with in-cluster config May 21 16:39:36.203: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=kubectl-1290 exec httpd -- /bin/sh -x -c /tmp/kubectl create -f /tmp/invalid-configmap-without-namespace.yaml --v=6 2>&1' May 21 16:39:36.755: INFO: rc: 255 STEP: trying to use kubectl with invalid token May 21 16:39:36.756: INFO: Running 
'/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=kubectl-1290 exec httpd -- /bin/sh -x -c /tmp/kubectl get pods --token=invalid --v=7 2>&1' May 21 16:39:37.074: INFO: rc: 255 May 21 16:39:37.074: INFO: got err error running /usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=kubectl-1290 exec httpd -- /bin/sh -x -c /tmp/kubectl get pods --token=invalid --v=7 2>&1: Command stdout: I0521 16:39:37.040607 252 merged_client_builder.go:163] Using in-cluster namespace I0521 16:39:37.041022 252 merged_client_builder.go:121] Using in-cluster configuration I0521 16:39:37.044373 252 merged_client_builder.go:121] Using in-cluster configuration I0521 16:39:37.051525 252 merged_client_builder.go:121] Using in-cluster configuration I0521 16:39:37.052012 252 round_trippers.go:421] GET https://10.96.0.1:443/api/v1/namespaces/kubectl-1290/pods?limit=500 I0521 16:39:37.052040 252 round_trippers.go:428] Request Headers: I0521 16:39:37.052053 252 round_trippers.go:432] Authorization: Bearer I0521 16:39:37.052063 252 round_trippers.go:432] Accept: application/json;as=Table;v=v1;g=meta.k8s.io,application/json;as=Table;v=v1beta1;g=meta.k8s.io,application/json I0521 16:39:37.052074 252 round_trippers.go:432] User-Agent: kubectl/v1.19.11 (linux/amd64) kubernetes/c6a2f08 I0521 16:39:37.061030 252 round_trippers.go:447] Response Status: 401 Unauthorized in 8 milliseconds I0521 16:39:37.061501 252 helpers.go:216] server response object: [{ "kind": "Status", "apiVersion": "v1", "metadata": {}, "status": "Failure", "message": "Unauthorized", "reason": "Unauthorized", "code": 401 }] F0521 16:39:37.061539 252 helpers.go:115] error: You must be logged in to the server (Unauthorized) goroutine 1 [running]: k8s.io/kubernetes/vendor/k8s.io/klog/v2.stacks(0xc00000e001, 0xc000c2c1c0, 0x68, 0x1af) 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:996 +0xb9 k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).output(0x2d03b80, 0xc000000003, 0x0, 0x0, 0xc0002d0e00, 0x2ae3039, 0xa, 0x73, 0x40b300) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:945 +0x191 k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).printDepth(0x2d03b80, 0x3, 0x0, 0x0, 0x2, 0xc000eb3ac8, 0x1, 0x1) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:718 +0x165 k8s.io/kubernetes/vendor/k8s.io/klog/v2.FatalDepth(...) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1442 k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util.fatal(0xc0001283c0, 0x3a, 0x1) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util/helpers.go:93 +0x1f0 k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util.checkErr(0x1e5d900, 0xc000952ba0, 0x1d06430) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util/helpers.go:177 +0x8b5 k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util.CheckErr(...) 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util/helpers.go:115 k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/get.NewCmdGet.func1(0xc0004878c0, 0xc000862e40, 0x1, 0x3) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/get/get.go:167 +0x159 k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).execute(0xc0004878c0, 0xc000862e10, 0x3, 0x3, 0xc0004878c0, 0xc000862e10) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:846 +0x2c2 k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).ExecuteC(0xc00004adc0, 0xc000154180, 0xc00003a0a0, 0x5) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:950 +0x375 k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).Execute(...) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:887 main.main() _output/dockerized/go/src/k8s.io/kubernetes/cmd/kubectl/kubectl.go:49 +0x21d goroutine 6 [chan receive]: k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).flushDaemon(0x2d03b80) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1131 +0x8b created by k8s.io/kubernetes/vendor/k8s.io/klog/v2.init.0 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:416 +0xd8 goroutine 115 [select]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x1d06368, 0x1e5b360, 0xc000ac4000, 0x1, 0xc000046120) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:167 +0x149 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x1d06368, 0x12a05f200, 0x0, 0x1, 0xc000046120) 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133 +0x98 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Until(0x1d06368, 0x12a05f200, 0xc000046120) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:90 +0x4d created by k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/util/logs.InitLogs /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/util/logs/logs.go:51 +0x96 goroutine 245 [IO wait]: internal/poll.runtime_pollWait(0x7f7552373f50, 0x72, 0x1e5e540) /usr/local/go/src/runtime/netpoll.go:222 +0x55 internal/poll.(*pollDesc).wait(0xc0011a4c98, 0x72, 0x1e5e500, 0x2b0e610, 0x0) /usr/local/go/src/internal/poll/fd_poll_runtime.go:87 +0x45 internal/poll.(*pollDesc).waitRead(...) /usr/local/go/src/internal/poll/fd_poll_runtime.go:92 internal/poll.(*FD).Read(0xc0011a4c80, 0xc00028e900, 0x8ec, 0x8ec, 0x0, 0x0, 0x0) /usr/local/go/src/internal/poll/fd_unix.go:159 +0x1a5 net.(*netFD).Read(0xc0011a4c80, 0xc00028e900, 0x8ec, 0x8ec, 0x203000, 0x65449b, 0xc00104e4e0) /usr/local/go/src/net/fd_posix.go:55 +0x4f net.(*conn).Read(0xc000f9c170, 0xc00028e900, 0x8ec, 0x8ec, 0x0, 0x0, 0x0) /usr/local/go/src/net/net.go:182 +0x8e crypto/tls.(*atLeastReader).Read(0xc000952400, 0xc00028e900, 0x8ec, 0x8ec, 0x9b, 0x8e7, 0xc0001db708) /usr/local/go/src/crypto/tls/conn.go:779 +0x62 bytes.(*Buffer).ReadFrom(0xc00104e600, 0x1e59d60, 0xc000952400, 0x40b685, 0x1a185e0, 0x1b94b00) /usr/local/go/src/bytes/buffer.go:204 +0xb1 crypto/tls.(*Conn).readFromUntil(0xc00104e380, 0x1e5c8a0, 0xc000f9c170, 0x5, 0xc000f9c170, 0x8a) /usr/local/go/src/crypto/tls/conn.go:801 +0xf3 crypto/tls.(*Conn).readRecordOrCCS(0xc00104e380, 0x0, 0x0, 0xc000faf920) /usr/local/go/src/crypto/tls/conn.go:608 +0x115 crypto/tls.(*Conn).readRecord(...) 
/usr/local/go/src/crypto/tls/conn.go:576 crypto/tls.(*Conn).Read(0xc00104e380, 0xc000d24000, 0x1000, 0x1000, 0x0, 0x0, 0x0) /usr/local/go/src/crypto/tls/conn.go:1252 +0x15f bufio.(*Reader).Read(0xc000f9f440, 0xc000624818, 0x9, 0x9, 0xc000f9f4a0, 0xc0001dbd18, 0x1d07100) /usr/local/go/src/bufio/bufio.go:227 +0x222 io.ReadAtLeast(0x1e59b80, 0xc000f9f440, 0xc000624818, 0x9, 0x9, 0x9, 0xc0005e3190, 0xc000148060, 0x0) /usr/local/go/src/io/io.go:314 +0x87 io.ReadFull(...) /usr/local/go/src/io/io.go:333 k8s.io/kubernetes/vendor/golang.org/x/net/http2.readFrameHeader(0xc000624818, 0x9, 0x9, 0x1e59b80, 0xc000f9f440, 0x0, 0x0, 0xc00094e330, 0xc0001dbdd0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/golang.org/x/net/http2/frame.go:237 +0x89 k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*Framer).ReadFrame(0xc0006247e0, 0xc00094e330, 0x0, 0x0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/golang.org/x/net/http2/frame.go:492 +0xa5 k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*clientConnReadLoop).run(0xc0001dbfa8, 0x0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/golang.org/x/net/http2/transport.go:1819 +0xd8 k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*ClientConn).readLoop(0xc0004dc000) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/golang.org/x/net/http2/transport.go:1741 +0x6f created by k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*Transport).newClientConn /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/golang.org/x/net/http2/transport.go:705 +0x6c5 stderr: + /tmp/kubectl get pods '--token=invalid' '--v=7' command terminated with exit code 255 error: exit status 255 STEP: trying to use kubectl with invalid server May 21 16:39:37.074: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=kubectl-1290 
exec httpd -- /bin/sh -x -c /tmp/kubectl get pods --server=invalid --v=6 2>&1' May 21 16:39:37.451: INFO: rc: 255 May 21 16:39:37.451: INFO: got err error running /usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=kubectl-1290 exec httpd -- /bin/sh -x -c /tmp/kubectl get pods --server=invalid --v=6 2>&1: Command stdout: I0521 16:39:37.388312 283 merged_client_builder.go:163] Using in-cluster namespace I0521 16:39:37.409386 283 round_trippers.go:444] GET http://invalid/api?timeout=32s in 20 milliseconds I0521 16:39:37.409470 283 cached_discovery.go:121] skipped caching discovery info due to Get "http://invalid/api?timeout=32s": dial tcp: lookup invalid on 10.96.0.10:53: server misbehaving I0521 16:39:37.428525 283 round_trippers.go:444] GET http://invalid/api?timeout=32s in 18 milliseconds I0521 16:39:37.428604 283 cached_discovery.go:121] skipped caching discovery info due to Get "http://invalid/api?timeout=32s": dial tcp: lookup invalid on 10.96.0.10:53: server misbehaving I0521 16:39:37.428657 283 shortcut.go:89] Error loading discovery information: Get "http://invalid/api?timeout=32s": dial tcp: lookup invalid on 10.96.0.10:53: server misbehaving I0521 16:39:37.432074 283 round_trippers.go:444] GET http://invalid/api?timeout=32s in 3 milliseconds I0521 16:39:37.432146 283 cached_discovery.go:121] skipped caching discovery info due to Get "http://invalid/api?timeout=32s": dial tcp: lookup invalid on 10.96.0.10:53: server misbehaving I0521 16:39:37.435629 283 round_trippers.go:444] GET http://invalid/api?timeout=32s in 3 milliseconds I0521 16:39:37.435704 283 cached_discovery.go:121] skipped caching discovery info due to Get "http://invalid/api?timeout=32s": dial tcp: lookup invalid on 10.96.0.10:53: server misbehaving I0521 16:39:37.439132 283 round_trippers.go:444] GET http://invalid/api?timeout=32s in 3 milliseconds I0521 16:39:37.439213 283 cached_discovery.go:121] skipped caching discovery info due to Get 
"http://invalid/api?timeout=32s": dial tcp: lookup invalid on 10.96.0.10:53: server misbehaving I0521 16:39:37.439308 283 helpers.go:234] Connection error: Get http://invalid/api?timeout=32s: dial tcp: lookup invalid on 10.96.0.10:53: server misbehaving F0521 16:39:37.439346 283 helpers.go:115] Unable to connect to the server: dial tcp: lookup invalid on 10.96.0.10:53: server misbehaving goroutine 1 [running]: k8s.io/kubernetes/vendor/k8s.io/klog/v2.stacks(0xc00000e001, 0xc0001c7500, 0x8d, 0x1bd) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:996 +0xb9 k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).output(0x2d03b80, 0xc000000003, 0x0, 0x0, 0xc000270e70, 0x2ae3039, 0xa, 0x73, 0x40b300) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:945 +0x191 k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).printDepth(0x2d03b80, 0x3, 0x0, 0x0, 0x2, 0xc0005c9ac8, 0x1, 0x1) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:718 +0x165 k8s.io/kubernetes/vendor/k8s.io/klog/v2.FatalDepth(...) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1442 k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util.fatal(0xc000286780, 0x5e, 0x1) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util/helpers.go:93 +0x1f0 k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util.checkErr(0x1e5cca0, 0xc0001f51d0, 0x1d06430) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util/helpers.go:188 +0x945 k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util.CheckErr(...) 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util/helpers.go:115 k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/get.NewCmdGet.func1(0xc0004078c0, 0xc000b5aed0, 0x1, 0x3) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/get/get.go:167 +0x159 k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).execute(0xc0004078c0, 0xc000b5aea0, 0x3, 0x3, 0xc0004078c0, 0xc000b5aea0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:846 +0x2c2 k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).ExecuteC(0xc000473b80, 0xc000154180, 0xc00003a0a0, 0x5) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:950 +0x375 k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).Execute(...) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:887 main.main() _output/dockerized/go/src/k8s.io/kubernetes/cmd/kubectl/kubectl.go:49 +0x21d goroutine 6 [chan receive]: k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).flushDaemon(0x2d03b80) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1131 +0x8b created by k8s.io/kubernetes/vendor/k8s.io/klog/v2.init.0 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:416 +0xd8 goroutine 113 [select]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x1d06368, 0x1e5b360, 0xc000c96000, 0x1, 0xc000046120) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:167 +0x149 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x1d06368, 0x12a05f200, 0x0, 0x1, 0xc000046120) 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133 +0x98 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Until(0x1d06368, 0x12a05f200, 0xc000046120) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:90 +0x4d created by k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/util/logs.InitLogs /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/util/logs/logs.go:51 +0x96 stderr: + /tmp/kubectl get pods '--server=invalid' '--v=6' command terminated with exit code 255 error: exit status 255 STEP: trying to use kubectl with invalid namespace May 21 16:39:37.451: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=kubectl-1290 exec httpd -- /bin/sh -x -c /tmp/kubectl get pods --namespace=invalid --v=6 2>&1' May 21 16:39:37.762: INFO: stderr: "+ /tmp/kubectl get pods '--namespace=invalid' '--v=6'\n" May 21 16:39:37.762: INFO: stdout: "I0521 16:39:37.728830 310 merged_client_builder.go:121] Using in-cluster configuration\nI0521 16:39:37.733186 310 merged_client_builder.go:121] Using in-cluster configuration\nI0521 16:39:37.737778 310 merged_client_builder.go:121] Using in-cluster configuration\nI0521 16:39:37.749486 310 round_trippers.go:444] GET https://10.96.0.1:443/api/v1/namespaces/invalid/pods?limit=500 200 OK in 11 milliseconds\nNo resources found in invalid namespace.\n" May 21 16:39:37.762: INFO: stdout: I0521 16:39:37.728830 310 merged_client_builder.go:121] Using in-cluster configuration I0521 16:39:37.733186 310 merged_client_builder.go:121] Using in-cluster configuration I0521 16:39:37.737778 310 merged_client_builder.go:121] Using in-cluster configuration I0521 16:39:37.749486 310 round_trippers.go:444] GET https://10.96.0.1:443/api/v1/namespaces/invalid/pods?limit=500 200 OK in 11 
milliseconds No resources found in invalid namespace. STEP: trying to use kubectl with kubeconfig May 21 16:39:37.762: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=kubectl-1290 exec httpd -- /bin/sh -x -c /tmp/kubectl get pods --kubeconfig=/tmp/icc-override.kubeconfig --v=6 2>&1' May 21 16:39:38.198: INFO: stderr: "+ /tmp/kubectl get pods '--kubeconfig=/tmp/icc-override.kubeconfig' '--v=6'\n" May 21 16:39:38.198: INFO: stdout: "I0521 16:39:38.084068 340 loader.go:375] Config loaded from file: /tmp/icc-override.kubeconfig\nI0521 16:39:38.097754 340 round_trippers.go:444] GET https://kubernetes.default.svc:443/api?timeout=32s 200 OK in 12 milliseconds\nI0521 16:39:38.101370 340 round_trippers.go:444] GET https://kubernetes.default.svc:443/apis?timeout=32s 200 OK in 1 milliseconds\nI0521 16:39:38.107746 340 round_trippers.go:444] GET https://kubernetes.default.svc:443/apis/autoscaling/v2beta2?timeout=32s 200 OK in 2 milliseconds\nI0521 16:39:38.109693 340 round_trippers.go:444] GET https://kubernetes.default.svc:443/apis/projectcontour.io/v1alpha1?timeout=32s 200 OK in 4 milliseconds\nI0521 16:39:38.112717 340 round_trippers.go:444] GET https://kubernetes.default.svc:443/apis/autoscaling/v2beta1?timeout=32s 200 OK in 7 milliseconds\nI0521 16:39:38.113155 340 round_trippers.go:444] GET https://kubernetes.default.svc:443/apis/authentication.k8s.io/v1?timeout=32s 200 OK in 7 milliseconds\nI0521 16:39:38.113223 340 round_trippers.go:444] GET https://kubernetes.default.svc:443/apis/discovery.k8s.io/v1beta1?timeout=32s 200 OK in 7 milliseconds\nI0521 16:39:38.113265 340 round_trippers.go:444] GET https://kubernetes.default.svc:443/apis/coordination.k8s.io/v1beta1?timeout=32s 200 OK in 6 milliseconds\nI0521 16:39:38.113284 340 round_trippers.go:444] GET https://kubernetes.default.svc:443/apis/storage.k8s.io/v1beta1?timeout=32s 200 OK in 6 milliseconds\nI0521 16:39:38.113247 340 round_trippers.go:444] 
GET https://kubernetes.default.svc:443/apis/storage.k8s.io/v1?timeout=32s 200 OK in 7 milliseconds\nI0521 16:39:38.113314 340 round_trippers.go:444] GET https://kubernetes.default.svc:443/apis/admissionregistration.k8s.io/v1beta1?timeout=32s 200 OK in 5 milliseconds\nI0521 16:39:38.113350 340 round_trippers.go:444] GET https://kubernetes.default.svc:443/apis/node.k8s.io/v1beta1?timeout=32s 200 OK in 6 milliseconds\nI0521 16:39:38.113355 340 round_trippers.go:444] GET https://kubernetes.default.svc:443/apis/networking.k8s.io/v1?timeout=32s 200 OK in 7 milliseconds\nI0521 16:39:38.113375 340 round_trippers.go:444] GET https://kubernetes.default.svc:443/apis/batch/v1?timeout=32s 200 OK in 7 milliseconds\nI0521 16:39:38.113410 340 round_trippers.go:444] GET https://kubernetes.default.svc:443/apis/networking.k8s.io/v1beta1?timeout=32s 200 OK in 7 milliseconds\nI0521 16:39:38.113325 340 round_trippers.go:444] GET https://kubernetes.default.svc:443/apis/rbac.authorization.k8s.io/v1?timeout=32s 200 OK in 7 milliseconds\nI0521 16:39:38.113439 340 round_trippers.go:444] GET https://kubernetes.default.svc:443/apis/authorization.k8s.io/v1?timeout=32s 200 OK in 7 milliseconds\nI0521 16:39:38.113452 340 round_trippers.go:444] GET https://kubernetes.default.svc:443/apis/authorization.k8s.io/v1beta1?timeout=32s 200 OK in 8 milliseconds\nI0521 16:39:38.113412 340 round_trippers.go:444] GET https://kubernetes.default.svc:443/apis/extensions/v1beta1?timeout=32s 200 OK in 7 milliseconds\nI0521 16:39:38.113397 340 round_trippers.go:444] GET https://kubernetes.default.svc:443/apis/k8s.cni.cncf.io/v1?timeout=32s 200 OK in 7 milliseconds\nI0521 16:39:38.113484 340 round_trippers.go:444] GET https://kubernetes.default.svc:443/apis/autoscaling/v1?timeout=32s 200 OK in 7 milliseconds\nI0521 16:39:38.113587 340 round_trippers.go:444] GET https://kubernetes.default.svc:443/apis/certificates.k8s.io/v1?timeout=32s 200 OK in 7 milliseconds\nI0521 16:39:38.113599 340 round_trippers.go:444] GET 
https://kubernetes.default.svc:443/apis/admissionregistration.k8s.io/v1?timeout=32s 200 OK in 7 milliseconds\nI0521 16:39:38.113614 340 round_trippers.go:444] GET https://kubernetes.default.svc:443/apis/apiregistration.k8s.io/v1beta1?timeout=32s 200 OK in 7 milliseconds\nI0521 16:39:38.113714 340 round_trippers.go:444] GET https://kubernetes.default.svc:443/apis/certificates.k8s.io/v1beta1?timeout=32s 200 OK in 7 milliseconds\nI0521 16:39:38.113723 340 round_trippers.go:444] GET https://kubernetes.default.svc:443/apis/apiextensions.k8s.io/v1beta1?timeout=32s 200 OK in 7 milliseconds\nI0521 16:39:38.113779 340 round_trippers.go:444] GET https://kubernetes.default.svc:443/apis/apps/v1?timeout=32s 200 OK in 7 milliseconds\nI0521 16:39:38.113781 340 round_trippers.go:444] GET https://kubernetes.default.svc:443/apis/events.k8s.io/v1beta1?timeout=32s 200 OK in 8 milliseconds\nI0521 16:39:38.113909 340 round_trippers.go:444] GET https://kubernetes.default.svc:443/apis/scheduling.k8s.io/v1beta1?timeout=32s 200 OK in 7 milliseconds\nI0521 16:39:38.113911 340 round_trippers.go:444] GET https://kubernetes.default.svc:443/apis/scheduling.k8s.io/v1?timeout=32s 200 OK in 7 milliseconds\nI0521 16:39:38.114066 340 round_trippers.go:444] GET https://kubernetes.default.svc:443/apis/apiregistration.k8s.io/v1?timeout=32s 200 OK in 8 milliseconds\nI0521 16:39:38.114098 340 round_trippers.go:444] GET https://kubernetes.default.svc:443/apis/coordination.k8s.io/v1?timeout=32s 200 OK in 7 milliseconds\nI0521 16:39:38.114124 340 round_trippers.go:444] GET https://kubernetes.default.svc:443/apis/events.k8s.io/v1?timeout=32s 200 OK in 7 milliseconds\nI0521 16:39:38.114136 340 round_trippers.go:444] GET https://kubernetes.default.svc:443/apis/batch/v1beta1?timeout=32s 200 OK in 7 milliseconds\nI0521 16:39:38.114144 340 round_trippers.go:444] GET https://kubernetes.default.svc:443/api/v1?timeout=32s 200 OK in 8 milliseconds\nI0521 16:39:38.114180 340 round_trippers.go:444] GET 
https://kubernetes.default.svc:443/apis/apiextensions.k8s.io/v1?timeout=32s 200 OK in 7 milliseconds\nI0521 16:39:38.114271 340 round_trippers.go:444] GET https://kubernetes.default.svc:443/apis/policy/v1beta1?timeout=32s 200 OK in 7 milliseconds\nI0521 16:39:38.114406 340 round_trippers.go:444] GET https://kubernetes.default.svc:443/apis/authentication.k8s.io/v1beta1?timeout=32s 200 OK in 8 milliseconds\nI0521 16:39:38.114441 340 round_trippers.go:444] GET https://kubernetes.default.svc:443/apis/projectcontour.io/v1?timeout=32s 200 OK in 8 milliseconds\nI0521 16:39:38.114525 340 round_trippers.go:444] GET https://kubernetes.default.svc:443/apis/rbac.authorization.k8s.io/v1beta1?timeout=32s 200 OK in 7 milliseconds\nI0521 16:39:38.183246 340 round_trippers.go:444] GET https://kubernetes.default.svc:443/api/v1/namespaces/default/pods?limit=500 200 OK in 3 milliseconds\nNo resources found in default namespace.\n" May 21 16:39:38.199: INFO: stdout: I0521 16:39:38.084068 340 loader.go:375] Config loaded from file: /tmp/icc-override.kubeconfig I0521 16:39:38.097754 340 round_trippers.go:444] GET https://kubernetes.default.svc:443/api?timeout=32s 200 OK in 12 milliseconds I0521 16:39:38.101370 340 round_trippers.go:444] GET https://kubernetes.default.svc:443/apis?timeout=32s 200 OK in 1 milliseconds I0521 16:39:38.107746 340 round_trippers.go:444] GET https://kubernetes.default.svc:443/apis/autoscaling/v2beta2?timeout=32s 200 OK in 2 milliseconds I0521 16:39:38.109693 340 round_trippers.go:444] GET https://kubernetes.default.svc:443/apis/projectcontour.io/v1alpha1?timeout=32s 200 OK in 4 milliseconds I0521 16:39:38.112717 340 round_trippers.go:444] GET https://kubernetes.default.svc:443/apis/autoscaling/v2beta1?timeout=32s 200 OK in 7 milliseconds I0521 16:39:38.113155 340 round_trippers.go:444] GET https://kubernetes.default.svc:443/apis/authentication.k8s.io/v1?timeout=32s 200 OK in 7 milliseconds I0521 16:39:38.113223 340 round_trippers.go:444] GET 
https://kubernetes.default.svc:443/apis/discovery.k8s.io/v1beta1?timeout=32s 200 OK in 7 milliseconds I0521 16:39:38.113265 340 round_trippers.go:444] GET https://kubernetes.default.svc:443/apis/coordination.k8s.io/v1beta1?timeout=32s 200 OK in 6 milliseconds I0521 16:39:38.113284 340 round_trippers.go:444] GET https://kubernetes.default.svc:443/apis/storage.k8s.io/v1beta1?timeout=32s 200 OK in 6 milliseconds I0521 16:39:38.113247 340 round_trippers.go:444] GET https://kubernetes.default.svc:443/apis/storage.k8s.io/v1?timeout=32s 200 OK in 7 milliseconds I0521 16:39:38.113314 340 round_trippers.go:444] GET https://kubernetes.default.svc:443/apis/admissionregistration.k8s.io/v1beta1?timeout=32s 200 OK in 5 milliseconds I0521 16:39:38.113350 340 round_trippers.go:444] GET https://kubernetes.default.svc:443/apis/node.k8s.io/v1beta1?timeout=32s 200 OK in 6 milliseconds I0521 16:39:38.113355 340 round_trippers.go:444] GET https://kubernetes.default.svc:443/apis/networking.k8s.io/v1?timeout=32s 200 OK in 7 milliseconds I0521 16:39:38.113375 340 round_trippers.go:444] GET https://kubernetes.default.svc:443/apis/batch/v1?timeout=32s 200 OK in 7 milliseconds I0521 16:39:38.113410 340 round_trippers.go:444] GET https://kubernetes.default.svc:443/apis/networking.k8s.io/v1beta1?timeout=32s 200 OK in 7 milliseconds I0521 16:39:38.113325 340 round_trippers.go:444] GET https://kubernetes.default.svc:443/apis/rbac.authorization.k8s.io/v1?timeout=32s 200 OK in 7 milliseconds I0521 16:39:38.113439 340 round_trippers.go:444] GET https://kubernetes.default.svc:443/apis/authorization.k8s.io/v1?timeout=32s 200 OK in 7 milliseconds I0521 16:39:38.113452 340 round_trippers.go:444] GET https://kubernetes.default.svc:443/apis/authorization.k8s.io/v1beta1?timeout=32s 200 OK in 8 milliseconds I0521 16:39:38.113412 340 round_trippers.go:444] GET https://kubernetes.default.svc:443/apis/extensions/v1beta1?timeout=32s 200 OK in 7 milliseconds I0521 16:39:38.113397 340 round_trippers.go:444] GET 
https://kubernetes.default.svc:443/apis/k8s.cni.cncf.io/v1?timeout=32s 200 OK in 7 milliseconds I0521 16:39:38.113484 340 round_trippers.go:444] GET https://kubernetes.default.svc:443/apis/autoscaling/v1?timeout=32s 200 OK in 7 milliseconds I0521 16:39:38.113587 340 round_trippers.go:444] GET https://kubernetes.default.svc:443/apis/certificates.k8s.io/v1?timeout=32s 200 OK in 7 milliseconds I0521 16:39:38.113599 340 round_trippers.go:444] GET https://kubernetes.default.svc:443/apis/admissionregistration.k8s.io/v1?timeout=32s 200 OK in 7 milliseconds I0521 16:39:38.113614 340 round_trippers.go:444] GET https://kubernetes.default.svc:443/apis/apiregistration.k8s.io/v1beta1?timeout=32s 200 OK in 7 milliseconds I0521 16:39:38.113714 340 round_trippers.go:444] GET https://kubernetes.default.svc:443/apis/certificates.k8s.io/v1beta1?timeout=32s 200 OK in 7 milliseconds I0521 16:39:38.113723 340 round_trippers.go:444] GET https://kubernetes.default.svc:443/apis/apiextensions.k8s.io/v1beta1?timeout=32s 200 OK in 7 milliseconds I0521 16:39:38.113779 340 round_trippers.go:444] GET https://kubernetes.default.svc:443/apis/apps/v1?timeout=32s 200 OK in 7 milliseconds I0521 16:39:38.113781 340 round_trippers.go:444] GET https://kubernetes.default.svc:443/apis/events.k8s.io/v1beta1?timeout=32s 200 OK in 8 milliseconds I0521 16:39:38.113909 340 round_trippers.go:444] GET https://kubernetes.default.svc:443/apis/scheduling.k8s.io/v1beta1?timeout=32s 200 OK in 7 milliseconds I0521 16:39:38.113911 340 round_trippers.go:444] GET https://kubernetes.default.svc:443/apis/scheduling.k8s.io/v1?timeout=32s 200 OK in 7 milliseconds I0521 16:39:38.114066 340 round_trippers.go:444] GET https://kubernetes.default.svc:443/apis/apiregistration.k8s.io/v1?timeout=32s 200 OK in 8 milliseconds I0521 16:39:38.114098 340 round_trippers.go:444] GET https://kubernetes.default.svc:443/apis/coordination.k8s.io/v1?timeout=32s 200 OK in 7 milliseconds I0521 16:39:38.114124 340 round_trippers.go:444] GET 
https://kubernetes.default.svc:443/apis/events.k8s.io/v1?timeout=32s 200 OK in 7 milliseconds I0521 16:39:38.114136 340 round_trippers.go:444] GET https://kubernetes.default.svc:443/apis/batch/v1beta1?timeout=32s 200 OK in 7 milliseconds I0521 16:39:38.114144 340 round_trippers.go:444] GET https://kubernetes.default.svc:443/api/v1?timeout=32s 200 OK in 8 milliseconds I0521 16:39:38.114180 340 round_trippers.go:444] GET https://kubernetes.default.svc:443/apis/apiextensions.k8s.io/v1?timeout=32s 200 OK in 7 milliseconds I0521 16:39:38.114271 340 round_trippers.go:444] GET https://kubernetes.default.svc:443/apis/policy/v1beta1?timeout=32s 200 OK in 7 milliseconds I0521 16:39:38.114406 340 round_trippers.go:444] GET https://kubernetes.default.svc:443/apis/authentication.k8s.io/v1beta1?timeout=32s 200 OK in 8 milliseconds I0521 16:39:38.114441 340 round_trippers.go:444] GET https://kubernetes.default.svc:443/apis/projectcontour.io/v1?timeout=32s 200 OK in 8 milliseconds I0521 16:39:38.114525 340 round_trippers.go:444] GET https://kubernetes.default.svc:443/apis/rbac.authorization.k8s.io/v1beta1?timeout=32s 200 OK in 7 milliseconds I0521 16:39:38.183246 340 round_trippers.go:444] GET https://kubernetes.default.svc:443/api/v1/namespaces/default/pods?limit=500 200 OK in 3 milliseconds No resources found in default namespace. [AfterEach] Simple pod /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:390 STEP: using delete to clean up resources May 21 16:39:38.199: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=kubectl-1290 delete --grace-period=0 --force -f -' May 21 16:39:38.321: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" May 21 16:39:38.321: INFO: stdout: "pod \"httpd\" force deleted\n" May 21 16:39:38.321: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=kubectl-1290 get rc,svc -l name=httpd --no-headers' May 21 16:39:38.453: INFO: stderr: "No resources found in kubectl-1290 namespace.\n" May 21 16:39:38.453: INFO: stdout: "" May 21 16:39:38.453: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=kubectl-1290 get pods -l name=httpd -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 21 16:39:38.569: INFO: stderr: "" May 21 16:39:38.569: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 16:39:38.569: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1290" for this suite. 
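The stderr lines above like `+ /tmp/kubectl get pods '--token=invalid'` come from the way the suite wraps every in-pod command in `/bin/sh -x -c ...`: the `-x` flag traces each command to stderr with a `+ ` prefix before executing it. A minimal local sketch of that pattern, needing no cluster:

```shell
# The e2e harness runs in-pod commands via `/bin/sh -x -c`; -x (xtrace)
# echoes each command to stderr with a "+ " prefix, which is why the
# captured stderr in the log starts with "+ /tmp/kubectl get pods ...".
out=$(/bin/sh -x -c 'echo hello' 2>&1)
printf '%s\n' "$out"
```

The first line of the combined output is the trace (`+ echo hello`), followed by the command's own stdout, matching the ordering seen in the captured `Command stdout:` / `stderr:` sections above.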
• [SLOW TEST:21.712 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Simple pod /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:382 should handle in-cluster config /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:635 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Simple pod should handle in-cluster config","total":-1,"completed":2,"skipped":934,"failed":0} May 21 16:39:38.579: INFO: Running AfterSuite actions on all nodes [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 16:39:15.560: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247 [BeforeEach] Simple pod /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:384 STEP: creating the pod from
apiVersion: v1
kind: Pod
metadata:
  name: httpd
  labels:
    name: httpd
spec:
  containers:
  - name: httpd
    image: docker.io/library/httpd:2.4.38-alpine
    ports:
    - containerPort: 80
    readinessProbe:
      httpGet:
        path: /
        port: 80
      initialDelaySeconds: 5
      timeoutSeconds: 5
May 21 16:39:15.588: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=kubectl-8882 create -f -' May 21 16:39:15.869: INFO: stderr: "" May 21 16:39:15.869: INFO: stdout: "pod/httpd created\n" May 21 16:39:15.869: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [httpd] May 21 16:39:15.869: INFO: Waiting up to 5m0s for
pod "httpd" in namespace "kubectl-8882" to be "running and ready" May 21 16:39:15.872: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.525484ms May 21 16:39:17.875: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 2.00582049s May 21 16:39:19.879: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 4.009391152s May 21 16:39:21.882: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 6.012768703s May 21 16:39:23.886: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 8.016230499s May 21 16:39:25.889: INFO: Pod "httpd": Phase="Running", Reason="", readiness=true. Elapsed: 10.019807898s May 21 16:39:25.889: INFO: Pod "httpd" satisfied condition "running and ready" May 21 16:39:25.889: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [httpd] [It] should contain last line of the log /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:604 STEP: executing a command with run May 21 16:39:25.890: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=kubectl-8882 run run-log-test --image=docker.io/library/busybox:1.29 --restart=OnFailure -- sh -c sleep 10; seq 100 | while read i; do echo $i; sleep 0.01; done; echo EOF' May 21 16:39:26.028: INFO: stderr: "" May 21 16:39:26.028: INFO: stdout: "pod/run-log-test created\n" May 21 16:39:26.028: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [run-log-test] May 21 16:39:26.028: INFO: Waiting up to 5m0s for pod "run-log-test" in namespace "kubectl-8882" to be "running and ready, or succeeded" May 21 16:39:26.030: INFO: Pod "run-log-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.337631ms May 21 16:39:28.034: INFO: Pod "run-log-test": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.006519228s May 21 16:39:30.038: INFO: Pod "run-log-test": Phase="Running", Reason="", readiness=true. Elapsed: 4.009876417s May 21 16:39:30.038: INFO: Pod "run-log-test" satisfied condition "running and ready, or succeeded" May 21 16:39:30.038: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true. Pods: [run-log-test] May 21 16:39:30.038: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=kubectl-8882 logs -f run-log-test' May 21 16:39:39.265: INFO: stderr: "" May 21 16:39:39.265: INFO: stdout: "1\n2\n3\n4\n5\n6\n7\n8\n9\n10\n11\n12\n13\n14\n15\n16\n17\n18\n19\n20\n21\n22\n23\n24\n25\n26\n27\n28\n29\n30\n31\n32\n33\n34\n35\n36\n37\n38\n39\n40\n41\n42\n43\n44\n45\n46\n47\n48\n49\n50\n51\n52\n53\n54\n55\n56\n57\n58\n59\n60\n61\n62\n63\n64\n65\n66\n67\n68\n69\n70\n71\n72\n73\n74\n75\n76\n77\n78\n79\n80\n81\n82\n83\n84\n85\n86\n87\n88\n89\n90\n91\n92\n93\n94\n95\n96\n97\n98\n99\n100\nEOF\n" [AfterEach] Simple pod /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:390 STEP: using delete to clean up resources May 21 16:39:39.265: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=kubectl-8882 delete --grace-period=0 --force -f -' May 21 16:39:39.418: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" May 21 16:39:39.418: INFO: stdout: "pod \"httpd\" force deleted\n" May 21 16:39:39.418: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=kubectl-8882 get rc,svc -l name=httpd --no-headers' May 21 16:39:39.545: INFO: stderr: "No resources found in kubectl-8882 namespace.\n" May 21 16:39:39.545: INFO: stdout: "" May 21 16:39:39.545: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=kubectl-8882 get pods -l name=httpd -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 21 16:39:39.669: INFO: stderr: "" May 21 16:39:39.669: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 16:39:39.669: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8882" for this suite. 
• [SLOW TEST:24.117 seconds]
[sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Simple pod
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:382
    should contain last line of the log
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:604
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Simple pod should contain last line of the log","total":-1,"completed":4,"skipped":516,"failed":0}
May 21 16:39:39.679: INFO: Running AfterSuite actions on all nodes
[BeforeEach] [sig-cli] Kubectl Port forwarding
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 21 16:39:29.311: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename port-forwarding
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support a client that connects, sends DATA, and disconnects
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:479
STEP: Creating the target pod
May 21 16:39:29.345: INFO: The status of Pod pfpod is Pending, waiting for it to be Running (with Ready = true)
May 21 16:39:31.349: INFO: The status of Pod pfpod is Pending, waiting for it to be Running (with Ready = true)
May 21 16:39:33.349: INFO: The status of Pod pfpod is Running (Ready = false)
May 21 16:39:35.349: INFO: The status of Pod pfpod is Running (Ready = false)
May 21 16:39:37.348: INFO: The status of Pod pfpod is Running (Ready = true)
STEP: Running 'kubectl port-forward'
May 21 16:39:37.349: INFO: starting port-forward command and streaming output
May 21 16:39:37.349: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=port-forwarding-444 port-forward --namespace=port-forwarding-444 pfpod :80'
May 21 16:39:37.349: INFO: reading from `kubectl port-forward` command's stdout
STEP: Dialing the local port
STEP: Sending the expected data to the local port
STEP: Reading data from the local port
STEP: Closing the write half of the client's connection
STEP: Waiting for the target pod to stop running
May 21 16:39:39.417: INFO: Waiting up to 5m0s for pod "pfpod" in namespace "port-forwarding-444" to be "container terminated"
May 21 16:39:39.420: INFO: Pod "pfpod": Phase="Running", Reason="", readiness=true. Elapsed: 3.585853ms
May 21 16:39:41.424: INFO: Pod "pfpod": Phase="Running", Reason="", readiness=false. Elapsed: 2.007446445s
May 21 16:39:41.424: INFO: Pod "pfpod" satisfied condition "container terminated"
STEP: Verifying logs
STEP: Closing the connection to the local port
[AfterEach] [sig-cli] Kubectl Port forwarding
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 21 16:39:41.437: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "port-forwarding-444" for this suite.
• [SLOW TEST:12.135 seconds]
[sig-cli] Kubectl Port forwarding
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  With a server listening on localhost
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:474
    that expects a client request
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:475
      should support a client that connects, sends DATA, and disconnects
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:479
------------------------------
{"msg":"PASSED [sig-cli] Kubectl Port forwarding With a server listening on localhost that expects a client request should support a client that connects, sends DATA, and disconnects","total":-1,"completed":3,"skipped":303,"failed":0}
May 21 16:39:41.449: INFO: Running AfterSuite actions on all nodes
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 21 16:39:05.401: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
May 21 16:39:05.424: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
May 21 16:39:05.427: INFO: No PSP annotation exists on dry run pod; assuming PodSecurityPolicy is disabled
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247
[BeforeEach] Simple pod
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:384
STEP: creating the pod from
May 21 16:39:05.430: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=kubectl-9354 create -f -'
May 21 16:39:05.762: INFO: stderr: ""
May 21 16:39:05.762: INFO: stdout: "pod/httpd created\n"
May 21 16:39:05.762: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [httpd]
May 21 16:39:05.762: INFO: Waiting up to 5m0s for pod "httpd" in namespace "kubectl-9354" to be "running and ready"
May 21 16:39:05.765: INFO: Pod "httpd": Phase="Pending", Reason="", readiness=false. Elapsed: 3.101519ms
May 21 16:39:07.769: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 2.006572854s
May 21 16:39:09.773: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 4.01014978s
May 21 16:39:11.776: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 6.013164089s
May 21 16:39:13.779: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 8.016558163s
May 21 16:39:15.782: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 10.01994466s
May 21 16:39:17.785: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 12.022935591s
May 21 16:39:19.789: INFO: Pod "httpd": Phase="Running", Reason="", readiness=false. Elapsed: 14.026225888s
May 21 16:39:21.792: INFO: Pod "httpd": Phase="Running", Reason="", readiness=true. Elapsed: 16.029829159s
May 21 16:39:21.792: INFO: Pod "httpd" satisfied condition "running and ready"
May 21 16:39:21.792: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [httpd]
[It] should return command exit codes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:502
STEP: execing into a container with a successful command
May 21 16:39:21.792: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=kubectl-9354 exec httpd -- /bin/sh -c exit 0'
May 21 16:39:22.057: INFO: stderr: ""
May 21 16:39:22.058: INFO: stdout: ""
STEP: execing into a container with a failing command
May 21 16:39:22.058: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=kubectl-9354 exec httpd -- /bin/sh -c exit 42'
May 21 16:39:22.315: INFO: rc: 42
STEP: running a successful command
May 21 16:39:22.315: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=kubectl-9354 run -i --image=docker.io/library/busybox:1.29 --restart=Never success -- /bin/sh -c exit 0'
May 21 16:39:24.012: INFO: stderr: ""
May 21 16:39:24.012: INFO: stdout: ""
STEP: running a failing command
May 21 16:39:24.012: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=kubectl-9354 run -i --image=docker.io/library/busybox:1.29 --restart=Never failure-1 -- /bin/sh -c exit 42'
May 21 16:39:26.211: INFO: rc: 42
STEP: running a failing command without --restart=Never
May 21 16:39:26.212: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=kubectl-9354 run -i --image=docker.io/library/busybox:1.29 --restart=OnFailure failure-2 -- /bin/sh -c cat && exit 42'
May 21 16:40:28.655: INFO: rc: 1
STEP: running a failing command without --restart=Never, but with --rm
May 21 16:40:28.655: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=kubectl-9354 run -i --image=docker.io/library/busybox:1.29 --restart=OnFailure --rm failure-3 -- /bin/sh -c cat && exit 42'
May 21 16:41:30.131: INFO: rc: 1
May 21 16:41:30.132: INFO: Waiting for pod failure-3 to disappear
May 21 16:41:30.148: INFO: Pod failure-3 still exists
May 21 16:41:32.148: INFO: Waiting for pod failure-3 to disappear
May 21 16:41:32.153: INFO: Pod failure-3 still exists
May 21 16:41:34.148: INFO: Waiting for pod failure-3 to disappear
May 21 16:41:34.153: INFO: Pod failure-3 still exists
May 21 16:41:36.148: INFO: Waiting for pod failure-3 to disappear
May 21 16:41:36.153: INFO: Pod failure-3 still exists
May 21 16:41:38.148: INFO: Waiting for pod failure-3 to disappear
May 21 16:41:38.153: INFO: Pod failure-3 still exists
May 21 16:41:40.148: INFO: Waiting for pod failure-3 to disappear
May 21 16:41:40.153: INFO: Pod failure-3 still exists
May 21 16:41:42.148: INFO: Waiting for pod failure-3 to disappear
May 21 16:41:42.153: INFO: Pod failure-3 still exists
May 21 16:41:44.148: INFO: Waiting for pod failure-3 to disappear
May 21 16:41:44.153: INFO: Pod failure-3 still exists
May 21 16:41:46.148: INFO: Waiting for pod failure-3 to disappear
May 21 16:41:46.153: INFO: Pod failure-3 still exists
May 21 16:41:48.148: INFO: Waiting for pod failure-3 to disappear
May 21 16:41:48.152: INFO: Pod failure-3 still exists
May 21 16:41:50.148: INFO: Waiting for pod failure-3 to disappear
May 21 16:41:50.153: INFO: Pod failure-3 still exists
May 21 16:41:52.148: INFO: Waiting for pod failure-3 to disappear
May 21 16:41:52.153: INFO: Pod failure-3 still exists
May 21 16:41:54.148: INFO: Waiting for pod failure-3 to disappear
May 21 16:41:54.153: INFO: Pod failure-3 still exists
May 21 16:41:56.148: INFO: Waiting for pod failure-3 to disappear
May 21 16:41:56.153: INFO: Pod failure-3 still exists
May 21 16:41:58.148: INFO: Waiting for pod failure-3 to disappear
May 21 16:41:58.153: INFO: Pod failure-3 still exists
May 21 16:42:00.148: INFO: Waiting for pod failure-3 to disappear
May 21 16:42:00.153: INFO: Pod failure-3 still exists
May 21 16:42:00.153: INFO: Waiting for pod failure-3 to disappear
May 21 16:42:00.158: INFO: Pod failure-3 still exists
STEP: running a failing command with --leave-stdin-open
May 21 16:42:00.158: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=kubectl-9354 run -i --image=docker.io/library/busybox:1.29 --restart=Never failure-4 --leave-stdin-open -- /bin/sh -c exit 42'
May 21 16:42:01.277: INFO: stderr: ""
May 21 16:42:01.277: INFO: stdout: ""
[AfterEach] Simple pod
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:390
STEP: using delete to clean up resources
May 21 16:42:01.277: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=kubectl-9354 delete --grace-period=0 --force -f -'
May 21 16:42:01.406: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
May 21 16:42:01.406: INFO: stdout: "pod \"httpd\" force deleted\n"
May 21 16:42:01.406: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=kubectl-9354 get rc,svc -l name=httpd --no-headers'
May 21 16:42:01.533: INFO: stderr: "No resources found in kubectl-9354 namespace.\n"
May 21 16:42:01.533: INFO: stdout: ""
May 21 16:42:01.533: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.13.89:46681 --kubeconfig=/root/.kube/config --namespace=kubectl-9354 get pods -l name=httpd -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
May 21 16:42:01.650: INFO: stderr: ""
May 21 16:42:01.650: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 21 16:42:01.650: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9354" for this suite.
• [SLOW TEST:176.262 seconds]
[sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Simple pod
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:382
    should return command exit codes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:502
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Simple pod should return command exit codes","total":-1,"completed":1,"skipped":58,"failed":0}
May 21 16:42:01.665: INFO: Running AfterSuite actions on all nodes
{"msg":"PASSED [sig-cli] Kubectl client Kubectl client-side validation should create/apply a CR with unknown fields for CRD with no validation schema","total":-1,"completed":4,"skipped":896,"failed":0}
May 21 16:39:32.036: INFO: Running AfterSuite actions on all nodes
May 21 16:42:01.720: INFO: Running AfterSuite actions on node 1
May 21 16:42:01.720: INFO: Skipping dumping logs from cluster

Ran 30 of 5484 Specs in 176.713 seconds
SUCCESS! -- 30 Passed | 0 Failed | 0 Pending | 5454 Skipped

Ginkgo ran 1 suite in 2m58.378661475s
Test Suite Passed