I0529 01:35:10.398690 22 test_context.go:429] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready
I0529 01:35:10.398837 22 e2e.go:129] Starting e2e run "8f586a12-18b5-4d49-b658-545c2e2b3766" on Ginkgo node 1
{"msg":"Test Suite starting","total":17,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1622252109 - Will randomize all specs
Will run 17 of 5484 specs

May 29 01:35:10.474: INFO: >>> kubeConfig: /root/.kube/config
May 29 01:35:10.479: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
May 29 01:35:10.507: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
May 29 01:35:10.563: INFO: The status of Pod cmk-init-discover-node1-rvqxm is Succeeded, skipping waiting
May 29 01:35:10.563: INFO: 40 / 41 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
May 29 01:35:10.563: INFO: expected 8 pod replicas in namespace 'kube-system', 8 are Running and Ready.
May 29 01:35:10.563: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
May 29 01:35:10.581: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'cmk' (0 seconds elapsed)
May 29 01:35:10.581: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-flannel' (0 seconds elapsed)
May 29 01:35:10.581: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-arm' (0 seconds elapsed)
May 29 01:35:10.581: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-arm64' (0 seconds elapsed)
May 29 01:35:10.581: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-ppc64le' (0 seconds elapsed)
May 29 01:35:10.581: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-s390x' (0 seconds elapsed)
May 29 01:35:10.581: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-multus-ds-amd64' (0 seconds elapsed)
May 29 01:35:10.581: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
May 29 01:35:10.581: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'node-feature-discovery-worker' (0 seconds elapsed)
May 29 01:35:10.581: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'sriov-net-dp-kube-sriov-device-plugin-amd64' (0 seconds elapsed)
May 29 01:35:10.581: INFO: e2e test version: v1.19.11
May 29 01:35:10.581: INFO: kube-apiserver version: v1.19.8
May 29 01:35:10.581: INFO: >>> kubeConfig: /root/.kube/config
May 29 01:35:10.587: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] PersistentVolumes-local [Volume type: gce-localssd-scsi-fs] [Serial] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:251
[BeforeEach] [sig-storage] PersistentVolumes-local
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 29 01:35:10.589: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename persistent-local-volumes-test
May 29 01:35:10.612: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
May 29 01:35:10.615: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] PersistentVolumes-local
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:155
[BeforeEach] [Volume type: gce-localssd-scsi-fs] [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:191
May 29 01:35:14.648: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l] Namespace:persistent-local-volumes-test-1767 PodName:hostexec-node1-5qxhh ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true}
May 29 01:35:14.648: INFO: >>> kubeConfig: /root/.kube/config
May 29 01:35:14.775: INFO: exec node1: command: ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l
May 29 01:35:14.775: INFO: exec node1: stdout: "0\n"
May 29 01:35:14.775: INFO: exec node1: stderr: "ls: cannot access /mnt/disks/by-uuid/google-local-ssds-scsi-fs/: No such file or directory\n"
May 29 01:35:14.775: INFO: exec node1: exit code: 0
May 29 01:35:14.775: INFO: Requires at least 1 scsi fs localSSD
[AfterEach] [Volume type: gce-localssd-scsi-fs] [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:200
STEP: Cleaning up PVC and PV
[AfterEach] [sig-storage] PersistentVolumes-local
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 29 01:35:14.777: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "persistent-local-volumes-test-1767" for this suite.
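The skip above is driven entirely by that exec: the test counts entries under /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ on the node and skips when the count is below 1. Note the exit code is 0 even though ls fails, because the pipeline's status is wc's, not ls's; the "0" on stdout is what triggers the skip. A minimal local sketch of the same gate (illustrative names; the real check runs remotely through the hostexec pod shown above):

package main

import (
	"fmt"
	"os"
)

// scsiFSLocalSSDCount mirrors `ls -1 <dir> | wc -l`: a missing directory
// counts as zero local SSDs rather than as an error, matching the
// "No such file or directory" stderr with exit code 0 in the log.
func scsiFSLocalSSDCount(dir string) int {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return 0 // nothing provisioned
	}
	return len(entries)
}

func main() {
	if n := scsiFSLocalSSDCount("/mnt/disks/by-uuid/google-local-ssds-scsi-fs/"); n < 1 {
		fmt.Println("SKIP: Requires at least 1 scsi fs localSSD")
	}
}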
S [SKIPPING] in Spec Setup (BeforeEach) [4.200 seconds]
[sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Volume type: gce-localssd-scsi-fs] [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:188
    Two pods mounting a local volume one after the other [BeforeEach]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:250
      should be able to write from pod1 and read from pod2
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:251

      Requires at least 1 scsi fs localSSD

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:1241
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] PersistentVolumes-local Stress with local volumes [Serial] should be able to process many pods and reuse local volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:517
[BeforeEach] [sig-storage] PersistentVolumes-local
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 29 01:35:14.796: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename persistent-local-volumes-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] PersistentVolumes-local
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:155
[BeforeEach] Stress with local volumes [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:441
STEP: Setting up 10 local volumes on node "node1"
"/tmp/local-volume-test-36dd74d2-6ec5-4dce-b6b3-ae7df1d43959" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-36dd74d2-6ec5-4dce-b6b3-ae7df1d43959" "/tmp/local-volume-test-36dd74d2-6ec5-4dce-b6b3-ae7df1d43959"] Namespace:persistent-local-volumes-test-6967 PodName:hostexec-node1-clnxk ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true} May 29 01:35:18.972: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "node1" at path "/tmp/local-volume-test-bd483e73-00fb-476b-a7ba-5788fecbbd13" May 29 01:35:19.110: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-bd483e73-00fb-476b-a7ba-5788fecbbd13" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-bd483e73-00fb-476b-a7ba-5788fecbbd13" "/tmp/local-volume-test-bd483e73-00fb-476b-a7ba-5788fecbbd13"] Namespace:persistent-local-volumes-test-6967 PodName:hostexec-node1-clnxk ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true} May 29 01:35:19.110: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "node1" at path "/tmp/local-volume-test-579c107d-656a-4cc1-ac1d-4b2cad5969fd" May 29 01:35:19.229: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-579c107d-656a-4cc1-ac1d-4b2cad5969fd" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-579c107d-656a-4cc1-ac1d-4b2cad5969fd" "/tmp/local-volume-test-579c107d-656a-4cc1-ac1d-4b2cad5969fd"] Namespace:persistent-local-volumes-test-6967 PodName:hostexec-node1-clnxk ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true} May 29 01:35:19.229: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "node1" at path "/tmp/local-volume-test-af305ac0-8c51-4c36-a5f6-092b5866899e" May 29 01:35:19.347: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-af305ac0-8c51-4c36-a5f6-092b5866899e" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-af305ac0-8c51-4c36-a5f6-092b5866899e" "/tmp/local-volume-test-af305ac0-8c51-4c36-a5f6-092b5866899e"] Namespace:persistent-local-volumes-test-6967 PodName:hostexec-node1-clnxk ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true} May 29 01:35:19.347: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "node1" at path "/tmp/local-volume-test-7280077e-a108-4067-90c7-7d118bf73404" May 29 01:35:19.488: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-7280077e-a108-4067-90c7-7d118bf73404" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-7280077e-a108-4067-90c7-7d118bf73404" "/tmp/local-volume-test-7280077e-a108-4067-90c7-7d118bf73404"] Namespace:persistent-local-volumes-test-6967 PodName:hostexec-node1-clnxk ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true} May 29 01:35:19.488: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "node1" at path "/tmp/local-volume-test-27748a1e-3596-47f4-9599-54c5b2a0a87a" May 29 01:35:19.609: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-27748a1e-3596-47f4-9599-54c5b2a0a87a" && mount -t tmpfs -o size=10m 
tmpfs-"/tmp/local-volume-test-27748a1e-3596-47f4-9599-54c5b2a0a87a" "/tmp/local-volume-test-27748a1e-3596-47f4-9599-54c5b2a0a87a"] Namespace:persistent-local-volumes-test-6967 PodName:hostexec-node1-clnxk ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true} May 29 01:35:19.609: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "node1" at path "/tmp/local-volume-test-631df74f-b2ec-4377-946b-af7cc079a096" May 29 01:35:19.743: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-631df74f-b2ec-4377-946b-af7cc079a096" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-631df74f-b2ec-4377-946b-af7cc079a096" "/tmp/local-volume-test-631df74f-b2ec-4377-946b-af7cc079a096"] Namespace:persistent-local-volumes-test-6967 PodName:hostexec-node1-clnxk ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true} May 29 01:35:19.743: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "node1" at path "/tmp/local-volume-test-dd2df023-2bf7-4028-8a28-1a53aaeba0ea" May 29 01:35:19.862: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-dd2df023-2bf7-4028-8a28-1a53aaeba0ea" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-dd2df023-2bf7-4028-8a28-1a53aaeba0ea" "/tmp/local-volume-test-dd2df023-2bf7-4028-8a28-1a53aaeba0ea"] Namespace:persistent-local-volumes-test-6967 PodName:hostexec-node1-clnxk ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true} May 29 01:35:19.862: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "node1" at path "/tmp/local-volume-test-67f4f630-7684-46d6-8932-4caf5b9328c8" May 29 01:35:19.992: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-67f4f630-7684-46d6-8932-4caf5b9328c8" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-67f4f630-7684-46d6-8932-4caf5b9328c8" "/tmp/local-volume-test-67f4f630-7684-46d6-8932-4caf5b9328c8"] Namespace:persistent-local-volumes-test-6967 PodName:hostexec-node1-clnxk ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true} May 29 01:35:19.992: INFO: >>> kubeConfig: /root/.kube/config STEP: Setting up 10 local volumes on node "node2" STEP: Creating tmpfs mount point on node "node2" at path "/tmp/local-volume-test-e7fd2c2b-36a3-47d9-9730-4553faeeedde" May 29 01:35:24.147: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-e7fd2c2b-36a3-47d9-9730-4553faeeedde" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-e7fd2c2b-36a3-47d9-9730-4553faeeedde" "/tmp/local-volume-test-e7fd2c2b-36a3-47d9-9730-4553faeeedde"] Namespace:persistent-local-volumes-test-6967 PodName:hostexec-node2-424x9 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true} May 29 01:35:24.147: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "node2" at path "/tmp/local-volume-test-9c415ac8-8a12-4bf7-a019-437132d4d98f" May 29 01:35:24.265: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-9c415ac8-8a12-4bf7-a019-437132d4d98f" && mount -t tmpfs -o size=10m 
tmpfs-"/tmp/local-volume-test-9c415ac8-8a12-4bf7-a019-437132d4d98f" "/tmp/local-volume-test-9c415ac8-8a12-4bf7-a019-437132d4d98f"] Namespace:persistent-local-volumes-test-6967 PodName:hostexec-node2-424x9 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true} May 29 01:35:24.265: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "node2" at path "/tmp/local-volume-test-009f3f99-0a9d-454c-8365-d000b2086873" May 29 01:35:24.379: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-009f3f99-0a9d-454c-8365-d000b2086873" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-009f3f99-0a9d-454c-8365-d000b2086873" "/tmp/local-volume-test-009f3f99-0a9d-454c-8365-d000b2086873"] Namespace:persistent-local-volumes-test-6967 PodName:hostexec-node2-424x9 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true} May 29 01:35:24.379: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "node2" at path "/tmp/local-volume-test-0d4ec059-c6a4-47e7-b4a4-1309b42c6652" May 29 01:35:24.489: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-0d4ec059-c6a4-47e7-b4a4-1309b42c6652" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-0d4ec059-c6a4-47e7-b4a4-1309b42c6652" "/tmp/local-volume-test-0d4ec059-c6a4-47e7-b4a4-1309b42c6652"] Namespace:persistent-local-volumes-test-6967 PodName:hostexec-node2-424x9 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true} May 29 01:35:24.489: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "node2" at path "/tmp/local-volume-test-831fe72e-9b72-4651-a0c6-54fd24d43430" May 29 01:35:24.625: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-831fe72e-9b72-4651-a0c6-54fd24d43430" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-831fe72e-9b72-4651-a0c6-54fd24d43430" "/tmp/local-volume-test-831fe72e-9b72-4651-a0c6-54fd24d43430"] Namespace:persistent-local-volumes-test-6967 PodName:hostexec-node2-424x9 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true} May 29 01:35:24.625: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "node2" at path "/tmp/local-volume-test-6790c528-7e1f-40f0-b036-97cbcda54545" May 29 01:35:24.737: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-6790c528-7e1f-40f0-b036-97cbcda54545" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-6790c528-7e1f-40f0-b036-97cbcda54545" "/tmp/local-volume-test-6790c528-7e1f-40f0-b036-97cbcda54545"] Namespace:persistent-local-volumes-test-6967 PodName:hostexec-node2-424x9 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true} May 29 01:35:24.737: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "node2" at path "/tmp/local-volume-test-4014d660-253c-4c8a-857c-7c3bf2c3a4da" May 29 01:35:24.871: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-4014d660-253c-4c8a-857c-7c3bf2c3a4da" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-4014d660-253c-4c8a-857c-7c3bf2c3a4da" 
"/tmp/local-volume-test-4014d660-253c-4c8a-857c-7c3bf2c3a4da"] Namespace:persistent-local-volumes-test-6967 PodName:hostexec-node2-424x9 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true} May 29 01:35:24.871: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "node2" at path "/tmp/local-volume-test-dc496239-2c55-42cb-ad08-925c1679f1c5" May 29 01:35:24.983: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-dc496239-2c55-42cb-ad08-925c1679f1c5" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-dc496239-2c55-42cb-ad08-925c1679f1c5" "/tmp/local-volume-test-dc496239-2c55-42cb-ad08-925c1679f1c5"] Namespace:persistent-local-volumes-test-6967 PodName:hostexec-node2-424x9 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true} May 29 01:35:24.983: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "node2" at path "/tmp/local-volume-test-7d6109f0-019c-4b95-93ee-5664cc1b02b6" May 29 01:35:25.109: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-7d6109f0-019c-4b95-93ee-5664cc1b02b6" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-7d6109f0-019c-4b95-93ee-5664cc1b02b6" "/tmp/local-volume-test-7d6109f0-019c-4b95-93ee-5664cc1b02b6"] Namespace:persistent-local-volumes-test-6967 PodName:hostexec-node2-424x9 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true} May 29 01:35:25.109: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "node2" at path "/tmp/local-volume-test-9801db33-dade-474b-a1b7-7cb334914676" May 29 01:35:25.222: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-9801db33-dade-474b-a1b7-7cb334914676" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-9801db33-dade-474b-a1b7-7cb334914676" "/tmp/local-volume-test-9801db33-dade-474b-a1b7-7cb334914676"] Namespace:persistent-local-volumes-test-6967 PodName:hostexec-node2-424x9 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true} May 29 01:35:25.222: INFO: >>> kubeConfig: /root/.kube/config STEP: Create 20 PVs STEP: Start a goroutine to recycle unbound PVs [It] should be able to process many pods and reuse local volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:517 STEP: Creating 7 pods periodically STEP: Waiting for all pods to complete successfully May 29 01:40:25.538: FAIL: some pods failed to complete within 5m0s Unexpected error: <*errors.errorString | 0xc0002c4200>: { s: "timed out waiting for the condition", } timed out waiting for the condition occurred Full Stack Trace k8s.io/kubernetes/test/e2e/storage.glob..func20.6.3() /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:610 +0x42a k8s.io/kubernetes/test/e2e.RunE2ETests(0xc002c43200) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x345 k8s.io/kubernetes/test/e2e.TestE2E(0xc002c43200) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:145 +0x2b testing.tRunner(0xc002c43200, 0x4de5140) /usr/local/go/src/testing/testing.go:1123 +0xef created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1168 +0x2b3 May 29 
May 29 01:40:25.539: INFO: Deleting pod pod-34dc51e6-50d7-4027-82aa-f4195c47036e
May 29 01:40:25.545: INFO: Deleting PersistentVolumeClaim "pvc-w4kzq"
May 29 01:40:25.550: INFO: Deleting PersistentVolumeClaim "pvc-zc65l"
May 29 01:40:25.554: INFO: Deleting PersistentVolumeClaim "pvc-ct7gz"
May 29 01:40:25.558: INFO: Deleting pod pod-1ee3a3ea-40f6-4c4b-adfd-89acd87c1ece
May 29 01:40:25.562: INFO: Deleting PersistentVolumeClaim "pvc-r7kp6"
May 29 01:40:25.566: INFO: Deleting PersistentVolumeClaim "pvc-psscd"
May 29 01:40:25.570: INFO: Deleting PersistentVolumeClaim "pvc-r7h9j"
May 29 01:40:25.574: INFO: Deleting pod pod-7c583c8b-0277-449d-aef3-20d42f43d89a
May 29 01:40:25.578: INFO: Deleting PersistentVolumeClaim "pvc-fc422"
May 29 01:40:25.581: INFO: Deleting PersistentVolumeClaim "pvc-kvcn2"
May 29 01:40:25.584: INFO: Deleting PersistentVolumeClaim "pvc-5cx7l"
May 29 01:40:25.588: INFO: Deleting pod pod-9acf1471-4f88-4fb9-aa6c-8cabcdf71a98
May 29 01:40:25.593: INFO: Deleting PersistentVolumeClaim "pvc-s4swb"
May 29 01:40:25.596: INFO: Deleting PersistentVolumeClaim "pvc-vmcpb"
May 29 01:40:25.600: INFO: Deleting PersistentVolumeClaim "pvc-mfhjt"
May 29 01:40:25.603: INFO: Deleting pod pod-28697259-b113-4ff6-9f1a-59d6207ec6e8
May 29 01:40:25.608: INFO: Deleting PersistentVolumeClaim "pvc-dtwxz"
May 29 01:40:25.611: INFO: Deleting PersistentVolumeClaim "pvc-xmsb2"
May 29 01:40:25.615: INFO: Deleting PersistentVolumeClaim "pvc-79567"
May 29 01:40:25.619: INFO: Deleting pod pod-d6a0d198-2e6c-457f-8ad0-69a41b306133
May 29 01:40:25.623: INFO: Deleting PersistentVolumeClaim "pvc-hfg8s"
May 29 01:40:25.626: INFO: Deleting PersistentVolumeClaim "pvc-fv97m"
May 29 01:40:25.630: INFO: Deleting PersistentVolumeClaim "pvc-4w7t2"
May 29 01:40:25.633: INFO: Deleting pod pod-610af462-321c-4ced-a88f-27eedf1a7e17
May 29 01:40:25.642: INFO: Deleting PersistentVolumeClaim "pvc-pp6vd"
May 29 01:40:25.646: INFO: Deleting PersistentVolumeClaim "pvc-rvb9j"
May 29 01:40:25.649: INFO: Deleting PersistentVolumeClaim "pvc-vz4b4"
[AfterEach] Stress with local volumes [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:505
STEP: Stop and wait for recycle goroutine to finish
STEP: Clean all PVs
STEP: Cleaning up 10 local volumes on node "node1"
STEP: Cleaning up PVC and PV
May 29 01:40:25.654: INFO: pvc is nil
May 29 01:40:25.654: INFO: Deleting PersistentVolume "local-pv4xdg9"
STEP: Cleaning up PVC and PV
May 29 01:40:25.657: INFO: pvc is nil
May 29 01:40:25.657: INFO: Deleting PersistentVolume "local-pvzxhs2"
STEP: Cleaning up PVC and PV
May 29 01:40:25.661: INFO: pvc is nil
May 29 01:40:25.661: INFO: Deleting PersistentVolume "local-pvnqxkp"
STEP: Cleaning up PVC and PV
May 29 01:40:25.664: INFO: pvc is nil
May 29 01:40:25.664: INFO: Deleting PersistentVolume "local-pvxtng6"
STEP: Cleaning up PVC and PV
May 29 01:40:25.667: INFO: pvc is nil
May 29 01:40:25.667: INFO: Deleting PersistentVolume "local-pvghpfm"
STEP: Cleaning up PVC and PV
May 29 01:40:25.671: INFO: pvc is nil
May 29 01:40:25.671: INFO: Deleting PersistentVolume "local-pvs7m8s"
STEP: Cleaning up PVC and PV
May 29 01:40:25.675: INFO: pvc is nil
May 29 01:40:25.675: INFO: Deleting PersistentVolume "local-pv67hcb"
STEP: Cleaning up PVC and PV
May 29 01:40:25.678: INFO: pvc is nil
May 29 01:40:25.678: INFO: Deleting PersistentVolume "local-pvqr5p4"
STEP: Cleaning up PVC and PV
May 29 01:40:25.682: INFO: pvc is nil
May 29 01:40:25.682: INFO: Deleting PersistentVolume "local-pv4pp6l"
STEP: Cleaning up PVC and PV
May 29 01:40:25.685: INFO: pvc is nil
May 29 01:40:25.685: INFO: Deleting PersistentVolume "local-pv455xl"
STEP: Unmount tmpfs mount point on node "node1" at path "/tmp/local-volume-test-c9e0a4b5-ba17-44e2-86e1-31d9a91318c1"
May 29 01:40:25.689: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-c9e0a4b5-ba17-44e2-86e1-31d9a91318c1"] Namespace:persistent-local-volumes-test-6967 PodName:hostexec-node1-clnxk ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true}
May 29 01:40:25.689: INFO: >>> kubeConfig: /root/.kube/config
STEP: Removing the test directory
May 29 01:40:25.828: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-c9e0a4b5-ba17-44e2-86e1-31d9a91318c1] Namespace:persistent-local-volumes-test-6967 PodName:hostexec-node1-clnxk ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true}
May 29 01:40:25.828: INFO: >>> kubeConfig: /root/.kube/config
STEP: Unmount tmpfs mount point on node "node1" at path "/tmp/local-volume-test-36dd74d2-6ec5-4dce-b6b3-ae7df1d43959"
May 29 01:40:25.949: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-36dd74d2-6ec5-4dce-b6b3-ae7df1d43959"] Namespace:persistent-local-volumes-test-6967 PodName:hostexec-node1-clnxk ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true}
May 29 01:40:25.949: INFO: >>> kubeConfig: /root/.kube/config
STEP: Removing the test directory
May 29 01:40:26.077: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-36dd74d2-6ec5-4dce-b6b3-ae7df1d43959] Namespace:persistent-local-volumes-test-6967 PodName:hostexec-node1-clnxk ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true}
May 29 01:40:26.077: INFO: >>> kubeConfig: /root/.kube/config
STEP: Unmount tmpfs mount point on node "node1" at path "/tmp/local-volume-test-bd483e73-00fb-476b-a7ba-5788fecbbd13"
May 29 01:40:26.187: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-bd483e73-00fb-476b-a7ba-5788fecbbd13"] Namespace:persistent-local-volumes-test-6967 PodName:hostexec-node1-clnxk ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true}
May 29 01:40:26.187: INFO: >>> kubeConfig: /root/.kube/config
STEP: Removing the test directory
May 29 01:40:26.320: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-bd483e73-00fb-476b-a7ba-5788fecbbd13] Namespace:persistent-local-volumes-test-6967 PodName:hostexec-node1-clnxk ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true}
May 29 01:40:26.320: INFO: >>> kubeConfig: /root/.kube/config
STEP: Unmount tmpfs mount point on node "node1" at path "/tmp/local-volume-test-579c107d-656a-4cc1-ac1d-4b2cad5969fd"
May 29 01:40:26.604: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-579c107d-656a-4cc1-ac1d-4b2cad5969fd"] Namespace:persistent-local-volumes-test-6967 PodName:hostexec-node1-clnxk ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true}
May 29 01:40:26.604: INFO: >>> kubeConfig: /root/.kube/config
STEP: Removing the test directory
May 29 01:40:26.986: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-579c107d-656a-4cc1-ac1d-4b2cad5969fd] Namespace:persistent-local-volumes-test-6967 PodName:hostexec-node1-clnxk ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true}
May 29 01:40:26.986: INFO: >>> kubeConfig: /root/.kube/config
STEP: Unmount tmpfs mount point on node "node1" at path "/tmp/local-volume-test-af305ac0-8c51-4c36-a5f6-092b5866899e"
May 29 01:40:27.157: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-af305ac0-8c51-4c36-a5f6-092b5866899e"] Namespace:persistent-local-volumes-test-6967 PodName:hostexec-node1-clnxk ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true}
May 29 01:40:27.157: INFO: >>> kubeConfig: /root/.kube/config
STEP: Removing the test directory
May 29 01:40:27.356: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-af305ac0-8c51-4c36-a5f6-092b5866899e] Namespace:persistent-local-volumes-test-6967 PodName:hostexec-node1-clnxk ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true}
May 29 01:40:27.356: INFO: >>> kubeConfig: /root/.kube/config
STEP: Unmount tmpfs mount point on node "node1" at path "/tmp/local-volume-test-7280077e-a108-4067-90c7-7d118bf73404"
May 29 01:40:27.500: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-7280077e-a108-4067-90c7-7d118bf73404"] Namespace:persistent-local-volumes-test-6967 PodName:hostexec-node1-clnxk ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true}
May 29 01:40:27.500: INFO: >>> kubeConfig: /root/.kube/config
STEP: Removing the test directory
May 29 01:40:27.874: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-7280077e-a108-4067-90c7-7d118bf73404] Namespace:persistent-local-volumes-test-6967 PodName:hostexec-node1-clnxk ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true}
May 29 01:40:27.874: INFO: >>> kubeConfig: /root/.kube/config
STEP: Unmount tmpfs mount point on node "node1" at path "/tmp/local-volume-test-27748a1e-3596-47f4-9599-54c5b2a0a87a"
May 29 01:40:28.100: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-27748a1e-3596-47f4-9599-54c5b2a0a87a"] Namespace:persistent-local-volumes-test-6967 PodName:hostexec-node1-clnxk ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true}
May 29 01:40:28.100: INFO: >>> kubeConfig: /root/.kube/config
STEP: Removing the test directory
May 29 01:40:28.366: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-27748a1e-3596-47f4-9599-54c5b2a0a87a] Namespace:persistent-local-volumes-test-6967 PodName:hostexec-node1-clnxk ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true}
May 29 01:40:28.366: INFO: >>> kubeConfig: /root/.kube/config
STEP: Unmount tmpfs mount point on node "node1" at path "/tmp/local-volume-test-631df74f-b2ec-4377-946b-af7cc079a096"
May 29 01:40:28.477: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-631df74f-b2ec-4377-946b-af7cc079a096"] Namespace:persistent-local-volumes-test-6967 PodName:hostexec-node1-clnxk ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true}
May 29 01:40:28.477: INFO: >>> kubeConfig: /root/.kube/config
STEP: Removing the test directory
May 29 01:40:28.600: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-631df74f-b2ec-4377-946b-af7cc079a096] Namespace:persistent-local-volumes-test-6967 PodName:hostexec-node1-clnxk ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true}
May 29 01:40:28.600: INFO: >>> kubeConfig: /root/.kube/config
STEP: Unmount tmpfs mount point on node "node1" at path "/tmp/local-volume-test-dd2df023-2bf7-4028-8a28-1a53aaeba0ea"
May 29 01:40:28.705: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-dd2df023-2bf7-4028-8a28-1a53aaeba0ea"] Namespace:persistent-local-volumes-test-6967 PodName:hostexec-node1-clnxk ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true}
May 29 01:40:28.706: INFO: >>> kubeConfig: /root/.kube/config
STEP: Removing the test directory
May 29 01:40:28.855: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-dd2df023-2bf7-4028-8a28-1a53aaeba0ea] Namespace:persistent-local-volumes-test-6967 PodName:hostexec-node1-clnxk ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true}
May 29 01:40:28.855: INFO: >>> kubeConfig: /root/.kube/config
STEP: Unmount tmpfs mount point on node "node1" at path "/tmp/local-volume-test-67f4f630-7684-46d6-8932-4caf5b9328c8"
May 29 01:40:28.966: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-67f4f630-7684-46d6-8932-4caf5b9328c8"] Namespace:persistent-local-volumes-test-6967 PodName:hostexec-node1-clnxk ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true}
May 29 01:40:28.966: INFO: >>> kubeConfig: /root/.kube/config
STEP: Removing the test directory
May 29 01:40:29.086: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-67f4f630-7684-46d6-8932-4caf5b9328c8] Namespace:persistent-local-volumes-test-6967 PodName:hostexec-node1-clnxk ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true}
May 29 01:40:29.086: INFO: >>> kubeConfig: /root/.kube/config
STEP: Cleaning up 10 local volumes on node "node2"
STEP: Cleaning up PVC and PV
May 29 01:40:29.225: INFO: pvc is nil
May 29 01:40:29.225: INFO: Deleting PersistentVolume "local-pvngdgf"
STEP: Cleaning up PVC and PV
May 29 01:40:29.231: INFO: pvc is nil
May 29 01:40:29.231: INFO: Deleting PersistentVolume "local-pvcpfpp"
STEP: Cleaning up PVC and PV
May 29 01:40:29.234: INFO: pvc is nil
May 29 01:40:29.234: INFO: Deleting PersistentVolume "local-pvqmtk5"
STEP: Cleaning up PVC and PV
May 29 01:40:29.238: INFO: pvc is nil
May 29 01:40:29.238: INFO: Deleting PersistentVolume "local-pvz42xl"
STEP: Cleaning up PVC and PV
May 29 01:40:29.241: INFO: pvc is nil
May 29 01:40:29.241: INFO: Deleting PersistentVolume "local-pvjqw7l"
STEP: Cleaning up PVC and PV
May 29 01:40:29.245: INFO: pvc is nil
PersistentVolume "local-pvxkg2t" STEP: Cleaning up PVC and PV May 29 01:40:29.249: INFO: pvc is nil May 29 01:40:29.249: INFO: Deleting PersistentVolume "local-pv5r9sd" STEP: Cleaning up PVC and PV May 29 01:40:29.252: INFO: pvc is nil May 29 01:40:29.252: INFO: Deleting PersistentVolume "local-pvvxgrn" STEP: Cleaning up PVC and PV May 29 01:40:29.255: INFO: pvc is nil May 29 01:40:29.255: INFO: Deleting PersistentVolume "local-pvfmdw5" STEP: Cleaning up PVC and PV May 29 01:40:29.259: INFO: pvc is nil May 29 01:40:29.259: INFO: Deleting PersistentVolume "local-pvsxkkp" STEP: Unmount tmpfs mount point on node "node2" at path "/tmp/local-volume-test-e7fd2c2b-36a3-47d9-9730-4553faeeedde" May 29 01:40:29.262: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-e7fd2c2b-36a3-47d9-9730-4553faeeedde"] Namespace:persistent-local-volumes-test-6967 PodName:hostexec-node2-424x9 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true} May 29 01:40:29.262: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory May 29 01:40:29.386: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-e7fd2c2b-36a3-47d9-9730-4553faeeedde] Namespace:persistent-local-volumes-test-6967 PodName:hostexec-node2-424x9 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true} May 29 01:40:29.386: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "node2" at path "/tmp/local-volume-test-9c415ac8-8a12-4bf7-a019-437132d4d98f" May 29 01:40:29.490: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-9c415ac8-8a12-4bf7-a019-437132d4d98f"] Namespace:persistent-local-volumes-test-6967 PodName:hostexec-node2-424x9 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true} May 29 01:40:29.490: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory May 29 01:40:29.597: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-9c415ac8-8a12-4bf7-a019-437132d4d98f] Namespace:persistent-local-volumes-test-6967 PodName:hostexec-node2-424x9 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true} May 29 01:40:29.598: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "node2" at path "/tmp/local-volume-test-009f3f99-0a9d-454c-8365-d000b2086873" May 29 01:40:29.717: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-009f3f99-0a9d-454c-8365-d000b2086873"] Namespace:persistent-local-volumes-test-6967 PodName:hostexec-node2-424x9 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true} May 29 01:40:29.717: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory May 29 01:40:29.828: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-009f3f99-0a9d-454c-8365-d000b2086873] Namespace:persistent-local-volumes-test-6967 PodName:hostexec-node2-424x9 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true} May 29 01:40:29.829: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "node2" at path 
"/tmp/local-volume-test-0d4ec059-c6a4-47e7-b4a4-1309b42c6652" May 29 01:40:29.933: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-0d4ec059-c6a4-47e7-b4a4-1309b42c6652"] Namespace:persistent-local-volumes-test-6967 PodName:hostexec-node2-424x9 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true} May 29 01:40:29.933: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory May 29 01:40:30.048: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-0d4ec059-c6a4-47e7-b4a4-1309b42c6652] Namespace:persistent-local-volumes-test-6967 PodName:hostexec-node2-424x9 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true} May 29 01:40:30.048: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "node2" at path "/tmp/local-volume-test-831fe72e-9b72-4651-a0c6-54fd24d43430" May 29 01:40:30.160: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-831fe72e-9b72-4651-a0c6-54fd24d43430"] Namespace:persistent-local-volumes-test-6967 PodName:hostexec-node2-424x9 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true} May 29 01:40:30.160: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory May 29 01:40:30.289: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-831fe72e-9b72-4651-a0c6-54fd24d43430] Namespace:persistent-local-volumes-test-6967 PodName:hostexec-node2-424x9 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true} May 29 01:40:30.289: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "node2" at path "/tmp/local-volume-test-6790c528-7e1f-40f0-b036-97cbcda54545" May 29 01:40:30.392: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-6790c528-7e1f-40f0-b036-97cbcda54545"] Namespace:persistent-local-volumes-test-6967 PodName:hostexec-node2-424x9 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true} May 29 01:40:30.392: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory May 29 01:40:30.507: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-6790c528-7e1f-40f0-b036-97cbcda54545] Namespace:persistent-local-volumes-test-6967 PodName:hostexec-node2-424x9 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true} May 29 01:40:30.507: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "node2" at path "/tmp/local-volume-test-4014d660-253c-4c8a-857c-7c3bf2c3a4da" May 29 01:40:30.610: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-4014d660-253c-4c8a-857c-7c3bf2c3a4da"] Namespace:persistent-local-volumes-test-6967 PodName:hostexec-node2-424x9 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true} May 29 01:40:30.610: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory May 29 01:40:30.724: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r 
/tmp/local-volume-test-4014d660-253c-4c8a-857c-7c3bf2c3a4da] Namespace:persistent-local-volumes-test-6967 PodName:hostexec-node2-424x9 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true} May 29 01:40:30.724: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "node2" at path "/tmp/local-volume-test-dc496239-2c55-42cb-ad08-925c1679f1c5" May 29 01:40:30.840: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-dc496239-2c55-42cb-ad08-925c1679f1c5"] Namespace:persistent-local-volumes-test-6967 PodName:hostexec-node2-424x9 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true} May 29 01:40:30.840: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory May 29 01:40:30.949: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-dc496239-2c55-42cb-ad08-925c1679f1c5] Namespace:persistent-local-volumes-test-6967 PodName:hostexec-node2-424x9 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true} May 29 01:40:30.950: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "node2" at path "/tmp/local-volume-test-7d6109f0-019c-4b95-93ee-5664cc1b02b6" May 29 01:40:31.052: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-7d6109f0-019c-4b95-93ee-5664cc1b02b6"] Namespace:persistent-local-volumes-test-6967 PodName:hostexec-node2-424x9 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true} May 29 01:40:31.052: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory May 29 01:40:31.163: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-7d6109f0-019c-4b95-93ee-5664cc1b02b6] Namespace:persistent-local-volumes-test-6967 PodName:hostexec-node2-424x9 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true} May 29 01:40:31.163: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "node2" at path "/tmp/local-volume-test-9801db33-dade-474b-a1b7-7cb334914676" May 29 01:40:31.275: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-9801db33-dade-474b-a1b7-7cb334914676"] Namespace:persistent-local-volumes-test-6967 PodName:hostexec-node2-424x9 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true} May 29 01:40:31.275: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory May 29 01:40:31.394: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-9801db33-dade-474b-a1b7-7cb334914676] Namespace:persistent-local-volumes-test-6967 PodName:hostexec-node2-424x9 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true} May 29 01:40:31.394: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 STEP: Collecting events from namespace "persistent-local-volumes-test-6967". STEP: Found 75 events. 
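On failure the framework dumps every event in the test namespace, which is what the 75 "At <time> - event for <object>" lines below are. Listing the same events with plain client-go looks roughly like this (a sketch in the suite's style, not the framework's actual dump code; the package and function names are illustrative):

package e2esketch

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// dumpEvents prints every event in the namespace, in roughly the
// "At <time> - event for <object>: {<source>} <reason>: <message>" form below.
func dumpEvents(cs kubernetes.Interface, ns string) error {
	events, err := cs.CoreV1().Events(ns).List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		return err
	}
	fmt.Printf("Found %d events.\n", len(events.Items))
	for _, e := range events.Items {
		fmt.Printf("At %v - event for %s: {%s} %s: %s\n",
			e.FirstTimestamp, e.InvolvedObject.Name, e.Source.Component, e.Reason, e.Message)
	}
	return nil
}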
May 29 01:40:31.505: INFO: At 2021-05-29 01:35:14 +0000 UTC - event for hostexec-node1-clnxk: {default-scheduler } Scheduled: Successfully assigned persistent-local-volumes-test-6967/hostexec-node1-clnxk to node1
May 29 01:40:31.505: INFO: At 2021-05-29 01:35:15 +0000 UTC - event for hostexec-node1-clnxk: {kubelet node1} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.20"
May 29 01:40:31.505: INFO: At 2021-05-29 01:35:15 +0000 UTC - event for hostexec-node1-clnxk: {kubelet node1} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.20" in 507.823854ms
May 29 01:40:31.505: INFO: At 2021-05-29 01:35:16 +0000 UTC - event for hostexec-node1-clnxk: {kubelet node1} Started: Started container agnhost-container
May 29 01:40:31.505: INFO: At 2021-05-29 01:35:16 +0000 UTC - event for hostexec-node1-clnxk: {kubelet node1} Created: Created container agnhost-container
May 29 01:40:31.505: INFO: At 2021-05-29 01:35:20 +0000 UTC - event for hostexec-node2-424x9: {kubelet node2} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.20"
May 29 01:40:31.505: INFO: At 2021-05-29 01:35:20 +0000 UTC - event for hostexec-node2-424x9: {default-scheduler } Scheduled: Successfully assigned persistent-local-volumes-test-6967/hostexec-node2-424x9 to node2
May 29 01:40:31.505: INFO: At 2021-05-29 01:35:21 +0000 UTC - event for hostexec-node2-424x9: {kubelet node2} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.20" in 494.523133ms
May 29 01:40:31.505: INFO: At 2021-05-29 01:35:21 +0000 UTC - event for hostexec-node2-424x9: {kubelet node2} Created: Created container agnhost-container
May 29 01:40:31.505: INFO: At 2021-05-29 01:35:21 +0000 UTC - event for hostexec-node2-424x9: {kubelet node2} Started: Started container agnhost-container
May 29 01:40:31.505: INFO: At 2021-05-29 01:35:25 +0000 UTC - event for pod-610af462-321c-4ced-a88f-27eedf1a7e17: {default-scheduler } FailedScheduling: 0/5 nodes are available: 2 node(s) didn't find available persistent volumes to bind, 3 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.
May 29 01:40:31.505: INFO: At 2021-05-29 01:35:25 +0000 UTC - event for pvc-ct7gz: {persistentvolume-controller } WaitForFirstConsumer: waiting for first consumer to be created before binding
May 29 01:40:31.505: INFO: At 2021-05-29 01:35:25 +0000 UTC - event for pvc-psscd: {persistentvolume-controller } WaitForFirstConsumer: waiting for first consumer to be created before binding
May 29 01:40:31.505: INFO: At 2021-05-29 01:35:25 +0000 UTC - event for pvc-r7h9j: {persistentvolume-controller } WaitForPodScheduled: waiting for pod pod-1ee3a3ea-40f6-4c4b-adfd-89acd87c1ece to be scheduled
May 29 01:40:31.505: INFO: At 2021-05-29 01:35:25 +0000 UTC - event for pvc-r7kp6: {persistentvolume-controller } WaitForFirstConsumer: waiting for first consumer to be created before binding
May 29 01:40:31.505: INFO: At 2021-05-29 01:35:25 +0000 UTC - event for pvc-w4kzq: {persistentvolume-controller } WaitForFirstConsumer: waiting for first consumer to be created before binding
May 29 01:40:31.505: INFO: At 2021-05-29 01:35:25 +0000 UTC - event for pvc-zc65l: {persistentvolume-controller } WaitForFirstConsumer: waiting for first consumer to be created before binding
May 29 01:40:31.505: INFO: At 2021-05-29 01:35:26 +0000 UTC - event for pod-34dc51e6-50d7-4027-82aa-f4195c47036e: {default-scheduler } Scheduled: Successfully assigned persistent-local-volumes-test-6967/pod-34dc51e6-50d7-4027-82aa-f4195c47036e to node2
May 29 01:40:31.505: INFO: At 2021-05-29 01:35:26 +0000 UTC - event for pod-7c583c8b-0277-449d-aef3-20d42f43d89a: {default-scheduler } Scheduled: Successfully assigned persistent-local-volumes-test-6967/pod-7c583c8b-0277-449d-aef3-20d42f43d89a to node2
May 29 01:40:31.505: INFO: At 2021-05-29 01:35:27 +0000 UTC - event for pod-1ee3a3ea-40f6-4c4b-adfd-89acd87c1ece: {default-scheduler } Scheduled: Successfully assigned persistent-local-volumes-test-6967/pod-1ee3a3ea-40f6-4c4b-adfd-89acd87c1ece to node2
May 29 01:40:31.505: INFO: At 2021-05-29 01:35:27 +0000 UTC - event for pod-28697259-b113-4ff6-9f1a-59d6207ec6e8: {default-scheduler } Scheduled: Successfully assigned persistent-local-volumes-test-6967/pod-28697259-b113-4ff6-9f1a-59d6207ec6e8 to node1
May 29 01:40:31.505: INFO: At 2021-05-29 01:35:27 +0000 UTC - event for pod-9acf1471-4f88-4fb9-aa6c-8cabcdf71a98: {default-scheduler } Scheduled: Successfully assigned persistent-local-volumes-test-6967/pod-9acf1471-4f88-4fb9-aa6c-8cabcdf71a98 to node1
May 29 01:40:31.505: INFO: At 2021-05-29 01:35:27 +0000 UTC - event for pvc-pp6vd: {persistentvolume-controller } WaitForPodScheduled: waiting for pod pod-610af462-321c-4ced-a88f-27eedf1a7e17 to be scheduled
May 29 01:40:31.505: INFO: At 2021-05-29 01:35:27 +0000 UTC - event for pvc-rvb9j: {persistentvolume-controller } WaitForPodScheduled: waiting for pod pod-610af462-321c-4ced-a88f-27eedf1a7e17 to be scheduled
May 29 01:40:31.505: INFO: At 2021-05-29 01:35:27 +0000 UTC - event for pvc-vz4b4: {persistentvolume-controller } WaitForPodScheduled: waiting for pod pod-610af462-321c-4ced-a88f-27eedf1a7e17 to be scheduled
May 29 01:40:31.505: INFO: At 2021-05-29 01:35:28 +0000 UTC - event for pod-7c583c8b-0277-449d-aef3-20d42f43d89a: {kubelet node2} Pulling: Pulling image "docker.io/library/busybox:1.29"
May 29 01:40:31.505: INFO: At 2021-05-29 01:35:28 +0000 UTC - event for pod-7c583c8b-0277-449d-aef3-20d42f43d89a: {multus } AddedInterface: Add eth0 [10.244.3.106/24]
Scheduled: Successfully assigned persistent-local-volumes-test-6967/pod-d6a0d198-2e6c-457f-8ad0-69a41b306133 to node1
May 29 01:40:31.505: INFO: At 2021-05-29 01:35:29 +0000 UTC - event for pod-34dc51e6-50d7-4027-82aa-f4195c47036e: {multus } AddedInterface: Add eth0 [10.244.3.107/24]
May 29 01:40:31.505: INFO: At 2021-05-29 01:35:29 +0000 UTC - event for pod-34dc51e6-50d7-4027-82aa-f4195c47036e: {kubelet node2} Pulling: Pulling image "docker.io/library/busybox:1.29"
May 29 01:40:31.505: INFO: At 2021-05-29 01:35:29 +0000 UTC - event for pod-7c583c8b-0277-449d-aef3-20d42f43d89a: {kubelet node2} Failed: Error: ErrImagePull
May 29 01:40:31.505: INFO: At 2021-05-29 01:35:29 +0000 UTC - event for pod-7c583c8b-0277-449d-aef3-20d42f43d89a: {kubelet node2} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
May 29 01:40:31.505: INFO: At 2021-05-29 01:35:29 +0000 UTC - event for pod-9acf1471-4f88-4fb9-aa6c-8cabcdf71a98: {kubelet node1} Pulling: Pulling image "docker.io/library/busybox:1.29"
May 29 01:40:31.505: INFO: At 2021-05-29 01:35:29 +0000 UTC - event for pod-9acf1471-4f88-4fb9-aa6c-8cabcdf71a98: {multus } AddedInterface: Add eth0 [10.244.4.191/24]
May 29 01:40:31.505: INFO: At 2021-05-29 01:35:30 +0000 UTC - event for pod-7c583c8b-0277-449d-aef3-20d42f43d89a: {kubelet node2} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
May 29 01:40:31.505: INFO: At 2021-05-29 01:35:31 +0000 UTC - event for pod-1ee3a3ea-40f6-4c4b-adfd-89acd87c1ece: {kubelet node2} Pulling: Pulling image "docker.io/library/busybox:1.29"
May 29 01:40:31.505: INFO: At 2021-05-29 01:35:31 +0000 UTC - event for pod-1ee3a3ea-40f6-4c4b-adfd-89acd87c1ece: {multus } AddedInterface: Add eth0 [10.244.3.108/24]
May 29 01:40:31.505: INFO: At 2021-05-29 01:35:31 +0000 UTC - event for pod-28697259-b113-4ff6-9f1a-59d6207ec6e8: {kubelet node1} Pulling: Pulling image "docker.io/library/busybox:1.29"
May 29 01:40:31.505: INFO: At 2021-05-29 01:35:31 +0000 UTC - event for pod-28697259-b113-4ff6-9f1a-59d6207ec6e8: {multus } AddedInterface: Add eth0 [10.244.4.192/24]
May 29 01:40:31.505: INFO: At 2021-05-29 01:35:32 +0000 UTC - event for pod-1ee3a3ea-40f6-4c4b-adfd-89acd87c1ece: {kubelet node2} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
May 29 01:40:31.505: INFO: At 2021-05-29 01:35:32 +0000 UTC - event for pod-1ee3a3ea-40f6-4c4b-adfd-89acd87c1ece: {kubelet node2} Failed: Error: ErrImagePull
May 29 01:40:31.505: INFO: At 2021-05-29 01:35:32 +0000 UTC - event for pod-34dc51e6-50d7-4027-82aa-f4195c47036e: {kubelet node2} BackOff: Back-off pulling image "docker.io/library/busybox:1.29"
May 29 01:40:31.505: INFO: At 2021-05-29 01:35:32 +0000 UTC - event for pod-34dc51e6-50d7-4027-82aa-f4195c47036e: {kubelet node2} Failed: Error: ImagePullBackOff
May 29 01:40:31.505: INFO: At 2021-05-29 01:35:32 +0000 UTC - event for pod-34dc51e6-50d7-4027-82aa-f4195c47036e: {kubelet node2} Failed: Error: ErrImagePull
May 29 01:40:31.505: INFO: At 2021-05-29 01:35:32 +0000 UTC - event for pod-34dc51e6-50d7-4027-82aa-f4195c47036e: {kubelet node2} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
May 29 01:40:31.505: INFO: At 2021-05-29 01:35:32 +0000 UTC - event for pod-7c583c8b-0277-449d-aef3-20d42f43d89a: {kubelet node2} BackOff: Back-off pulling image "docker.io/library/busybox:1.29"
May 29 01:40:31.505: INFO: At 2021-05-29 01:35:32 +0000 UTC - event for pod-7c583c8b-0277-449d-aef3-20d42f43d89a: {kubelet node2} Failed: Error: ImagePullBackOff
May 29 01:40:31.505: INFO: At 2021-05-29 01:35:32 +0000 UTC - event for pod-7c583c8b-0277-449d-aef3-20d42f43d89a: {multus } AddedInterface: Add eth0 [10.244.3.109/24]
May 29 01:40:31.505: INFO: At 2021-05-29 01:35:33 +0000 UTC - event for pod-1ee3a3ea-40f6-4c4b-adfd-89acd87c1ece: {kubelet node2} BackOff: Back-off pulling image "docker.io/library/busybox:1.29"
May 29 01:40:31.505: INFO: At 2021-05-29 01:35:33 +0000 UTC - event for pod-1ee3a3ea-40f6-4c4b-adfd-89acd87c1ece: {kubelet node2} Failed: Error: ImagePullBackOff
May 29 01:40:31.505: INFO: At 2021-05-29 01:35:33 +0000 UTC - event for pod-d6a0d198-2e6c-457f-8ad0-69a41b306133: {multus } AddedInterface: Add eth0 [10.244.4.193/24]
May 29 01:40:31.505: INFO: At 2021-05-29 01:35:33 +0000 UTC - event for pod-d6a0d198-2e6c-457f-8ad0-69a41b306133: {kubelet node1} Pulling: Pulling image "docker.io/library/busybox:1.29"
May 29 01:40:31.505: INFO: At 2021-05-29 01:35:35 +0000 UTC - event for pod-7c583c8b-0277-449d-aef3-20d42f43d89a: {multus } AddedInterface: Add eth0 [10.244.3.110/24]
May 29 01:40:31.506: INFO: At 2021-05-29 01:35:35 +0000 UTC - event for pod-9acf1471-4f88-4fb9-aa6c-8cabcdf71a98: {kubelet node1} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
May 29 01:40:31.506: INFO: At 2021-05-29 01:35:35 +0000 UTC - event for pod-9acf1471-4f88-4fb9-aa6c-8cabcdf71a98: {kubelet node1} Failed: Error: ErrImagePull
May 29 01:40:31.506: INFO: At 2021-05-29 01:35:36 +0000 UTC - event for pod-28697259-b113-4ff6-9f1a-59d6207ec6e8: {kubelet node1} Failed: Error: ErrImagePull
May 29 01:40:31.506: INFO: At 2021-05-29 01:35:36 +0000 UTC - event for pod-28697259-b113-4ff6-9f1a-59d6207ec6e8: {kubelet node1} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
May 29 01:40:31.506: INFO: At 2021-05-29 01:35:36 +0000 UTC - event for pod-9acf1471-4f88-4fb9-aa6c-8cabcdf71a98: {kubelet node1} Failed: Error: ImagePullBackOff
May 29 01:40:31.506: INFO: At 2021-05-29 01:35:36 +0000 UTC - event for pod-9acf1471-4f88-4fb9-aa6c-8cabcdf71a98: {kubelet node1} BackOff: Back-off pulling image "docker.io/library/busybox:1.29"
May 29 01:40:31.506: INFO: At 2021-05-29 01:35:37 +0000 UTC - event for pod-28697259-b113-4ff6-9f1a-59d6207ec6e8: {kubelet node1} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
May 29 01:40:31.506: INFO: At 2021-05-29 01:35:37 +0000 UTC - event for pod-d6a0d198-2e6c-457f-8ad0-69a41b306133: {kubelet node1} Failed: Error: ErrImagePull
May 29 01:40:31.506: INFO: At 2021-05-29 01:35:37 +0000 UTC - event for pod-d6a0d198-2e6c-457f-8ad0-69a41b306133: {kubelet node1} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
May 29 01:40:31.506: INFO: At 2021-05-29 01:35:38 +0000 UTC - event for pod-d6a0d198-2e6c-457f-8ad0-69a41b306133: {kubelet node1} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
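[Editor's note: every ErrImagePull/ImagePullBackOff above resolves to the same root cause, Docker Hub's anonymous pull quota returning toomanyrequests for busybox:1.29, not the local-volume logic under test; the pods never become Ready, and the teardown events visible at 01:40:25 below follow. A standard mitigation is to make the pulls authenticated. The client-go sketch below is illustrative only: the secret name and the <dockerhub-user>/<dockerhub-token> values are placeholders, not artifacts of this run. It creates a kubernetes.io/dockerconfigjson secret and attaches it to the namespace's default service account, so pods that do not name an imagePullSecret themselves (like the test pods above) still pull with credentials.

package main

import (
	"context"
	"encoding/base64"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Placeholder credentials; a real run would read these from the environment.
	ns, user, token := "persistent-local-volumes-test-6967", "<dockerhub-user>", "<dockerhub-token>"

	// Docker Hub expects a base64-encoded "user:token" pair inside a
	// .dockerconfigjson payload keyed by the registry endpoint.
	auth := base64.StdEncoding.EncodeToString([]byte(user + ":" + token))
	dockerCfg := fmt.Sprintf(`{"auths":{"https://index.docker.io/v1/":{"auth":%q}}}`, auth)

	sec := &corev1.Secret{
		ObjectMeta: metav1.ObjectMeta{Name: "dockerhub-pull", Namespace: ns},
		Type:       corev1.SecretTypeDockerConfigJson,
		Data:       map[string][]byte{corev1.DockerConfigJsonKey: []byte(dockerCfg)},
	}
	if _, err := cs.CoreV1().Secrets(ns).Create(context.TODO(), sec, metav1.CreateOptions{}); err != nil {
		panic(err)
	}

	// Wire the secret into the default service account so every pod
	// created in the namespace inherits it as an imagePullSecret.
	sa, err := cs.CoreV1().ServiceAccounts(ns).Get(context.TODO(), "default", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	sa.ImagePullSecrets = append(sa.ImagePullSecrets, corev1.LocalObjectReference{Name: sec.Name})
	if _, err := cs.CoreV1().ServiceAccounts(ns).Update(context.TODO(), sa, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
}

The interactive equivalent is kubectl create secret docker-registry dockerhub-pull --docker-username=... --docker-password=... -n <namespace>, followed by patching the service account. Mirroring the images into a private registry (this cluster already runs one at localhost:30500) avoids the quota entirely. End of note; the event dump resumes below.]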
May 29 01:40:31.506: INFO: At 2021-05-29 01:35:39 +0000 UTC - event for pod-28697259-b113-4ff6-9f1a-59d6207ec6e8: {multus } AddedInterface: Add eth0 [10.244.4.194/24]
May 29 01:40:31.506: INFO: At 2021-05-29 01:35:39 +0000 UTC - event for pod-28697259-b113-4ff6-9f1a-59d6207ec6e8: {kubelet node1} Failed: Error: ImagePullBackOff
May 29 01:40:31.506: INFO: At 2021-05-29 01:35:39 +0000 UTC - event for pod-28697259-b113-4ff6-9f1a-59d6207ec6e8: {kubelet node1} BackOff: Back-off pulling image "docker.io/library/busybox:1.29"
May 29 01:40:31.506: INFO: At 2021-05-29 01:35:40 +0000 UTC - event for pod-d6a0d198-2e6c-457f-8ad0-69a41b306133: {kubelet node1} Failed: Error: ImagePullBackOff
May 29 01:40:31.506: INFO: At 2021-05-29 01:35:40 +0000 UTC - event for pod-d6a0d198-2e6c-457f-8ad0-69a41b306133: {kubelet node1} BackOff: Back-off pulling image "docker.io/library/busybox:1.29"
May 29 01:40:31.506: INFO: At 2021-05-29 01:35:40 +0000 UTC - event for pod-d6a0d198-2e6c-457f-8ad0-69a41b306133: {multus } AddedInterface: Add eth0 [10.244.4.195/24]
May 29 01:40:31.506: INFO: At 2021-05-29 01:35:43 +0000 UTC - event for pod-d6a0d198-2e6c-457f-8ad0-69a41b306133: {multus } AddedInterface: Add eth0 [10.244.4.196/24]
May 29 01:40:31.506: INFO: At 2021-05-29 01:36:52 +0000 UTC - event for pod-34dc51e6-50d7-4027-82aa-f4195c47036e: {kubelet node2} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
May 29 01:40:31.506: INFO: At 2021-05-29 01:40:25 +0000 UTC - event for pod-610af462-321c-4ced-a88f-27eedf1a7e17: {default-scheduler } FailedScheduling: skip schedule deleting pod: persistent-local-volumes-test-6967/pod-610af462-321c-4ced-a88f-27eedf1a7e17
May 29 01:40:31.506: INFO: At 2021-05-29 01:40:25 +0000 UTC - event for pvc-pp6vd: {persistentvolume-controller } WaitForFirstConsumer: waiting for first consumer to be created before binding
May 29 01:40:31.506: INFO: At 2021-05-29 01:40:25 +0000 UTC - event for pvc-rvb9j: {persistentvolume-controller } WaitForFirstConsumer: waiting for first consumer to be created before binding
May 29 01:40:31.506: INFO: At 2021-05-29 01:40:25 +0000 UTC - event for pvc-vz4b4: {persistentvolume-controller } WaitForFirstConsumer: waiting for first consumer to be created before binding
May 29 01:40:31.508: INFO: POD NODE PHASE GRACE CONDITIONS
May 29 01:40:31.508: INFO: hostexec-node1-clnxk node1 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-29 01:35:14 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-05-29 01:35:17 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-05-29 01:35:17 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-29 01:35:14 +0000 UTC }]
May 29 01:40:31.508: INFO: hostexec-node2-424x9 node2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-29 01:35:20 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-05-29 01:35:22 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-05-29 01:35:22 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-29 01:35:20 +0000 UTC }]
May 29 01:40:31.509: INFO:
May 29 01:40:31.512: INFO: Logging node info for node master1
May 29 01:40:31.515: INFO: Node Info: &Node{ObjectMeta:{master1 /api/v1/nodes/master1 0aa78934-442a-44a3-8c5c-f827e18dd3d7 163583 0 2021-05-28 19:56:25 +0000 UTC map[beta.kubernetes.io/arch:amd64
beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master1 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"0a:41:0b:9d:15:5a"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.202 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2021-05-28 19:56:25 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {kubeadm Update v1 2021-05-28 19:56:26 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/master":{}}}}} {flanneld Update v1 2021-05-28 19:59:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-05-28 19:59:09 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234763776 0} {} 196518324Ki BinarySI},pods: {{110 0} {} 
110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324599808 0} {} 195629492Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-28 20:02:03 +0000 UTC,LastTransitionTime:2021-05-28 20:02:03 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-29 01:40:28 +0000 UTC,LastTransitionTime:2021-05-28 19:56:25 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-29 01:40:28 +0000 UTC,LastTransitionTime:2021-05-28 19:56:25 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-29 01:40:28 +0000 UTC,LastTransitionTime:2021-05-28 19:56:25 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-29 01:40:28 +0000 UTC,LastTransitionTime:2021-05-28 19:59:09 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.202,},NodeAddress{Type:Hostname,Address:master1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:f7fb2c462cae4b9c990ab2e5c72f7816,SystemUUID:00ACFB60-0631-E711-906E-0017A4403562,BootID:24c06694-15ae-4da4-9143-144d98afdd8d,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:726715672,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f kubernetesui/dashboard-amd64:v2.1.0],SizeBytes:225733746,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0 k8s.gcr.io/e2e-test-images/agnhost:2.20],SizeBytes:113869866,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 
quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:73ca82b4ce829766d4f1f10947c3a338888f876fbed0540dc849c89ff256e90c k8s.gcr.io/coredns:1.7.0],SizeBytes:45227747,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:9d07c391aeb1a9d02eb4343c113ed01825227c70c32b3cae861711f90191b0fd quay.io/coreos/kube-rbac-proxy:v0.4.1],SizeBytes:41317870,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[quay.io/coreos/prometheus-operator@sha256:a54e806fb27d2fb0251da4f3b2a3bb5320759af63a54a755788304775f2384a7 quay.io/coreos/prometheus-operator:v0.40.0],SizeBytes:38238457,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[registry@sha256:1cd9409a311350c3072fe510b52046f104416376c126a479cef9a4dfe692cf57 registry:2.7.0],SizeBytes:24191168,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[localhost:30500/tas-controller@sha256:7f3d9945acdf5d86edd89b2b16fe1f6d63ba8bdb4cab50e66f9bce162df9e388 tas-controller:latest localhost:30500/tas-controller:0.1],SizeBytes:22922439,},ContainerImage{Names:[nginx@sha256:a97eb9ecc708c8aa715ccfb5e9338f5456e4b65575daf304f108301f3b497314 nginx:1.19.2-alpine],SizeBytes:22052669,},ContainerImage{Names:[localhost:30500/tas-extender@sha256:9af6075c93013910787a4e97973da6e0739a86dee1186d7965a5d00b1ac35636 tas-extender:latest localhost:30500/tas-extender:0.1],SizeBytes:21320903,},ContainerImage{Names:[@ :],SizeBytes:5577654,},ContainerImage{Names:[alpine@sha256:c0e9560cda118f9ec63ddefb4a173a2b2a0347082d7dff7dc14272e7841a5b5a alpine:3.12.1],SizeBytes:5573013,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 29 01:40:31.516: INFO: Logging kubelet events for node master1 May 29 01:40:31.520: INFO: Logging pods the kubelet thinks is on node master1 May 29 01:40:31.533: INFO: kube-proxy-994p2 started at 2021-05-28 19:58:24 +0000 UTC (0+1 container statuses recorded) May 29 01:40:31.533: INFO: Container kube-proxy ready: true, restart count 1 May 29 01:40:31.533: INFO: kube-flannel-d54gm started at 2021-05-28 19:59:00 +0000 UTC (1+1 container statuses recorded) 
May 29 01:40:31.533: INFO: Init container install-cni ready: true, restart count 0
May 29 01:40:31.533: INFO: Container kube-flannel ready: true, restart count 2
May 29 01:40:31.533: INFO: node-exporter-9b7pq started at 2021-05-28 20:10:09 +0000 UTC (0+2 container statuses recorded)
May 29 01:40:31.533: INFO: Container kube-rbac-proxy ready: true, restart count 0
May 29 01:40:31.533: INFO: Container node-exporter ready: true, restart count 0
May 29 01:40:31.533: INFO: kube-apiserver-master1 started at 2021-05-28 20:05:21 +0000 UTC (0+1 container statuses recorded)
May 29 01:40:31.533: INFO: Container kube-apiserver ready: true, restart count 0
May 29 01:40:31.533: INFO: kube-controller-manager-master1 started at 2021-05-28 19:57:39 +0000 UTC (0+1 container statuses recorded)
May 29 01:40:31.533: INFO: Container kube-controller-manager ready: true, restart count 2
May 29 01:40:31.533: INFO: kube-multus-ds-amd64-n9j8k started at 2021-05-28 19:59:08 +0000 UTC (0+1 container statuses recorded)
May 29 01:40:31.533: INFO: Container kube-multus ready: true, restart count 1
May 29 01:40:31.533: INFO: docker-registry-docker-registry-56cbc7bc58-rbghz started at 2021-05-28 20:02:55 +0000 UTC (0+2 container statuses recorded)
May 29 01:40:31.533: INFO: Container docker-registry ready: true, restart count 0
May 29 01:40:31.533: INFO: Container nginx ready: true, restart count 0
May 29 01:40:31.533: INFO: prometheus-operator-5bb8cb9d8f-7wdtq started at 2021-05-28 20:10:02 +0000 UTC (0+2 container statuses recorded)
May 29 01:40:31.533: INFO: Container kube-rbac-proxy ready: true, restart count 0
May 29 01:40:31.533: INFO: Container prometheus-operator ready: true, restart count 0
May 29 01:40:31.533: INFO: kube-scheduler-master1 started at 2021-05-28 19:57:39 +0000 UTC (0+1 container statuses recorded)
May 29 01:40:31.533: INFO: Container kube-scheduler ready: true, restart count 0
W0529 01:40:31.545902 22 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
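[Editor's note: the per-node dumps in this section ("Logging pods the kubelet thinks is on node ...") can be reproduced against a live cluster by listing the pods whose spec.nodeName matches the node; the API server supports that field selector for pods. A minimal client-go sketch, assuming the kubeconfig path used by this run; this is not the e2e framework's own helper.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Every pod bound to master1, across all namespaces.
	pods, err := cs.CoreV1().Pods(metav1.NamespaceAll).List(context.TODO(), metav1.ListOptions{
		FieldSelector: "spec.nodeName=master1",
	})
	if err != nil {
		panic(err)
	}

	// Print one line per pod, then one per container status, in the same
	// shape as the "Container ... ready: ..., restart count ..." lines here.
	for _, p := range pods.Items {
		fmt.Printf("%s/%s phase=%s\n", p.Namespace, p.Name, p.Status.Phase)
		for _, st := range p.Status.ContainerStatuses {
			fmt.Printf("  Container %s ready: %v, restart count %d\n", st.Name, st.Ready, st.RestartCount)
		}
	}
}

End of note.]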
May 29 01:40:31.572: INFO: Latency metrics for node master1 May 29 01:40:31.572: INFO: Logging node info for node master2 May 29 01:40:31.575: INFO: Node Info: &Node{ObjectMeta:{master2 /api/v1/nodes/master2 b80f32b6-a396-4f09-a110-345a08d762ee 163472 0 2021-05-28 19:57:04 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master2 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"b2:be:c9:d8:cf:bb"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.203 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/master.version:v0.7.0 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2021-05-28 19:57:04 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {kubeadm Update v1 2021-05-28 19:57:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/master":{}}}}} {flanneld Update v1 2021-05-28 19:59:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-05-28 19:59:09 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}} {nfd-master Update v1 2021-05-28 20:06:05 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/master.version":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234763776 0} {} 196518324Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324599808 0} {} 195629492Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-28 20:00:49 +0000 UTC,LastTransitionTime:2021-05-28 20:00:49 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-29 01:40:22 +0000 UTC,LastTransitionTime:2021-05-28 19:57:04 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-29 01:40:22 +0000 UTC,LastTransitionTime:2021-05-28 19:57:04 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-29 01:40:22 +0000 UTC,LastTransitionTime:2021-05-28 19:57:04 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-29 01:40:22 +0000 UTC,LastTransitionTime:2021-05-28 20:00:43 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.203,},NodeAddress{Type:Hostname,Address:master2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:2746caf91c53460599f165aa716150cd,SystemUUID:00A0DE53-E51D-E711-906E-0017A4403562,BootID:b63b522f-706f-4e28-a104-c73edcd04319,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:726715672,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f kubernetesui/dashboard-amd64:v2.1.0],SizeBytes:225733746,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e 
k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0 k8s.gcr.io/e2e-test-images/agnhost:2.20],SizeBytes:113869866,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[gcr.io/k8s-staging-nfd/node-feature-discovery@sha256:5d116c2c340be665a2c8adc9aca7f91396bd5cbde4add4fdc8dab95d8db43425 gcr.io/k8s-staging-nfd/node-feature-discovery:v0.7.0],SizeBytes:108309584,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:73ca82b4ce829766d4f1f10947c3a338888f876fbed0540dc849c89ff256e90c k8s.gcr.io/coredns:1.7.0],SizeBytes:45227747,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 29 01:40:31.575: INFO: Logging kubelet events for node master2 May 29 01:40:31.580: INFO: Logging pods the kubelet thinks is on node master2 May 29 01:40:31.595: INFO: kube-proxy-jkbl8 started at 2021-05-28 19:58:24 +0000 UTC (0+1 container statuses recorded) May 29 01:40:31.595: INFO: Container kube-proxy ready: true, restart count 1 May 29 01:40:31.595: INFO: dns-autoscaler-5b7b5c9b6f-r797x started at 2021-05-28 19:59:31 +0000 UTC (0+1 container statuses recorded) May 29 01:40:31.595: INFO: Container autoscaler ready: true, restart count 1 May 29 01:40:31.595: INFO: node-exporter-frch9 started at 2021-05-28 20:10:09 +0000 UTC (0+2 container statuses recorded) May 29 01:40:31.595: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 29 01:40:31.595: INFO: Container node-exporter ready: true, restart count 0 May 29 01:40:31.595: INFO: kube-scheduler-master2 started at 2021-05-28 20:05:21 +0000 UTC (0+1 
container statuses recorded) May 29 01:40:31.595: INFO: Container kube-scheduler ready: true, restart count 3 May 29 01:40:31.595: INFO: kube-controller-manager-master2 started at 2021-05-28 20:05:41 +0000 UTC (0+1 container statuses recorded) May 29 01:40:31.595: INFO: Container kube-controller-manager ready: true, restart count 3 May 29 01:40:31.595: INFO: kube-flannel-xvtkj started at 2021-05-28 19:59:00 +0000 UTC (1+1 container statuses recorded) May 29 01:40:31.595: INFO: Init container install-cni ready: true, restart count 0 May 29 01:40:31.595: INFO: Container kube-flannel ready: true, restart count 2 May 29 01:40:31.595: INFO: kube-multus-ds-amd64-qjwcz started at 2021-05-28 19:59:08 +0000 UTC (0+1 container statuses recorded) May 29 01:40:31.595: INFO: Container kube-multus ready: true, restart count 1 May 29 01:40:31.595: INFO: node-feature-discovery-controller-5bf5c49849-n9ncl started at 2021-05-28 20:05:52 +0000 UTC (0+1 container statuses recorded) May 29 01:40:31.595: INFO: Container nfd-controller ready: true, restart count 0 May 29 01:40:31.595: INFO: coredns-7677f9bb54-x2ckq started at 2021-05-29 00:53:57 +0000 UTC (0+1 container statuses recorded) May 29 01:40:31.595: INFO: Container coredns ready: true, restart count 0 May 29 01:40:31.595: INFO: kube-apiserver-master2 started at 2021-05-28 20:05:31 +0000 UTC (0+1 container statuses recorded) May 29 01:40:31.595: INFO: Container kube-apiserver ready: true, restart count 0 W0529 01:40:31.608735 22 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. May 29 01:40:31.637: INFO: Latency metrics for node master2 May 29 01:40:31.637: INFO: Logging node info for node master3 May 29 01:40:31.641: INFO: Node Info: &Node{ObjectMeta:{master3 /api/v1/nodes/master3 301b0b5b-fc42-4c78-adb7-75baf6e0cc7e 163479 0 2021-05-28 19:57:14 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master3 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"52:fa:ab:49:88:02"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.204 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2021-05-28 19:57:14 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {kubeadm Update v1 2021-05-28 19:57:15 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/master":{}}}}} {flanneld Update v1 2021-05-28 19:59:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-05-28 19:59:09 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}},"f:taints":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234767872 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324603904 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-28 20:02:12 +0000 UTC,LastTransitionTime:2021-05-28 20:02:12 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this 
node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-29 01:40:25 +0000 UTC,LastTransitionTime:2021-05-28 19:57:14 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-29 01:40:25 +0000 UTC,LastTransitionTime:2021-05-28 19:57:14 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-29 01:40:25 +0000 UTC,LastTransitionTime:2021-05-28 19:57:14 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-29 01:40:25 +0000 UTC,LastTransitionTime:2021-05-28 19:59:09 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.204,},NodeAddress{Type:Hostname,Address:master3,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:a0cb6c0eb1d842469076fff344213c13,SystemUUID:008B1444-141E-E711-906E-0017A4403562,BootID:c6adcff4-8bf7-40d7-9d14-54b1c6a87bc8,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:726715672,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f kubernetesui/dashboard-amd64:v2.1.0],SizeBytes:225733746,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0 k8s.gcr.io/e2e-test-images/agnhost:2.20],SizeBytes:113869866,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b 
k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:73ca82b4ce829766d4f1f10947c3a338888f876fbed0540dc849c89ff256e90c k8s.gcr.io/coredns:1.7.0],SizeBytes:45227747,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 29 01:40:31.643: INFO: Logging kubelet events for node master3 May 29 01:40:31.647: INFO: Logging pods the kubelet thinks is on node master3 May 29 01:40:31.663: INFO: kube-scheduler-master3 started at 2021-05-28 20:01:23 +0000 UTC (0+1 container statuses recorded) May 29 01:40:31.663: INFO: Container kube-scheduler ready: true, restart count 1 May 29 01:40:31.663: INFO: kube-proxy-t5bh6 started at 2021-05-28 19:58:24 +0000 UTC (0+1 container statuses recorded) May 29 01:40:31.663: INFO: Container kube-proxy ready: true, restart count 1 May 29 01:40:31.663: INFO: kube-multus-ds-amd64-wqgf7 started at 2021-05-28 19:59:08 +0000 UTC (0+1 container statuses recorded) May 29 01:40:31.663: INFO: Container kube-multus ready: true, restart count 1 May 29 01:40:31.663: INFO: coredns-7677f9bb54-sj78s started at 2021-05-29 00:53:57 +0000 UTC (0+1 container statuses recorded) May 29 01:40:31.663: INFO: Container coredns ready: true, restart count 0 May 29 01:40:31.663: INFO: node-exporter-w42s5 started at 2021-05-28 20:10:09 +0000 UTC (0+2 container statuses recorded) May 29 01:40:31.663: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 29 01:40:31.663: INFO: Container node-exporter ready: true, restart count 0 May 29 01:40:31.663: INFO: kube-controller-manager-master3 started at 2021-05-28 20:06:02 +0000 UTC (0+1 container statuses recorded) May 29 01:40:31.663: INFO: Container kube-controller-manager ready: true, restart count 1 May 29 01:40:31.663: INFO: kube-flannel-zrskq started at 2021-05-28 19:59:00 +0000 UTC (1+1 container statuses recorded) May 29 01:40:31.663: INFO: Init container install-cni ready: true, restart count 0 May 29 01:40:31.663: INFO: Container kube-flannel ready: true, restart count 1 May 29 01:40:31.663: INFO: kube-apiserver-master3 started at 2021-05-28 20:05:21 +0000 UTC (0+1 container statuses recorded) May 29 01:40:31.663: INFO: Container kube-apiserver ready: true, restart count 0 W0529 01:40:31.675259 22 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. 
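[Editor's note: each master's Node dump in this section carries Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,...}}, while node1's dump below shows Taints:[]Taint{}; a sweep that counts usable nodes therefore has to decide which taints it tolerates before treating a Ready master as schedulable. A sketch of that filtering idea only, assuming client-go types; the helper name is illustrative, not the framework's.

package nodeutil

import (
	corev1 "k8s.io/api/core/v1"
)

// ReadyAndSchedulable reports whether a node is Ready, not cordoned, and
// carries no NoSchedule taints beyond an explicitly tolerated set, e.g.
// map[string]bool{"node-role.kubernetes.io/master": true} for the
// control-plane nodes logged in this section.
func ReadyAndSchedulable(node *corev1.Node, tolerated map[string]bool) bool {
	ready := false
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
			ready = true
			break
		}
	}
	if !ready || node.Spec.Unschedulable {
		return false
	}
	for _, t := range node.Spec.Taints {
		if t.Effect == corev1.TaintEffectNoSchedule && !tolerated[t.Key] {
			return false
		}
	}
	return true
}

End of note.]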
May 29 01:40:31.712: INFO: Latency metrics for node master3 May 29 01:40:31.712: INFO: Logging node info for node node1 May 29 01:40:31.715: INFO: Node Info: &Node{ObjectMeta:{node1 /api/v1/nodes/node1 43e51cb4-5acb-42b5-8f26-cd5e977f3829 163563 0 2021-05-28 19:58:22 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.25.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:amd64 kubernetes.io/hostname:node1 kubernetes.io/os:linux] map[csi.volume.kubernetes.io/nodeid:{"csi-mock-csi-mock-volumes-2661":"csi-mock-csi-mock-volumes-2661","csi-mock-csi-mock-volumes-2991":"csi-mock-csi-mock-volumes-2991","csi-mock-csi-mock-volumes-4403":"csi-mock-csi-mock-volumes-4403","csi-mock-csi-mock-volumes-5716":"csi-mock-csi-mock-volumes-5716","csi-mock-csi-mock-volumes-617":"csi-mock-csi-mock-volumes-617","csi-mock-csi-mock-volumes-6185":"csi-mock-csi-mock-volumes-6185","csi-mock-csi-mock-volumes-6201":"csi-mock-csi-mock-volumes-6201"} flannel.alpha.coreos.com/backend-data:{"VtepMAC":"d2:9d:b7:73:58:07"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.207 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: 
nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-hardware_multithreading,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.7.0 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-05-28 19:58:22 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2021-05-28 19:59:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2021-05-28 20:06:07 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage
-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2021-05-28 20:08:35 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {e2e.test Update v1 2021-05-29 01:14:54 +0000 UTC FieldsV1 {"f:status":{"f:capacity":{"f:example.com/fakePTSRes":{},"f:example.com/fakecpu":{}}}}} {kube-controller-manager Update v1 2021-05-29 01:21:48 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.4.0/24\"":{}}}}} {kubelet Update v1 2021-05-29 01:22:04 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:csi.volume.kubernetes.io/nodeid":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:io.kubernetes.storage.mock/node":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cmk.intel.com/exclusive-cores":{},"f:cpu":{},"f:ephemeral-storage":{},"f:example.com/fakePTSRes":{},"f:example.com/fakecpu":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:intel.com/intel_sriov_netdevice":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:intel.com/intel_sriov_netdevice":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.4.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},example.com/fakePTSRes: {{10 0} {} 10 DecimalSI},example.com/fakecpu: {{1 3} {} 1k DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269628928 0} {} 196552372Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},example.com/fakePTSRes: {{10 0} {} 10 
DecimalSI},example.com/fakecpu: {{1 3} {} 1k DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884628480 0} {} 174692020Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-28 20:01:58 +0000 UTC,LastTransitionTime:2021-05-28 20:01:58 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-29 01:40:27 +0000 UTC,LastTransitionTime:2021-05-28 19:58:22 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-29 01:40:27 +0000 UTC,LastTransitionTime:2021-05-28 19:58:22 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-29 01:40:27 +0000 UTC,LastTransitionTime:2021-05-28 19:58:22 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-29 01:40:27 +0000 UTC,LastTransitionTime:2021-05-28 19:59:04 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.207,},NodeAddress{Type:Hostname,Address:node1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:abe6e95dbfa24a9abd34d8fa2abe7655,SystemUUID:00CDA902-D022-E711-906E-0017A4403562,BootID:17719d1f-7df5-4d95-81f3-7d3ac5110ba2,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[localhost:30500/barometer-collectd@sha256:d731a0fc49b9ad6125b8d5dcb29da2b60bc940b48eacb6f5a9eb2a55c10598db localhost:30500/barometer-collectd:stable],SizeBytes:1464395058,},ContainerImage{Names:[@ :],SizeBytes:1002495332,},ContainerImage{Names:[opnfv/barometer-collectd@sha256:ed5c574f653e2a39e784ff322033a2319aafde7366c803a88f20f7a2a8bc1efb opnfv/barometer-collectd:stable],SizeBytes:825413035,},ContainerImage{Names:[localhost:30500/cmk@sha256:97953d03767e4c2eb5d156394aeaf4bb0b74f3fd1ad08c303cb7561e272a00ff cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:726715672,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[golang@sha256:aa24a0a337084e0747e7c8e97e1131270ae38150e691314f1fa19f4b2f9093c0 golang:alpine3.12],SizeBytes:301156062,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2 k8s.gcr.io/etcd:3.4.13-0],SizeBytes:253392289,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f kubernetesui/dashboard-amd64:v2.1.0],SizeBytes:225733746,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb 
gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:195659796,},ContainerImage{Names:[grafana/grafana@sha256:89304bc2335f4976618548d7b93d165ed67369d3a051d2f627fc4e0aa3d0aff1 grafana/grafana:7.1.0],SizeBytes:179601493,},ContainerImage{Names:[quay.io/prometheus/prometheus@sha256:d4ba4dd1a9ebb90916d0bfed3c204adcb118ed24546bf8dd2e6b30fc0fd2009e quay.io/prometheus/prometheus:v2.20.0],SizeBytes:144886595,},ContainerImage{Names:[nginx@sha256:df13abe416e37eb3db4722840dd479b00ba193ac6606e7902331dcea50f4f1f2 nginx:1.19],SizeBytes:133122553,},ContainerImage{Names:[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0 k8s.gcr.io/e2e-test-images/agnhost:2.20],SizeBytes:113869866,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[gcr.io/k8s-staging-nfd/node-feature-discovery@sha256:5d116c2c340be665a2c8adc9aca7f91396bd5cbde4add4fdc8dab95d8db43425 gcr.io/k8s-staging-nfd/node-feature-discovery:v0.7.0],SizeBytes:108309584,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:ff02aacd9766d597883fabafc7ad604c719a57611db1bcc1564c69a45b000a55 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.17],SizeBytes:60684726,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[directxman12/k8s-prometheus-adapter-amd64@sha256:b63dc612e3cb73f79d2401a4516f794f9f0a83002600ca72e675e41baecff437 directxman12/k8s-prometheus-adapter-amd64:v0.6.0],SizeBytes:53267842,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:78e3393f5fd5ff6c1e5dada2478cfa456fb7164929e573cf9a87bf6532730679 k8s.gcr.io/sig-storage/csi-provisioner:v1.6.0],SizeBytes:48281550,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:2ffa647e8107cfd39e5f464e738dce014c9f5e51b108da36c3ab621048d0bbab k8s.gcr.io/sig-storage/csi-attacher:v2.2.0],SizeBytes:46131354,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:6c6a0332693a7c456378f6abd2bb40611826c1e1a733cadbdae2daab3125b71c k8s.gcr.io/sig-storage/csi-resizer:v0.5.0],SizeBytes:46041582,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:2bec7a43da8efe70cb7cb14020a6b10aecd02c87e020d394de84e6807e2cf620 nfvpe/sriov-device-plugin:latest 
localhost:30500/sriov-device-plugin:v3.3.1],SizeBytes:44392623,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[localhost:30500/tas-controller@sha256:7f3d9945acdf5d86edd89b2b16fe1f6d63ba8bdb4cab50e66f9bce162df9e388 localhost:30500/tas-controller:0.1],SizeBytes:22922439,},ContainerImage{Names:[localhost:30500/tas-extender@sha256:9af6075c93013910787a4e97973da6e0739a86dee1186d7965a5d00b1ac35636 localhost:30500/tas-extender:0.1],SizeBytes:21320903,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:0b4273abac4c241fa3d70aaf52b0d79a133d2737081f4a5c5dea4949f6c45dc3 k8s.gcr.io/sig-storage/mock-driver:v3.1.0],SizeBytes:18687618,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:9622c6a6dac7499a055a382930f4de82905a3c5735c0753f7094115c9c871309 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v1.3.0],SizeBytes:16322467,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[quay.io/coreos/prometheus-config-reloader@sha256:c679a143b24b7731ad1577a9865aa3805426cbf1b25e30807b951dff68466ffd quay.io/coreos/prometheus-config-reloader:v0.40.0],SizeBytes:10131705,},ContainerImage{Names:[jimmidyson/configmap-reload@sha256:d107c7a235c266273b1c3502a391fec374430e5625539403d0de797fa9c556a2 jimmidyson/configmap-reload:v0.3.0],SizeBytes:9700438,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[alpine@sha256:36553b10a4947067b9fbb7d532951066293a68eae893beba1d9235f7d11a20ad alpine:3.12],SizeBytes:5581415,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:4753501,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 29 01:40:31.715: INFO: Logging kubelet events for node node1 May 29 01:40:31.718: INFO: Logging pods the kubelet thinks is on node node1 May 29 01:40:31.738: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-zk2pt started at 2021-05-28 20:06:47 +0000 UTC (0+1 container statuses recorded) May 29 01:40:31.739: INFO: Container kube-sriovdp ready: 
true, restart count 0 May 29 01:40:31.739: INFO: cmk-jhzjr started at 2021-05-28 20:09:15 +0000 UTC (0+2 container statuses recorded) May 29 01:40:31.739: INFO: Container nodereport ready: true, restart count 0 May 29 01:40:31.739: INFO: Container reconcile ready: true, restart count 0 May 29 01:40:31.739: INFO: node-exporter-khdpg started at 2021-05-28 20:10:09 +0000 UTC (0+2 container statuses recorded) May 29 01:40:31.739: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 29 01:40:31.739: INFO: Container node-exporter ready: true, restart count 0 May 29 01:40:31.739: INFO: kube-flannel-2tjjt started at 2021-05-28 19:59:00 +0000 UTC (1+1 container statuses recorded) May 29 01:40:31.739: INFO: Init container install-cni ready: true, restart count 0 May 29 01:40:31.739: INFO: Container kube-flannel ready: true, restart count 2 May 29 01:40:31.739: INFO: collectd-qw9nd started at 2021-05-28 20:16:29 +0000 UTC (0+3 container statuses recorded) May 29 01:40:31.739: INFO: Container collectd ready: true, restart count 0 May 29 01:40:31.739: INFO: Container collectd-exporter ready: true, restart count 0 May 29 01:40:31.739: INFO: Container rbac-proxy ready: true, restart count 0 May 29 01:40:31.739: INFO: cmk-webhook-6c9d5f8578-kt8bp started at 2021-05-29 00:29:43 +0000 UTC (0+1 container statuses recorded) May 29 01:40:31.739: INFO: Container cmk-webhook ready: true, restart count 0 May 29 01:40:31.739: INFO: kubernetes-dashboard-86c6f9df5b-c5sbq started at 2021-05-28 19:59:33 +0000 UTC (0+1 container statuses recorded) May 29 01:40:31.739: INFO: Container kubernetes-dashboard ready: true, restart count 2 May 29 01:40:31.739: INFO: node-feature-discovery-worker-5x4qg started at 2021-05-28 20:05:52 +0000 UTC (0+1 container statuses recorded) May 29 01:40:31.739: INFO: Container nfd-worker ready: true, restart count 0 May 29 01:40:31.739: INFO: tas-telemetry-aware-scheduling-575ccbc9d4-6wq28 started at 2021-05-29 00:29:43 +0000 UTC (0+2 container statuses recorded) May 29 01:40:31.739: INFO: Container tas-controller ready: true, restart count 0 May 29 01:40:31.739: INFO: Container tas-extender ready: true, restart count 0 May 29 01:40:31.739: INFO: cmk-init-discover-node1-rvqxm started at 2021-05-28 20:08:32 +0000 UTC (0+3 container statuses recorded) May 29 01:40:31.739: INFO: Container discover ready: false, restart count 0 May 29 01:40:31.739: INFO: Container init ready: false, restart count 0 May 29 01:40:31.739: INFO: Container install ready: false, restart count 0 May 29 01:40:31.739: INFO: prometheus-k8s-0 started at 2021-05-28 20:10:26 +0000 UTC (0+5 container statuses recorded) May 29 01:40:31.739: INFO: Container custom-metrics-apiserver ready: true, restart count 0 May 29 01:40:31.739: INFO: Container grafana ready: true, restart count 0 May 29 01:40:31.739: INFO: Container prometheus ready: true, restart count 1 May 29 01:40:31.739: INFO: Container prometheus-config-reloader ready: true, restart count 0 May 29 01:40:31.739: INFO: Container rules-configmap-reloader ready: true, restart count 0 May 29 01:40:31.739: INFO: kube-proxy-lsngv started at 2021-05-28 19:58:24 +0000 UTC (0+1 container statuses recorded) May 29 01:40:31.739: INFO: Container kube-proxy ready: true, restart count 2 May 29 01:40:31.739: INFO: kube-multus-ds-amd64-x7826 started at 2021-05-28 19:59:08 +0000 UTC (0+1 container statuses recorded) May 29 01:40:31.739: INFO: Container kube-multus ready: true, restart count 1 May 29 01:40:31.739: INFO: nginx-proxy-node1 started at 2021-05-28 20:05:21 +0000 
UTC (0+1 container statuses recorded) May 29 01:40:31.739: INFO: Container nginx-proxy ready: true, restart count 1 May 29 01:40:31.739: INFO: kubernetes-metrics-scraper-678c97765c-wblkm started at 2021-05-28 19:59:33 +0000 UTC (0+1 container statuses recorded) May 29 01:40:31.739: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 May 29 01:40:31.739: INFO: hostexec-node1-clnxk started at 2021-05-29 01:35:14 +0000 UTC (0+1 container statuses recorded) May 29 01:40:31.739: INFO: Container agnhost-container ready: true, restart count 0 W0529 01:40:31.749218 22 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. May 29 01:40:31.789: INFO: Latency metrics for node node1 May 29 01:40:31.789: INFO: Logging node info for node node2 May 29 01:40:31.791: INFO: Node Info: &Node{ObjectMeta:{node2 /api/v1/nodes/node2 3cc89580-b568-4c82-bd1f-200d0823da3b 163621 0 2021-05-28 19:58:22 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.25.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node2 kubernetes.io/os:linux] 
map[csi.volume.kubernetes.io/nodeid:{"csi-mock-csi-mock-volumes-1742":"csi-mock-csi-mock-volumes-1742","csi-mock-csi-mock-volumes-3056":"csi-mock-csi-mock-volumes-3056","csi-mock-csi-mock-volumes-3983":"csi-mock-csi-mock-volumes-3983","csi-mock-csi-mock-volumes-4234":"csi-mock-csi-mock-volumes-4234","csi-mock-csi-mock-volumes-4289":"csi-mock-csi-mock-volumes-4289","csi-mock-csi-mock-volumes-6106":"csi-mock-csi-mock-volumes-6106","csi-mock-csi-mock-volumes-6742":"csi-mock-csi-mock-volumes-6742","csi-mock-csi-mock-volumes-7637":"csi-mock-csi-mock-volumes-7637","csi-mock-csi-mock-volumes-7787":"csi-mock-csi-mock-volumes-7787","csi-mock-csi-mock-volumes-8094":"csi-mock-csi-mock-volumes-8094","csi-mock-csi-mock-volumes-9667":"csi-mock-csi-mock-volumes-9667"} flannel.alpha.coreos.com/backend-data:{"VtepMAC":"62:22:2c:ae:14:ae"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.208 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-hardware_multithreading,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.7.0 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-05-28 19:58:22 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2021-05-28 19:59:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2021-05-28 20:06:06 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2021-05-28 20:08:58 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {e2e.test Update v1 2021-05-29 01:15:15 +0000 UTC FieldsV1 {"f:status":{"f:capacity":{"f:example.com/fakePTSRes":{}}}}} {kube-controller-manager Update v1 2021-05-29 01:23:21 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.3.0/24\"":{}}}}} {kubelet Update v1 2021-05-29 01:23:35 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:csi.volume.kubernetes.io/nodeid":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cmk.intel.com/exclusive-cores":{},"f:cpu":{},"f:ephemeral-storage":{},"f:example.com/fakePTSRes":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:intel.com/intel_sriov_netdevice":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:intel.com/intel_sriov_netdevice":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.3.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},example.com/fakePTSRes: {{10 0} {} 10 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269633024 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},example.com/fakePTSRes: {{10 0} {} 10 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884632576 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-28 20:01:05 +0000 UTC,LastTransitionTime:2021-05-28 20:01:05 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-29 01:40:29 +0000 UTC,LastTransitionTime:2021-05-28 19:58:22 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-29 01:40:29 +0000 UTC,LastTransitionTime:2021-05-28 19:58:22 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk 
pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-29 01:40:29 +0000 UTC,LastTransitionTime:2021-05-28 19:58:22 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-29 01:40:29 +0000 UTC,LastTransitionTime:2021-05-28 19:59:04 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.208,},NodeAddress{Type:Hostname,Address:node2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:b2730c4b09814ab9a78e7bc62c820fbb,SystemUUID:80B3CD56-852F-E711-906E-0017A4403562,BootID:f1459072-d21d-46de-a5d9-46ec9349aae0,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[localhost:30500/barometer-collectd@sha256:d731a0fc49b9ad6125b8d5dcb29da2b60bc940b48eacb6f5a9eb2a55c10598db localhost:30500/barometer-collectd:stable],SizeBytes:1464395058,},ContainerImage{Names:[localhost:30500/cmk@sha256:97953d03767e4c2eb5d156394aeaf4bb0b74f3fd1ad08c303cb7561e272a00ff localhost:30500/cmk:v1.5.1],SizeBytes:726715672,},ContainerImage{Names:[cmk:v1.5.1],SizeBytes:726715672,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:195659796,},ContainerImage{Names:[nginx@sha256:df13abe416e37eb3db4722840dd479b00ba193ac6606e7902331dcea50f4f1f2 nginx:1.19],SizeBytes:133122553,},ContainerImage{Names:[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0 k8s.gcr.io/e2e-test-images/agnhost:2.20],SizeBytes:113869866,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[gcr.io/k8s-staging-nfd/node-feature-discovery@sha256:5d116c2c340be665a2c8adc9aca7f91396bd5cbde4add4fdc8dab95d8db43425 gcr.io/k8s-staging-nfd/node-feature-discovery:v0.7.0],SizeBytes:108309584,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 
quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:78e3393f5fd5ff6c1e5dada2478cfa456fb7164929e573cf9a87bf6532730679 k8s.gcr.io/sig-storage/csi-provisioner:v1.6.0],SizeBytes:48281550,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:2ffa647e8107cfd39e5f464e738dce014c9f5e51b108da36c3ab621048d0bbab k8s.gcr.io/sig-storage/csi-attacher:v2.2.0],SizeBytes:46131354,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:6c6a0332693a7c456378f6abd2bb40611826c1e1a733cadbdae2daab3125b71c k8s.gcr.io/sig-storage/csi-resizer:v0.5.0],SizeBytes:46041582,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:2bec7a43da8efe70cb7cb14020a6b10aecd02c87e020d394de84e6807e2cf620 localhost:30500/sriov-device-plugin:v3.3.1],SizeBytes:44392623,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonroot@sha256:4bd7ae247de5c988700233c5a4b55e804ffe90f8c66ae64853f1dae37b847213 gcr.io/kubernetes-e2e-test-images/nonroot:1.0],SizeBytes:42321438,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[localhost:30500/tas-controller@sha256:7f3d9945acdf5d86edd89b2b16fe1f6d63ba8bdb4cab50e66f9bce162df9e388 localhost:30500/tas-controller:0.1],SizeBytes:22922439,},ContainerImage{Names:[localhost:30500/tas-extender@sha256:9af6075c93013910787a4e97973da6e0739a86dee1186d7965a5d00b1ac35636 localhost:30500/tas-extender:0.1],SizeBytes:21320903,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/regression-issue-74839-amd64@sha256:3b36bd80b97c532a774e7f6246797b8575d97037982f353476c703ba6686c75c gcr.io/kubernetes-e2e-test-images/regression-issue-74839-amd64:1.0],SizeBytes:19227369,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:0b4273abac4c241fa3d70aaf52b0d79a133d2737081f4a5c5dea4949f6c45dc3 k8s.gcr.io/sig-storage/mock-driver:v3.1.0],SizeBytes:18687618,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:9622c6a6dac7499a055a382930f4de82905a3c5735c0753f7094115c9c871309 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v1.3.0],SizeBytes:16322467,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:6757579,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb 
appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:4753501,},ContainerImage{Names:[gcr.io/authenticated-image-pulling/alpine@sha256:7ff177862cb50c602bfe81f805969412e619c054a2bbead977d0c276988aa4a0 gcr.io/authenticated-image-pulling/alpine:3.7],SizeBytes:4206620,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 29 01:40:31.791: INFO: Logging kubelet events for node node2 May 29 01:40:31.798: INFO: Logging pods the kubelet thinks is on node node2 May 29 01:40:31.813: INFO: cmk-lbg6n started at 2021-05-29 00:29:50 +0000 UTC (0+2 container statuses recorded) May 29 01:40:31.813: INFO: Container nodereport ready: true, restart count 0 May 29 01:40:31.813: INFO: Container reconcile ready: true, restart count 0 May 29 01:40:31.813: INFO: hostexec-node2-424x9 started at 2021-05-29 01:35:20 +0000 UTC (0+1 container statuses recorded) May 29 01:40:31.813: INFO: Container agnhost-container ready: true, restart count 0 May 29 01:40:31.813: INFO: node-exporter-nsrbd started at 2021-05-29 00:29:50 +0000 UTC (0+2 container statuses recorded) May 29 01:40:31.813: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 29 01:40:31.813: INFO: Container node-exporter ready: true, restart count 0 May 29 01:40:31.813: INFO: collectd-k6rzg started at 2021-05-29 00:30:20 +0000 UTC (0+3 container statuses recorded) May 29 01:40:31.813: INFO: Container collectd ready: true, restart count 0 May 29 01:40:31.813: INFO: Container collectd-exporter ready: true, restart count 0 May 29 01:40:31.813: INFO: Container rbac-proxy ready: true, restart count 0 May 29 01:40:31.813: INFO: nginx-proxy-node2 started at 2021-05-28 20:05:21 +0000 UTC (0+1 container statuses recorded) May 29 01:40:31.813: INFO: Container nginx-proxy ready: true, restart count 2 May 29 01:40:31.813: INFO: kube-proxy-z5czn started at 2021-05-28 19:58:24 +0000 UTC (0+1 container statuses recorded) May 29 01:40:31.813: INFO: Container kube-proxy ready: true, restart count 2 May 29 01:40:31.813: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-mkc6p started at 2021-05-29 00:29:50 +0000 UTC (0+1 container statuses recorded) May 29 01:40:31.813: INFO: Container kube-sriovdp ready: true, restart count 0 May 29 01:40:31.813: INFO: node-feature-discovery-worker-2qfpd started at 2021-05-29 00:29:50 +0000 UTC (0+1 container statuses recorded) May 29 01:40:31.813: INFO: Container nfd-worker ready: true, restart count 0 May 29 01:40:31.813: INFO: kube-flannel-d9wsg started at 2021-05-28 19:59:00 +0000 UTC (1+1 container statuses recorded) May 29 01:40:31.813: INFO: Init container install-cni ready: true, restart count 2 May 29 01:40:31.813: INFO: Container kube-flannel ready: true, restart count 2 May 29 01:40:31.813: INFO: kube-multus-ds-amd64-c9cj2 started at 2021-05-28 19:59:08 
+0000 UTC (0+1 container statuses recorded) May 29 01:40:31.813: INFO: Container kube-multus ready: true, restart count 1 W0529 01:40:31.826322 22 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. May 29 01:40:31.867: INFO: Latency metrics for node node2 May 29 01:40:31.867: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-6967" for this suite. • Failure [317.079 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Stress with local volumes [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:427 should be able to process many pods and reuse local volumes [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:517 May 29 01:40:25.538: some pods failed to complete within 5m0s Unexpected error: <*errors.errorString | 0xc0002c4200>: { s: "timed out waiting for the condition", } timed out waiting for the condition occurred /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:610 ------------------------------ {"msg":"FAILED [sig-storage] PersistentVolumes-local Stress with local volumes [Serial] should be able to process many pods and reuse local volumes","total":17,"completed":0,"skipped":776,"failed":1,"failures":["[sig-storage] PersistentVolumes-local Stress with local volumes [Serial] should be able to process many pods and reuse local volumes"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] PersistentVolumes-local [Volume type: gce-localssd-scsi-fs] [Serial] One pod requesting one prebound PVC should be able to mount volume and write from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:234 [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 29 01:40:31.876: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:155 [BeforeEach] [Volume type: gce-localssd-scsi-fs] [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:191 May 29 01:40:35.920: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l] Namespace:persistent-local-volumes-test-9808 PodName:hostexec-node1-22hpj ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true} May 29 01:40:35.920: INFO: >>> kubeConfig: /root/.kube/config May 29 01:40:36.042: INFO: exec node1: command: ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l May 29 01:40:36.042: INFO: exec node1: stdout: "0\n" May 29 01:40:36.042: INFO: exec node1: stderr: "ls: cannot access 
/mnt/disks/by-uuid/google-local-ssds-scsi-fs/: No such file or directory\n" May 29 01:40:36.042: INFO: exec node1: exit code: 0 May 29 01:40:36.042: INFO: Requires at least 1 scsi fs localSSD [AfterEach] [Volume type: gce-localssd-scsi-fs] [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:200 STEP: Cleaning up PVC and PV [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 29 01:40:36.044: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-9808" for this suite. S [SKIPPING] in Spec Setup (BeforeEach) [4.177 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: gce-localssd-scsi-fs] [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:188 One pod requesting one prebound PVC [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:205 should be able to mount volume and write from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:234 Requires at least 1 scsi fs localSSD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:1241 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] PersistentVolumes-local [Volume type: gce-localssd-scsi-fs] [Serial] Set fsGroup for local volume should set different fsGroup for second pod if first pod is deleted /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:282 [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 29 01:40:36.058: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:155 [BeforeEach] [Volume type: gce-localssd-scsi-fs] [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:191 May 29 01:40:40.105: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l] Namespace:persistent-local-volumes-test-1758 PodName:hostexec-node1-f44lp ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true} May 29 01:40:40.105: INFO: >>> kubeConfig: /root/.kube/config May 29 01:40:40.253: INFO: exec node1: command: ls -1 
/mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l May 29 01:40:40.253: INFO: exec node1: stdout: "0\n" May 29 01:40:40.253: INFO: exec node1: stderr: "ls: cannot access /mnt/disks/by-uuid/google-local-ssds-scsi-fs/: No such file or directory\n" May 29 01:40:40.253: INFO: exec node1: exit code: 0 May 29 01:40:40.253: INFO: Requires at least 1 scsi fs localSSD [AfterEach] [Volume type: gce-localssd-scsi-fs] [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:200 STEP: Cleaning up PVC and PV [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 29 01:40:40.256: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-1758" for this suite. S [SKIPPING] in Spec Setup (BeforeEach) [4.210 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: gce-localssd-scsi-fs] [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:188 Set fsGroup for local volume [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:256 should set different fsGroup for second pod if first pod is deleted /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:282 Requires at least 1 scsi fs localSSD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:1241 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] [Serial] Volume metrics should create metrics for total time taken in volume operations in P/V Controller /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:260 [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 29 01:40:40.277: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pv STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:56 May 29 01:40:40.296: INFO: Only supported for providers [gce gke aws] (not skeleton) [AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 29 01:40:40.297: INFO: 
Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pv-661" for this suite. [AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:81 S [SKIPPING] in Spec Setup (BeforeEach) [0.028 seconds] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should create metrics for total time taken in volume operations in P/V Controller [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:260 Only supported for providers [gce gke aws] (not skeleton) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:60 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] [Serial] Volume metrics should create volume metrics in Volume Manager /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:291 [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 29 01:40:40.305: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pv STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:56 May 29 01:40:40.323: INFO: Only supported for providers [gce gke aws] (not skeleton) [AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 29 01:40:40.325: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pv-4991" for this suite. 
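
Note on the skips recorded here: every "[Serial] Volume metrics" and "Pod Disks" spec in this run bails out in BeforeEach with "Only supported for providers [gce gke aws] (not skeleton)" — the suite is running against a bare "skeleton" provider, so specs that need cloud attach/detach or controller metrics never get past their provider gate. A minimal sketch of that gating pattern, assuming plain Ginkgo v1; the helper name is illustrative, the real check lives inside the e2e framework:

package e2esketch

import (
	"fmt"

	"github.com/onsi/ginkgo"
)

// skipUnlessProviderIs skips the current spec unless the suite's
// --provider value matches one of the supported cloud providers.
func skipUnlessProviderIs(current string, supported ...string) {
	for _, p := range supported {
		if p == current {
			return // supported provider; let the spec run
		}
	}
	ginkgo.Skip(fmt.Sprintf("Only supported for providers %v (not %s)", supported, current))
}

Called as skipUnlessProviderIs("skeleton", "gce", "gke", "aws"), this yields exactly the skip reason recorded in the spec summaries that follow.
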
[AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:81 S [SKIPPING] in Spec Setup (BeforeEach) [0.027 seconds] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should create volume metrics in Volume Manager [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:291 Only supported for providers [gce gke aws] (not skeleton) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:60 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] PersistentVolumes-local [Volume type: gce-localssd-scsi-fs] [Serial] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:245 [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 29 01:40:40.334: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:155 [BeforeEach] [Volume type: gce-localssd-scsi-fs] [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:191 May 29 01:40:42.376: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l] Namespace:persistent-local-volumes-test-9183 PodName:hostexec-node1-hsf2t ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true} May 29 01:40:42.376: INFO: >>> kubeConfig: /root/.kube/config May 29 01:40:42.498: INFO: exec node1: command: ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l May 29 01:40:42.498: INFO: exec node1: stdout: "0\n" May 29 01:40:42.498: INFO: exec node1: stderr: "ls: cannot access /mnt/disks/by-uuid/google-local-ssds-scsi-fs/: No such file or directory\n" May 29 01:40:42.498: INFO: exec node1: exit code: 0 May 29 01:40:42.498: INFO: Requires at least 1 scsi fs localSSD [AfterEach] [Volume type: gce-localssd-scsi-fs] [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:200 STEP: Cleaning up PVC and PV [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 29 01:40:42.500: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-9183" for this suite. 
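
Note on the "[Volume type: gce-localssd-scsi-fs]" skips: the BeforeEach shown above launches a privileged hostexec pod, enters the host mount namespace via nsenter, and counts entries under /mnt/disks/by-uuid/google-local-ssds-scsi-fs. One quirk worth noticing in the output: ls fails ("No such file or directory") yet the reported exit code is 0, because a pipe's status is that of its last command (wc -l); the test therefore keys off the stdout count "0", not the exit code. A standalone sketch of the same probe, run directly rather than through a hostexec pod:

package main

import (
	"fmt"
	"os/exec"
	"strconv"
	"strings"
)

// countScsiFsLocalSSDs counts local-SSD filesystem entries the way the e2e
// check does: `ls -1 <dir>/ | wc -l`. If dir is missing, ls fails but wc
// still exits 0 and prints "0".
func countScsiFsLocalSSDs(dir string) (int, error) {
	out, err := exec.Command("sh", "-c", fmt.Sprintf("ls -1 %s/ | wc -l", dir)).Output()
	if err != nil {
		return 0, err
	}
	return strconv.Atoi(strings.TrimSpace(string(out)))
}

func main() {
	n, _ := countScsiFsLocalSSDs("/mnt/disks/by-uuid/google-local-ssds-scsi-fs")
	if n < 1 {
		fmt.Println("Requires at least 1 scsi fs localSSD") // the skip reason seen here
	}
}
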
S [SKIPPING] in Spec Setup (BeforeEach) [2.176 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: gce-localssd-scsi-fs] [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:188 Two pods mounting a local volume at the same time [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:244 should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:245 Requires at least 1 scsi fs localSSD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:1241 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] [Serial] Volume metrics should create prometheus metrics for volume provisioning and attach/detach /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:100 [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 29 01:40:42.516: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pv STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:56 May 29 01:40:42.534: INFO: Only supported for providers [gce gke aws] (not skeleton) [AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 29 01:40:42.536: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pv-7503" for this suite. 
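
Note on the quantity notation in the node dumps earlier in this output: values print in their raw Go form, e.g. hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI} — an (unscaled value, decimal scale) pair, an empty big-decimal fallback, the cached canonical string, and the suffix format. Entries such as memory: {{201269633024 0} {} BinarySI} simply have no cached string yet. A short demonstration with the actual apimachinery type:

package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/api/resource"
)

func main() {
	// {{21474836480 0} {} 20Gi BinarySI}: 21474836480 at scale 0,
	// cached string "20Gi", binary (power-of-two) suffixes.
	q := resource.MustParse("20Gi")
	fmt.Println(q.Value(), q.Format) // 21474836480 BinarySI

	// {{1 3} {} 1k DecimalSI}: 1 scaled by 10^3 — the "1k" of example.com/fakecpu.
	k := resource.MustParse("1k")
	fmt.Println(k.Value(), k.Format) // 1000 DecimalSI
}
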
[AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:81 S [SKIPPING] in Spec Setup (BeforeEach) [0.028 seconds] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should create prometheus metrics for volume provisioning and attach/detach [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:100 Only supported for providers [gce gke aws] (not skeleton) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:60 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] [Serial] Volume metrics should create metrics for total number of volumes in A/D Controller /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:321 [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 29 01:40:42.548: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pv STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:56 May 29 01:40:42.566: INFO: Only supported for providers [gce gke aws] (not skeleton) [AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 29 01:40:42.568: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pv-9619" for this suite. 
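
Note on the [{kubeadm Update v1 …} {flanneld Update …} {nfd-master Update …} …] blocks inside the node dumps above: those are managedFields entries, one per writer, and each FieldsV1 map spells out which fields that manager (kubeadm, flanneld, nfd-master, Swagger-Codegen, e2e.test, kube-controller-manager, kubelet) last set. A sketch that lists the same ownership data via client-go; the kubeconfig path and node name are the ones appearing in this log:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	node, err := cs.CoreV1().Nodes().Get(context.TODO(), "node2", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// One line per field manager: who wrote, how, against which API version.
	for _, mf := range node.ManagedFields {
		fmt.Printf("%s\t%s\t%s\n", mf.Manager, mf.Operation, mf.APIVersion)
	}
}
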
[AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:81 S [SKIPPING] in Spec Setup (BeforeEach) [0.028 seconds] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should create metrics for total number of volumes in A/D Controller [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:321 Only supported for providers [gce gke aws] (not skeleton) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:60 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] PersistentVolumes-local [Volume type: gce-localssd-scsi-fs] [Serial] One pod requesting one prebound PVC should be able to mount volume and read from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:228 [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 29 01:40:42.578: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:155 [BeforeEach] [Volume type: gce-localssd-scsi-fs] [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:191 May 29 01:40:44.619: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l] Namespace:persistent-local-volumes-test-3412 PodName:hostexec-node1-vhh2w ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true} May 29 01:40:44.619: INFO: >>> kubeConfig: /root/.kube/config May 29 01:40:44.739: INFO: exec node1: command: ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l May 29 01:40:44.739: INFO: exec node1: stdout: "0\n" May 29 01:40:44.739: INFO: exec node1: stderr: "ls: cannot access /mnt/disks/by-uuid/google-local-ssds-scsi-fs/: No such file or directory\n" May 29 01:40:44.739: INFO: exec node1: exit code: 0 May 29 01:40:44.739: INFO: Requires at least 1 scsi fs localSSD [AfterEach] [Volume type: gce-localssd-scsi-fs] [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:200 STEP: Cleaning up PVC and PV [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 29 01:40:44.741: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-3412" for this suite. 
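
Note on "Waiting up to 3m0s for all (but 0) nodes to be ready", which recurs after every spec: per node this reduces to scanning the status conditions dumped above — MemoryPressure, DiskPressure and PIDPressure False, Ready True. A self-contained sketch of the per-node check, using only the k8s.io/api types:

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

// isNodeReady reports whether the node's Ready condition is True, mirroring
// the per-node test behind the framework's readiness wait.
func isNodeReady(node *v1.Node) bool {
	for _, c := range node.Status.Conditions {
		if c.Type == v1.NodeReady {
			return c.Status == v1.ConditionTrue
		}
	}
	return false
}

func main() {
	node := &v1.Node{Status: v1.NodeStatus{Conditions: []v1.NodeCondition{
		{Type: v1.NodeReady, Status: v1.ConditionTrue, Reason: "KubeletReady"},
	}}}
	fmt.Println(isNodeReady(node)) // true
}
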
S [SKIPPING] in Spec Setup (BeforeEach) [2.173 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: gce-localssd-scsi-fs] [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:188 One pod requesting one prebound PVC [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:205 should be able to mount volume and read from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:228 Requires at least 1 scsi fs localSSD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:1241 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Pod Disks [Serial] attach on previously attached volumes should work /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:457 [BeforeEach] [sig-storage] Pod Disks /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 29 01:40:44.757: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-disks STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Pod Disks /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:75 [It] [Serial] attach on previously attached volumes should work /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:457 May 29 01:40:44.785: INFO: Only supported for providers [gce gke aws] (not skeleton) [AfterEach] [sig-storage] Pod Disks /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 29 01:40:44.786: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-disks-2094" for this suite. 
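
Note on the {"msg":"FAILED …","total":17,"completed":0,"skipped":776,"failed":1,…} record emitted after the stress-test failure above: alongside the human-readable stream, the runner interleaves machine-readable JSON progress lines that are convenient to tally. A sketch of parsing them; the field names come from the lines in this log, and the msg value is shortened here:

package main

import (
	"encoding/json"
	"fmt"
)

// progressMsg models the JSON status records interleaved in the output.
type progressMsg struct {
	Msg       string   `json:"msg"`
	Total     int      `json:"total"`
	Completed int      `json:"completed"`
	Skipped   int      `json:"skipped"`
	Failed    int      `json:"failed"`
	Failures  []string `json:"failures"`
}

func main() {
	line := `{"msg":"FAILED ...","total":17,"completed":0,"skipped":776,"failed":1,"failures":["..."]}`
	var p progressMsg
	if err := json.Unmarshal([]byte(line), &p); err != nil {
		panic(err)
	}
	fmt.Printf("%d/%d completed, %d skipped, %d failed\n", p.Completed, p.Total, p.Skipped, p.Failed)
}
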
S [SKIPPING] [0.042 seconds] [sig-storage] Pod Disks /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Serial] attach on previously attached volumes should work [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:457 Only supported for providers [gce gke aws] (not skeleton) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:458 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] [Serial] Volume metrics PVController should create none metrics for pvc controller before creating any PV or PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:477 [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 29 01:40:44.802: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pv STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:56 May 29 01:40:44.826: INFO: Only supported for providers [gce gke aws] (not skeleton) [AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 29 01:40:44.827: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pv-9377" for this suite. 
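
Note on "timed out waiting for the condition", the error behind the stress-test failure above and the "all pods should be running" failure just below: that string is the fixed message of wait.ErrWaitTimeout from k8s.io/apimachinery. The failing specs poll pod phases until every pod reports Running or the 5m0s budget lapses. A minimal sketch of that poll loop, with the pod count stubbed and the timings shortened so it finishes quickly:

package main

import (
	"errors"
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

func main() {
	runningPods := 0 // stub: the real test lists pods and counts phase == Running
	err := wait.PollImmediate(1*time.Second, 3*time.Second, func() (bool, error) {
		return runningPods >= 50, nil // done once all 50 pods are Running
	})
	if errors.Is(err, wait.ErrWaitTimeout) {
		fmt.Println(err) // prints: timed out waiting for the condition
	}
}
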
[AfterEach] [sig-storage] [Serial] Volume metrics
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:81
S [SKIPPING] in Spec Setup (BeforeEach) [0.033 seconds]
[sig-storage] [Serial] Volume metrics
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  PVController [BeforeEach]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:382
    should create none metrics for pvc controller before creating any PV or PVC
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:477

    Only supported for providers [gce gke aws] (not skeleton)

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:60
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] PersistentVolumes-local Pods sharing a single local PV [Serial] all pods should be running
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:642
[BeforeEach] [sig-storage] PersistentVolumes-local
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 29 01:40:44.841: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename persistent-local-volumes-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] PersistentVolumes-local
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:155
[BeforeEach] Pods sharing a single local PV [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:619
[It] all pods should be running
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:642
STEP: Create a PVC
STEP: Create 50 pods to use this PVC
STEP: Wait for all pods are running
May 29 01:45:45.396: FAIL: Some pods are not running within 5m0s
Unexpected error:
    <*errors.errorString | 0xc0002c4200>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/storage.glob..func20.7.3()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:683 +0x748
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc002c43200)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x345
k8s.io/kubernetes/test/e2e.TestE2E(0xc002c43200)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:145 +0x2b
testing.tRunner(0xc002c43200, 0x4de5140)
	/usr/local/go/src/testing/testing.go:1123 +0xef
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1168 +0x2b3
[AfterEach] Pods sharing a single local PV [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:633
STEP: Clean PV local-pvr4mwr
[AfterEach] [sig-storage] PersistentVolumes-local
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
STEP: Collecting events from namespace "persistent-local-volumes-test-955".
STEP: Found 375 events.
May 29 01:45:45.422: INFO: At 2021-05-29 01:40:44 +0000 UTC - event for pod-002e9a2b-7575-4a65-b81c-12d287982b53: {default-scheduler } Scheduled: Successfully assigned persistent-local-volumes-test-955/pod-002e9a2b-7575-4a65-b81c-12d287982b53 to node1 May 29 01:45:45.422: INFO: At 2021-05-29 01:40:44 +0000 UTC - event for pod-1a64eff4-9c80-4e84-9739-89448e57613f: {default-scheduler } Scheduled: Successfully assigned persistent-local-volumes-test-955/pod-1a64eff4-9c80-4e84-9739-89448e57613f to node1 May 29 01:45:45.422: INFO: At 2021-05-29 01:40:44 +0000 UTC - event for pod-31b85f20-03f4-4f2d-848d-c83aaed0548d: {default-scheduler } Scheduled: Successfully assigned persistent-local-volumes-test-955/pod-31b85f20-03f4-4f2d-848d-c83aaed0548d to node1 May 29 01:45:45.422: INFO: At 2021-05-29 01:40:44 +0000 UTC - event for pod-3aa63228-9a6c-41b0-bdc2-90cb33daa37a: {default-scheduler } Scheduled: Successfully assigned persistent-local-volumes-test-955/pod-3aa63228-9a6c-41b0-bdc2-90cb33daa37a to node1 May 29 01:45:45.422: INFO: At 2021-05-29 01:40:44 +0000 UTC - event for pod-455cc666-8b4b-47ca-92bb-f5a8ae64c66a: {default-scheduler } Scheduled: Successfully assigned persistent-local-volumes-test-955/pod-455cc666-8b4b-47ca-92bb-f5a8ae64c66a to node1 May 29 01:45:45.422: INFO: At 2021-05-29 01:40:44 +0000 UTC - event for pod-4889ccc1-4d5f-4407-ac68-c20658f1e0c2: {default-scheduler } Scheduled: Successfully assigned persistent-local-volumes-test-955/pod-4889ccc1-4d5f-4407-ac68-c20658f1e0c2 to node1 May 29 01:45:45.422: INFO: At 2021-05-29 01:40:44 +0000 UTC - event for pod-48998c73-fb1b-4f97-923f-26a8c7fa70a5: {default-scheduler } Scheduled: Successfully assigned persistent-local-volumes-test-955/pod-48998c73-fb1b-4f97-923f-26a8c7fa70a5 to node1 May 29 01:45:45.422: INFO: At 2021-05-29 01:40:44 +0000 UTC - event for pod-7e57ebb4-8477-43ac-bb12-c8b440fa9899: {default-scheduler } Scheduled: Successfully assigned persistent-local-volumes-test-955/pod-7e57ebb4-8477-43ac-bb12-c8b440fa9899 to node1 May 29 01:45:45.422: INFO: At 2021-05-29 01:40:44 +0000 UTC - event for pod-9498afb7-3237-482d-afb1-2fd37f376683: {default-scheduler } Scheduled: Successfully assigned persistent-local-volumes-test-955/pod-9498afb7-3237-482d-afb1-2fd37f376683 to node1 May 29 01:45:45.422: INFO: At 2021-05-29 01:40:44 +0000 UTC - event for pod-b8c61797-5a53-4b9b-aa4f-294add40424c: {default-scheduler } Scheduled: Successfully assigned persistent-local-volumes-test-955/pod-b8c61797-5a53-4b9b-aa4f-294add40424c to node1 May 29 01:45:45.422: INFO: At 2021-05-29 01:40:44 +0000 UTC - event for pod-d15cecf5-b3e2-44a4-9588-5ef4190fd325: {default-scheduler } Scheduled: Successfully assigned persistent-local-volumes-test-955/pod-d15cecf5-b3e2-44a4-9588-5ef4190fd325 to node1 May 29 01:45:45.422: INFO: At 2021-05-29 01:40:44 +0000 UTC - event for pod-d68e7bee-ca0e-47f1-bb3e-cbf400d5eb3e: {default-scheduler } Scheduled: Successfully assigned persistent-local-volumes-test-955/pod-d68e7bee-ca0e-47f1-bb3e-cbf400d5eb3e to node1 May 29 01:45:45.422: INFO: At 2021-05-29 01:40:44 +0000 UTC - event for
pod-dd8cdea3-5f8a-4b9c-a27b-4dad4ab66176: {default-scheduler } Scheduled: Successfully assigned persistent-local-volumes-test-955/pod-dd8cdea3-5f8a-4b9c-a27b-4dad4ab66176 to node1 May 29 01:45:45.422: INFO: At 2021-05-29 01:40:45 +0000 UTC - event for pod-04545737-23c7-4d47-8731-7521b79b43e6: {default-scheduler } Scheduled: Successfully assigned persistent-local-volumes-test-955/pod-04545737-23c7-4d47-8731-7521b79b43e6 to node1 May 29 01:45:45.422: INFO: At 2021-05-29 01:40:45 +0000 UTC - event for pod-063ca335-874c-4121-b5bb-83d31bf0cce6: {default-scheduler } Scheduled: Successfully assigned persistent-local-volumes-test-955/pod-063ca335-874c-4121-b5bb-83d31bf0cce6 to node1 May 29 01:45:45.422: INFO: At 2021-05-29 01:40:45 +0000 UTC - event for pod-0ae07703-a1fb-450e-a77b-ce32a067220d: {default-scheduler } Scheduled: Successfully assigned persistent-local-volumes-test-955/pod-0ae07703-a1fb-450e-a77b-ce32a067220d to node1 May 29 01:45:45.422: INFO: At 2021-05-29 01:40:45 +0000 UTC - event for pod-0f70c96a-f3a2-416f-afbe-b30581146951: {default-scheduler } Scheduled: Successfully assigned persistent-local-volumes-test-955/pod-0f70c96a-f3a2-416f-afbe-b30581146951 to node1 May 29 01:45:45.422: INFO: At 2021-05-29 01:40:45 +0000 UTC - event for pod-10ad4892-5aa8-483f-8ecc-75620e02ae7f: {default-scheduler } Scheduled: Successfully assigned persistent-local-volumes-test-955/pod-10ad4892-5aa8-483f-8ecc-75620e02ae7f to node1 May 29 01:45:45.422: INFO: At 2021-05-29 01:40:45 +0000 UTC - event for pod-2dd379ba-8ee3-4e7c-9515-d735fb305186: {default-scheduler } Scheduled: Successfully assigned persistent-local-volumes-test-955/pod-2dd379ba-8ee3-4e7c-9515-d735fb305186 to node1 May 29 01:45:45.422: INFO: At 2021-05-29 01:40:45 +0000 UTC - event for pod-35450dd4-0071-41e1-926f-42d29dca5e57: {default-scheduler } Scheduled: Successfully assigned persistent-local-volumes-test-955/pod-35450dd4-0071-41e1-926f-42d29dca5e57 to node1 May 29 01:45:45.422: INFO: At 2021-05-29 01:40:45 +0000 UTC - event for pod-3f4b7448-2483-47c1-94f2-20062f32ccd9: {default-scheduler } Scheduled: Successfully assigned persistent-local-volumes-test-955/pod-3f4b7448-2483-47c1-94f2-20062f32ccd9 to node1 May 29 01:45:45.422: INFO: At 2021-05-29 01:40:45 +0000 UTC - event for pod-41de111d-d2ee-4842-8464-f3f4a42e52a5: {default-scheduler } Scheduled: Successfully assigned persistent-local-volumes-test-955/pod-41de111d-d2ee-4842-8464-f3f4a42e52a5 to node1 May 29 01:45:45.422: INFO: At 2021-05-29 01:40:45 +0000 UTC - event for pod-489d9f3d-4ee3-4c8a-8382-f61ffe1bc37d: {default-scheduler } Scheduled: Successfully assigned persistent-local-volumes-test-955/pod-489d9f3d-4ee3-4c8a-8382-f61ffe1bc37d to node1 May 29 01:45:45.422: INFO: At 2021-05-29 01:40:45 +0000 UTC - event for pod-4e56412f-6aef-4cdc-a1bc-866e307ae390: {default-scheduler } Scheduled: Successfully assigned persistent-local-volumes-test-955/pod-4e56412f-6aef-4cdc-a1bc-866e307ae390 to node1 May 29 01:45:45.422: INFO: At 2021-05-29 01:40:45 +0000 UTC - event for pod-4ff53363-8a68-4d49-ba11-e57c78df1c24: {default-scheduler } Scheduled: Successfully assigned persistent-local-volumes-test-955/pod-4ff53363-8a68-4d49-ba11-e57c78df1c24 to node1 May 29 01:45:45.422: INFO: At 2021-05-29 01:40:45 +0000 UTC - event for pod-5dbe3fd0-6433-490b-a9d8-da6b16bf9034: {default-scheduler } Scheduled: Successfully assigned persistent-local-volumes-test-955/pod-5dbe3fd0-6433-490b-a9d8-da6b16bf9034 to node1 May 29 01:45:45.422: INFO: At 2021-05-29 01:40:45 +0000 UTC - event for 
pod-608407df-5264-4ddf-9188-05548319685f: {default-scheduler } Scheduled: Successfully assigned persistent-local-volumes-test-955/pod-608407df-5264-4ddf-9188-05548319685f to node1 May 29 01:45:45.422: INFO: At 2021-05-29 01:40:45 +0000 UTC - event for pod-61b7bb98-c574-4f2b-99d0-1b168cd07041: {default-scheduler } Scheduled: Successfully assigned persistent-local-volumes-test-955/pod-61b7bb98-c574-4f2b-99d0-1b168cd07041 to node1 May 29 01:45:45.422: INFO: At 2021-05-29 01:40:45 +0000 UTC - event for pod-6cca9485-08b4-4d74-a20b-fdf5eede41e4: {default-scheduler } Scheduled: Successfully assigned persistent-local-volumes-test-955/pod-6cca9485-08b4-4d74-a20b-fdf5eede41e4 to node1 May 29 01:45:45.423: INFO: At 2021-05-29 01:40:45 +0000 UTC - event for pod-6e46e05e-b12e-415e-8822-97e76626df4a: {default-scheduler } Scheduled: Successfully assigned persistent-local-volumes-test-955/pod-6e46e05e-b12e-415e-8822-97e76626df4a to node1 May 29 01:45:45.423: INFO: At 2021-05-29 01:40:45 +0000 UTC - event for pod-73669c37-9e1f-4320-9e6d-e98c8b4c4ae3: {default-scheduler } Scheduled: Successfully assigned persistent-local-volumes-test-955/pod-73669c37-9e1f-4320-9e6d-e98c8b4c4ae3 to node1 May 29 01:45:45.423: INFO: At 2021-05-29 01:40:45 +0000 UTC - event for pod-8c1c447a-abd5-43ce-85ce-fbe03772e8f0: {default-scheduler } Scheduled: Successfully assigned persistent-local-volumes-test-955/pod-8c1c447a-abd5-43ce-85ce-fbe03772e8f0 to node1 May 29 01:45:45.423: INFO: At 2021-05-29 01:40:45 +0000 UTC - event for pod-8f2feb74-5a37-4fc8-a088-c16299c1806b: {default-scheduler } Scheduled: Successfully assigned persistent-local-volumes-test-955/pod-8f2feb74-5a37-4fc8-a088-c16299c1806b to node1 May 29 01:45:45.423: INFO: At 2021-05-29 01:40:45 +0000 UTC - event for pod-93735dfd-b03b-44e3-a7b0-eb404d8473b2: {default-scheduler } Scheduled: Successfully assigned persistent-local-volumes-test-955/pod-93735dfd-b03b-44e3-a7b0-eb404d8473b2 to node1 May 29 01:45:45.423: INFO: At 2021-05-29 01:40:45 +0000 UTC - event for pod-937996c9-5164-4d97-9c6f-3b261853cd14: {default-scheduler } Scheduled: Successfully assigned persistent-local-volumes-test-955/pod-937996c9-5164-4d97-9c6f-3b261853cd14 to node1 May 29 01:45:45.423: INFO: At 2021-05-29 01:40:45 +0000 UTC - event for pod-a6d81ce7-a0ab-4dc1-9bd5-0e98520394f5: {default-scheduler } Scheduled: Successfully assigned persistent-local-volumes-test-955/pod-a6d81ce7-a0ab-4dc1-9bd5-0e98520394f5 to node1 May 29 01:45:45.423: INFO: At 2021-05-29 01:40:45 +0000 UTC - event for pod-a93b9850-884f-48af-b0e9-2eabe63905cb: {default-scheduler } Scheduled: Successfully assigned persistent-local-volumes-test-955/pod-a93b9850-884f-48af-b0e9-2eabe63905cb to node1 May 29 01:45:45.423: INFO: At 2021-05-29 01:40:45 +0000 UTC - event for pod-ac542bd2-e7ee-4acf-a2ee-184ddc1baabd: {default-scheduler } Scheduled: Successfully assigned persistent-local-volumes-test-955/pod-ac542bd2-e7ee-4acf-a2ee-184ddc1baabd to node1 May 29 01:45:45.423: INFO: At 2021-05-29 01:40:45 +0000 UTC - event for pod-b0a85e72-2a06-4d8b-8874-7a8e19984aed: {default-scheduler } Scheduled: Successfully assigned persistent-local-volumes-test-955/pod-b0a85e72-2a06-4d8b-8874-7a8e19984aed to node1 May 29 01:45:45.423: INFO: At 2021-05-29 01:40:45 +0000 UTC - event for pod-b0e49dae-5123-4198-94e3-23e35033b867: {default-scheduler } Scheduled: Successfully assigned persistent-local-volumes-test-955/pod-b0e49dae-5123-4198-94e3-23e35033b867 to node1 May 29 01:45:45.423: INFO: At 2021-05-29 01:40:45 +0000 UTC - event for 
pod-cb2b1302-0bab-490a-9009-0efcffde1865: {default-scheduler } Scheduled: Successfully assigned persistent-local-volumes-test-955/pod-cb2b1302-0bab-490a-9009-0efcffde1865 to node1 May 29 01:45:45.423: INFO: At 2021-05-29 01:40:45 +0000 UTC - event for pod-d4d20ffd-a827-4734-8245-ea9708e8d922: {default-scheduler } Scheduled: Successfully assigned persistent-local-volumes-test-955/pod-d4d20ffd-a827-4734-8245-ea9708e8d922 to node1 May 29 01:45:45.423: INFO: At 2021-05-29 01:40:45 +0000 UTC - event for pod-d4efbbdd-3fa5-49e9-9ea6-c690132e490f: {default-scheduler } Scheduled: Successfully assigned persistent-local-volumes-test-955/pod-d4efbbdd-3fa5-49e9-9ea6-c690132e490f to node1 May 29 01:45:45.423: INFO: At 2021-05-29 01:40:45 +0000 UTC - event for pod-d6e76180-49a8-4493-baa6-b111f4de8073: {default-scheduler } Scheduled: Successfully assigned persistent-local-volumes-test-955/pod-d6e76180-49a8-4493-baa6-b111f4de8073 to node1 May 29 01:45:45.423: INFO: At 2021-05-29 01:40:45 +0000 UTC - event for pod-d7a46ae6-e723-438e-a241-edc720176911: {default-scheduler } Scheduled: Successfully assigned persistent-local-volumes-test-955/pod-d7a46ae6-e723-438e-a241-edc720176911 to node1 May 29 01:45:45.423: INFO: At 2021-05-29 01:40:45 +0000 UTC - event for pod-e2c62d91-0475-4ba9-ad19-813aed1bee83: {default-scheduler } Scheduled: Successfully assigned persistent-local-volumes-test-955/pod-e2c62d91-0475-4ba9-ad19-813aed1bee83 to node1 May 29 01:45:45.423: INFO: At 2021-05-29 01:40:45 +0000 UTC - event for pod-e858b45c-a964-4881-b0bf-ebb0388bb11c: {default-scheduler } Scheduled: Successfully assigned persistent-local-volumes-test-955/pod-e858b45c-a964-4881-b0bf-ebb0388bb11c to node1 May 29 01:45:45.423: INFO: At 2021-05-29 01:40:45 +0000 UTC - event for pod-eadaa880-5480-4ab1-94b1-dc548d038e74: {default-scheduler } Scheduled: Successfully assigned persistent-local-volumes-test-955/pod-eadaa880-5480-4ab1-94b1-dc548d038e74 to node1 May 29 01:45:45.423: INFO: At 2021-05-29 01:40:45 +0000 UTC - event for pod-efdaa6c2-d2a7-44f3-b092-2f363a4b14be: {default-scheduler } Scheduled: Successfully assigned persistent-local-volumes-test-955/pod-efdaa6c2-d2a7-44f3-b092-2f363a4b14be to node1 May 29 01:45:45.423: INFO: At 2021-05-29 01:40:45 +0000 UTC - event for pod-fb330863-8580-435b-a895-d93cb6dbd798: {default-scheduler } Scheduled: Successfully assigned persistent-local-volumes-test-955/pod-fb330863-8580-435b-a895-d93cb6dbd798 to node1 May 29 01:45:45.423: INFO: At 2021-05-29 01:40:47 +0000 UTC - event for pod-d68e7bee-ca0e-47f1-bb3e-cbf400d5eb3e: {multus } AddedInterface: Add eth0 [10.244.4.197/24] May 29 01:45:45.423: INFO: At 2021-05-29 01:40:47 +0000 UTC - event for pod-d68e7bee-ca0e-47f1-bb3e-cbf400d5eb3e: {kubelet node1} Pulling: Pulling image "docker.io/library/busybox:1.29" May 29 01:45:45.423: INFO: At 2021-05-29 01:40:48 +0000 UTC - event for pod-d68e7bee-ca0e-47f1-bb3e-cbf400d5eb3e: {kubelet node1} Failed: Error: ErrImagePull May 29 01:45:45.423: INFO: At 2021-05-29 01:40:48 +0000 UTC - event for pod-d68e7bee-ca0e-47f1-bb3e-cbf400d5eb3e: {kubelet node1} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = toomanyrequests: You have reached your pull rate limit. 
You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit May 29 01:45:45.423: INFO: At 2021-05-29 01:40:49 +0000 UTC - event for pod-48998c73-fb1b-4f97-923f-26a8c7fa70a5: {multus } AddedInterface: Add eth0 [10.244.4.199/24] May 29 01:45:45.423: INFO: At 2021-05-29 01:40:49 +0000 UTC - event for pod-48998c73-fb1b-4f97-923f-26a8c7fa70a5: {kubelet node1} Pulling: Pulling image "docker.io/library/busybox:1.29" May 29 01:45:45.423: INFO: At 2021-05-29 01:40:49 +0000 UTC - event for pod-d15cecf5-b3e2-44a4-9588-5ef4190fd325: {multus } AddedInterface: Add eth0 [10.244.4.198/24] May 29 01:45:45.423: INFO: At 2021-05-29 01:40:49 +0000 UTC - event for pod-d15cecf5-b3e2-44a4-9588-5ef4190fd325: {kubelet node1} Pulling: Pulling image "docker.io/library/busybox:1.29" May 29 01:45:45.423: INFO: At 2021-05-29 01:40:50 +0000 UTC - event for pod-1a64eff4-9c80-4e84-9739-89448e57613f: {kubelet node1} Pulling: Pulling image "docker.io/library/busybox:1.29" May 29 01:45:45.423: INFO: At 2021-05-29 01:40:50 +0000 UTC - event for pod-1a64eff4-9c80-4e84-9739-89448e57613f: {multus } AddedInterface: Add eth0 [10.244.4.200/24] May 29 01:45:45.423: INFO: At 2021-05-29 01:40:50 +0000 UTC - event for pod-48998c73-fb1b-4f97-923f-26a8c7fa70a5: {kubelet node1} Failed: Error: ErrImagePull May 29 01:45:45.423: INFO: At 2021-05-29 01:40:50 +0000 UTC - event for pod-48998c73-fb1b-4f97-923f-26a8c7fa70a5: {kubelet node1} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit May 29 01:45:45.423: INFO: At 2021-05-29 01:40:50 +0000 UTC - event for pod-d15cecf5-b3e2-44a4-9588-5ef4190fd325: {kubelet node1} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit May 29 01:45:45.423: INFO: At 2021-05-29 01:40:50 +0000 UTC - event for pod-d15cecf5-b3e2-44a4-9588-5ef4190fd325: {kubelet node1} Failed: Error: ErrImagePull May 29 01:45:45.423: INFO: At 2021-05-29 01:40:50 +0000 UTC - event for pod-d68e7bee-ca0e-47f1-bb3e-cbf400d5eb3e: {kubelet node1} Failed: Error: ImagePullBackOff May 29 01:45:45.423: INFO: At 2021-05-29 01:40:50 +0000 UTC - event for pod-d68e7bee-ca0e-47f1-bb3e-cbf400d5eb3e: {kubelet node1} BackOff: Back-off pulling image "docker.io/library/busybox:1.29" May 29 01:45:45.423: INFO: At 2021-05-29 01:40:51 +0000 UTC - event for pod-1a64eff4-9c80-4e84-9739-89448e57613f: {kubelet node1} Failed: Error: ErrImagePull May 29 01:45:45.423: INFO: At 2021-05-29 01:40:51 +0000 UTC - event for pod-1a64eff4-9c80-4e84-9739-89448e57613f: {kubelet node1} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. 
You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit May 29 01:45:45.423: INFO: At 2021-05-29 01:40:51 +0000 UTC - event for pod-48998c73-fb1b-4f97-923f-26a8c7fa70a5: {kubelet node1} Failed: Error: ImagePullBackOff May 29 01:45:45.423: INFO: At 2021-05-29 01:40:51 +0000 UTC - event for pod-48998c73-fb1b-4f97-923f-26a8c7fa70a5: {kubelet node1} BackOff: Back-off pulling image "docker.io/library/busybox:1.29" May 29 01:45:45.423: INFO: At 2021-05-29 01:40:51 +0000 UTC - event for pod-d15cecf5-b3e2-44a4-9588-5ef4190fd325: {kubelet node1} BackOff: Back-off pulling image "docker.io/library/busybox:1.29" May 29 01:45:45.423: INFO: At 2021-05-29 01:40:51 +0000 UTC - event for pod-d15cecf5-b3e2-44a4-9588-5ef4190fd325: {kubelet node1} Failed: Error: ImagePullBackOff May 29 01:45:45.423: INFO: At 2021-05-29 01:40:52 +0000 UTC - event for pod-3aa63228-9a6c-41b0-bdc2-90cb33daa37a: {kubelet node1} Pulling: Pulling image "docker.io/library/busybox:1.29" May 29 01:45:45.423: INFO: At 2021-05-29 01:40:52 +0000 UTC - event for pod-3aa63228-9a6c-41b0-bdc2-90cb33daa37a: {multus } AddedInterface: Add eth0 [10.244.4.201/24] May 29 01:45:45.423: INFO: At 2021-05-29 01:40:52 +0000 UTC - event for pod-4889ccc1-4d5f-4407-ac68-c20658f1e0c2: {multus } AddedInterface: Add eth0 [10.244.4.202/24] May 29 01:45:45.423: INFO: At 2021-05-29 01:40:52 +0000 UTC - event for pod-4889ccc1-4d5f-4407-ac68-c20658f1e0c2: {kubelet node1} Pulling: Pulling image "docker.io/library/busybox:1.29" May 29 01:45:45.423: INFO: At 2021-05-29 01:40:53 +0000 UTC - event for pod-1a64eff4-9c80-4e84-9739-89448e57613f: {kubelet node1} BackOff: Back-off pulling image "docker.io/library/busybox:1.29" May 29 01:45:45.423: INFO: At 2021-05-29 01:40:53 +0000 UTC - event for pod-1a64eff4-9c80-4e84-9739-89448e57613f: {kubelet node1} Failed: Error: ImagePullBackOff May 29 01:45:45.423: INFO: At 2021-05-29 01:40:53 +0000 UTC - event for pod-3aa63228-9a6c-41b0-bdc2-90cb33daa37a: {kubelet node1} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. 
You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit May 29 01:45:45.423: INFO: At 2021-05-29 01:40:53 +0000 UTC - event for pod-3aa63228-9a6c-41b0-bdc2-90cb33daa37a: {kubelet node1} Failed: Error: ErrImagePull May 29 01:45:45.423: INFO: At 2021-05-29 01:40:54 +0000 UTC - event for pod-455cc666-8b4b-47ca-92bb-f5a8ae64c66a: {kubelet node1} Pulling: Pulling image "docker.io/library/busybox:1.29" May 29 01:45:45.423: INFO: At 2021-05-29 01:40:54 +0000 UTC - event for pod-455cc666-8b4b-47ca-92bb-f5a8ae64c66a: {multus } AddedInterface: Add eth0 [10.244.4.204/24] May 29 01:45:45.423: INFO: At 2021-05-29 01:40:54 +0000 UTC - event for pod-7e57ebb4-8477-43ac-bb12-c8b440fa9899: {kubelet node1} Pulling: Pulling image "docker.io/library/busybox:1.29" May 29 01:45:45.423: INFO: At 2021-05-29 01:40:54 +0000 UTC - event for pod-7e57ebb4-8477-43ac-bb12-c8b440fa9899: {multus } AddedInterface: Add eth0 [10.244.4.203/24] May 29 01:45:45.423: INFO: At 2021-05-29 01:40:55 +0000 UTC - event for pod-4889ccc1-4d5f-4407-ac68-c20658f1e0c2: {kubelet node1} Failed: Error: ErrImagePull May 29 01:45:45.423: INFO: At 2021-05-29 01:40:55 +0000 UTC - event for pod-4889ccc1-4d5f-4407-ac68-c20658f1e0c2: {kubelet node1} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit May 29 01:45:45.423: INFO: At 2021-05-29 01:40:55 +0000 UTC - event for pod-fb330863-8580-435b-a895-d93cb6dbd798: {multus } AddedInterface: Add eth0 [10.244.4.205/24] May 29 01:45:45.423: INFO: At 2021-05-29 01:40:55 +0000 UTC - event for pod-fb330863-8580-435b-a895-d93cb6dbd798: {kubelet node1} Pulling: Pulling image "docker.io/library/busybox:1.29" May 29 01:45:45.423: INFO: At 2021-05-29 01:40:56 +0000 UTC - event for pod-3aa63228-9a6c-41b0-bdc2-90cb33daa37a: {kubelet node1} BackOff: Back-off pulling image "docker.io/library/busybox:1.29" May 29 01:45:45.423: INFO: At 2021-05-29 01:40:56 +0000 UTC - event for pod-3aa63228-9a6c-41b0-bdc2-90cb33daa37a: {kubelet node1} Failed: Error: ImagePullBackOff May 29 01:45:45.423: INFO: At 2021-05-29 01:40:56 +0000 UTC - event for pod-7e57ebb4-8477-43ac-bb12-c8b440fa9899: {kubelet node1} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit May 29 01:45:45.423: INFO: At 2021-05-29 01:40:56 +0000 UTC - event for pod-7e57ebb4-8477-43ac-bb12-c8b440fa9899: {kubelet node1} Failed: Error: ErrImagePull May 29 01:45:45.423: INFO: At 2021-05-29 01:40:57 +0000 UTC - event for pod-002e9a2b-7575-4a65-b81c-12d287982b53: {multus } AddedInterface: Add eth0 [10.244.4.206/24] May 29 01:45:45.423: INFO: At 2021-05-29 01:40:57 +0000 UTC - event for pod-002e9a2b-7575-4a65-b81c-12d287982b53: {kubelet node1} Pulling: Pulling image "docker.io/library/busybox:1.29" May 29 01:45:45.423: INFO: At 2021-05-29 01:40:57 +0000 UTC - event for pod-455cc666-8b4b-47ca-92bb-f5a8ae64c66a: {kubelet node1} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. 
You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit May 29 01:45:45.423: INFO: At 2021-05-29 01:40:57 +0000 UTC - event for pod-455cc666-8b4b-47ca-92bb-f5a8ae64c66a: {kubelet node1} Failed: Error: ErrImagePull May 29 01:45:45.423: INFO: At 2021-05-29 01:40:58 +0000 UTC - event for pod-6e46e05e-b12e-415e-8822-97e76626df4a: {multus } AddedInterface: Add eth0 [10.244.4.207/24] May 29 01:45:45.423: INFO: At 2021-05-29 01:40:58 +0000 UTC - event for pod-6e46e05e-b12e-415e-8822-97e76626df4a: {kubelet node1} Pulling: Pulling image "docker.io/library/busybox:1.29" May 29 01:45:45.423: INFO: At 2021-05-29 01:40:59 +0000 UTC - event for pod-002e9a2b-7575-4a65-b81c-12d287982b53: {kubelet node1} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit May 29 01:45:45.423: INFO: At 2021-05-29 01:40:59 +0000 UTC - event for pod-002e9a2b-7575-4a65-b81c-12d287982b53: {kubelet node1} Failed: Error: ErrImagePull May 29 01:45:45.423: INFO: At 2021-05-29 01:40:59 +0000 UTC - event for pod-455cc666-8b4b-47ca-92bb-f5a8ae64c66a: {kubelet node1} BackOff: Back-off pulling image "docker.io/library/busybox:1.29" May 29 01:45:45.423: INFO: At 2021-05-29 01:40:59 +0000 UTC - event for pod-455cc666-8b4b-47ca-92bb-f5a8ae64c66a: {kubelet node1} Failed: Error: ImagePullBackOff May 29 01:45:45.423: INFO: At 2021-05-29 01:40:59 +0000 UTC - event for pod-4889ccc1-4d5f-4407-ac68-c20658f1e0c2: {kubelet node1} Failed: Error: ImagePullBackOff May 29 01:45:45.423: INFO: At 2021-05-29 01:40:59 +0000 UTC - event for pod-4889ccc1-4d5f-4407-ac68-c20658f1e0c2: {kubelet node1} BackOff: Back-off pulling image "docker.io/library/busybox:1.29" May 29 01:45:45.423: INFO: At 2021-05-29 01:40:59 +0000 UTC - event for pod-73669c37-9e1f-4320-9e6d-e98c8b4c4ae3: {multus } AddedInterface: Add eth0 [10.244.4.209/24] May 29 01:45:45.423: INFO: At 2021-05-29 01:40:59 +0000 UTC - event for pod-7e57ebb4-8477-43ac-bb12-c8b440fa9899: {kubelet node1} SandboxChanged: Pod sandbox changed, it will be killed and re-created. May 29 01:45:45.423: INFO: At 2021-05-29 01:40:59 +0000 UTC - event for pod-d6e76180-49a8-4493-baa6-b111f4de8073: {kubelet node1} Pulling: Pulling image "docker.io/library/busybox:1.29" May 29 01:45:45.423: INFO: At 2021-05-29 01:40:59 +0000 UTC - event for pod-d6e76180-49a8-4493-baa6-b111f4de8073: {multus } AddedInterface: Add eth0 [10.244.4.208/24] May 29 01:45:45.423: INFO: At 2021-05-29 01:40:59 +0000 UTC - event for pod-fb330863-8580-435b-a895-d93cb6dbd798: {kubelet node1} Failed: Error: ImagePullBackOff May 29 01:45:45.423: INFO: At 2021-05-29 01:40:59 +0000 UTC - event for pod-fb330863-8580-435b-a895-d93cb6dbd798: {kubelet node1} BackOff: Back-off pulling image "docker.io/library/busybox:1.29" May 29 01:45:45.423: INFO: At 2021-05-29 01:40:59 +0000 UTC - event for pod-fb330863-8580-435b-a895-d93cb6dbd798: {kubelet node1} Failed: Error: ErrImagePull May 29 01:45:45.423: INFO: At 2021-05-29 01:40:59 +0000 UTC - event for pod-fb330863-8580-435b-a895-d93cb6dbd798: {kubelet node1} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. 
You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit May 29 01:45:45.424: INFO: At 2021-05-29 01:41:00 +0000 UTC - event for pod-73669c37-9e1f-4320-9e6d-e98c8b4c4ae3: {kubelet node1} Pulling: Pulling image "docker.io/library/busybox:1.29" May 29 01:45:45.424: INFO: At 2021-05-29 01:41:01 +0000 UTC - event for pod-4ff53363-8a68-4d49-ba11-e57c78df1c24: {multus } AddedInterface: Add eth0 [10.244.4.210/24] May 29 01:45:45.424: INFO: At 2021-05-29 01:41:01 +0000 UTC - event for pod-4ff53363-8a68-4d49-ba11-e57c78df1c24: {kubelet node1} Pulling: Pulling image "docker.io/library/busybox:1.29" May 29 01:45:45.424: INFO: At 2021-05-29 01:41:01 +0000 UTC - event for pod-6e46e05e-b12e-415e-8822-97e76626df4a: {kubelet node1} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit May 29 01:45:45.424: INFO: At 2021-05-29 01:41:01 +0000 UTC - event for pod-6e46e05e-b12e-415e-8822-97e76626df4a: {kubelet node1} Failed: Error: ErrImagePull May 29 01:45:45.424: INFO: At 2021-05-29 01:41:01 +0000 UTC - event for pod-d4efbbdd-3fa5-49e9-9ea6-c690132e490f: {kubelet node1} Pulling: Pulling image "docker.io/library/busybox:1.29" May 29 01:45:45.424: INFO: At 2021-05-29 01:41:01 +0000 UTC - event for pod-d4efbbdd-3fa5-49e9-9ea6-c690132e490f: {multus } AddedInterface: Add eth0 [10.244.4.211/24] May 29 01:45:45.424: INFO: At 2021-05-29 01:41:03 +0000 UTC - event for pod-93735dfd-b03b-44e3-a7b0-eb404d8473b2: {multus } AddedInterface: Add eth0 [10.244.4.212/24] May 29 01:45:45.424: INFO: At 2021-05-29 01:41:03 +0000 UTC - event for pod-93735dfd-b03b-44e3-a7b0-eb404d8473b2: {kubelet node1} Pulling: Pulling image "docker.io/library/busybox:1.29" May 29 01:45:45.424: INFO: At 2021-05-29 01:41:03 +0000 UTC - event for pod-d6e76180-49a8-4493-baa6-b111f4de8073: {kubelet node1} Failed: Error: ErrImagePull May 29 01:45:45.424: INFO: At 2021-05-29 01:41:03 +0000 UTC - event for pod-d6e76180-49a8-4493-baa6-b111f4de8073: {kubelet node1} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit May 29 01:45:45.424: INFO: At 2021-05-29 01:41:04 +0000 UTC - event for pod-002e9a2b-7575-4a65-b81c-12d287982b53: {kubelet node1} Failed: Error: ImagePullBackOff May 29 01:45:45.424: INFO: At 2021-05-29 01:41:04 +0000 UTC - event for pod-002e9a2b-7575-4a65-b81c-12d287982b53: {kubelet node1} BackOff: Back-off pulling image "docker.io/library/busybox:1.29" May 29 01:45:45.424: INFO: At 2021-05-29 01:41:04 +0000 UTC - event for pod-73669c37-9e1f-4320-9e6d-e98c8b4c4ae3: {kubelet node1} Failed: Error: ErrImagePull May 29 01:45:45.424: INFO: At 2021-05-29 01:41:04 +0000 UTC - event for pod-73669c37-9e1f-4320-9e6d-e98c8b4c4ae3: {kubelet node1} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. 
You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit May 29 01:45:45.424: INFO: At 2021-05-29 01:41:05 +0000 UTC - event for pod-31b85f20-03f4-4f2d-848d-c83aaed0548d: {kubelet node1} Pulling: Pulling image "docker.io/library/busybox:1.29" May 29 01:45:45.424: INFO: At 2021-05-29 01:41:05 +0000 UTC - event for pod-31b85f20-03f4-4f2d-848d-c83aaed0548d: {multus } AddedInterface: Add eth0 [10.244.4.213/24] May 29 01:45:45.424: INFO: At 2021-05-29 01:41:05 +0000 UTC - event for pod-d68e7bee-ca0e-47f1-bb3e-cbf400d5eb3e: {kubelet node1} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit May 29 01:45:45.424: INFO: At 2021-05-29 01:41:06 +0000 UTC - event for pod-4ff53363-8a68-4d49-ba11-e57c78df1c24: {kubelet node1} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit May 29 01:45:45.424: INFO: At 2021-05-29 01:41:06 +0000 UTC - event for pod-4ff53363-8a68-4d49-ba11-e57c78df1c24: {kubelet node1} Failed: Error: ErrImagePull May 29 01:45:45.424: INFO: At 2021-05-29 01:41:07 +0000 UTC - event for pod-6e46e05e-b12e-415e-8822-97e76626df4a: {kubelet node1} SandboxChanged: Pod sandbox changed, it will be killed and re-created. May 29 01:45:45.424: INFO: At 2021-05-29 01:41:07 +0000 UTC - event for pod-73669c37-9e1f-4320-9e6d-e98c8b4c4ae3: {kubelet node1} BackOff: Back-off pulling image "docker.io/library/busybox:1.29" May 29 01:45:45.424: INFO: At 2021-05-29 01:41:07 +0000 UTC - event for pod-73669c37-9e1f-4320-9e6d-e98c8b4c4ae3: {kubelet node1} Failed: Error: ImagePullBackOff May 29 01:45:45.424: INFO: At 2021-05-29 01:41:07 +0000 UTC - event for pod-7e57ebb4-8477-43ac-bb12-c8b440fa9899: {multus } AddedInterface: Add eth0 [10.244.4.215/24] May 29 01:45:45.424: INFO: At 2021-05-29 01:41:07 +0000 UTC - event for pod-8c1c447a-abd5-43ce-85ce-fbe03772e8f0: {kubelet node1} Pulling: Pulling image "docker.io/library/busybox:1.29" May 29 01:45:45.424: INFO: At 2021-05-29 01:41:07 +0000 UTC - event for pod-8c1c447a-abd5-43ce-85ce-fbe03772e8f0: {multus } AddedInterface: Add eth0 [10.244.4.214/24] May 29 01:45:45.424: INFO: At 2021-05-29 01:41:07 +0000 UTC - event for pod-d4efbbdd-3fa5-49e9-9ea6-c690132e490f: {kubelet node1} Failed: Error: ErrImagePull May 29 01:45:45.424: INFO: At 2021-05-29 01:41:07 +0000 UTC - event for pod-d4efbbdd-3fa5-49e9-9ea6-c690132e490f: {kubelet node1} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. 
You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit May 29 01:45:45.424: INFO: At 2021-05-29 01:41:07 +0000 UTC - event for pod-d6e76180-49a8-4493-baa6-b111f4de8073: {kubelet node1} Failed: Error: ImagePullBackOff May 29 01:45:45.424: INFO: At 2021-05-29 01:41:07 +0000 UTC - event for pod-d6e76180-49a8-4493-baa6-b111f4de8073: {kubelet node1} BackOff: Back-off pulling image "docker.io/library/busybox:1.29" May 29 01:45:45.424: INFO: At 2021-05-29 01:41:08 +0000 UTC - event for pod-35450dd4-0071-41e1-926f-42d29dca5e57: {multus } AddedInterface: Add eth0 [10.244.4.216/24] May 29 01:45:45.424: INFO: At 2021-05-29 01:41:09 +0000 UTC - event for pod-10ad4892-5aa8-483f-8ecc-75620e02ae7f: {multus } AddedInterface: Add eth0 [10.244.4.217/24] May 29 01:45:45.424: INFO: At 2021-05-29 01:41:09 +0000 UTC - event for pod-10ad4892-5aa8-483f-8ecc-75620e02ae7f: {kubelet node1} Pulling: Pulling image "docker.io/library/busybox:1.29" May 29 01:45:45.424: INFO: At 2021-05-29 01:41:09 +0000 UTC - event for pod-35450dd4-0071-41e1-926f-42d29dca5e57: {kubelet node1} Pulling: Pulling image "docker.io/library/busybox:1.29" May 29 01:45:45.424: INFO: At 2021-05-29 01:41:11 +0000 UTC - event for pod-063ca335-874c-4121-b5bb-83d31bf0cce6: {kubelet node1} Pulling: Pulling image "docker.io/library/busybox:1.29" May 29 01:45:45.424: INFO: At 2021-05-29 01:41:11 +0000 UTC - event for pod-4ff53363-8a68-4d49-ba11-e57c78df1c24: {kubelet node1} BackOff: Back-off pulling image "docker.io/library/busybox:1.29" May 29 01:45:45.424: INFO: At 2021-05-29 01:41:11 +0000 UTC - event for pod-4ff53363-8a68-4d49-ba11-e57c78df1c24: {kubelet node1} Failed: Error: ImagePullBackOff May 29 01:45:45.424: INFO: At 2021-05-29 01:41:11 +0000 UTC - event for pod-61b7bb98-c574-4f2b-99d0-1b168cd07041: {multus } AddedInterface: Add eth0 [10.244.4.218/24] May 29 01:45:45.424: INFO: At 2021-05-29 01:41:11 +0000 UTC - event for pod-61b7bb98-c574-4f2b-99d0-1b168cd07041: {kubelet node1} Pulling: Pulling image "docker.io/library/busybox:1.29" May 29 01:45:45.424: INFO: At 2021-05-29 01:41:11 +0000 UTC - event for pod-93735dfd-b03b-44e3-a7b0-eb404d8473b2: {kubelet node1} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. 
You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit May 29 01:45:45.424: INFO: At 2021-05-29 01:41:11 +0000 UTC - event for pod-93735dfd-b03b-44e3-a7b0-eb404d8473b2: {kubelet node1} Failed: Error: ErrImagePull May 29 01:45:45.424: INFO: At 2021-05-29 01:41:11 +0000 UTC - event for pod-d4efbbdd-3fa5-49e9-9ea6-c690132e490f: {kubelet node1} Failed: Error: ImagePullBackOff May 29 01:45:45.424: INFO: At 2021-05-29 01:41:11 +0000 UTC - event for pod-d4efbbdd-3fa5-49e9-9ea6-c690132e490f: {kubelet node1} BackOff: Back-off pulling image "docker.io/library/busybox:1.29" May 29 01:45:45.424: INFO: At 2021-05-29 01:41:12 +0000 UTC - event for pod-4e56412f-6aef-4cdc-a1bc-866e307ae390: {multus } AddedInterface: Add eth0 [10.244.4.220/24] May 29 01:45:45.424: INFO: At 2021-05-29 01:41:12 +0000 UTC - event for pod-4e56412f-6aef-4cdc-a1bc-866e307ae390: {kubelet node1} Pulling: Pulling image "docker.io/library/busybox:1.29" May 29 01:45:45.424: INFO: At 2021-05-29 01:41:13 +0000 UTC - event for pod-d4d20ffd-a827-4734-8245-ea9708e8d922: {multus } AddedInterface: Add eth0 [10.244.4.221/24] May 29 01:45:45.424: INFO: At 2021-05-29 01:41:13 +0000 UTC - event for pod-d4d20ffd-a827-4734-8245-ea9708e8d922: {kubelet node1} Pulling: Pulling image "docker.io/library/busybox:1.29" May 29 01:45:45.424: INFO: At 2021-05-29 01:41:14 +0000 UTC - event for pod-31b85f20-03f4-4f2d-848d-c83aaed0548d: {kubelet node1} Failed: Error: ErrImagePull May 29 01:45:45.424: INFO: At 2021-05-29 01:41:14 +0000 UTC - event for pod-31b85f20-03f4-4f2d-848d-c83aaed0548d: {kubelet node1} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit May 29 01:45:45.424: INFO: At 2021-05-29 01:41:14 +0000 UTC - event for pod-6e46e05e-b12e-415e-8822-97e76626df4a: {multus } AddedInterface: Add eth0 [10.244.4.222/24] May 29 01:45:45.424: INFO: At 2021-05-29 01:41:14 +0000 UTC - event for pod-8c1c447a-abd5-43ce-85ce-fbe03772e8f0: {kubelet node1} Failed: Error: ErrImagePull May 29 01:45:45.424: INFO: At 2021-05-29 01:41:14 +0000 UTC - event for pod-8c1c447a-abd5-43ce-85ce-fbe03772e8f0: {kubelet node1} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. 
You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit May 29 01:45:45.424: INFO: At 2021-05-29 01:41:14 +0000 UTC - event for pod-93735dfd-b03b-44e3-a7b0-eb404d8473b2: {kubelet node1} Failed: Error: ImagePullBackOff May 29 01:45:45.424: INFO: At 2021-05-29 01:41:14 +0000 UTC - event for pod-93735dfd-b03b-44e3-a7b0-eb404d8473b2: {kubelet node1} BackOff: Back-off pulling image "docker.io/library/busybox:1.29" May 29 01:45:45.424: INFO: At 2021-05-29 01:41:15 +0000 UTC - event for pod-ac542bd2-e7ee-4acf-a2ee-184ddc1baabd: {multus } AddedInterface: Add eth0 [10.244.4.223/24] May 29 01:45:45.424: INFO: At 2021-05-29 01:41:15 +0000 UTC - event for pod-ac542bd2-e7ee-4acf-a2ee-184ddc1baabd: {kubelet node1} Pulling: Pulling image "docker.io/library/busybox:1.29" May 29 01:45:45.424: INFO: At 2021-05-29 01:41:15 +0000 UTC - event for pod-e2c62d91-0475-4ba9-ad19-813aed1bee83: {kubelet node1} Pulling: Pulling image "docker.io/library/busybox:1.29" May 29 01:45:45.424: INFO: At 2021-05-29 01:41:15 +0000 UTC - event for pod-e2c62d91-0475-4ba9-ad19-813aed1bee83: {multus } AddedInterface: Add eth0 [10.244.4.224/24] May 29 01:45:45.424: INFO: At 2021-05-29 01:41:16 +0000 UTC - event for pod-31b85f20-03f4-4f2d-848d-c83aaed0548d: {kubelet node1} Failed: Error: ImagePullBackOff May 29 01:45:45.424: INFO: At 2021-05-29 01:41:16 +0000 UTC - event for pod-31b85f20-03f4-4f2d-848d-c83aaed0548d: {kubelet node1} BackOff: Back-off pulling image "docker.io/library/busybox:1.29" May 29 01:45:45.424: INFO: At 2021-05-29 01:41:16 +0000 UTC - event for pod-8c1c447a-abd5-43ce-85ce-fbe03772e8f0: {kubelet node1} Failed: Error: ImagePullBackOff May 29 01:45:45.424: INFO: At 2021-05-29 01:41:16 +0000 UTC - event for pod-8c1c447a-abd5-43ce-85ce-fbe03772e8f0: {kubelet node1} BackOff: Back-off pulling image "docker.io/library/busybox:1.29" May 29 01:45:45.424: INFO: At 2021-05-29 01:41:16 +0000 UTC - event for pod-b8c61797-5a53-4b9b-aa4f-294add40424c: {kubelet node1} Pulling: Pulling image "docker.io/library/busybox:1.29" May 29 01:45:45.424: INFO: At 2021-05-29 01:41:16 +0000 UTC - event for pod-b8c61797-5a53-4b9b-aa4f-294add40424c: {multus } AddedInterface: Add eth0 [10.244.4.225/24] May 29 01:45:45.424: INFO: At 2021-05-29 01:41:17 +0000 UTC - event for pod-35450dd4-0071-41e1-926f-42d29dca5e57: {kubelet node1} Failed: Error: ErrImagePull May 29 01:45:45.424: INFO: At 2021-05-29 01:41:17 +0000 UTC - event for pod-35450dd4-0071-41e1-926f-42d29dca5e57: {kubelet node1} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. 
You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit May 29 01:45:45.424: INFO: At 2021-05-29 01:41:17 +0000 UTC - event for pod-a6d81ce7-a0ab-4dc1-9bd5-0e98520394f5: {multus } AddedInterface: Add eth0 [10.244.4.226/24] May 29 01:45:45.424: INFO: At 2021-05-29 01:41:17 +0000 UTC - event for pod-a6d81ce7-a0ab-4dc1-9bd5-0e98520394f5: {kubelet node1} Pulling: Pulling image "docker.io/library/busybox:1.29" May 29 01:45:45.424: INFO: At 2021-05-29 01:41:18 +0000 UTC - event for pod-10ad4892-5aa8-483f-8ecc-75620e02ae7f: {kubelet node1} Failed: Error: ErrImagePull May 29 01:45:45.424: INFO: At 2021-05-29 01:41:18 +0000 UTC - event for pod-10ad4892-5aa8-483f-8ecc-75620e02ae7f: {kubelet node1} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit May 29 01:45:45.424: INFO: At 2021-05-29 01:41:18 +0000 UTC - event for pod-6cca9485-08b4-4d74-a20b-fdf5eede41e4: {multus } AddedInterface: Add eth0 [10.244.4.227/24] May 29 01:45:45.424: INFO: At 2021-05-29 01:41:19 +0000 UTC - event for pod-04545737-23c7-4d47-8731-7521b79b43e6: {multus } AddedInterface: Add eth0 [10.244.4.229/24] May 29 01:45:45.424: INFO: At 2021-05-29 01:41:19 +0000 UTC - event for pod-35450dd4-0071-41e1-926f-42d29dca5e57: {kubelet node1} SandboxChanged: Pod sandbox changed, it will be killed and re-created. May 29 01:45:45.424: INFO: At 2021-05-29 01:41:19 +0000 UTC - event for pod-6cca9485-08b4-4d74-a20b-fdf5eede41e4: {kubelet node1} Pulling: Pulling image "docker.io/library/busybox:1.29" May 29 01:45:45.424: INFO: At 2021-05-29 01:41:19 +0000 UTC - event for pod-7e57ebb4-8477-43ac-bb12-c8b440fa9899: {kubelet node1} Failed: Error: ImagePullBackOff May 29 01:45:45.424: INFO: At 2021-05-29 01:41:19 +0000 UTC - event for pod-7e57ebb4-8477-43ac-bb12-c8b440fa9899: {kubelet node1} BackOff: Back-off pulling image "docker.io/library/busybox:1.29" May 29 01:45:45.424: INFO: At 2021-05-29 01:41:19 +0000 UTC - event for pod-e858b45c-a964-4881-b0bf-ebb0388bb11c: {multus } AddedInterface: Add eth0 [10.244.4.228/24] May 29 01:45:45.424: INFO: At 2021-05-29 01:41:19 +0000 UTC - event for pod-e858b45c-a964-4881-b0bf-ebb0388bb11c: {kubelet node1} Pulling: Pulling image "docker.io/library/busybox:1.29" May 29 01:45:45.424: INFO: At 2021-05-29 01:41:20 +0000 UTC - event for pod-04545737-23c7-4d47-8731-7521b79b43e6: {kubelet node1} Pulling: Pulling image "docker.io/library/busybox:1.29" May 29 01:45:45.424: INFO: At 2021-05-29 01:41:20 +0000 UTC - event for pod-61b7bb98-c574-4f2b-99d0-1b168cd07041: {kubelet node1} Failed: Error: ErrImagePull May 29 01:45:45.424: INFO: At 2021-05-29 01:41:20 +0000 UTC - event for pod-61b7bb98-c574-4f2b-99d0-1b168cd07041: {kubelet node1} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. 
You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit May 29 01:45:45.424: INFO: At 2021-05-29 01:41:20 +0000 UTC - event for pod-efdaa6c2-d2a7-44f3-b092-2f363a4b14be: {kubelet node1} Pulling: Pulling image "docker.io/library/busybox:1.29" May 29 01:45:45.424: INFO: At 2021-05-29 01:41:20 +0000 UTC - event for pod-efdaa6c2-d2a7-44f3-b092-2f363a4b14be: {multus } AddedInterface: Add eth0 [10.244.4.230/24] May 29 01:45:45.424: INFO: At 2021-05-29 01:41:21 +0000 UTC - event for pod-41de111d-d2ee-4842-8464-f3f4a42e52a5: {multus } AddedInterface: Add eth0 [10.244.4.231/24] May 29 01:45:45.424: INFO: At 2021-05-29 01:41:21 +0000 UTC - event for pod-41de111d-d2ee-4842-8464-f3f4a42e52a5: {kubelet node1} Pulling: Pulling image "docker.io/library/busybox:1.29" May 29 01:45:45.424: INFO: At 2021-05-29 01:41:22 +0000 UTC - event for pod-063ca335-874c-4121-b5bb-83d31bf0cce6: {kubelet node1} Failed: Error: ErrImagePull May 29 01:45:45.425: INFO: At 2021-05-29 01:41:22 +0000 UTC - event for pod-063ca335-874c-4121-b5bb-83d31bf0cce6: {kubelet node1} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit May 29 01:45:45.425: INFO: At 2021-05-29 01:41:22 +0000 UTC - event for pod-10ad4892-5aa8-483f-8ecc-75620e02ae7f: {kubelet node1} SandboxChanged: Pod sandbox changed, it will be killed and re-created. May 29 01:45:45.425: INFO: At 2021-05-29 01:41:22 +0000 UTC - event for pod-61b7bb98-c574-4f2b-99d0-1b168cd07041: {kubelet node1} Failed: Error: ImagePullBackOff May 29 01:45:45.425: INFO: At 2021-05-29 01:41:22 +0000 UTC - event for pod-61b7bb98-c574-4f2b-99d0-1b168cd07041: {kubelet node1} BackOff: Back-off pulling image "docker.io/library/busybox:1.29" May 29 01:45:45.425: INFO: At 2021-05-29 01:41:23 +0000 UTC - event for pod-0f70c96a-f3a2-416f-afbe-b30581146951: {kubelet node1} Pulling: Pulling image "docker.io/library/busybox:1.29" May 29 01:45:45.425: INFO: At 2021-05-29 01:41:23 +0000 UTC - event for pod-0f70c96a-f3a2-416f-afbe-b30581146951: {multus } AddedInterface: Add eth0 [10.244.4.232/24] May 29 01:45:45.425: INFO: At 2021-05-29 01:41:24 +0000 UTC - event for pod-4e56412f-6aef-4cdc-a1bc-866e307ae390: {kubelet node1} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. 
You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit May 29 01:45:45.425: INFO: At 2021-05-29 01:41:24 +0000 UTC - event for pod-4e56412f-6aef-4cdc-a1bc-866e307ae390: {kubelet node1} Failed: Error: ErrImagePull May 29 01:45:45.425: INFO: At 2021-05-29 01:41:25 +0000 UTC - event for pod-35450dd4-0071-41e1-926f-42d29dca5e57: {kubelet node1} BackOff: Back-off pulling image "docker.io/library/busybox:1.29" May 29 01:45:45.425: INFO: At 2021-05-29 01:41:25 +0000 UTC - event for pod-35450dd4-0071-41e1-926f-42d29dca5e57: {kubelet node1} Failed: Error: ImagePullBackOff May 29 01:45:45.425: INFO: At 2021-05-29 01:41:25 +0000 UTC - event for pod-35450dd4-0071-41e1-926f-42d29dca5e57: {multus } AddedInterface: Add eth0 [10.244.4.233/24] May 29 01:45:45.425: INFO: At 2021-05-29 01:41:25 +0000 UTC - event for pod-eadaa880-5480-4ab1-94b1-dc548d038e74: {kubelet node1} Pulling: Pulling image "docker.io/library/busybox:1.29" May 29 01:45:45.425: INFO: At 2021-05-29 01:41:25 +0000 UTC - event for pod-eadaa880-5480-4ab1-94b1-dc548d038e74: {multus } AddedInterface: Add eth0 [10.244.4.234/24] May 29 01:45:45.425: INFO: At 2021-05-29 01:41:27 +0000 UTC - event for pod-063ca335-874c-4121-b5bb-83d31bf0cce6: {kubelet node1} BackOff: Back-off pulling image "docker.io/library/busybox:1.29" May 29 01:45:45.425: INFO: At 2021-05-29 01:41:27 +0000 UTC - event for pod-063ca335-874c-4121-b5bb-83d31bf0cce6: {kubelet node1} Failed: Error: ImagePullBackOff May 29 01:45:45.425: INFO: At 2021-05-29 01:41:27 +0000 UTC - event for pod-d4d20ffd-a827-4734-8245-ea9708e8d922: {kubelet node1} Failed: Error: ErrImagePull May 29 01:45:45.425: INFO: At 2021-05-29 01:41:27 +0000 UTC - event for pod-d4d20ffd-a827-4734-8245-ea9708e8d922: {kubelet node1} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit May 29 01:45:45.425: INFO: At 2021-05-29 01:41:31 +0000 UTC - event for pod-8f2feb74-5a37-4fc8-a088-c16299c1806b: {multus } AddedInterface: Add eth0 [10.244.4.236/24] May 29 01:45:45.425: INFO: At 2021-05-29 01:41:31 +0000 UTC - event for pod-b0e49dae-5123-4198-94e3-23e35033b867: {multus } AddedInterface: Add eth0 [10.244.4.235/24] May 29 01:45:45.425: INFO: At 2021-05-29 01:41:31 +0000 UTC - event for pod-e2c62d91-0475-4ba9-ad19-813aed1bee83: {kubelet node1} Failed: Error: ErrImagePull May 29 01:45:45.425: INFO: At 2021-05-29 01:41:31 +0000 UTC - event for pod-e2c62d91-0475-4ba9-ad19-813aed1bee83: {kubelet node1} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit May 29 01:45:45.425: INFO: At 2021-05-29 01:41:32 +0000 UTC - event for pod-ac542bd2-e7ee-4acf-a2ee-184ddc1baabd: {kubelet node1} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. 
You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit May 29 01:45:45.425: INFO: At 2021-05-29 01:41:32 +0000 UTC - event for pod-ac542bd2-e7ee-4acf-a2ee-184ddc1baabd: {kubelet node1} Failed: Error: ErrImagePull May 29 01:45:45.425: INFO: At 2021-05-29 01:41:32 +0000 UTC - event for pod-b0e49dae-5123-4198-94e3-23e35033b867: {kubelet node1} Pulling: Pulling image "docker.io/library/busybox:1.29" May 29 01:45:45.425: INFO: At 2021-05-29 01:41:33 +0000 UTC - event for pod-10ad4892-5aa8-483f-8ecc-75620e02ae7f: {multus } AddedInterface: Add eth0 [10.244.4.237/24] May 29 01:45:45.425: INFO: At 2021-05-29 01:41:33 +0000 UTC - event for pod-4e56412f-6aef-4cdc-a1bc-866e307ae390: {kubelet node1} Failed: Error: ImagePullBackOff May 29 01:45:45.425: INFO: At 2021-05-29 01:41:33 +0000 UTC - event for pod-4e56412f-6aef-4cdc-a1bc-866e307ae390: {kubelet node1} BackOff: Back-off pulling image "docker.io/library/busybox:1.29" May 29 01:45:45.425: INFO: At 2021-05-29 01:41:33 +0000 UTC - event for pod-6e46e05e-b12e-415e-8822-97e76626df4a: {kubelet node1} Failed: Error: ImagePullBackOff May 29 01:45:45.425: INFO: At 2021-05-29 01:41:33 +0000 UTC - event for pod-6e46e05e-b12e-415e-8822-97e76626df4a: {kubelet node1} BackOff: Back-off pulling image "docker.io/library/busybox:1.29" May 29 01:45:45.425: INFO: At 2021-05-29 01:41:33 +0000 UTC - event for pod-8f2feb74-5a37-4fc8-a088-c16299c1806b: {kubelet node1} Pulling: Pulling image "docker.io/library/busybox:1.29" May 29 01:45:45.425: INFO: At 2021-05-29 01:41:33 +0000 UTC - event for pod-b8c61797-5a53-4b9b-aa4f-294add40424c: {kubelet node1} Failed: Error: ErrImagePull May 29 01:45:45.425: INFO: At 2021-05-29 01:41:33 +0000 UTC - event for pod-b8c61797-5a53-4b9b-aa4f-294add40424c: {kubelet node1} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit May 29 01:45:45.425: INFO: At 2021-05-29 01:41:33 +0000 UTC - event for pod-d4d20ffd-a827-4734-8245-ea9708e8d922: {kubelet node1} SandboxChanged: Pod sandbox changed, it will be killed and re-created. May 29 01:45:45.425: INFO: At 2021-05-29 01:41:34 +0000 UTC - event for pod-dd8cdea3-5f8a-4b9c-a27b-4dad4ab66176: {multus } AddedInterface: Add eth0 [10.244.4.238/24] May 29 01:45:45.425: INFO: At 2021-05-29 01:41:35 +0000 UTC - event for pod-a6d81ce7-a0ab-4dc1-9bd5-0e98520394f5: {kubelet node1} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit May 29 01:45:45.425: INFO: At 2021-05-29 01:41:35 +0000 UTC - event for pod-a6d81ce7-a0ab-4dc1-9bd5-0e98520394f5: {kubelet node1} Failed: Error: ErrImagePull May 29 01:45:45.425: INFO: At 2021-05-29 01:41:35 +0000 UTC - event for pod-ac542bd2-e7ee-4acf-a2ee-184ddc1baabd: {kubelet node1} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
May 29 01:45:45.425: INFO: At 2021-05-29 01:41:35 +0000 UTC - event for pod-b0a85e72-2a06-4d8b-8874-7a8e19984aed: {multus } AddedInterface: Add eth0 [10.244.4.239/24] May 29 01:45:45.425: INFO: At 2021-05-29 01:41:35 +0000 UTC - event for pod-b0a85e72-2a06-4d8b-8874-7a8e19984aed: {kubelet node1} Pulling: Pulling image "docker.io/library/busybox:1.29" May 29 01:45:45.425: INFO: At 2021-05-29 01:41:35 +0000 UTC - event for pod-b8c61797-5a53-4b9b-aa4f-294add40424c: {kubelet node1} BackOff: Back-off pulling image "docker.io/library/busybox:1.29" May 29 01:45:45.425: INFO: At 2021-05-29 01:41:35 +0000 UTC - event for pod-b8c61797-5a53-4b9b-aa4f-294add40424c: {kubelet node1} Failed: Error: ImagePullBackOff May 29 01:45:45.425: INFO: At 2021-05-29 01:41:35 +0000 UTC - event for pod-d7a46ae6-e723-438e-a241-edc720176911: {kubelet node1} Pulling: Pulling image "docker.io/library/busybox:1.29" May 29 01:45:45.425: INFO: At 2021-05-29 01:41:35 +0000 UTC - event for pod-d7a46ae6-e723-438e-a241-edc720176911: {multus } AddedInterface: Add eth0 [10.244.4.240/24] May 29 01:45:45.425: INFO: At 2021-05-29 01:41:35 +0000 UTC - event for pod-dd8cdea3-5f8a-4b9c-a27b-4dad4ab66176: {kubelet node1} Pulling: Pulling image "docker.io/library/busybox:1.29" May 29 01:45:45.425: INFO: At 2021-05-29 01:41:35 +0000 UTC - event for pod-e2c62d91-0475-4ba9-ad19-813aed1bee83: {kubelet node1} BackOff: Back-off pulling image "docker.io/library/busybox:1.29" May 29 01:45:45.425: INFO: At 2021-05-29 01:41:35 +0000 UTC - event for pod-e2c62d91-0475-4ba9-ad19-813aed1bee83: {kubelet node1} Failed: Error: ImagePullBackOff May 29 01:45:45.425: INFO: At 2021-05-29 01:41:36 +0000 UTC - event for pod-5dbe3fd0-6433-490b-a9d8-da6b16bf9034: {kubelet node1} Pulling: Pulling image "docker.io/library/busybox:1.29" May 29 01:45:45.425: INFO: At 2021-05-29 01:41:36 +0000 UTC - event for pod-5dbe3fd0-6433-490b-a9d8-da6b16bf9034: {multus } AddedInterface: Add eth0 [10.244.4.241/24] May 29 01:45:45.425: INFO: At 2021-05-29 01:41:37 +0000 UTC - event for pod-6cca9485-08b4-4d74-a20b-fdf5eede41e4: {kubelet node1} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit May 29 01:45:45.425: INFO: At 2021-05-29 01:41:37 +0000 UTC - event for pod-6cca9485-08b4-4d74-a20b-fdf5eede41e4: {kubelet node1} Failed: Error: ErrImagePull May 29 01:45:45.425: INFO: At 2021-05-29 01:41:38 +0000 UTC - event for pod-a93b9850-884f-48af-b0e9-2eabe63905cb: {multus } AddedInterface: Add eth0 [10.244.4.242/24] May 29 01:45:45.425: INFO: At 2021-05-29 01:41:38 +0000 UTC - event for pod-a93b9850-884f-48af-b0e9-2eabe63905cb: {kubelet node1} Pulling: Pulling image "docker.io/library/busybox:1.29" May 29 01:45:45.425: INFO: At 2021-05-29 01:41:38 +0000 UTC - event for pod-e858b45c-a964-4881-b0bf-ebb0388bb11c: {kubelet node1} Failed: Error: ErrImagePull May 29 01:45:45.425: INFO: At 2021-05-29 01:41:38 +0000 UTC - event for pod-e858b45c-a964-4881-b0bf-ebb0388bb11c: {kubelet node1} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. 
You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit May 29 01:45:45.425: INFO: At 2021-05-29 01:41:39 +0000 UTC - event for pod-04545737-23c7-4d47-8731-7521b79b43e6: {kubelet node1} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit May 29 01:45:45.425: INFO: At 2021-05-29 01:41:39 +0000 UTC - event for pod-04545737-23c7-4d47-8731-7521b79b43e6: {kubelet node1} Failed: Error: ErrImagePull May 29 01:45:45.425: INFO: At 2021-05-29 01:41:39 +0000 UTC - event for pod-d4d20ffd-a827-4734-8245-ea9708e8d922: {multus } AddedInterface: Add eth0 [10.244.4.243/24] May 29 01:45:45.425: INFO: At 2021-05-29 01:41:40 +0000 UTC - event for pod-608407df-5264-4ddf-9188-05548319685f: {multus } AddedInterface: Add eth0 [10.244.4.244/24] May 29 01:45:45.425: INFO: At 2021-05-29 01:41:40 +0000 UTC - event for pod-608407df-5264-4ddf-9188-05548319685f: {kubelet node1} Pulling: Pulling image "docker.io/library/busybox:1.29" May 29 01:45:45.425: INFO: At 2021-05-29 01:41:40 +0000 UTC - event for pod-a6d81ce7-a0ab-4dc1-9bd5-0e98520394f5: {kubelet node1} Failed: Error: ImagePullBackOff May 29 01:45:45.425: INFO: At 2021-05-29 01:41:40 +0000 UTC - event for pod-a6d81ce7-a0ab-4dc1-9bd5-0e98520394f5: {kubelet node1} BackOff: Back-off pulling image "docker.io/library/busybox:1.29" May 29 01:45:45.425: INFO: At 2021-05-29 01:41:41 +0000 UTC - event for pod-0ae07703-a1fb-450e-a77b-ce32a067220d: {multus } AddedInterface: Add eth0 [10.244.4.246/24] May 29 01:45:45.425: INFO: At 2021-05-29 01:41:41 +0000 UTC - event for pod-0ae07703-a1fb-450e-a77b-ce32a067220d: {kubelet node1} Pulling: Pulling image "docker.io/library/busybox:1.29" May 29 01:45:45.425: INFO: At 2021-05-29 01:41:41 +0000 UTC - event for pod-937996c9-5164-4d97-9c6f-3b261853cd14: {multus } AddedInterface: Add eth0 [10.244.4.245/24] May 29 01:45:45.425: INFO: At 2021-05-29 01:41:41 +0000 UTC - event for pod-937996c9-5164-4d97-9c6f-3b261853cd14: {kubelet node1} Pulling: Pulling image "docker.io/library/busybox:1.29" May 29 01:45:45.425: INFO: At 2021-05-29 01:41:42 +0000 UTC - event for pod-ac542bd2-e7ee-4acf-a2ee-184ddc1baabd: {multus } AddedInterface: Add eth0 [10.244.4.247/24] May 29 01:45:45.425: INFO: At 2021-05-29 01:41:42 +0000 UTC - event for pod-ac542bd2-e7ee-4acf-a2ee-184ddc1baabd: {kubelet node1} Failed: Error: ImagePullBackOff May 29 01:45:45.425: INFO: At 2021-05-29 01:41:42 +0000 UTC - event for pod-ac542bd2-e7ee-4acf-a2ee-184ddc1baabd: {kubelet node1} BackOff: Back-off pulling image "docker.io/library/busybox:1.29" May 29 01:45:45.425: INFO: At 2021-05-29 01:41:42 +0000 UTC - event for pod-efdaa6c2-d2a7-44f3-b092-2f363a4b14be: {kubelet node1} Failed: Error: ErrImagePull May 29 01:45:45.425: INFO: At 2021-05-29 01:41:42 +0000 UTC - event for pod-efdaa6c2-d2a7-44f3-b092-2f363a4b14be: {kubelet node1} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. 
You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit May 29 01:45:45.425: INFO: At 2021-05-29 01:41:43 +0000 UTC - event for pod-04545737-23c7-4d47-8731-7521b79b43e6: {kubelet node1} Failed: Error: ImagePullBackOff May 29 01:45:45.425: INFO: At 2021-05-29 01:41:43 +0000 UTC - event for pod-04545737-23c7-4d47-8731-7521b79b43e6: {kubelet node1} BackOff: Back-off pulling image "docker.io/library/busybox:1.29" May 29 01:45:45.425: INFO: At 2021-05-29 01:41:43 +0000 UTC - event for pod-41de111d-d2ee-4842-8464-f3f4a42e52a5: {kubelet node1} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit May 29 01:45:45.425: INFO: At 2021-05-29 01:41:43 +0000 UTC - event for pod-41de111d-d2ee-4842-8464-f3f4a42e52a5: {kubelet node1} Failed: Error: ErrImagePull May 29 01:45:45.425: INFO: At 2021-05-29 01:41:43 +0000 UTC - event for pod-6cca9485-08b4-4d74-a20b-fdf5eede41e4: {kubelet node1} BackOff: Back-off pulling image "docker.io/library/busybox:1.29" May 29 01:45:45.425: INFO: At 2021-05-29 01:41:43 +0000 UTC - event for pod-6cca9485-08b4-4d74-a20b-fdf5eede41e4: {kubelet node1} Failed: Error: ImagePullBackOff May 29 01:45:45.425: INFO: At 2021-05-29 01:41:43 +0000 UTC - event for pod-cb2b1302-0bab-490a-9009-0efcffde1865: {multus } AddedInterface: Add eth0 [10.244.4.248/24] May 29 01:45:45.425: INFO: At 2021-05-29 01:41:43 +0000 UTC - event for pod-e858b45c-a964-4881-b0bf-ebb0388bb11c: {kubelet node1} Failed: Error: ImagePullBackOff May 29 01:45:45.425: INFO: At 2021-05-29 01:41:43 +0000 UTC - event for pod-e858b45c-a964-4881-b0bf-ebb0388bb11c: {kubelet node1} BackOff: Back-off pulling image "docker.io/library/busybox:1.29" May 29 01:45:45.425: INFO: At 2021-05-29 01:41:44 +0000 UTC - event for pod-0f70c96a-f3a2-416f-afbe-b30581146951: {kubelet node1} Failed: Error: ErrImagePull May 29 01:45:45.425: INFO: At 2021-05-29 01:41:44 +0000 UTC - event for pod-0f70c96a-f3a2-416f-afbe-b30581146951: {kubelet node1} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. 
You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit May 29 01:45:45.425: INFO: At 2021-05-29 01:41:44 +0000 UTC - event for pod-41de111d-d2ee-4842-8464-f3f4a42e52a5: {kubelet node1} Failed: Error: ImagePullBackOff May 29 01:45:45.425: INFO: At 2021-05-29 01:41:44 +0000 UTC - event for pod-41de111d-d2ee-4842-8464-f3f4a42e52a5: {kubelet node1} BackOff: Back-off pulling image "docker.io/library/busybox:1.29" May 29 01:45:45.425: INFO: At 2021-05-29 01:41:44 +0000 UTC - event for pod-489d9f3d-4ee3-4c8a-8382-f61ffe1bc37d: {multus } AddedInterface: Add eth0 [10.244.4.250/24] May 29 01:45:45.426: INFO: At 2021-05-29 01:41:44 +0000 UTC - event for pod-489d9f3d-4ee3-4c8a-8382-f61ffe1bc37d: {kubelet node1} Pulling: Pulling image "docker.io/library/busybox:1.29" May 29 01:45:45.426: INFO: At 2021-05-29 01:41:44 +0000 UTC - event for pod-9498afb7-3237-482d-afb1-2fd37f376683: {multus } AddedInterface: Add eth0 [10.244.4.249/24] May 29 01:45:45.426: INFO: At 2021-05-29 01:41:44 +0000 UTC - event for pod-9498afb7-3237-482d-afb1-2fd37f376683: {kubelet node1} Pulling: Pulling image "docker.io/library/busybox:1.29" May 29 01:45:45.426: INFO: At 2021-05-29 01:41:44 +0000 UTC - event for pod-cb2b1302-0bab-490a-9009-0efcffde1865: {kubelet node1} Pulling: Pulling image "docker.io/library/busybox:1.29" May 29 01:45:45.426: INFO: At 2021-05-29 01:41:44 +0000 UTC - event for pod-efdaa6c2-d2a7-44f3-b092-2f363a4b14be: {kubelet node1} Failed: Error: ImagePullBackOff May 29 01:45:45.426: INFO: At 2021-05-29 01:41:44 +0000 UTC - event for pod-efdaa6c2-d2a7-44f3-b092-2f363a4b14be: {kubelet node1} BackOff: Back-off pulling image "docker.io/library/busybox:1.29" May 29 01:45:45.426: INFO: At 2021-05-29 01:41:46 +0000 UTC - event for pod-0f70c96a-f3a2-416f-afbe-b30581146951: {kubelet node1} BackOff: Back-off pulling image "docker.io/library/busybox:1.29" May 29 01:45:45.426: INFO: At 2021-05-29 01:41:46 +0000 UTC - event for pod-0f70c96a-f3a2-416f-afbe-b30581146951: {kubelet node1} Failed: Error: ImagePullBackOff May 29 01:45:45.426: INFO: At 2021-05-29 01:41:46 +0000 UTC - event for pod-2dd379ba-8ee3-4e7c-9515-d735fb305186: {kubelet node1} Pulling: Pulling image "docker.io/library/busybox:1.29" May 29 01:45:45.426: INFO: At 2021-05-29 01:41:46 +0000 UTC - event for pod-2dd379ba-8ee3-4e7c-9515-d735fb305186: {multus } AddedInterface: Add eth0 [10.244.4.252/24] May 29 01:45:45.426: INFO: At 2021-05-29 01:41:46 +0000 UTC - event for pod-3f4b7448-2483-47c1-94f2-20062f32ccd9: {kubelet node1} Pulling: Pulling image "docker.io/library/busybox:1.29" May 29 01:45:45.426: INFO: At 2021-05-29 01:41:46 +0000 UTC - event for pod-3f4b7448-2483-47c1-94f2-20062f32ccd9: {multus } AddedInterface: Add eth0 [10.244.4.251/24] May 29 01:45:45.426: INFO: At 2021-05-29 01:41:47 +0000 UTC - event for pod-eadaa880-5480-4ab1-94b1-dc548d038e74: {kubelet node1} Failed: Error: ErrImagePull May 29 01:45:45.426: INFO: At 2021-05-29 01:41:47 +0000 UTC - event for pod-eadaa880-5480-4ab1-94b1-dc548d038e74: {kubelet node1} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. 
You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit May 29 01:45:45.426: INFO: At 2021-05-29 01:41:48 +0000 UTC - event for pod-eadaa880-5480-4ab1-94b1-dc548d038e74: {kubelet node1} Failed: Error: ImagePullBackOff May 29 01:45:45.426: INFO: At 2021-05-29 01:41:48 +0000 UTC - event for pod-eadaa880-5480-4ab1-94b1-dc548d038e74: {kubelet node1} BackOff: Back-off pulling image "docker.io/library/busybox:1.29" May 29 01:45:45.426: INFO: At 2021-05-29 01:41:53 +0000 UTC - event for pod-b0e49dae-5123-4198-94e3-23e35033b867: {kubelet node1} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit May 29 01:45:45.426: INFO: At 2021-05-29 01:41:53 +0000 UTC - event for pod-b0e49dae-5123-4198-94e3-23e35033b867: {kubelet node1} Failed: Error: ErrImagePull May 29 01:45:45.426: INFO: At 2021-05-29 01:41:54 +0000 UTC - event for pod-8f2feb74-5a37-4fc8-a088-c16299c1806b: {kubelet node1} Failed: Error: ErrImagePull May 29 01:45:45.426: INFO: At 2021-05-29 01:41:54 +0000 UTC - event for pod-8f2feb74-5a37-4fc8-a088-c16299c1806b: {kubelet node1} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit May 29 01:45:45.426: INFO: At 2021-05-29 01:41:54 +0000 UTC - event for pod-b0e49dae-5123-4198-94e3-23e35033b867: {kubelet node1} BackOff: Back-off pulling image "docker.io/library/busybox:1.29" May 29 01:45:45.426: INFO: At 2021-05-29 01:41:54 +0000 UTC - event for pod-b0e49dae-5123-4198-94e3-23e35033b867: {kubelet node1} Failed: Error: ImagePullBackOff May 29 01:45:45.426: INFO: At 2021-05-29 01:41:55 +0000 UTC - event for pod-8f2feb74-5a37-4fc8-a088-c16299c1806b: {kubelet node1} Failed: Error: ImagePullBackOff May 29 01:45:45.426: INFO: At 2021-05-29 01:41:55 +0000 UTC - event for pod-8f2feb74-5a37-4fc8-a088-c16299c1806b: {kubelet node1} BackOff: Back-off pulling image "docker.io/library/busybox:1.29" May 29 01:45:45.426: INFO: At 2021-05-29 01:41:56 +0000 UTC - event for pod-10ad4892-5aa8-483f-8ecc-75620e02ae7f: {kubelet node1} Failed: Error: ImagePullBackOff May 29 01:45:45.426: INFO: At 2021-05-29 01:41:56 +0000 UTC - event for pod-10ad4892-5aa8-483f-8ecc-75620e02ae7f: {kubelet node1} BackOff: Back-off pulling image "docker.io/library/busybox:1.29" May 29 01:45:45.426: INFO: At 2021-05-29 01:41:56 +0000 UTC - event for pod-dd8cdea3-5f8a-4b9c-a27b-4dad4ab66176: {kubelet node1} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit May 29 01:45:45.426: INFO: At 2021-05-29 01:41:56 +0000 UTC - event for pod-dd8cdea3-5f8a-4b9c-a27b-4dad4ab66176: {kubelet node1} Failed: Error: ErrImagePull May 29 01:45:45.426: INFO: At 2021-05-29 01:41:57 +0000 UTC - event for pod-b0a85e72-2a06-4d8b-8874-7a8e19984aed: {kubelet node1} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. 
You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit May 29 01:45:45.426: INFO: At 2021-05-29 01:41:57 +0000 UTC - event for pod-b0a85e72-2a06-4d8b-8874-7a8e19984aed: {kubelet node1} Failed: Error: ErrImagePull May 29 01:45:45.426: INFO: At 2021-05-29 01:41:57 +0000 UTC - event for pod-dd8cdea3-5f8a-4b9c-a27b-4dad4ab66176: {kubelet node1} SandboxChanged: Pod sandbox changed, it will be killed and re-created. May 29 01:45:45.426: INFO: At 2021-05-29 01:41:58 +0000 UTC - event for pod-b0a85e72-2a06-4d8b-8874-7a8e19984aed: {kubelet node1} SandboxChanged: Pod sandbox changed, it will be killed and re-created. May 29 01:45:45.426: INFO: At 2021-05-29 01:41:58 +0000 UTC - event for pod-d7a46ae6-e723-438e-a241-edc720176911: {kubelet node1} Failed: Error: ErrImagePull May 29 01:45:45.426: INFO: At 2021-05-29 01:41:58 +0000 UTC - event for pod-d7a46ae6-e723-438e-a241-edc720176911: {kubelet node1} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit May 29 01:45:45.426: INFO: At 2021-05-29 01:41:59 +0000 UTC - event for pod-d7a46ae6-e723-438e-a241-edc720176911: {kubelet node1} Failed: Error: ImagePullBackOff May 29 01:45:45.426: INFO: At 2021-05-29 01:41:59 +0000 UTC - event for pod-d7a46ae6-e723-438e-a241-edc720176911: {kubelet node1} BackOff: Back-off pulling image "docker.io/library/busybox:1.29" May 29 01:45:45.426: INFO: At 2021-05-29 01:41:59 +0000 UTC - event for pod-dd8cdea3-5f8a-4b9c-a27b-4dad4ab66176: {kubelet node1} BackOff: Back-off pulling image "docker.io/library/busybox:1.29" May 29 01:45:45.426: INFO: At 2021-05-29 01:41:59 +0000 UTC - event for pod-dd8cdea3-5f8a-4b9c-a27b-4dad4ab66176: {multus } AddedInterface: Add eth0 [10.244.4.253/24] May 29 01:45:45.426: INFO: At 2021-05-29 01:41:59 +0000 UTC - event for pod-dd8cdea3-5f8a-4b9c-a27b-4dad4ab66176: {kubelet node1} Failed: Error: ImagePullBackOff May 29 01:45:45.426: INFO: At 2021-05-29 01:42:01 +0000 UTC - event for pod-b0a85e72-2a06-4d8b-8874-7a8e19984aed: {multus } AddedInterface: Add eth0 [10.244.4.254/24] May 29 01:45:45.426: INFO: At 2021-05-29 01:42:01 +0000 UTC - event for pod-b0a85e72-2a06-4d8b-8874-7a8e19984aed: {kubelet node1} BackOff: Back-off pulling image "docker.io/library/busybox:1.29" May 29 01:45:45.426: INFO: At 2021-05-29 01:42:01 +0000 UTC - event for pod-b0a85e72-2a06-4d8b-8874-7a8e19984aed: {kubelet node1} Failed: Error: ImagePullBackOff May 29 01:45:45.426: INFO: At 2021-05-29 01:42:03 +0000 UTC - event for pod-5dbe3fd0-6433-490b-a9d8-da6b16bf9034: {kubelet node1} Failed: Error: ErrImagePull May 29 01:45:45.426: INFO: At 2021-05-29 01:42:03 +0000 UTC - event for pod-5dbe3fd0-6433-490b-a9d8-da6b16bf9034: {kubelet node1} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. 
You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit May 29 01:45:45.426: INFO: At 2021-05-29 01:42:04 +0000 UTC - event for pod-5dbe3fd0-6433-490b-a9d8-da6b16bf9034: {kubelet node1} BackOff: Back-off pulling image "docker.io/library/busybox:1.29" May 29 01:45:45.426: INFO: At 2021-05-29 01:42:04 +0000 UTC - event for pod-5dbe3fd0-6433-490b-a9d8-da6b16bf9034: {kubelet node1} Failed: Error: ImagePullBackOff May 29 01:45:45.426: INFO: At 2021-05-29 01:42:04 +0000 UTC - event for pod-b0a85e72-2a06-4d8b-8874-7a8e19984aed: {multus } AddedInterface: Add eth0 [10.244.4.2/24] May 29 01:45:45.426: INFO: At 2021-05-29 01:42:05 +0000 UTC - event for pod-a93b9850-884f-48af-b0e9-2eabe63905cb: {kubelet node1} Failed: Error: ImagePullBackOff May 29 01:45:45.426: INFO: At 2021-05-29 01:42:05 +0000 UTC - event for pod-a93b9850-884f-48af-b0e9-2eabe63905cb: {kubelet node1} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit May 29 01:45:45.426: INFO: At 2021-05-29 01:42:05 +0000 UTC - event for pod-a93b9850-884f-48af-b0e9-2eabe63905cb: {kubelet node1} BackOff: Back-off pulling image "docker.io/library/busybox:1.29" May 29 01:45:45.426: INFO: At 2021-05-29 01:42:05 +0000 UTC - event for pod-a93b9850-884f-48af-b0e9-2eabe63905cb: {kubelet node1} Failed: Error: ErrImagePull May 29 01:45:45.426: INFO: At 2021-05-29 01:42:06 +0000 UTC - event for pod-d4d20ffd-a827-4734-8245-ea9708e8d922: {kubelet node1} BackOff: Back-off pulling image "docker.io/library/busybox:1.29" May 29 01:45:45.426: INFO: At 2021-05-29 01:42:06 +0000 UTC - event for pod-d4d20ffd-a827-4734-8245-ea9708e8d922: {kubelet node1} Failed: Error: ImagePullBackOff May 29 01:45:45.426: INFO: At 2021-05-29 01:42:07 +0000 UTC - event for pod-b0a85e72-2a06-4d8b-8874-7a8e19984aed: {multus } AddedInterface: Add eth0 [10.244.4.3/24] May 29 01:45:45.426: INFO: At 2021-05-29 01:42:09 +0000 UTC - event for pod-608407df-5264-4ddf-9188-05548319685f: {kubelet node1} Failed: Error: ErrImagePull May 29 01:45:45.426: INFO: At 2021-05-29 01:42:09 +0000 UTC - event for pod-608407df-5264-4ddf-9188-05548319685f: {kubelet node1} Failed: Error: ImagePullBackOff May 29 01:45:45.426: INFO: At 2021-05-29 01:42:09 +0000 UTC - event for pod-608407df-5264-4ddf-9188-05548319685f: {kubelet node1} BackOff: Back-off pulling image "docker.io/library/busybox:1.29" May 29 01:45:45.426: INFO: At 2021-05-29 01:42:09 +0000 UTC - event for pod-608407df-5264-4ddf-9188-05548319685f: {kubelet node1} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. 
You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit May 29 01:45:45.426: INFO: At 2021-05-29 01:42:10 +0000 UTC - event for pod-0ae07703-a1fb-450e-a77b-ce32a067220d: {kubelet node1} Failed: Error: ErrImagePull May 29 01:45:45.426: INFO: At 2021-05-29 01:42:10 +0000 UTC - event for pod-0ae07703-a1fb-450e-a77b-ce32a067220d: {kubelet node1} BackOff: Back-off pulling image "docker.io/library/busybox:1.29" May 29 01:45:45.426: INFO: At 2021-05-29 01:42:10 +0000 UTC - event for pod-0ae07703-a1fb-450e-a77b-ce32a067220d: {kubelet node1} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit May 29 01:45:45.426: INFO: At 2021-05-29 01:42:10 +0000 UTC - event for pod-0ae07703-a1fb-450e-a77b-ce32a067220d: {kubelet node1} Failed: Error: ImagePullBackOff May 29 01:45:45.426: INFO: At 2021-05-29 01:42:10 +0000 UTC - event for pod-937996c9-5164-4d97-9c6f-3b261853cd14: {kubelet node1} Failed: Error: ErrImagePull May 29 01:45:45.426: INFO: At 2021-05-29 01:42:10 +0000 UTC - event for pod-937996c9-5164-4d97-9c6f-3b261853cd14: {kubelet node1} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit May 29 01:45:45.426: INFO: At 2021-05-29 01:42:12 +0000 UTC - event for pod-937996c9-5164-4d97-9c6f-3b261853cd14: {kubelet node1} Failed: Error: ImagePullBackOff May 29 01:45:45.426: INFO: At 2021-05-29 01:42:12 +0000 UTC - event for pod-937996c9-5164-4d97-9c6f-3b261853cd14: {kubelet node1} BackOff: Back-off pulling image "docker.io/library/busybox:1.29" May 29 01:45:45.426: INFO: At 2021-05-29 01:42:13 +0000 UTC - event for pod-cb2b1302-0bab-490a-9009-0efcffde1865: {kubelet node1} Failed: Error: ErrImagePull May 29 01:45:45.426: INFO: At 2021-05-29 01:42:13 +0000 UTC - event for pod-cb2b1302-0bab-490a-9009-0efcffde1865: {kubelet node1} BackOff: Back-off pulling image "docker.io/library/busybox:1.29" May 29 01:45:45.426: INFO: At 2021-05-29 01:42:13 +0000 UTC - event for pod-cb2b1302-0bab-490a-9009-0efcffde1865: {kubelet node1} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit May 29 01:45:45.426: INFO: At 2021-05-29 01:42:13 +0000 UTC - event for pod-cb2b1302-0bab-490a-9009-0efcffde1865: {kubelet node1} Failed: Error: ImagePullBackOff May 29 01:45:45.426: INFO: At 2021-05-29 01:42:14 +0000 UTC - event for pod-9498afb7-3237-482d-afb1-2fd37f376683: {kubelet node1} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. 
You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit May 29 01:45:45.426: INFO: At 2021-05-29 01:42:14 +0000 UTC - event for pod-9498afb7-3237-482d-afb1-2fd37f376683: {kubelet node1} Failed: Error: ErrImagePull May 29 01:45:45.426: INFO: At 2021-05-29 01:42:14 +0000 UTC - event for pod-9498afb7-3237-482d-afb1-2fd37f376683: {kubelet node1} Failed: Error: ImagePullBackOff May 29 01:45:45.426: INFO: At 2021-05-29 01:42:14 +0000 UTC - event for pod-9498afb7-3237-482d-afb1-2fd37f376683: {kubelet node1} BackOff: Back-off pulling image "docker.io/library/busybox:1.29" May 29 01:45:45.426: INFO: At 2021-05-29 01:42:15 +0000 UTC - event for pod-489d9f3d-4ee3-4c8a-8382-f61ffe1bc37d: {kubelet node1} Failed: Error: ErrImagePull May 29 01:45:45.426: INFO: At 2021-05-29 01:42:15 +0000 UTC - event for pod-489d9f3d-4ee3-4c8a-8382-f61ffe1bc37d: {kubelet node1} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit May 29 01:45:45.427: INFO: At 2021-05-29 01:42:15 +0000 UTC - event for pod-489d9f3d-4ee3-4c8a-8382-f61ffe1bc37d: {kubelet node1} Failed: Error: ImagePullBackOff May 29 01:45:45.427: INFO: At 2021-05-29 01:42:15 +0000 UTC - event for pod-489d9f3d-4ee3-4c8a-8382-f61ffe1bc37d: {kubelet node1} BackOff: Back-off pulling image "docker.io/library/busybox:1.29" May 29 01:45:45.427: INFO: At 2021-05-29 01:42:17 +0000 UTC - event for pod-3f4b7448-2483-47c1-94f2-20062f32ccd9: {kubelet node1} SandboxChanged: Pod sandbox changed, it will be killed and re-created. May 29 01:45:45.427: INFO: At 2021-05-29 01:42:17 +0000 UTC - event for pod-3f4b7448-2483-47c1-94f2-20062f32ccd9: {kubelet node1} Failed: Error: ErrImagePull May 29 01:45:45.427: INFO: At 2021-05-29 01:42:17 +0000 UTC - event for pod-3f4b7448-2483-47c1-94f2-20062f32ccd9: {kubelet node1} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit May 29 01:45:45.427: INFO: At 2021-05-29 01:42:18 +0000 UTC - event for pod-2dd379ba-8ee3-4e7c-9515-d735fb305186: {kubelet node1} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit May 29 01:45:45.427: INFO: At 2021-05-29 01:42:18 +0000 UTC - event for pod-2dd379ba-8ee3-4e7c-9515-d735fb305186: {kubelet node1} Failed: Error: ErrImagePull May 29 01:45:45.427: INFO: At 2021-05-29 01:42:19 +0000 UTC - event for pod-2dd379ba-8ee3-4e7c-9515-d735fb305186: {kubelet node1} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
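[Editorial note, not part of the captured log: the Failed/BackOff pairs above are the kubelet's standard image-pull retry cycle (ErrImagePull on the attempt, ImagePullBackOff while waiting to retry). The same event stream the framework dumps here can be pulled directly with client-go; a sketch, assuming the namespace and kubeconfig from this run. "reason=Failed" is a supported field selector for core/v1 Events.]

```go
// Sketch: list image-pull failure events in the test namespace.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	evs, err := cs.CoreV1().Events("persistent-local-volumes-test-1767").
		List(context.TODO(), metav1.ListOptions{FieldSelector: "reason=Failed"})
	if err != nil {
		panic(err)
	}
	for _, e := range evs.Items {
		// Prints records matching the "Failed: Error: ErrImagePull" lines above.
		fmt.Printf("%s %s: %s\n", e.LastTimestamp.Format("15:04:05"),
			e.InvolvedObject.Name, e.Message)
	}
}
```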
May 29 01:45:45.427: INFO: At 2021-05-29 01:42:20 +0000 UTC - event for pod-3f4b7448-2483-47c1-94f2-20062f32ccd9: {kubelet node1} Failed: Error: ImagePullBackOff May 29 01:45:45.427: INFO: At 2021-05-29 01:42:20 +0000 UTC - event for pod-3f4b7448-2483-47c1-94f2-20062f32ccd9: {kubelet node1} BackOff: Back-off pulling image "docker.io/library/busybox:1.29" May 29 01:45:45.427: INFO: At 2021-05-29 01:42:20 +0000 UTC - event for pod-3f4b7448-2483-47c1-94f2-20062f32ccd9: {multus } AddedInterface: Add eth0 [10.244.4.4/24] May 29 01:45:45.427: INFO: At 2021-05-29 01:42:21 +0000 UTC - event for pod-2dd379ba-8ee3-4e7c-9515-d735fb305186: {kubelet node1} BackOff: Back-off pulling image "docker.io/library/busybox:1.29" May 29 01:45:45.427: INFO: At 2021-05-29 01:42:21 +0000 UTC - event for pod-2dd379ba-8ee3-4e7c-9515-d735fb305186: {multus } AddedInterface: Add eth0 [10.244.4.7/24] May 29 01:45:45.427: INFO: At 2021-05-29 01:42:21 +0000 UTC - event for pod-2dd379ba-8ee3-4e7c-9515-d735fb305186: {kubelet node1} Failed: Error: ImagePullBackOff May 29 01:45:45.427: INFO: At 2021-05-29 01:42:24 +0000 UTC - event for pod-2dd379ba-8ee3-4e7c-9515-d735fb305186: {multus } AddedInterface: Add eth0 [10.244.4.10/24] May 29 01:45:45.427: INFO: At 2021-05-29 01:42:47 +0000 UTC - event for pod-a93b9850-884f-48af-b0e9-2eabe63905cb: {kubelet node1} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit May 29 01:45:45.427: INFO: At 2021-05-29 01:43:53 +0000 UTC - event for pod-31b85f20-03f4-4f2d-848d-c83aaed0548d: {kubelet node1} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = toomanyrequests: You have reached your pull rate limit. 
You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit May 29 01:45:45.435: INFO: POD NODE PHASE GRACE CONDITIONS May 29 01:45:45.435: INFO: pod-002e9a2b-7575-4a65-b81c-12d287982b53 node1 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-29 01:40:44 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-29 01:40:44 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-29 01:40:44 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-29 01:40:44 +0000 UTC }] May 29 01:45:45.435: INFO: pod-04545737-23c7-4d47-8731-7521b79b43e6 node1 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-29 01:40:45 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-29 01:40:45 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-29 01:40:45 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-29 01:40:45 +0000 UTC }] May 29 01:45:45.435: INFO: pod-063ca335-874c-4121-b5bb-83d31bf0cce6 node1 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-29 01:40:45 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-29 01:40:45 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-29 01:40:45 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-29 01:40:45 +0000 UTC }] May 29 01:45:45.435: INFO: pod-0ae07703-a1fb-450e-a77b-ce32a067220d node1 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-29 01:40:45 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-29 01:40:45 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-29 01:40:45 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-29 01:40:45 +0000 UTC }] May 29 01:45:45.435: INFO: pod-0f70c96a-f3a2-416f-afbe-b30581146951 node1 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-29 01:40:45 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-29 01:40:45 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-29 01:40:45 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-29 01:40:45 +0000 UTC }] May 29 01:45:45.435: INFO: pod-10ad4892-5aa8-483f-8ecc-75620e02ae7f node1 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-29 01:40:45 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-29 01:40:45 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-29 01:40:45 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-29 01:40:45 +0000 UTC }] May 29 01:45:45.435: INFO: pod-1a64eff4-9c80-4e84-9739-89448e57613f node1 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-29 01:40:44 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-29 
01:40:44 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-29 01:40:44 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-29 01:40:44 +0000 UTC }] May 29 01:45:45.435: INFO: pod-2dd379ba-8ee3-4e7c-9515-d735fb305186 node1 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-29 01:40:45 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-29 01:40:45 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-29 01:40:45 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-29 01:40:45 +0000 UTC }] May 29 01:45:45.435: INFO: pod-31b85f20-03f4-4f2d-848d-c83aaed0548d node1 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-29 01:40:44 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-29 01:40:44 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-29 01:40:44 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-29 01:40:44 +0000 UTC }] May 29 01:45:45.435: INFO: pod-35450dd4-0071-41e1-926f-42d29dca5e57 node1 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-29 01:40:45 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-29 01:40:45 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-29 01:40:45 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-29 01:40:45 +0000 UTC }] May 29 01:45:45.435: INFO: pod-3aa63228-9a6c-41b0-bdc2-90cb33daa37a node1 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-29 01:40:44 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-29 01:40:44 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-29 01:40:44 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-29 01:40:44 +0000 UTC }] May 29 01:45:45.435: INFO: pod-3f4b7448-2483-47c1-94f2-20062f32ccd9 node1 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-29 01:40:45 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-29 01:40:45 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-29 01:40:45 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-29 01:40:45 +0000 UTC }] May 29 01:45:45.435: INFO: pod-41de111d-d2ee-4842-8464-f3f4a42e52a5 node1 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-29 01:40:45 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-29 01:40:45 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-29 01:40:45 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-29 01:40:45 +0000 UTC }] May 29 01:45:45.435: INFO: pod-455cc666-8b4b-47ca-92bb-f5a8ae64c66a 
node1 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-29 01:40:44 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-29 01:40:44 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-29 01:40:44 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-29 01:40:44 +0000 UTC }] May 29 01:45:45.435: INFO: pod-4889ccc1-4d5f-4407-ac68-c20658f1e0c2 node1 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-29 01:40:44 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-29 01:40:44 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-29 01:40:44 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-29 01:40:44 +0000 UTC }] May 29 01:45:45.435: INFO: pod-48998c73-fb1b-4f97-923f-26a8c7fa70a5 node1 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-29 01:40:44 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-29 01:40:44 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-29 01:40:44 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-29 01:40:44 +0000 UTC }] May 29 01:45:45.435: INFO: pod-489d9f3d-4ee3-4c8a-8382-f61ffe1bc37d node1 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-29 01:40:45 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-29 01:40:45 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-29 01:40:45 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-29 01:40:45 +0000 UTC }] May 29 01:45:45.435: INFO: pod-4e56412f-6aef-4cdc-a1bc-866e307ae390 node1 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-29 01:40:45 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-29 01:40:45 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-29 01:40:45 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-29 01:40:45 +0000 UTC }] May 29 01:45:45.435: INFO: pod-4ff53363-8a68-4d49-ba11-e57c78df1c24 node1 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-29 01:40:45 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-29 01:40:45 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-29 01:40:45 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-29 01:40:45 +0000 UTC }] May 29 01:45:45.435: INFO: pod-5dbe3fd0-6433-490b-a9d8-da6b16bf9034 node1 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-29 01:40:45 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-29 01:40:45 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-29 01:40:45 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} 
{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-29 01:40:45 +0000 UTC }] May 29 01:45:45.435: INFO: pod-608407df-5264-4ddf-9188-05548319685f node1 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-29 01:40:45 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-29 01:40:45 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-29 01:40:45 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-29 01:40:45 +0000 UTC }] May 29 01:45:45.435: INFO: pod-61b7bb98-c574-4f2b-99d0-1b168cd07041 node1 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-29 01:40:45 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-29 01:40:45 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-29 01:40:45 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-29 01:40:45 +0000 UTC }] May 29 01:45:45.435: INFO: pod-6cca9485-08b4-4d74-a20b-fdf5eede41e4 node1 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-29 01:40:45 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-29 01:40:45 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-29 01:40:45 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-29 01:40:45 +0000 UTC }] May 29 01:45:45.435: INFO: pod-6e46e05e-b12e-415e-8822-97e76626df4a node1 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-29 01:40:45 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-29 01:40:45 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-29 01:40:45 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-29 01:40:45 +0000 UTC }] May 29 01:45:45.435: INFO: pod-73669c37-9e1f-4320-9e6d-e98c8b4c4ae3 node1 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-29 01:40:45 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-29 01:40:45 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-29 01:40:45 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-29 01:40:45 +0000 UTC }] May 29 01:45:45.435: INFO: pod-7e57ebb4-8477-43ac-bb12-c8b440fa9899 node1 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-29 01:40:44 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-29 01:40:44 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-29 01:40:44 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-29 01:40:44 +0000 UTC }] May 29 01:45:45.436: INFO: pod-8c1c447a-abd5-43ce-85ce-fbe03772e8f0 node1 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-29 01:40:45 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-29 01:40:45 +0000 UTC ContainersNotReady containers with unready status: 
[write-pod]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-29 01:40:45 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-29 01:40:45 +0000 UTC }] May 29 01:45:45.436: INFO: pod-8f2feb74-5a37-4fc8-a088-c16299c1806b node1 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-29 01:40:45 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-29 01:40:45 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-29 01:40:45 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-29 01:40:45 +0000 UTC }] May 29 01:45:45.436: INFO: pod-93735dfd-b03b-44e3-a7b0-eb404d8473b2 node1 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-29 01:40:45 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-29 01:40:45 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-29 01:40:45 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-29 01:40:45 +0000 UTC }] May 29 01:45:45.436: INFO: pod-937996c9-5164-4d97-9c6f-3b261853cd14 node1 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-29 01:40:45 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-29 01:40:45 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-29 01:40:45 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-29 01:40:45 +0000 UTC }] May 29 01:45:45.436: INFO: pod-9498afb7-3237-482d-afb1-2fd37f376683 node1 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-29 01:40:44 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-29 01:40:44 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-29 01:40:44 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-29 01:40:44 +0000 UTC }] May 29 01:45:45.436: INFO: pod-a6d81ce7-a0ab-4dc1-9bd5-0e98520394f5 node1 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-29 01:40:45 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-29 01:40:45 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-29 01:40:45 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-29 01:40:45 +0000 UTC }] May 29 01:45:45.436: INFO: pod-a93b9850-884f-48af-b0e9-2eabe63905cb node1 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-29 01:40:45 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-29 01:40:45 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-29 01:40:45 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-29 01:40:45 +0000 UTC }] May 29 01:45:45.436: INFO: pod-ac542bd2-e7ee-4acf-a2ee-184ddc1baabd node1 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 
2021-05-29 01:40:45 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-29 01:40:45 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-29 01:40:45 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-29 01:40:45 +0000 UTC }] May 29 01:45:45.436: INFO: pod-b0a85e72-2a06-4d8b-8874-7a8e19984aed node1 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-29 01:40:45 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-29 01:40:45 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-29 01:40:45 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-29 01:40:45 +0000 UTC }] May 29 01:45:45.436: INFO: pod-b0e49dae-5123-4198-94e3-23e35033b867 node1 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-29 01:40:45 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-29 01:40:45 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-29 01:40:45 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-29 01:40:45 +0000 UTC }] May 29 01:45:45.436: INFO: pod-b8c61797-5a53-4b9b-aa4f-294add40424c node1 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-29 01:40:44 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-29 01:40:44 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-29 01:40:44 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-29 01:40:44 +0000 UTC }] May 29 01:45:45.436: INFO: pod-cb2b1302-0bab-490a-9009-0efcffde1865 node1 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-29 01:40:45 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-29 01:40:45 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-29 01:40:45 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-29 01:40:45 +0000 UTC }] May 29 01:45:45.436: INFO: pod-d15cecf5-b3e2-44a4-9588-5ef4190fd325 node1 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-29 01:40:44 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-29 01:40:44 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-29 01:40:44 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-29 01:40:44 +0000 UTC }] May 29 01:45:45.436: INFO: pod-d4d20ffd-a827-4734-8245-ea9708e8d922 node1 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-29 01:40:45 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-29 01:40:45 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-29 01:40:45 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-29 
01:40:45 +0000 UTC }] May 29 01:45:45.436: INFO: pod-d4efbbdd-3fa5-49e9-9ea6-c690132e490f node1 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-29 01:40:45 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-29 01:40:45 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-29 01:40:45 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-29 01:40:45 +0000 UTC }] May 29 01:45:45.436: INFO: pod-d68e7bee-ca0e-47f1-bb3e-cbf400d5eb3e node1 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-29 01:40:44 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-29 01:40:44 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-29 01:40:44 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-29 01:40:44 +0000 UTC }] May 29 01:45:45.436: INFO: pod-d6e76180-49a8-4493-baa6-b111f4de8073 node1 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-29 01:40:45 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-29 01:40:45 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-29 01:40:45 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-29 01:40:45 +0000 UTC }] May 29 01:45:45.436: INFO: pod-d7a46ae6-e723-438e-a241-edc720176911 node1 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-29 01:40:45 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-29 01:40:45 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-29 01:40:45 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-29 01:40:45 +0000 UTC }] May 29 01:45:45.436: INFO: pod-dd8cdea3-5f8a-4b9c-a27b-4dad4ab66176 node1 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-29 01:40:44 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-29 01:40:44 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-29 01:40:44 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-29 01:40:44 +0000 UTC }] May 29 01:45:45.436: INFO: pod-e2c62d91-0475-4ba9-ad19-813aed1bee83 node1 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-29 01:40:45 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-29 01:40:45 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-29 01:40:45 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-29 01:40:45 +0000 UTC }] May 29 01:45:45.436: INFO: pod-e858b45c-a964-4881-b0bf-ebb0388bb11c node1 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-29 01:40:45 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-29 01:40:45 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 
2021-05-29 01:40:45 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-29 01:40:45 +0000 UTC }] May 29 01:45:45.436: INFO: pod-eadaa880-5480-4ab1-94b1-dc548d038e74 node1 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-29 01:40:45 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-29 01:40:45 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-29 01:40:45 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-29 01:40:45 +0000 UTC }] May 29 01:45:45.436: INFO: pod-efdaa6c2-d2a7-44f3-b092-2f363a4b14be node1 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-29 01:40:45 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-29 01:40:45 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-29 01:40:45 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-29 01:40:45 +0000 UTC }] May 29 01:45:45.436: INFO: pod-fb330863-8580-435b-a895-d93cb6dbd798 node1 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-29 01:40:45 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-29 01:40:45 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-29 01:40:45 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-29 01:40:44 +0000 UTC }] May 29 01:45:45.436: INFO: May 29 01:45:45.441: INFO: Logging node info for node master1 May 29 01:45:45.443: INFO: Node Info: &Node{ObjectMeta:{master1 /api/v1/nodes/master1 0aa78934-442a-44a3-8c5c-f827e18dd3d7 166815 0 2021-05-28 19:56:25 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master1 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"0a:41:0b:9d:15:5a"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.202 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2021-05-28 19:56:25 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {kubeadm Update v1 2021-05-28 19:56:26 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/master":{}}}}} {flanneld Update v1 2021-05-28 19:59:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-05-28 19:59:09 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234763776 0} {} 196518324Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324599808 0} {} 195629492Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-28 20:02:03 +0000 UTC,LastTransitionTime:2021-05-28 20:02:03 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this 
node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-29 01:45:39 +0000 UTC,LastTransitionTime:2021-05-28 19:56:25 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-29 01:45:39 +0000 UTC,LastTransitionTime:2021-05-28 19:56:25 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-29 01:45:39 +0000 UTC,LastTransitionTime:2021-05-28 19:56:25 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-29 01:45:39 +0000 UTC,LastTransitionTime:2021-05-28 19:59:09 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.202,},NodeAddress{Type:Hostname,Address:master1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:f7fb2c462cae4b9c990ab2e5c72f7816,SystemUUID:00ACFB60-0631-E711-906E-0017A4403562,BootID:24c06694-15ae-4da4-9143-144d98afdd8d,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:726715672,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f kubernetesui/dashboard-amd64:v2.1.0],SizeBytes:225733746,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0 k8s.gcr.io/e2e-test-images/agnhost:2.20],SizeBytes:113869866,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b 
k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:73ca82b4ce829766d4f1f10947c3a338888f876fbed0540dc849c89ff256e90c k8s.gcr.io/coredns:1.7.0],SizeBytes:45227747,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:9d07c391aeb1a9d02eb4343c113ed01825227c70c32b3cae861711f90191b0fd quay.io/coreos/kube-rbac-proxy:v0.4.1],SizeBytes:41317870,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[quay.io/coreos/prometheus-operator@sha256:a54e806fb27d2fb0251da4f3b2a3bb5320759af63a54a755788304775f2384a7 quay.io/coreos/prometheus-operator:v0.40.0],SizeBytes:38238457,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[registry@sha256:1cd9409a311350c3072fe510b52046f104416376c126a479cef9a4dfe692cf57 registry:2.7.0],SizeBytes:24191168,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[localhost:30500/tas-controller@sha256:7f3d9945acdf5d86edd89b2b16fe1f6d63ba8bdb4cab50e66f9bce162df9e388 tas-controller:latest localhost:30500/tas-controller:0.1],SizeBytes:22922439,},ContainerImage{Names:[nginx@sha256:a97eb9ecc708c8aa715ccfb5e9338f5456e4b65575daf304f108301f3b497314 nginx:1.19.2-alpine],SizeBytes:22052669,},ContainerImage{Names:[localhost:30500/tas-extender@sha256:9af6075c93013910787a4e97973da6e0739a86dee1186d7965a5d00b1ac35636 tas-extender:latest localhost:30500/tas-extender:0.1],SizeBytes:21320903,},ContainerImage{Names:[@ :],SizeBytes:5577654,},ContainerImage{Names:[alpine@sha256:c0e9560cda118f9ec63ddefb4a173a2b2a0347082d7dff7dc14272e7841a5b5a alpine:3.12.1],SizeBytes:5573013,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 29 01:45:45.443: INFO: Logging kubelet events for node master1 May 29 01:45:45.447: INFO: Logging pods the kubelet thinks is on node master1 May 29 01:45:45.462: INFO: kube-apiserver-master1 started at 2021-05-28 20:05:21 +0000 UTC (0+1 container statuses recorded) May 29 01:45:45.462: INFO: Container kube-apiserver ready: true, restart count 0 May 29 01:45:45.462: INFO: kube-controller-manager-master1 started at 2021-05-28 19:57:39 +0000 UTC (0+1 container statuses recorded) May 29 01:45:45.462: INFO: Container kube-controller-manager ready: true, restart count 2 May 29 01:45:45.462: INFO: kube-multus-ds-amd64-n9j8k started at 2021-05-28 19:59:08 +0000 UTC (0+1 container statuses recorded) May 29 01:45:45.462: INFO: Container kube-multus ready: true, restart count 1 May 29 01:45:45.462: INFO: docker-registry-docker-registry-56cbc7bc58-rbghz started at 2021-05-28 20:02:55 +0000 UTC (0+2 container statuses recorded) May 29 01:45:45.462: INFO: Container docker-registry ready: true, restart count 0 May 29 01:45:45.462: INFO: Container nginx ready: true, restart count 0 May 29 01:45:45.462: INFO: 
prometheus-operator-5bb8cb9d8f-7wdtq started at 2021-05-28 20:10:02 +0000 UTC (0+2 container statuses recorded) May 29 01:45:45.462: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 29 01:45:45.462: INFO: Container prometheus-operator ready: true, restart count 0 May 29 01:45:45.462: INFO: kube-scheduler-master1 started at 2021-05-28 19:57:39 +0000 UTC (0+1 container statuses recorded) May 29 01:45:45.462: INFO: Container kube-scheduler ready: true, restart count 0 May 29 01:45:45.462: INFO: kube-proxy-994p2 started at 2021-05-28 19:58:24 +0000 UTC (0+1 container statuses recorded) May 29 01:45:45.462: INFO: Container kube-proxy ready: true, restart count 1 May 29 01:45:45.462: INFO: kube-flannel-d54gm started at 2021-05-28 19:59:00 +0000 UTC (1+1 container statuses recorded) May 29 01:45:45.462: INFO: Init container install-cni ready: true, restart count 0 May 29 01:45:45.462: INFO: Container kube-flannel ready: true, restart count 2 May 29 01:45:45.462: INFO: node-exporter-9b7pq started at 2021-05-28 20:10:09 +0000 UTC (0+2 container statuses recorded) May 29 01:45:45.462: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 29 01:45:45.462: INFO: Container node-exporter ready: true, restart count 0 W0529 01:45:45.473542 22 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. May 29 01:45:45.505: INFO: Latency metrics for node master1 May 29 01:45:45.505: INFO: Logging node info for node master2 May 29 01:45:45.507: INFO: Node Info: &Node{ObjectMeta:{master2 /api/v1/nodes/master2 b80f32b6-a396-4f09-a110-345a08d762ee 166834 0 2021-05-28 19:57:04 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master2 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"b2:be:c9:d8:cf:bb"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.203 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/master.version:v0.7.0 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2021-05-28 19:57:04 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {kubeadm Update v1 2021-05-28 19:57:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/master":{}}}}} {flanneld Update v1 2021-05-28 19:59:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-05-28 19:59:09 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}} {nfd-master Update v1 2021-05-28 20:06:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/master.version":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234763776 0} {} 196518324Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324599808 0} {} 195629492Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-28 20:00:49 +0000 
UTC,LastTransitionTime:2021-05-28 20:00:49 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-29 01:45:43 +0000 UTC,LastTransitionTime:2021-05-28 19:57:04 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-29 01:45:43 +0000 UTC,LastTransitionTime:2021-05-28 19:57:04 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-29 01:45:43 +0000 UTC,LastTransitionTime:2021-05-28 19:57:04 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-29 01:45:43 +0000 UTC,LastTransitionTime:2021-05-28 20:00:43 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.203,},NodeAddress{Type:Hostname,Address:master2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:2746caf91c53460599f165aa716150cd,SystemUUID:00A0DE53-E51D-E711-906E-0017A4403562,BootID:b63b522f-706f-4e28-a104-c73edcd04319,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:726715672,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f kubernetesui/dashboard-amd64:v2.1.0],SizeBytes:225733746,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0 k8s.gcr.io/e2e-test-images/agnhost:2.20],SizeBytes:113869866,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[gcr.io/k8s-staging-nfd/node-feature-discovery@sha256:5d116c2c340be665a2c8adc9aca7f91396bd5cbde4add4fdc8dab95d8db43425 gcr.io/k8s-staging-nfd/node-feature-discovery:v0.7.0],SizeBytes:108309584,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 
quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:73ca82b4ce829766d4f1f10947c3a338888f876fbed0540dc849c89ff256e90c k8s.gcr.io/coredns:1.7.0],SizeBytes:45227747,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 29 01:45:45.507: INFO: Logging kubelet events for node master2 May 29 01:45:45.511: INFO: Logging pods the kubelet thinks is on node master2 May 29 01:45:45.533: INFO: kube-scheduler-master2 started at 2021-05-28 20:05:21 +0000 UTC (0+1 container statuses recorded) May 29 01:45:45.534: INFO: Container kube-scheduler ready: true, restart count 3 May 29 01:45:45.534: INFO: kube-proxy-jkbl8 started at 2021-05-28 19:58:24 +0000 UTC (0+1 container statuses recorded) May 29 01:45:45.534: INFO: Container kube-proxy ready: true, restart count 1 May 29 01:45:45.534: INFO: dns-autoscaler-5b7b5c9b6f-r797x started at 2021-05-28 19:59:31 +0000 UTC (0+1 container statuses recorded) May 29 01:45:45.534: INFO: Container autoscaler ready: true, restart count 1 May 29 01:45:45.534: INFO: node-exporter-frch9 started at 2021-05-28 20:10:09 +0000 UTC (0+2 container statuses recorded) May 29 01:45:45.534: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 29 01:45:45.534: INFO: Container node-exporter ready: true, restart count 0 May 29 01:45:45.534: INFO: coredns-7677f9bb54-x2ckq started at 2021-05-29 00:53:57 +0000 UTC (0+1 container statuses recorded) May 29 01:45:45.534: INFO: Container coredns ready: true, restart count 0 May 29 01:45:45.534: INFO: kube-apiserver-master2 started at 2021-05-28 20:05:31 +0000 UTC (0+1 container statuses recorded) May 29 01:45:45.534: INFO: Container kube-apiserver ready: true, restart count 0 May 29 01:45:45.534: INFO: kube-controller-manager-master2 started at 2021-05-28 20:05:41 +0000 UTC (0+1 container statuses recorded) May 29 01:45:45.534: INFO: Container kube-controller-manager ready: true, restart count 3 May 29 01:45:45.534: INFO: kube-flannel-xvtkj started at 2021-05-28 19:59:00 +0000 UTC (1+1 container statuses recorded) May 29 01:45:45.534: INFO: Init container install-cni ready: true, restart count 0 May 29 01:45:45.534: INFO: Container kube-flannel ready: true, restart count 2 May 29 01:45:45.534: INFO: 
kube-multus-ds-amd64-qjwcz started at 2021-05-28 19:59:08 +0000 UTC (0+1 container statuses recorded) May 29 01:45:45.534: INFO: Container kube-multus ready: true, restart count 1 May 29 01:45:45.534: INFO: node-feature-discovery-controller-5bf5c49849-n9ncl started at 2021-05-28 20:05:52 +0000 UTC (0+1 container statuses recorded) May 29 01:45:45.534: INFO: Container nfd-controller ready: true, restart count 0 W0529 01:45:45.546757 22 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. May 29 01:45:45.573: INFO: Latency metrics for node master2 May 29 01:45:45.573: INFO: Logging node info for node master3 May 29 01:45:45.575: INFO: Node Info: &Node{ObjectMeta:{master3 /api/v1/nodes/master3 301b0b5b-fc42-4c78-adb7-75baf6e0cc7e 166799 0 2021-05-28 19:57:14 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master3 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"52:fa:ab:49:88:02"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.204 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2021-05-28 19:57:14 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {kubeadm Update v1 2021-05-28 19:57:15 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/master":{}}}}} {flanneld Update v1 2021-05-28 19:59:05 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-05-28 19:59:09 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}},"f:taints":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234767872 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324603904 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-28 20:02:12 +0000 UTC,LastTransitionTime:2021-05-28 20:02:12 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-29 01:45:36 +0000 UTC,LastTransitionTime:2021-05-28 19:57:14 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-29 01:45:36 +0000 UTC,LastTransitionTime:2021-05-28 19:57:14 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-29 01:45:36 +0000 UTC,LastTransitionTime:2021-05-28 19:57:14 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-29 01:45:36 +0000 UTC,LastTransitionTime:2021-05-28 19:59:09 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.204,},NodeAddress{Type:Hostname,Address:master3,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:a0cb6c0eb1d842469076fff344213c13,SystemUUID:008B1444-141E-E711-906E-0017A4403562,BootID:c6adcff4-8bf7-40d7-9d14-54b1c6a87bc8,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:726715672,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 
nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f kubernetesui/dashboard-amd64:v2.1.0],SizeBytes:225733746,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0 k8s.gcr.io/e2e-test-images/agnhost:2.20],SizeBytes:113869866,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:73ca82b4ce829766d4f1f10947c3a338888f876fbed0540dc849c89ff256e90c k8s.gcr.io/coredns:1.7.0],SizeBytes:45227747,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 29 01:45:45.576: INFO: Logging kubelet events for node master3 May 29 01:45:45.579: INFO: Logging pods the kubelet thinks is on node master3 May 29 01:45:45.593: INFO: kube-controller-manager-master3 started at 2021-05-28 20:06:02 +0000 UTC (0+1 container statuses recorded) May 29 01:45:45.593: INFO: Container kube-controller-manager ready: true, restart count 1 May 29 01:45:45.593: INFO: kube-scheduler-master3 started at 2021-05-28 20:01:23 +0000 UTC (0+1 container statuses recorded) May 29 01:45:45.593: INFO: Container kube-scheduler ready: true, restart count 1 May 29 01:45:45.593: INFO: kube-proxy-t5bh6 started at 
2021-05-28 19:58:24 +0000 UTC (0+1 container statuses recorded) May 29 01:45:45.593: INFO: Container kube-proxy ready: true, restart count 1 May 29 01:45:45.593: INFO: kube-multus-ds-amd64-wqgf7 started at 2021-05-28 19:59:08 +0000 UTC (0+1 container statuses recorded) May 29 01:45:45.593: INFO: Container kube-multus ready: true, restart count 1 May 29 01:45:45.593: INFO: coredns-7677f9bb54-sj78s started at 2021-05-29 00:53:57 +0000 UTC (0+1 container statuses recorded) May 29 01:45:45.593: INFO: Container coredns ready: true, restart count 0 May 29 01:45:45.593: INFO: node-exporter-w42s5 started at 2021-05-28 20:10:09 +0000 UTC (0+2 container statuses recorded) May 29 01:45:45.593: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 29 01:45:45.593: INFO: Container node-exporter ready: true, restart count 0 May 29 01:45:45.593: INFO: kube-apiserver-master3 started at 2021-05-28 20:05:21 +0000 UTC (0+1 container statuses recorded) May 29 01:45:45.593: INFO: Container kube-apiserver ready: true, restart count 0 May 29 01:45:45.593: INFO: kube-flannel-zrskq started at 2021-05-28 19:59:00 +0000 UTC (1+1 container statuses recorded) May 29 01:45:45.593: INFO: Init container install-cni ready: true, restart count 0 May 29 01:45:45.593: INFO: Container kube-flannel ready: true, restart count 1 W0529 01:45:45.605903 22 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. May 29 01:45:45.640: INFO: Latency metrics for node master3 May 29 01:45:45.640: INFO: Logging node info for node node1 May 29 01:45:45.643: INFO: Node Info: &Node{ObjectMeta:{node1 /api/v1/nodes/node1 43e51cb4-5acb-42b5-8f26-cd5e977f3829 166816 0 2021-05-28 19:58:22 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.25.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true 
feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:amd64 kubernetes.io/hostname:node1 kubernetes.io/os:linux] map[csi.volume.kubernetes.io/nodeid:{"csi-mock-csi-mock-volumes-2661":"csi-mock-csi-mock-volumes-2661","csi-mock-csi-mock-volumes-2991":"csi-mock-csi-mock-volumes-2991","csi-mock-csi-mock-volumes-4403":"csi-mock-csi-mock-volumes-4403","csi-mock-csi-mock-volumes-5716":"csi-mock-csi-mock-volumes-5716","csi-mock-csi-mock-volumes-617":"csi-mock-csi-mock-volumes-617","csi-mock-csi-mock-volumes-6185":"csi-mock-csi-mock-volumes-6185","csi-mock-csi-mock-volumes-6201":"csi-mock-csi-mock-volumes-6201"} flannel.alpha.coreos.com/backend-data:{"VtepMAC":"d2:9d:b7:73:58:07"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.207 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-hardware_multithreading,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.7.0 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-05-28 19:58:22 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2021-05-28 19:59:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2021-05-28 20:06:07 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2021-05-28 20:08:35 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {e2e.test Update v1 2021-05-29 01:14:54 +0000 UTC FieldsV1 {"f:status":{"f:capacity":{"f:example.com/fakePTSRes":{},"f:example.com/fakecpu":{}}}}} {kube-controller-manager Update v1 2021-05-29 01:21:48 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.4.0/24\"":{}}}}} {kubelet Update v1 2021-05-29 01:22:04 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:csi.volume.kubernetes.io/nodeid":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:io.kubernetes.storage.mock/node":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cmk.intel.com/exclusive-cores":{},"f:cpu":{},"f:ephemeral-storage":{},"f:example.com/fakePTSRes":{},"f:example.com/fakecpu":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:intel.com/intel_sriov_netdevice":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:intel.com/intel_sriov_netdevice":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.4.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},example.com/fakePTSRes: {{10 0} {} 10 DecimalSI},example.com/fakecpu: {{1 3} {} 1k DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269628928 0} {} 196552372Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},example.com/fakePTSRes: {{10 0} {} 10 DecimalSI},example.com/fakecpu: {{1 3} {} 1k DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884628480 0} {} 174692020Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-28 20:01:58 +0000 UTC,LastTransitionTime:2021-05-28 20:01:58 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-29 01:45:39 +0000 UTC,LastTransitionTime:2021-05-28 19:58:22 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-29 01:45:39 +0000 
UTC,LastTransitionTime:2021-05-28 19:58:22 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-29 01:45:39 +0000 UTC,LastTransitionTime:2021-05-28 19:58:22 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-29 01:45:39 +0000 UTC,LastTransitionTime:2021-05-28 19:59:04 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.207,},NodeAddress{Type:Hostname,Address:node1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:abe6e95dbfa24a9abd34d8fa2abe7655,SystemUUID:00CDA902-D022-E711-906E-0017A4403562,BootID:17719d1f-7df5-4d95-81f3-7d3ac5110ba2,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[localhost:30500/barometer-collectd@sha256:d731a0fc49b9ad6125b8d5dcb29da2b60bc940b48eacb6f5a9eb2a55c10598db localhost:30500/barometer-collectd:stable],SizeBytes:1464395058,},ContainerImage{Names:[@ :],SizeBytes:1002495332,},ContainerImage{Names:[opnfv/barometer-collectd@sha256:ed5c574f653e2a39e784ff322033a2319aafde7366c803a88f20f7a2a8bc1efb opnfv/barometer-collectd:stable],SizeBytes:825413035,},ContainerImage{Names:[localhost:30500/cmk@sha256:97953d03767e4c2eb5d156394aeaf4bb0b74f3fd1ad08c303cb7561e272a00ff cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:726715672,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[golang@sha256:aa24a0a337084e0747e7c8e97e1131270ae38150e691314f1fa19f4b2f9093c0 golang:alpine3.12],SizeBytes:301156062,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2 k8s.gcr.io/etcd:3.4.13-0],SizeBytes:253392289,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f kubernetesui/dashboard-amd64:v2.1.0],SizeBytes:225733746,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:195659796,},ContainerImage{Names:[grafana/grafana@sha256:89304bc2335f4976618548d7b93d165ed67369d3a051d2f627fc4e0aa3d0aff1 grafana/grafana:7.1.0],SizeBytes:179601493,},ContainerImage{Names:[quay.io/prometheus/prometheus@sha256:d4ba4dd1a9ebb90916d0bfed3c204adcb118ed24546bf8dd2e6b30fc0fd2009e quay.io/prometheus/prometheus:v2.20.0],SizeBytes:144886595,},ContainerImage{Names:[nginx@sha256:df13abe416e37eb3db4722840dd479b00ba193ac6606e7902331dcea50f4f1f2 nginx:1.19],SizeBytes:133122553,},ContainerImage{Names:[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 
k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0 k8s.gcr.io/e2e-test-images/agnhost:2.20],SizeBytes:113869866,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[gcr.io/k8s-staging-nfd/node-feature-discovery@sha256:5d116c2c340be665a2c8adc9aca7f91396bd5cbde4add4fdc8dab95d8db43425 gcr.io/k8s-staging-nfd/node-feature-discovery:v0.7.0],SizeBytes:108309584,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:ff02aacd9766d597883fabafc7ad604c719a57611db1bcc1564c69a45b000a55 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.17],SizeBytes:60684726,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[directxman12/k8s-prometheus-adapter-amd64@sha256:b63dc612e3cb73f79d2401a4516f794f9f0a83002600ca72e675e41baecff437 directxman12/k8s-prometheus-adapter-amd64:v0.6.0],SizeBytes:53267842,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:78e3393f5fd5ff6c1e5dada2478cfa456fb7164929e573cf9a87bf6532730679 k8s.gcr.io/sig-storage/csi-provisioner:v1.6.0],SizeBytes:48281550,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:2ffa647e8107cfd39e5f464e738dce014c9f5e51b108da36c3ab621048d0bbab k8s.gcr.io/sig-storage/csi-attacher:v2.2.0],SizeBytes:46131354,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:6c6a0332693a7c456378f6abd2bb40611826c1e1a733cadbdae2daab3125b71c k8s.gcr.io/sig-storage/csi-resizer:v0.5.0],SizeBytes:46041582,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:2bec7a43da8efe70cb7cb14020a6b10aecd02c87e020d394de84e6807e2cf620 nfvpe/sriov-device-plugin:latest localhost:30500/sriov-device-plugin:v3.3.1],SizeBytes:44392623,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[localhost:30500/tas-controller@sha256:7f3d9945acdf5d86edd89b2b16fe1f6d63ba8bdb4cab50e66f9bce162df9e388 localhost:30500/tas-controller:0.1],SizeBytes:22922439,},ContainerImage{Names:[localhost:30500/tas-extender@sha256:9af6075c93013910787a4e97973da6e0739a86dee1186d7965a5d00b1ac35636 
localhost:30500/tas-extender:0.1],SizeBytes:21320903,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:0b4273abac4c241fa3d70aaf52b0d79a133d2737081f4a5c5dea4949f6c45dc3 k8s.gcr.io/sig-storage/mock-driver:v3.1.0],SizeBytes:18687618,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:9622c6a6dac7499a055a382930f4de82905a3c5735c0753f7094115c9c871309 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v1.3.0],SizeBytes:16322467,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[quay.io/coreos/prometheus-config-reloader@sha256:c679a143b24b7731ad1577a9865aa3805426cbf1b25e30807b951dff68466ffd quay.io/coreos/prometheus-config-reloader:v0.40.0],SizeBytes:10131705,},ContainerImage{Names:[jimmidyson/configmap-reload@sha256:d107c7a235c266273b1c3502a391fec374430e5625539403d0de797fa9c556a2 jimmidyson/configmap-reload:v0.3.0],SizeBytes:9700438,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[alpine@sha256:36553b10a4947067b9fbb7d532951066293a68eae893beba1d9235f7d11a20ad alpine:3.12],SizeBytes:5581415,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:4753501,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 29 01:45:45.644: INFO: Logging kubelet events for node node1 May 29 01:45:45.647: INFO: Logging pods the kubelet thinks is on node node1 May 29 01:45:46.146: INFO: pod-4ff53363-8a68-4d49-ba11-e57c78df1c24 started at 2021-05-29 01:40:45 +0000 UTC (0+1 container statuses recorded) May 29 01:45:46.146: INFO: Container write-pod ready: false, restart count 0 May 29 01:45:46.146: INFO: pod-73669c37-9e1f-4320-9e6d-e98c8b4c4ae3 started at 2021-05-29 01:40:45 +0000 UTC (0+1 container statuses recorded) May 29 01:45:46.146: INFO: Container write-pod ready: false, restart count 0 May 29 01:45:46.146: INFO: pod-d4efbbdd-3fa5-49e9-9ea6-c690132e490f started at 2021-05-29 01:40:45 +0000 UTC (0+1 container statuses recorded) May 29 01:45:46.146: INFO: Container write-pod ready: false, restart count 0 May 29 01:45:46.146: INFO: pod-93735dfd-b03b-44e3-a7b0-eb404d8473b2 started at 2021-05-29 01:40:45 +0000 UTC (0+1 container statuses recorded) May 29 01:45:46.147: INFO: Container write-pod ready: false, restart count 0 May 29 01:45:46.147: INFO: nginx-proxy-node1 started at 2021-05-28 
20:05:21 +0000 UTC (0+1 container statuses recorded) May 29 01:45:46.147: INFO: Container nginx-proxy ready: true, restart count 1 May 29 01:45:46.147: INFO: pod-31b85f20-03f4-4f2d-848d-c83aaed0548d started at 2021-05-29 01:40:44 +0000 UTC (0+1 container statuses recorded) May 29 01:45:46.147: INFO: Container write-pod ready: false, restart count 0 May 29 01:45:46.147: INFO: pod-8c1c447a-abd5-43ce-85ce-fbe03772e8f0 started at 2021-05-29 01:40:45 +0000 UTC (0+1 container statuses recorded) May 29 01:45:46.147: INFO: Container write-pod ready: false, restart count 0 May 29 01:45:46.147: INFO: pod-35450dd4-0071-41e1-926f-42d29dca5e57 started at 2021-05-29 01:40:45 +0000 UTC (0+1 container statuses recorded) May 29 01:45:46.147: INFO: Container write-pod ready: false, restart count 0 May 29 01:45:46.147: INFO: pod-61b7bb98-c574-4f2b-99d0-1b168cd07041 started at 2021-05-29 01:40:45 +0000 UTC (0+1 container statuses recorded) May 29 01:45:46.147: INFO: Container write-pod ready: false, restart count 0 May 29 01:45:46.147: INFO: pod-d4d20ffd-a827-4734-8245-ea9708e8d922 started at 2021-05-29 01:40:45 +0000 UTC (0+1 container statuses recorded) May 29 01:45:46.147: INFO: Container write-pod ready: false, restart count 0 May 29 01:45:46.147: INFO: pod-10ad4892-5aa8-483f-8ecc-75620e02ae7f started at 2021-05-29 01:40:45 +0000 UTC (0+1 container statuses recorded) May 29 01:45:46.147: INFO: Container write-pod ready: false, restart count 0 May 29 01:45:46.147: INFO: pod-ac542bd2-e7ee-4acf-a2ee-184ddc1baabd started at 2021-05-29 01:40:45 +0000 UTC (0+1 container statuses recorded) May 29 01:45:46.147: INFO: Container write-pod ready: false, restart count 0 May 29 01:45:46.147: INFO: node-exporter-khdpg started at 2021-05-28 20:10:09 +0000 UTC (0+2 container statuses recorded) May 29 01:45:46.147: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 29 01:45:46.147: INFO: Container node-exporter ready: true, restart count 0 May 29 01:45:46.147: INFO: pod-48998c73-fb1b-4f97-923f-26a8c7fa70a5 started at 2021-05-29 01:40:44 +0000 UTC (0+1 container statuses recorded) May 29 01:45:46.147: INFO: Container write-pod ready: false, restart count 0 May 29 01:45:46.147: INFO: pod-e2c62d91-0475-4ba9-ad19-813aed1bee83 started at 2021-05-29 01:40:45 +0000 UTC (0+1 container statuses recorded) May 29 01:45:46.147: INFO: Container write-pod ready: false, restart count 0 May 29 01:45:46.147: INFO: pod-063ca335-874c-4121-b5bb-83d31bf0cce6 started at 2021-05-29 01:40:45 +0000 UTC (0+1 container statuses recorded) May 29 01:45:46.147: INFO: Container write-pod ready: false, restart count 0 May 29 01:45:46.147: INFO: pod-4e56412f-6aef-4cdc-a1bc-866e307ae390 started at 2021-05-29 01:40:45 +0000 UTC (0+1 container statuses recorded) May 29 01:45:46.147: INFO: Container write-pod ready: false, restart count 0 May 29 01:45:46.147: INFO: pod-6cca9485-08b4-4d74-a20b-fdf5eede41e4 started at 2021-05-29 01:40:45 +0000 UTC (0+1 container statuses recorded) May 29 01:45:46.147: INFO: Container write-pod ready: false, restart count 0 May 29 01:45:46.147: INFO: kube-flannel-2tjjt started at 2021-05-28 19:59:00 +0000 UTC (1+1 container statuses recorded) May 29 01:45:46.147: INFO: Init container install-cni ready: true, restart count 0 May 29 01:45:46.147: INFO: Container kube-flannel ready: true, restart count 2 May 29 01:45:46.147: INFO: collectd-qw9nd started at 2021-05-28 20:16:29 +0000 UTC (0+3 container statuses recorded) May 29 01:45:46.147: INFO: Container collectd ready: true, restart count 0 May 29 01:45:46.147: 
INFO: Container collectd-exporter ready: true, restart count 0 May 29 01:45:46.147: INFO: Container rbac-proxy ready: true, restart count 0 May 29 01:45:46.147: INFO: cmk-webhook-6c9d5f8578-kt8bp started at 2021-05-29 00:29:43 +0000 UTC (0+1 container statuses recorded) May 29 01:45:46.147: INFO: Container cmk-webhook ready: true, restart count 0 May 29 01:45:46.147: INFO: pod-4889ccc1-4d5f-4407-ac68-c20658f1e0c2 started at 2021-05-29 01:40:44 +0000 UTC (0+1 container statuses recorded) May 29 01:45:46.147: INFO: Container write-pod ready: false, restart count 0 May 29 01:45:46.147: INFO: pod-3aa63228-9a6c-41b0-bdc2-90cb33daa37a started at 2021-05-29 01:40:44 +0000 UTC (0+1 container statuses recorded) May 29 01:45:46.147: INFO: Container write-pod ready: false, restart count 0 May 29 01:45:46.147: INFO: kubernetes-dashboard-86c6f9df5b-c5sbq started at 2021-05-28 19:59:33 +0000 UTC (0+1 container statuses recorded) May 29 01:45:46.147: INFO: Container kubernetes-dashboard ready: true, restart count 2 May 29 01:45:46.147: INFO: node-feature-discovery-worker-5x4qg started at 2021-05-28 20:05:52 +0000 UTC (0+1 container statuses recorded) May 29 01:45:46.147: INFO: Container nfd-worker ready: true, restart count 0 May 29 01:45:46.147: INFO: tas-telemetry-aware-scheduling-575ccbc9d4-6wq28 started at 2021-05-29 00:29:43 +0000 UTC (0+2 container statuses recorded) May 29 01:45:46.147: INFO: Container tas-controller ready: true, restart count 0 May 29 01:45:46.147: INFO: Container tas-extender ready: true, restart count 0 May 29 01:45:46.147: INFO: pod-7e57ebb4-8477-43ac-bb12-c8b440fa9899 started at 2021-05-29 01:40:44 +0000 UTC (0+1 container statuses recorded) May 29 01:45:46.147: INFO: Container write-pod ready: false, restart count 0 May 29 01:45:46.147: INFO: pod-b8c61797-5a53-4b9b-aa4f-294add40424c started at 2021-05-29 01:40:44 +0000 UTC (0+1 container statuses recorded) May 29 01:45:46.147: INFO: Container write-pod ready: false, restart count 0 May 29 01:45:46.147: INFO: pod-a6d81ce7-a0ab-4dc1-9bd5-0e98520394f5 started at 2021-05-29 01:40:45 +0000 UTC (0+1 container statuses recorded) May 29 01:45:46.147: INFO: Container write-pod ready: false, restart count 0 May 29 01:45:46.147: INFO: cmk-init-discover-node1-rvqxm started at 2021-05-28 20:08:32 +0000 UTC (0+3 container statuses recorded) May 29 01:45:46.147: INFO: Container discover ready: false, restart count 0 May 29 01:45:46.147: INFO: Container init ready: false, restart count 0 May 29 01:45:46.147: INFO: Container install ready: false, restart count 0 May 29 01:45:46.147: INFO: prometheus-k8s-0 started at 2021-05-28 20:10:26 +0000 UTC (0+5 container statuses recorded) May 29 01:45:46.147: INFO: Container custom-metrics-apiserver ready: true, restart count 0 May 29 01:45:46.147: INFO: Container grafana ready: true, restart count 0 May 29 01:45:46.147: INFO: Container prometheus ready: true, restart count 1 May 29 01:45:46.147: INFO: Container prometheus-config-reloader ready: true, restart count 0 May 29 01:45:46.147: INFO: Container rules-configmap-reloader ready: true, restart count 0 May 29 01:45:46.147: INFO: pod-efdaa6c2-d2a7-44f3-b092-2f363a4b14be started at 2021-05-29 01:40:45 +0000 UTC (0+1 container statuses recorded) May 29 01:45:46.147: INFO: Container write-pod ready: false, restart count 0 May 29 01:45:46.147: INFO: pod-e858b45c-a964-4881-b0bf-ebb0388bb11c started at 2021-05-29 01:40:45 +0000 UTC (0+1 container statuses recorded) May 29 01:45:46.147: INFO: Container write-pod ready: false, restart count 0 May 29 
01:45:46.147: INFO: pod-41de111d-d2ee-4842-8464-f3f4a42e52a5 started at 2021-05-29 01:40:45 +0000 UTC (0+1 container statuses recorded) May 29 01:45:46.147: INFO: Container write-pod ready: false, restart count 0 May 29 01:45:46.147: INFO: pod-04545737-23c7-4d47-8731-7521b79b43e6 started at 2021-05-29 01:40:45 +0000 UTC (0+1 container statuses recorded) May 29 01:45:46.147: INFO: Container write-pod ready: false, restart count 0 May 29 01:45:46.147: INFO: kube-multus-ds-amd64-x7826 started at 2021-05-28 19:59:08 +0000 UTC (0+1 container statuses recorded) May 29 01:45:46.147: INFO: Container kube-multus ready: true, restart count 1 May 29 01:45:46.147: INFO: pod-0f70c96a-f3a2-416f-afbe-b30581146951 started at 2021-05-29 01:40:45 +0000 UTC (0+1 container statuses recorded) May 29 01:45:46.147: INFO: Container write-pod ready: false, restart count 0 May 29 01:45:46.147: INFO: pod-b0e49dae-5123-4198-94e3-23e35033b867 started at 2021-05-29 01:40:45 +0000 UTC (0+1 container statuses recorded) May 29 01:45:46.147: INFO: Container write-pod ready: false, restart count 0 May 29 01:45:46.147: INFO: pod-eadaa880-5480-4ab1-94b1-dc548d038e74 started at 2021-05-29 01:40:45 +0000 UTC (0+1 container statuses recorded) May 29 01:45:46.147: INFO: Container write-pod ready: false, restart count 0 May 29 01:45:46.147: INFO: kubernetes-metrics-scraper-678c97765c-wblkm started at 2021-05-28 19:59:33 +0000 UTC (0+1 container statuses recorded) May 29 01:45:46.147: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 May 29 01:45:46.147: INFO: pod-dd8cdea3-5f8a-4b9c-a27b-4dad4ab66176 started at 2021-05-29 01:40:44 +0000 UTC (0+1 container statuses recorded) May 29 01:45:46.147: INFO: Container write-pod ready: false, restart count 0 May 29 01:45:46.147: INFO: pod-2dd379ba-8ee3-4e7c-9515-d735fb305186 started at 2021-05-29 01:40:45 +0000 UTC (0+1 container statuses recorded) May 29 01:45:46.147: INFO: Container write-pod ready: false, restart count 0 May 29 01:45:46.147: INFO: pod-8f2feb74-5a37-4fc8-a088-c16299c1806b started at 2021-05-29 01:40:45 +0000 UTC (0+1 container statuses recorded) May 29 01:45:46.147: INFO: Container write-pod ready: false, restart count 0 May 29 01:45:46.147: INFO: pod-d7a46ae6-e723-438e-a241-edc720176911 started at 2021-05-29 01:40:45 +0000 UTC (0+1 container statuses recorded) May 29 01:45:46.147: INFO: Container write-pod ready: false, restart count 0 May 29 01:45:46.147: INFO: pod-cb2b1302-0bab-490a-9009-0efcffde1865 started at 2021-05-29 01:40:45 +0000 UTC (0+1 container statuses recorded) May 29 01:45:46.147: INFO: Container write-pod ready: false, restart count 0 May 29 01:45:46.147: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-zk2pt started at 2021-05-28 20:06:47 +0000 UTC (0+1 container statuses recorded) May 29 01:45:46.147: INFO: Container kube-sriovdp ready: true, restart count 0 May 29 01:45:46.147: INFO: cmk-jhzjr started at 2021-05-28 20:09:15 +0000 UTC (0+2 container statuses recorded) May 29 01:45:46.147: INFO: Container nodereport ready: true, restart count 0 May 29 01:45:46.147: INFO: Container reconcile ready: true, restart count 0 May 29 01:45:46.147: INFO: pod-d68e7bee-ca0e-47f1-bb3e-cbf400d5eb3e started at 2021-05-29 01:40:44 +0000 UTC (0+1 container statuses recorded) May 29 01:45:46.147: INFO: Container write-pod ready: false, restart count 0 May 29 01:45:46.147: INFO: pod-937996c9-5164-4d97-9c6f-3b261853cd14 started at 2021-05-29 01:40:45 +0000 UTC (0+1 container statuses recorded) May 29 01:45:46.147: INFO: Container write-pod ready: 
false, restart count 0 May 29 01:45:46.147: INFO: pod-b0a85e72-2a06-4d8b-8874-7a8e19984aed started at 2021-05-29 01:40:45 +0000 UTC (0+1 container statuses recorded) May 29 01:45:46.147: INFO: Container write-pod ready: false, restart count 0 May 29 01:45:46.147: INFO: pod-5dbe3fd0-6433-490b-a9d8-da6b16bf9034 started at 2021-05-29 01:40:45 +0000 UTC (0+1 container statuses recorded) May 29 01:45:46.147: INFO: Container write-pod ready: false, restart count 0 May 29 01:45:46.147: INFO: pod-d15cecf5-b3e2-44a4-9588-5ef4190fd325 started at 2021-05-29 01:40:44 +0000 UTC (0+1 container statuses recorded) May 29 01:45:46.147: INFO: Container write-pod ready: false, restart count 0 May 29 01:45:46.147: INFO: pod-1a64eff4-9c80-4e84-9739-89448e57613f started at 2021-05-29 01:40:44 +0000 UTC (0+1 container statuses recorded) May 29 01:45:46.147: INFO: Container write-pod ready: false, restart count 0 May 29 01:45:46.147: INFO: pod-9498afb7-3237-482d-afb1-2fd37f376683 started at 2021-05-29 01:40:44 +0000 UTC (0+1 container statuses recorded) May 29 01:45:46.147: INFO: Container write-pod ready: false, restart count 0 May 29 01:45:46.147: INFO: pod-608407df-5264-4ddf-9188-05548319685f started at 2021-05-29 01:40:45 +0000 UTC (0+1 container statuses recorded) May 29 01:45:46.147: INFO: Container write-pod ready: false, restart count 0 May 29 01:45:46.147: INFO: pod-a93b9850-884f-48af-b0e9-2eabe63905cb started at 2021-05-29 01:40:45 +0000 UTC (0+1 container statuses recorded) May 29 01:45:46.147: INFO: Container write-pod ready: false, restart count 0 May 29 01:45:46.147: INFO: pod-489d9f3d-4ee3-4c8a-8382-f61ffe1bc37d started at 2021-05-29 01:40:45 +0000 UTC (0+1 container statuses recorded) May 29 01:45:46.147: INFO: Container write-pod ready: false, restart count 0 May 29 01:45:46.147: INFO: pod-3f4b7448-2483-47c1-94f2-20062f32ccd9 started at 2021-05-29 01:40:45 +0000 UTC (0+1 container statuses recorded) May 29 01:45:46.147: INFO: Container write-pod ready: false, restart count 0 May 29 01:45:46.147: INFO: pod-0ae07703-a1fb-450e-a77b-ce32a067220d started at 2021-05-29 01:40:45 +0000 UTC (0+1 container statuses recorded) May 29 01:45:46.147: INFO: Container write-pod ready: false, restart count 0 May 29 01:45:46.147: INFO: kube-proxy-lsngv started at 2021-05-28 19:58:24 +0000 UTC (0+1 container statuses recorded) May 29 01:45:46.147: INFO: Container kube-proxy ready: true, restart count 2 May 29 01:45:46.147: INFO: pod-455cc666-8b4b-47ca-92bb-f5a8ae64c66a started at 2021-05-29 01:40:44 +0000 UTC (0+1 container statuses recorded) May 29 01:45:46.147: INFO: Container write-pod ready: false, restart count 0 May 29 01:45:46.147: INFO: pod-002e9a2b-7575-4a65-b81c-12d287982b53 started at 2021-05-29 01:40:44 +0000 UTC (0+1 container statuses recorded) May 29 01:45:46.147: INFO: Container write-pod ready: false, restart count 0 May 29 01:45:46.147: INFO: pod-fb330863-8580-435b-a895-d93cb6dbd798 started at 2021-05-29 01:40:45 +0000 UTC (0+1 container statuses recorded) May 29 01:45:46.147: INFO: Container write-pod ready: false, restart count 0 May 29 01:45:46.147: INFO: pod-6e46e05e-b12e-415e-8822-97e76626df4a started at 2021-05-29 01:40:45 +0000 UTC (0+1 container statuses recorded) May 29 01:45:46.147: INFO: Container write-pod ready: false, restart count 0 May 29 01:45:46.147: INFO: pod-d6e76180-49a8-4493-baa6-b111f4de8073 started at 2021-05-29 01:40:45 +0000 UTC (0+1 container statuses recorded) May 29 01:45:46.147: INFO: Container write-pod ready: false, restart count 0 W0529 01:45:46.158665 22 
metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. May 29 01:45:46.803: INFO: Latency metrics for node node1 May 29 01:45:46.803: INFO: Logging node info for node node2 May 29 01:45:46.806: INFO: Node Info: &Node{ObjectMeta:{node2 /api/v1/nodes/node2 3cc89580-b568-4c82-bd1f-200d0823da3b 166822 0 2021-05-28 19:58:22 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.25.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node2 kubernetes.io/os:linux] map[csi.volume.kubernetes.io/nodeid:{"csi-mock-csi-mock-volumes-1742":"csi-mock-csi-mock-volumes-1742","csi-mock-csi-mock-volumes-3056":"csi-mock-csi-mock-volumes-3056","csi-mock-csi-mock-volumes-3983":"csi-mock-csi-mock-volumes-3983","csi-mock-csi-mock-volumes-4234":"csi-mock-csi-mock-volumes-4234","csi-mock-csi-mock-volumes-4289":"csi-mock-csi-mock-volumes-4289","csi-mock-csi-mock-volumes-6106":"csi-mock-csi-mock-volumes-6106","csi-mock-csi-mock-volumes-6742":"csi-mock-csi-mock-volumes-6742","csi-mock-csi-mock-volumes-7637":"csi-mock-csi-mock-volumes-7637","csi-mock-csi-mock-volumes-7787":"csi-mock-csi-mock-volumes-7787","csi-mock-csi-mock-volumes-8094":"csi-mock-csi-mock-volumes-8094","csi-mock-csi-mock-volumes-9667":"csi-mock-csi-mock-volumes-9667"} flannel.alpha.coreos.com/backend-data:{"VtepMAC":"62:22:2c:ae:14:ae"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.208 
kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-hardware_multithreading,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.7.0 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-05-28 19:58:22 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2021-05-28 19:59:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2021-05-28 20:06:06 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.c
onfigured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2021-05-28 20:08:58 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {e2e.test Update v1 2021-05-29 01:15:15 +0000 UTC FieldsV1 {"f:status":{"f:capacity":{"f:example.com/fakePTSRes":{}}}}} {kube-controller-manager Update v1 2021-05-29 01:23:21 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.3.0/24\"":{}}}}} {kubelet Update v1 2021-05-29 01:23:35 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:csi.volume.kubernetes.io/nodeid":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cmk.intel.com/exclusive-cores":{},"f:cpu":{},"f:ephemeral-storage":{},"f:example.com/fakePTSRes":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:intel.com/intel_sriov_netdevice":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:intel.com/intel_sriov_netdevice":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.3.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},example.com/fakePTSRes: {{10 0} {} 10 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269633024 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},example.com/fakePTSRes: {{10 0} {} 10 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: 
{{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884632576 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-28 20:01:05 +0000 UTC,LastTransitionTime:2021-05-28 20:01:05 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-29 01:45:40 +0000 UTC,LastTransitionTime:2021-05-28 19:58:22 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-29 01:45:40 +0000 UTC,LastTransitionTime:2021-05-28 19:58:22 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-29 01:45:40 +0000 UTC,LastTransitionTime:2021-05-28 19:58:22 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-29 01:45:40 +0000 UTC,LastTransitionTime:2021-05-28 19:59:04 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.208,},NodeAddress{Type:Hostname,Address:node2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:b2730c4b09814ab9a78e7bc62c820fbb,SystemUUID:80B3CD56-852F-E711-906E-0017A4403562,BootID:f1459072-d21d-46de-a5d9-46ec9349aae0,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[localhost:30500/barometer-collectd@sha256:d731a0fc49b9ad6125b8d5dcb29da2b60bc940b48eacb6f5a9eb2a55c10598db localhost:30500/barometer-collectd:stable],SizeBytes:1464395058,},ContainerImage{Names:[localhost:30500/cmk@sha256:97953d03767e4c2eb5d156394aeaf4bb0b74f3fd1ad08c303cb7561e272a00ff localhost:30500/cmk:v1.5.1],SizeBytes:726715672,},ContainerImage{Names:[cmk:v1.5.1],SizeBytes:726715672,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:195659796,},ContainerImage{Names:[nginx@sha256:df13abe416e37eb3db4722840dd479b00ba193ac6606e7902331dcea50f4f1f2 nginx:1.19],SizeBytes:133122553,},ContainerImage{Names:[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e 
k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0 k8s.gcr.io/e2e-test-images/agnhost:2.20],SizeBytes:113869866,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[gcr.io/k8s-staging-nfd/node-feature-discovery@sha256:5d116c2c340be665a2c8adc9aca7f91396bd5cbde4add4fdc8dab95d8db43425 gcr.io/k8s-staging-nfd/node-feature-discovery:v0.7.0],SizeBytes:108309584,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:78e3393f5fd5ff6c1e5dada2478cfa456fb7164929e573cf9a87bf6532730679 k8s.gcr.io/sig-storage/csi-provisioner:v1.6.0],SizeBytes:48281550,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:2ffa647e8107cfd39e5f464e738dce014c9f5e51b108da36c3ab621048d0bbab k8s.gcr.io/sig-storage/csi-attacher:v2.2.0],SizeBytes:46131354,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:6c6a0332693a7c456378f6abd2bb40611826c1e1a733cadbdae2daab3125b71c k8s.gcr.io/sig-storage/csi-resizer:v0.5.0],SizeBytes:46041582,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:2bec7a43da8efe70cb7cb14020a6b10aecd02c87e020d394de84e6807e2cf620 localhost:30500/sriov-device-plugin:v3.3.1],SizeBytes:44392623,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonroot@sha256:4bd7ae247de5c988700233c5a4b55e804ffe90f8c66ae64853f1dae37b847213 gcr.io/kubernetes-e2e-test-images/nonroot:1.0],SizeBytes:42321438,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[localhost:30500/tas-controller@sha256:7f3d9945acdf5d86edd89b2b16fe1f6d63ba8bdb4cab50e66f9bce162df9e388 localhost:30500/tas-controller:0.1],SizeBytes:22922439,},ContainerImage{Names:[localhost:30500/tas-extender@sha256:9af6075c93013910787a4e97973da6e0739a86dee1186d7965a5d00b1ac35636 localhost:30500/tas-extender:0.1],SizeBytes:21320903,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/regression-issue-74839-amd64@sha256:3b36bd80b97c532a774e7f6246797b8575d97037982f353476c703ba6686c75c gcr.io/kubernetes-e2e-test-images/regression-issue-74839-amd64:1.0],SizeBytes:19227369,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:0b4273abac4c241fa3d70aaf52b0d79a133d2737081f4a5c5dea4949f6c45dc3 
k8s.gcr.io/sig-storage/mock-driver:v3.1.0],SizeBytes:18687618,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:9622c6a6dac7499a055a382930f4de82905a3c5735c0753f7094115c9c871309 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v1.3.0],SizeBytes:16322467,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:6757579,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:4753501,},ContainerImage{Names:[gcr.io/authenticated-image-pulling/alpine@sha256:7ff177862cb50c602bfe81f805969412e619c054a2bbead977d0c276988aa4a0 gcr.io/authenticated-image-pulling/alpine:3.7],SizeBytes:4206620,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 29 01:45:46.807: INFO: Logging kubelet events for node node2 May 29 01:45:46.811: INFO: Logging pods the kubelet thinks is on node node2 May 29 01:45:46.825: INFO: nginx-proxy-node2 started at 2021-05-28 20:05:21 +0000 UTC (0+1 container statuses recorded) May 29 01:45:46.825: INFO: Container nginx-proxy ready: true, restart count 2 May 29 01:45:46.825: INFO: kube-proxy-z5czn started at 2021-05-28 19:58:24 +0000 UTC (0+1 container statuses recorded) May 29 01:45:46.825: INFO: Container kube-proxy ready: true, restart count 2 May 29 01:45:46.825: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-mkc6p started at 2021-05-29 00:29:50 +0000 UTC (0+1 container statuses recorded) May 29 01:45:46.826: INFO: Container kube-sriovdp ready: true, restart count 0 May 29 01:45:46.826: INFO: node-feature-discovery-worker-2qfpd started at 2021-05-29 00:29:50 +0000 UTC (0+1 container statuses recorded) May 29 01:45:46.826: INFO: Container nfd-worker ready: true, restart count 0 May 29 01:45:46.826: INFO: kube-flannel-d9wsg started at 2021-05-28 19:59:00 +0000 UTC (1+1 container statuses recorded) May 29 01:45:46.826: INFO: Init container install-cni ready: true, restart count 2 May 29 01:45:46.826: INFO: Container kube-flannel ready: true, restart count 2 May 29 01:45:46.826: INFO: kube-multus-ds-amd64-c9cj2 started at 2021-05-28 19:59:08 +0000 UTC (0+1 container statuses 
recorded) May 29 01:45:46.826: INFO: Container kube-multus ready: true, restart count 1 May 29 01:45:46.826: INFO: cmk-lbg6n started at 2021-05-29 00:29:50 +0000 UTC (0+2 container statuses recorded) May 29 01:45:46.826: INFO: Container nodereport ready: true, restart count 0 May 29 01:45:46.826: INFO: Container reconcile ready: true, restart count 0 May 29 01:45:46.826: INFO: node-exporter-nsrbd started at 2021-05-29 00:29:50 +0000 UTC (0+2 container statuses recorded) May 29 01:45:46.826: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 29 01:45:46.826: INFO: Container node-exporter ready: true, restart count 0 May 29 01:45:46.826: INFO: collectd-k6rzg started at 2021-05-29 00:30:20 +0000 UTC (0+3 container statuses recorded) May 29 01:45:46.826: INFO: Container collectd ready: true, restart count 0 May 29 01:45:46.826: INFO: Container collectd-exporter ready: true, restart count 0 May 29 01:45:46.826: INFO: Container rbac-proxy ready: true, restart count 0 W0529 01:45:46.838568 22 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. May 29 01:45:46.877: INFO: Latency metrics for node node2 May 29 01:45:46.877: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-955" for this suite. • Failure [302.044 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Pods sharing a single local PV [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:614 all pods should be running [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:642 May 29 01:45:45.396: Some pods are not running within 5m0s Unexpected error: <*errors.errorString | 0xc0002c4200>: { s: "timed out waiting for the condition", } timed out waiting for the condition occurred /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:683 ------------------------------ {"msg":"FAILED [sig-storage] PersistentVolumes-local Pods sharing a single local PV [Serial] all pods should be running","total":17,"completed":0,"skipped":3699,"failed":2,"failures":["[sig-storage] PersistentVolumes-local Stress with local volumes [Serial] should be able to process many pods and reuse local volumes","[sig-storage] PersistentVolumes-local Pods sharing a single local PV [Serial] all pods should be running"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] [Serial] Volume metrics PVController should create unbound pv count metrics for pvc controller after creating pv only 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:481 [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 29 01:45:46.891: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pv STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:56 May 29 01:45:46.907: INFO: Only supported for providers [gce gke aws] (not skeleton) [AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 29 01:45:46.908: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pv-6756" for this suite. [AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:81 S [SKIPPING] in Spec Setup (BeforeEach) [0.024 seconds] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 PVController [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:382 should create unbound pv count metrics for pvc controller after creating pv only /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:481 Only supported for providers [gce gke aws] (not skeleton) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:60 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] [Serial] Volume metrics PVController should create unbound pvc count metrics for pvc controller after creating pvc only /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:490 [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 29 01:45:46.917: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pv STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:56 May 29 01:45:46.948: INFO: Only supported for providers [gce gke aws] (not skeleton) [AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 29 01:45:46.949: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pv-7831" for this suite. 
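------------------------------
Note: the "Only supported for providers [gce gke aws] (not skeleton)" records come from provider gating in the suite's BeforeEach. This run uses the "skeleton" provider (a pre-provisioned cluster with no cloud integration), so each Volume metrics spec is skipped before its setup completes, which Ginkgo reports as "S [SKIPPING] in Spec Setup (BeforeEach)". Below is a minimal, self-contained sketch of that gating pattern; the helper name and provider variable are illustrative stand-ins, not the framework's actual API:

    package sketch

    import (
        "fmt"

        "github.com/onsi/ginkgo"
    )

    // provider would normally be populated from the suite's --provider flag.
    var provider = "skeleton"

    // skipUnlessProviderIs mimics the framework's provider gate: skip the
    // current spec unless the configured provider is in the allowed set.
    func skipUnlessProviderIs(supported ...string) {
        for _, p := range supported {
            if p == provider {
                return
            }
        }
        ginkgo.Skip(fmt.Sprintf("Only supported for providers %v (not %s)", supported, provider))
    }

    var _ = ginkgo.Describe("[sig-storage] [Serial] Volume metrics (sketch)", func() {
        ginkgo.BeforeEach(func() {
            skipUnlessProviderIs("gce", "gke", "aws") // skeleton clusters skip here
        })
        ginkgo.It("should create unbound pv count metrics", func() {
            // never reached when the provider is "skeleton"
        })
    })
------------------------------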
[AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:81 S [SKIPPING] in Spec Setup (BeforeEach) [0.039 seconds] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 PVController [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:382 should create unbound pvc count metrics for pvc controller after creating pvc only /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:490 Only supported for providers [gce gke aws] (not skeleton) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:60 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] [Serial] Volume metrics should create volume metrics with the correct PVC ref /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:203 [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 29 01:45:46.958: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pv STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:56 May 29 01:45:46.975: INFO: Only supported for providers [gce gke aws] (not skeleton) [AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 29 01:45:46.977: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pv-2892" for this suite. 
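------------------------------
Note: the PVController specs being skipped here assert on gauges that kube-controller-manager's volume binder exports, i.e. pv_collector_unbound_pv_count, pv_collector_unbound_pvc_count, pv_collector_bound_pv_count and pv_collector_bound_pvc_count. A rough way to inspect the same counters by hand is to scrape the controller-manager metrics endpoint; the address below is an assumption (the port and auth requirements vary by deployment, and many clusters do not expose it unauthenticated):

    package main

    import (
        "bufio"
        "fmt"
        "net/http"
        "strings"
    )

    func main() {
        // Assumed endpoint: kube-controller-manager's metrics port on this host.
        resp, err := http.Get("http://127.0.0.1:10252/metrics")
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()

        // Print only the volume-binder gauges the e2e specs assert on.
        scanner := bufio.NewScanner(resp.Body)
        for scanner.Scan() {
            if line := scanner.Text(); strings.HasPrefix(line, "pv_collector_") {
                fmt.Println(line)
            }
        }
    }
------------------------------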
[AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:81 S [SKIPPING] in Spec Setup (BeforeEach) [0.027 seconds] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should create volume metrics with the correct PVC ref [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:203 Only supported for providers [gce gke aws] (not skeleton) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:60 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] [Serial] Volume metrics PVController should create bound pv/pvc count metrics for pvc controller after creating both pv and pvc /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:499 [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 29 01:45:46.993: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pv STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:56 May 29 01:45:47.009: INFO: Only supported for providers [gce gke aws] (not skeleton) [AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 29 01:45:47.011: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pv-2095" for this suite. 
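------------------------------
Note: "should create volume metrics with the correct PVC ref" (skipped above) refers to the kubelet's per-volume stats, e.g. kubelet_volume_stats_capacity_bytes and kubelet_volume_stats_used_bytes, which carry the owning claim as labels so each sample can be traced back to one PVC. A toy illustration of that label check, on a hand-written sample in the Prometheus text format (naive string matching for brevity; a real consumer would use a proper expfmt parser):

    package main

    import (
        "fmt"
        "strings"
    )

    // hasPVCRef reports whether a metrics sample carries the expected
    // namespace/persistentvolumeclaim labels.
    func hasPVCRef(sample, namespace, claim string) bool {
        return strings.Contains(sample, fmt.Sprintf(`namespace="%s"`, namespace)) &&
            strings.Contains(sample, fmt.Sprintf(`persistentvolumeclaim="%s"`, claim))
    }

    func main() {
        // Hand-written example sample; the namespace echoes the skipped spec's,
        // and the claim name is invented for illustration.
        s := `kubelet_volume_stats_used_bytes{namespace="pv-2892",persistentvolumeclaim="test-claim"} 4096`
        fmt.Println(hasPVCRef(s, "pv-2892", "test-claim")) // true
    }
------------------------------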
[AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:81 S [SKIPPING] in Spec Setup (BeforeEach) [0.025 seconds] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 PVController [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:382 should create bound pv/pvc count metrics for pvc controller after creating both pv and pvc /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:499 Only supported for providers [gce gke aws] (not skeleton) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:60 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSMay 29 01:45:47.020: INFO: Running AfterSuite actions on all nodes May 29 01:45:47.020: INFO: Running AfterSuite actions on node 1 May 29 01:45:47.020: INFO: Skipping dumping logs from cluster JUnit report was created: /home/opnfv/functest/results/sig_storage_serial/junit_01.xml {"msg":"Test Suite completed","total":17,"completed":0,"skipped":5482,"failed":2,"failures":["[sig-storage] PersistentVolumes-local Stress with local volumes [Serial] should be able to process many pods and reuse local volumes","[sig-storage] PersistentVolumes-local Pods sharing a single local PV [Serial] all pods should be running"]} Summarizing 2 Failures: [Fail] [sig-storage] PersistentVolumes-local Stress with local volumes [Serial] [It] should be able to process many pods and reuse local volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:610 [Fail] [sig-storage] PersistentVolumes-local Pods sharing a single local PV [Serial] [It] all pods should be running /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:683 Ran 2 of 5484 Specs in 636.551 seconds FAIL! -- 0 Passed | 2 Failed | 0 Pending | 5482 Skipped --- FAIL: TestE2E (636.66s) FAIL Ginkgo ran 1 suite in 10m37.812882024s Test Suite Failed
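------------------------------
Note: both summarized failures end in the generic "timed out waiting for the condition". That is the stock error the k8s.io/apimachinery wait helpers return when a polled condition never becomes true before the deadline; in the failed spec the condition was "all pods sharing the local PV are Running" and the deadline was 5m0s, so the write-pod containers logged earlier with "ready: false" are the thing to investigate. A minimal reproduction of how that exact error string surfaces (this uses the real apimachinery library; only the toy condition is invented):

    package main

    import (
        "fmt"
        "time"

        "k8s.io/apimachinery/pkg/util/wait"
    )

    func main() {
        // Poll every 100ms with a 500ms deadline; the condition never
        // succeeds, standing in for pods that never reach Running.
        err := wait.PollImmediate(100*time.Millisecond, 500*time.Millisecond,
            func() (bool, error) {
                return false, nil // not ready yet, keep polling
            })
        fmt.Println(err) // prints: timed out waiting for the condition
    }
------------------------------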