I0515 02:04:28.821260 22 test_context.go:429] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready
I0515 02:04:28.821374 22 e2e.go:129] Starting e2e run "1a7f7e61-f888-4227-a0dc-6adf72d73b13" on Ginkgo node 1
{"msg":"Test Suite starting","total":17,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1621044267 - Will randomize all specs
Will run 17 of 5484 specs

May 15 02:04:28.922: INFO: >>> kubeConfig: /root/.kube/config
May 15 02:04:28.926: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
May 15 02:04:28.954: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
May 15 02:04:29.014: INFO: The status of Pod cmk-init-discover-node2-j75ff is Succeeded, skipping waiting
May 15 02:04:29.014: INFO: 40 / 41 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
May 15 02:04:29.014: INFO: expected 8 pod replicas in namespace 'kube-system', 8 are Running and Ready.
May 15 02:04:29.014: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
May 15 02:04:29.031: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'cmk' (0 seconds elapsed)
May 15 02:04:29.031: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-flannel' (0 seconds elapsed)
May 15 02:04:29.031: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-arm' (0 seconds elapsed)
May 15 02:04:29.031: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-arm64' (0 seconds elapsed)
May 15 02:04:29.031: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-ppc64le' (0 seconds elapsed)
May 15 02:04:29.031: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-s390x' (0 seconds elapsed)
May 15 02:04:29.031: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-multus-ds-amd64' (0 seconds elapsed)
May 15 02:04:29.031: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
May 15 02:04:29.032: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'node-feature-discovery-worker' (0 seconds elapsed)
May 15 02:04:29.032: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'sriov-net-dp-kube-sriov-device-plugin-amd64' (0 seconds elapsed)
May 15 02:04:29.032: INFO: e2e test version: v1.19.10
May 15 02:04:29.032: INFO: kube-apiserver version: v1.19.8
May 15 02:04:29.032: INFO: >>> kubeConfig: /root/.kube/config
May 15 02:04:29.037: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] PersistentVolumes-local [Volume type: gce-localssd-scsi-fs] [Serial] Two pods mounting a local volume one after the other
  should be able to write from pod1 and read from pod2
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:251
[BeforeEach] [sig-storage] PersistentVolumes-local
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 15 02:04:29.040: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename persistent-local-volumes-test
May 15 02:04:29.062: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
May 15 02:04:29.065: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] PersistentVolumes-local
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:155
[BeforeEach] [Volume type: gce-localssd-scsi-fs] [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:191
May 15 02:04:33.096: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l] Namespace:persistent-local-volumes-test-2300 PodName:hostexec-node1-pnbch ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true}
May 15 02:04:33.096: INFO: >>> kubeConfig: /root/.kube/config
May 15 02:04:33.333: INFO: exec node1: command: ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l
May 15 02:04:33.333: INFO: exec node1: stdout: "0\n"
May 15 02:04:33.333: INFO: exec node1: stderr: "ls: cannot access /mnt/disks/by-uuid/google-local-ssds-scsi-fs/: No such file or directory\n"
May 15 02:04:33.333: INFO: exec node1: exit code: 0
May 15 02:04:33.333: INFO: Requires at least 1 scsi fs localSSD
[AfterEach] [Volume type: gce-localssd-scsi-fs] [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:200
STEP: Cleaning up PVC and PV
[AfterEach] [sig-storage] PersistentVolumes-local
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 15 02:04:33.335: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "persistent-local-volumes-test-2300" for this suite.
S [SKIPPING] in Spec Setup (BeforeEach) [4.304 seconds]
[sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Volume type: gce-localssd-scsi-fs] [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:188
    Two pods mounting a local volume one after the other [BeforeEach]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:250
      should be able to write from pod1 and read from pod2
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:251

      Requires at least 1 scsi fs localSSD

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:1241
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
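The spec above was skipped by design: its BeforeEach probes the node for a GCE SCSI-fs local SSD and bails out when none is mounted. A minimal standalone Go sketch of that probe, assuming it runs directly on the node as root (the suite itself runs the same shell pipeline through an agnhost hostexec pod with nsenter):

    // probe_localssd.go, a sketch of the localSSD probe seen in the log.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // Same pipeline as the logged command: count entries under the
        // GCE localSSD by-uuid directory.
        out, err := exec.Command("sh", "-c",
            `ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l`).Output()
        if err != nil {
            fmt.Println("probe failed:", err)
            return
        }
        n := strings.TrimSpace(string(out))
        if n == "0" {
            fmt.Println("Requires at least 1 scsi fs localSSD -> skip spec")
            return
        }
        fmt.Printf("found %s localSSD filesystem(s)\n", n)
    }

Note that the pipeline exits 0 even when the directory is missing, because wc -l still succeeds; the decision is made on the "0" in stdout. That is exactly what the log shows: exit code 0, stdout "0\n", and the ls error only on stderr.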
[sig-storage] [Serial] Volume metrics
  should create metrics for total time taken in volume operations in P/V Controller
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:260
[BeforeEach] [sig-storage] [Serial] Volume metrics
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 15 02:04:33.355: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pv
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] [Serial] Volume metrics
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:56
May 15 02:04:33.375: INFO: Only supported for providers [gce gke aws] (not skeleton)
[AfterEach] [sig-storage] [Serial] Volume metrics
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 15 02:04:33.378: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pv-1928" for this suite.
[AfterEach] [sig-storage] [Serial] Volume metrics
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:81
S [SKIPPING] in Spec Setup (BeforeEach) [0.032 seconds]
[sig-storage] [Serial] Volume metrics
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  should create metrics for total time taken in volume operations in P/V Controller [BeforeEach]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:260

  Only supported for providers [gce gke aws] (not skeleton)

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:60
------------------------------
S
------------------------------
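All of the [sig-storage] [Serial] Volume metrics specs in this run fail the same provider gate: the cluster was started with the generic "skeleton" provider, and the BeforeEach only proceeds on gce, gke, or aws. A rough standalone sketch of that gate (the real suite reads the provider from its test context and calls Ginkgo's Skip; this version just reports):

    package main

    import "fmt"

    // skipUnlessProviderIs mirrors the gate in the Volume metrics BeforeEach:
    // if the cluster's --provider is not one of the supported clouds, the
    // spec is skipped with the message seen throughout this log.
    func skipUnlessProviderIs(provider string, supported ...string) (skip bool, reason string) {
        for _, s := range supported {
            if provider == s {
                return false, ""
            }
        }
        return true, fmt.Sprintf("Only supported for providers %v (not %s)", supported, provider)
    }

    func main() {
        if skip, why := skipUnlessProviderIs("skeleton", "gce", "gke", "aws"); skip {
            fmt.Println("SKIPPING:", why)
        }
    }

The same check fires five times in a row below, once per Volume metrics spec, each in a fresh "pv-NNNN" namespace.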
[sig-storage] [Serial] Volume metrics PVController
  should create unbound pv count metrics for pvc controller after creating pv only
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:481
[BeforeEach] [sig-storage] [Serial] Volume metrics
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 15 02:04:33.387: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pv
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] [Serial] Volume metrics
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:56
May 15 02:04:33.417: INFO: Only supported for providers [gce gke aws] (not skeleton)
[AfterEach] [sig-storage] [Serial] Volume metrics
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 15 02:04:33.419: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pv-4937" for this suite.
[AfterEach] [sig-storage] [Serial] Volume metrics
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:81
S [SKIPPING] in Spec Setup (BeforeEach) [0.040 seconds]
[sig-storage] [Serial] Volume metrics
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  PVController [BeforeEach]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:382
    should create unbound pv count metrics for pvc controller after creating pv only
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:481

    Only supported for providers [gce gke aws] (not skeleton)

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:60
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] [Serial] Volume metrics PVController
  should create unbound pvc count metrics for pvc controller after creating pvc only
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:490
[BeforeEach] [sig-storage] [Serial] Volume metrics
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 15 02:04:33.429: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pv
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] [Serial] Volume metrics
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:56
May 15 02:04:33.450: INFO: Only supported for providers [gce gke aws] (not skeleton)
[AfterEach] [sig-storage] [Serial] Volume metrics
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 15 02:04:33.452: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pv-4153" for this suite.
[AfterEach] [sig-storage] [Serial] Volume metrics
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:81
S [SKIPPING] in Spec Setup (BeforeEach) [0.032 seconds]
[sig-storage] [Serial] Volume metrics
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  PVController [BeforeEach]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:382
    should create unbound pvc count metrics for pvc controller after creating pvc only
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:490

    Only supported for providers [gce gke aws] (not skeleton)

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:60
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] [Serial] Volume metrics
  should create prometheus metrics for volume provisioning and attach/detach
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:100
[BeforeEach] [sig-storage] [Serial] Volume metrics
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 15 02:04:33.461: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pv
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] [Serial] Volume metrics
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:56
May 15 02:04:33.480: INFO: Only supported for providers [gce gke aws] (not skeleton)
[AfterEach] [sig-storage] [Serial] Volume metrics
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 15 02:04:33.481: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pv-9470" for this suite.
[AfterEach] [sig-storage] [Serial] Volume metrics
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:81
S [SKIPPING] in Spec Setup (BeforeEach) [0.032 seconds]
[sig-storage] [Serial] Volume metrics
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  should create prometheus metrics for volume provisioning and attach/detach [BeforeEach]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:100

  Only supported for providers [gce gke aws] (not skeleton)

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:60
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] [Serial] Volume metrics PVController
  should create bound pv/pvc count metrics for pvc controller after creating both pv and pvc
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:499
[BeforeEach] [sig-storage] [Serial] Volume metrics
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 15 02:04:33.519: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pv
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] [Serial] Volume metrics
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:56
May 15 02:04:33.540: INFO: Only supported for providers [gce gke aws] (not skeleton)
[AfterEach] [sig-storage] [Serial] Volume metrics
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 15 02:04:33.541: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pv-6984" for this suite.
[AfterEach] [sig-storage] [Serial] Volume metrics
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:81
S [SKIPPING] in Spec Setup (BeforeEach) [0.032 seconds]
[sig-storage] [Serial] Volume metrics
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  PVController [BeforeEach]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:382
    should create bound pv/pvc count metrics for pvc controller after creating both pv and pvc
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:499

    Only supported for providers [gce gke aws] (not skeleton)

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:60
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] PersistentVolumes-local [Volume type: gce-localssd-scsi-fs] [Serial] Two pods mounting a local volume at the same time
  should be able to write from pod1 and read from pod2
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:245
[BeforeEach] [sig-storage] PersistentVolumes-local
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 15 02:04:33.559: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename persistent-local-volumes-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] PersistentVolumes-local
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:155
[BeforeEach] [Volume type: gce-localssd-scsi-fs] [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:191
May 15 02:04:35.600: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l] Namespace:persistent-local-volumes-test-4136 PodName:hostexec-node1-5n988 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true}
May 15 02:04:35.600: INFO: >>> kubeConfig: /root/.kube/config
May 15 02:04:35.735: INFO: exec node1: command: ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l
May 15 02:04:35.735: INFO: exec node1: stdout: "0\n"
May 15 02:04:35.735: INFO: exec node1: stderr: "ls: cannot access /mnt/disks/by-uuid/google-local-ssds-scsi-fs/: No such file or directory\n"
May 15 02:04:35.735: INFO: exec node1: exit code: 0
May 15 02:04:35.735: INFO: Requires at least 1 scsi fs localSSD
[AfterEach] [Volume type: gce-localssd-scsi-fs] [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:200
STEP: Cleaning up PVC and PV
[AfterEach] [sig-storage] PersistentVolumes-local
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 15 02:04:35.737: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "persistent-local-volumes-test-4136" for this suite.
S [SKIPPING] in Spec Setup (BeforeEach) [2.189 seconds]
[sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Volume type: gce-localssd-scsi-fs] [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:188
    Two pods mounting a local volume at the same time [BeforeEach]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:244
      should be able to write from pod1 and read from pod2
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:245

      Requires at least 1 scsi fs localSSD

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:1241
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
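Every node-level command in this log follows the same pattern: the suite schedules a privileged agnhost "hostexec" pod on the target node, mounts the host filesystem at /rootfs, and wraps each command in nsenter so it executes in the host's mount namespace. A sketch of just the command-line composition, with the pod plumbing omitted (names follow the log):

    package main

    import (
        "fmt"
        "strings"
    )

    // hostCommand wraps a shell command the way the hostexec pod does:
    // nsenter joins the host's mount namespace via /rootfs/proc/1/ns/mnt,
    // so the command sees the node's real /tmp and /mnt, not the container's.
    func hostCommand(cmd string) []string {
        return []string{"nsenter", "--mount=/rootfs/proc/1/ns/mnt", "--", "sh", "-c", cmd}
    }

    func main() {
        argv := hostCommand(`ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l`)
        // Prints the argv the log shows inside ExecWithOptions {Command:[...]}.
        fmt.Println(strings.Join(argv, " "))
    }

This is why the identical probe and mount commands recur with different PodName values (hostexec-node1-pnbch, hostexec-node1-5n988, and so on): a fresh hostexec pod is created per namespace.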
[sig-storage] PersistentVolumes-local [Volume type: gce-localssd-scsi-fs] [Serial] One pod requesting one prebound PVC
  should be able to mount volume and read from pod1
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:228
[BeforeEach] [sig-storage] PersistentVolumes-local
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 15 02:04:35.749: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename persistent-local-volumes-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] PersistentVolumes-local
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:155
[BeforeEach] [Volume type: gce-localssd-scsi-fs] [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:191
May 15 02:04:37.790: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l] Namespace:persistent-local-volumes-test-7432 PodName:hostexec-node1-dz96c ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true}
May 15 02:04:37.790: INFO: >>> kubeConfig: /root/.kube/config
May 15 02:04:37.908: INFO: exec node1: command: ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l
May 15 02:04:37.908: INFO: exec node1: stdout: "0\n"
May 15 02:04:37.908: INFO: exec node1: stderr: "ls: cannot access /mnt/disks/by-uuid/google-local-ssds-scsi-fs/: No such file or directory\n"
May 15 02:04:37.908: INFO: exec node1: exit code: 0
May 15 02:04:37.908: INFO: Requires at least 1 scsi fs localSSD
[AfterEach] [Volume type: gce-localssd-scsi-fs] [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:200
STEP: Cleaning up PVC and PV
[AfterEach] [sig-storage] PersistentVolumes-local
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 15 02:04:37.910: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "persistent-local-volumes-test-7432" for this suite.
S [SKIPPING] in Spec Setup (BeforeEach) [2.170 seconds]
[sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Volume type: gce-localssd-scsi-fs] [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:188
    One pod requesting one prebound PVC [BeforeEach]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:205
      should be able to mount volume and read from pod1
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:228

      Requires at least 1 scsi fs localSSD

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:1241
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] PersistentVolumes-local Stress with local volumes [Serial]
  should be able to process many pods and reuse local volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:517
[BeforeEach] [sig-storage] PersistentVolumes-local
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 15 02:04:37.921: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename persistent-local-volumes-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] PersistentVolumes-local
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:155
[BeforeEach] Stress with local volumes [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:441
STEP: Setting up 10 local volumes on node "node1"
STEP: Creating tmpfs mount point on node "node1" at path "/tmp/local-volume-test-4400e93d-d916-4179-be4f-13f1a8cf4e34"
May 15 02:04:41.967: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-4400e93d-d916-4179-be4f-13f1a8cf4e34" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-4400e93d-d916-4179-be4f-13f1a8cf4e34" "/tmp/local-volume-test-4400e93d-d916-4179-be4f-13f1a8cf4e34"] Namespace:persistent-local-volumes-test-2514 PodName:hostexec-node1-776qc ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true}
May 15 02:04:41.967: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating tmpfs mount point on node "node1" at path "/tmp/local-volume-test-c9fdaa9c-81ab-40a8-9139-66caf72ee11e"
May 15 02:04:42.085: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-c9fdaa9c-81ab-40a8-9139-66caf72ee11e" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-c9fdaa9c-81ab-40a8-9139-66caf72ee11e" "/tmp/local-volume-test-c9fdaa9c-81ab-40a8-9139-66caf72ee11e"] Namespace:persistent-local-volumes-test-2514 PodName:hostexec-node1-776qc ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true}
May 15 02:04:42.085: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating tmpfs mount point on node "node1" at path "/tmp/local-volume-test-0accf70f-9dd1-404e-ac94-df401911dca8"
May 15 02:04:42.262: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-0accf70f-9dd1-404e-ac94-df401911dca8" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-0accf70f-9dd1-404e-ac94-df401911dca8" "/tmp/local-volume-test-0accf70f-9dd1-404e-ac94-df401911dca8"] Namespace:persistent-local-volumes-test-2514 PodName:hostexec-node1-776qc ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true}
May 15 02:04:42.262: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating tmpfs mount point on node "node1" at path "/tmp/local-volume-test-203f53c6-3171-44c1-be3b-eee76e8b6b89"
May 15 02:04:42.384: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-203f53c6-3171-44c1-be3b-eee76e8b6b89" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-203f53c6-3171-44c1-be3b-eee76e8b6b89" "/tmp/local-volume-test-203f53c6-3171-44c1-be3b-eee76e8b6b89"] Namespace:persistent-local-volumes-test-2514 PodName:hostexec-node1-776qc ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true}
May 15 02:04:42.384: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating tmpfs mount point on node "node1" at path "/tmp/local-volume-test-de91f70a-2e3e-41a2-817b-f9461a63e1eb"
May 15 02:04:42.510: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-de91f70a-2e3e-41a2-817b-f9461a63e1eb" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-de91f70a-2e3e-41a2-817b-f9461a63e1eb" "/tmp/local-volume-test-de91f70a-2e3e-41a2-817b-f9461a63e1eb"] Namespace:persistent-local-volumes-test-2514 PodName:hostexec-node1-776qc ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true}
May 15 02:04:42.510: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating tmpfs mount point on node "node1" at path "/tmp/local-volume-test-10a487f4-565d-4b4d-b361-03874ed76389"
May 15 02:04:42.624: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-10a487f4-565d-4b4d-b361-03874ed76389" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-10a487f4-565d-4b4d-b361-03874ed76389" "/tmp/local-volume-test-10a487f4-565d-4b4d-b361-03874ed76389"] Namespace:persistent-local-volumes-test-2514 PodName:hostexec-node1-776qc ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true}
May 15 02:04:42.625: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating tmpfs mount point on node "node1" at path "/tmp/local-volume-test-503918db-8834-4a8a-a23e-8a0ccf239a94"
May 15 02:04:42.757: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-503918db-8834-4a8a-a23e-8a0ccf239a94" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-503918db-8834-4a8a-a23e-8a0ccf239a94" "/tmp/local-volume-test-503918db-8834-4a8a-a23e-8a0ccf239a94"] Namespace:persistent-local-volumes-test-2514 PodName:hostexec-node1-776qc ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true}
May 15 02:04:42.757: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating tmpfs mount point on node "node1" at path "/tmp/local-volume-test-1033eaba-6f8c-40b0-80a4-ea2585b850fb"
May 15 02:04:42.871: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-1033eaba-6f8c-40b0-80a4-ea2585b850fb" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-1033eaba-6f8c-40b0-80a4-ea2585b850fb" "/tmp/local-volume-test-1033eaba-6f8c-40b0-80a4-ea2585b850fb"] Namespace:persistent-local-volumes-test-2514 PodName:hostexec-node1-776qc ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true}
May 15 02:04:42.872: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating tmpfs mount point on node "node1" at path "/tmp/local-volume-test-f0b8ac41-88a8-4051-9476-fe14e0430a14"
May 15 02:04:42.991: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-f0b8ac41-88a8-4051-9476-fe14e0430a14" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-f0b8ac41-88a8-4051-9476-fe14e0430a14" "/tmp/local-volume-test-f0b8ac41-88a8-4051-9476-fe14e0430a14"] Namespace:persistent-local-volumes-test-2514 PodName:hostexec-node1-776qc ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true}
May 15 02:04:42.991: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating tmpfs mount point on node "node1" at path "/tmp/local-volume-test-f93f3ec4-7ceb-47de-86e3-80e17ad49c2b"
May 15 02:04:43.161: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-f93f3ec4-7ceb-47de-86e3-80e17ad49c2b" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-f93f3ec4-7ceb-47de-86e3-80e17ad49c2b" "/tmp/local-volume-test-f93f3ec4-7ceb-47de-86e3-80e17ad49c2b"] Namespace:persistent-local-volumes-test-2514 PodName:hostexec-node1-776qc ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true}
May 15 02:04:43.161: INFO: >>> kubeConfig: /root/.kube/config
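Each of the ten volumes just created on "node1" (and the ten that follow on "node2") is a fresh 10 MiB tmpfs. A standalone Go sketch of the logged mkdir-and-mount step, assuming root on the target node (the suite wraps the identical shell command in nsenter and runs it via the hostexec pod):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // setupTmpfs reproduces the logged command: create the directory, then
    // mount a 10 MiB tmpfs on it, with the device named "tmpfs-<path>"
    // exactly as the log shows. Requires root.
    func setupTmpfs(path string) error {
        cmd := fmt.Sprintf(`mkdir -p %q && mount -t tmpfs -o size=10m tmpfs-%q %q`, path, path, path)
        if out, err := exec.Command("sh", "-c", cmd).CombinedOutput(); err != nil {
            return fmt.Errorf("setup %s: %v: %s", path, err, out)
        }
        return nil
    }

    func main() {
        // Hypothetical path; the suite generates a UUID-suffixed one per volume.
        if err := setupTmpfs("/tmp/local-volume-test-example"); err != nil {
            fmt.Println(err)
        }
    }

Using tmpfs keeps the stress volumes small, fast, and trivially disposable: teardown is just umount plus rm, as the AfterEach below shows.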
STEP: Setting up 10 local volumes on node "node2"
STEP: Creating tmpfs mount point on node "node2" at path "/tmp/local-volume-test-e8b3a261-34b4-4590-929d-e0ea04d6eb26"
May 15 02:04:47.349: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-e8b3a261-34b4-4590-929d-e0ea04d6eb26" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-e8b3a261-34b4-4590-929d-e0ea04d6eb26" "/tmp/local-volume-test-e8b3a261-34b4-4590-929d-e0ea04d6eb26"] Namespace:persistent-local-volumes-test-2514 PodName:hostexec-node2-xp5pj ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true}
May 15 02:04:47.349: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating tmpfs mount point on node "node2" at path "/tmp/local-volume-test-ede09820-5490-4177-9f51-b405e1e12064"
May 15 02:04:47.479: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-ede09820-5490-4177-9f51-b405e1e12064" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-ede09820-5490-4177-9f51-b405e1e12064" "/tmp/local-volume-test-ede09820-5490-4177-9f51-b405e1e12064"] Namespace:persistent-local-volumes-test-2514 PodName:hostexec-node2-xp5pj ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true}
May 15 02:04:47.479: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating tmpfs mount point on node "node2" at path "/tmp/local-volume-test-bbdf340c-7540-4f09-9e8f-7d569038ae0e"
May 15 02:04:47.594: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-bbdf340c-7540-4f09-9e8f-7d569038ae0e" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-bbdf340c-7540-4f09-9e8f-7d569038ae0e" "/tmp/local-volume-test-bbdf340c-7540-4f09-9e8f-7d569038ae0e"] Namespace:persistent-local-volumes-test-2514 PodName:hostexec-node2-xp5pj ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true}
May 15 02:04:47.594: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating tmpfs mount point on node "node2" at path "/tmp/local-volume-test-260a55a3-cd3c-46b3-838c-06499be7c934"
May 15 02:04:47.703: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-260a55a3-cd3c-46b3-838c-06499be7c934" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-260a55a3-cd3c-46b3-838c-06499be7c934" "/tmp/local-volume-test-260a55a3-cd3c-46b3-838c-06499be7c934"] Namespace:persistent-local-volumes-test-2514 PodName:hostexec-node2-xp5pj ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true}
May 15 02:04:47.703: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating tmpfs mount point on node "node2" at path "/tmp/local-volume-test-275f56d9-63d2-44e8-94d4-8209c2028fc9"
May 15 02:04:47.809: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-275f56d9-63d2-44e8-94d4-8209c2028fc9" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-275f56d9-63d2-44e8-94d4-8209c2028fc9" "/tmp/local-volume-test-275f56d9-63d2-44e8-94d4-8209c2028fc9"] Namespace:persistent-local-volumes-test-2514 PodName:hostexec-node2-xp5pj ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true}
May 15 02:04:47.809: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating tmpfs mount point on node "node2" at path "/tmp/local-volume-test-bd9022e1-d71a-4181-b0d8-e0f3dc0fbc4f"
May 15 02:04:47.913: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-bd9022e1-d71a-4181-b0d8-e0f3dc0fbc4f" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-bd9022e1-d71a-4181-b0d8-e0f3dc0fbc4f" "/tmp/local-volume-test-bd9022e1-d71a-4181-b0d8-e0f3dc0fbc4f"] Namespace:persistent-local-volumes-test-2514 PodName:hostexec-node2-xp5pj ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true}
May 15 02:04:47.913: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating tmpfs mount point on node "node2" at path "/tmp/local-volume-test-7c31ab30-324f-4d05-b33b-014d3f9f6c99"
May 15 02:04:48.031: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-7c31ab30-324f-4d05-b33b-014d3f9f6c99" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-7c31ab30-324f-4d05-b33b-014d3f9f6c99" "/tmp/local-volume-test-7c31ab30-324f-4d05-b33b-014d3f9f6c99"] Namespace:persistent-local-volumes-test-2514 PodName:hostexec-node2-xp5pj ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true}
May 15 02:04:48.031: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating tmpfs mount point on node "node2" at path "/tmp/local-volume-test-41b3dd74-362e-472f-bcac-cfa8ff0bf1ea"
May 15 02:04:48.136: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-41b3dd74-362e-472f-bcac-cfa8ff0bf1ea" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-41b3dd74-362e-472f-bcac-cfa8ff0bf1ea" "/tmp/local-volume-test-41b3dd74-362e-472f-bcac-cfa8ff0bf1ea"] Namespace:persistent-local-volumes-test-2514 PodName:hostexec-node2-xp5pj ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true}
May 15 02:04:48.137: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating tmpfs mount point on node "node2" at path "/tmp/local-volume-test-b9bd5c1e-a645-41b5-bb5e-b87aeae8d46e"
May 15 02:04:48.241: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-b9bd5c1e-a645-41b5-bb5e-b87aeae8d46e" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-b9bd5c1e-a645-41b5-bb5e-b87aeae8d46e" "/tmp/local-volume-test-b9bd5c1e-a645-41b5-bb5e-b87aeae8d46e"] Namespace:persistent-local-volumes-test-2514 PodName:hostexec-node2-xp5pj ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true}
May 15 02:04:48.241: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating tmpfs mount point on node "node2" at path "/tmp/local-volume-test-62dac2c2-ff7f-4ff5-a3e4-534fa8078930"
May 15 02:04:48.344: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-62dac2c2-ff7f-4ff5-a3e4-534fa8078930" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-62dac2c2-ff7f-4ff5-a3e4-534fa8078930" "/tmp/local-volume-test-62dac2c2-ff7f-4ff5-a3e4-534fa8078930"] Namespace:persistent-local-volumes-test-2514 PodName:hostexec-node2-xp5pj ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true}
May 15 02:04:48.344: INFO: >>> kubeConfig: /root/.kube/config
STEP: Create 20 PVs
STEP: Start a goroutine to recycle unbound PVs
[It] should be able to process many pods and reuse local volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:517
STEP: Creating 7 pods periodically
STEP: Waiting for all pods to complete successfully
May 15 02:04:55.659: INFO: Deleting pod pod-59b81854-7d72-4913-be06-f1facbb75247
May 15 02:04:55.667: INFO: Deleting PersistentVolumeClaim "pvc-4zggc"
May 15 02:04:55.671: INFO: Deleting PersistentVolumeClaim "pvc-vqhzj"
May 15 02:04:55.674: INFO: Deleting PersistentVolumeClaim "pvc-cxqp4"
May 15 02:04:55.678: INFO: 1/28 pods finished
May 15 02:04:55.678: INFO: Deleting pod pod-81d9ece7-7790-4400-9782-0109c6720dd9
May 15 02:04:55.684: INFO: Deleting PersistentVolumeClaim "pvc-bc4j2"
STEP: Delete "local-pv42g8j" and create a new PV for same local volume storage
May 15 02:04:55.688: INFO: Deleting PersistentVolumeClaim "pvc-qmlwx"
May 15 02:04:55.692: INFO: Deleting PersistentVolumeClaim "pvc-4fp4z"
STEP: Delete "local-pv42g8j" and create a new PV for same local volume storage
May 15 02:04:55.697: INFO: 2/28 pods finished
STEP: Delete "local-pvwdv8m" and create a new PV for same local volume storage
STEP: Delete "local-pvxvmgj" and create a new PV for same local volume storage
STEP: Delete "local-pvz96vb" and create a new PV for same local volume storage
STEP: Delete "local-pvrs29d" and create a new PV for same local volume storage
STEP: Delete "local-pvtqdnk" and create a new PV for same local volume storage
May 15 02:09:48.660: FAIL: some pods failed to complete within 5m0s
Unexpected error:
    <*errors.errorString | 0xc0002bc200>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/storage.glob..func20.6.3()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:610 +0x42a
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc004283680)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x345
k8s.io/kubernetes/test/e2e.TestE2E(0xc004283680)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:145 +0x2b
testing.tRunner(0xc004283680, 0x4de37a0)
	/usr/local/go/src/testing/testing.go:1123 +0xef
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1168 +0x2b3
May 15 02:09:48.661: INFO: Deleting pod pod-ed322ced-117e-419e-99ed-1d765a51b554
May 15 02:09:48.667: INFO: Deleting PersistentVolumeClaim "pvc-j8hmk"
May 15 02:09:48.670: INFO: Deleting PersistentVolumeClaim "pvc-vcfk8"
May 15 02:09:48.674: INFO: Deleting PersistentVolumeClaim "pvc-qxjkl"
May 15 02:09:48.678: INFO: Deleting pod pod-49a826c3-524a-4877-b67e-de43c6ab2cf5
May 15 02:09:48.682: INFO: Deleting PersistentVolumeClaim "pvc-p8vsd"
May 15 02:09:48.686: INFO: Deleting PersistentVolumeClaim "pvc-7tgmw"
May 15 02:09:48.689: INFO: Deleting PersistentVolumeClaim "pvc-f6cx2"
May 15 02:09:48.695: INFO: Deleting pod pod-e260a2d3-d0c5-45c5-8300-39a4e6a40e0f
May 15 02:09:48.699: INFO: Deleting PersistentVolumeClaim "pvc-26w5x"
May 15 02:09:48.703: INFO: Deleting PersistentVolumeClaim "pvc-prhlf"
May 15 02:09:48.706: INFO: Deleting PersistentVolumeClaim "pvc-m9gr2"
May 15 02:09:48.712: INFO: Deleting pod pod-53ae386e-4922-4c5d-9f15-821f9964e4ce
May 15 02:09:48.717: INFO: Deleting PersistentVolumeClaim "pvc-pw2hs"
May 15 02:09:48.720: INFO: Deleting PersistentVolumeClaim "pvc-xqh87"
May 15 02:09:48.724: INFO: Deleting PersistentVolumeClaim "pvc-nx5fn"
May 15 02:09:48.728: INFO: Deleting pod pod-9e5bece1-b84d-45ea-b86f-6be4730c37ba
May 15 02:09:48.733: INFO: Deleting PersistentVolumeClaim "pvc-dtjjt"
May 15 02:09:48.737: INFO: Deleting PersistentVolumeClaim "pvc-chm9l"
May 15 02:09:48.740: INFO: Deleting PersistentVolumeClaim "pvc-97cw5"
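The failure above is the stock apimachinery wait timeout: the test polls its pods-finished counter (it reached 2/28 here) and gives up after five minutes, which surfaces as the generic "timed out waiting for the condition" error. A sketch of the pattern, with a stand-in condition instead of the suite's real counter:

    package main

    import (
        "fmt"
        "time"

        "k8s.io/apimachinery/pkg/util/wait"
    )

    func main() {
        finished, total := 2, 28 // stand-in for the suite's completion counter

        // wait.PollImmediate is what produces the logged error string
        // "timed out waiting for the condition" when the deadline passes.
        err := wait.PollImmediate(time.Second, 5*time.Minute, func() (bool, error) {
            return finished == total, nil // never true here, so it times out
        })
        if err != nil {
            fmt.Printf("some pods failed to complete within 5m0s: %v\n", err)
        }
    }

After the FAIL, the remaining pods and their PVCs are deleted so the AfterEach can reclaim every PV and unmount the twenty tmpfs volumes.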
[AfterEach] Stress with local volumes [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:505
STEP: Stop and wait for recycle goroutine to finish
STEP: Clean all PVs
STEP: Cleaning up 10 local volumes on node "node1"
STEP: Cleaning up PVC and PV
May 15 02:09:48.745: INFO: pvc is nil
May 15 02:09:48.745: INFO: Deleting PersistentVolume "local-pvx758l"
STEP: Cleaning up PVC and PV
May 15 02:09:48.749: INFO: pvc is nil
May 15 02:09:48.749: INFO: Deleting PersistentVolume "local-pvm5ghr"
STEP: Cleaning up PVC and PV
May 15 02:09:48.752: INFO: pvc is nil
May 15 02:09:48.752: INFO: Deleting PersistentVolume "local-pvstqm2"
STEP: Cleaning up PVC and PV
May 15 02:09:48.756: INFO: pvc is nil
May 15 02:09:48.756: INFO: Deleting PersistentVolume "local-pvf2rlh"
STEP: Cleaning up PVC and PV
May 15 02:09:48.760: INFO: pvc is nil
May 15 02:09:48.760: INFO: Deleting PersistentVolume "local-pvhqdr5"
STEP: Cleaning up PVC and PV
May 15 02:09:48.763: INFO: pvc is nil
May 15 02:09:48.763: INFO: Deleting PersistentVolume "local-pvpvv4c"
STEP: Cleaning up PVC and PV
May 15 02:09:48.766: INFO: pvc is nil
May 15 02:09:48.766: INFO: Deleting PersistentVolume "local-pv2zj72"
STEP: Cleaning up PVC and PV
May 15 02:09:48.770: INFO: pvc is nil
May 15 02:09:48.770: INFO: Deleting PersistentVolume "local-pvkrs5h"
STEP: Cleaning up PVC and PV
May 15 02:09:48.774: INFO: pvc is nil
May 15 02:09:48.774: INFO: Deleting PersistentVolume "local-pv9hs25"
STEP: Cleaning up PVC and PV
May 15 02:09:48.778: INFO: pvc is nil
May 15 02:09:48.778: INFO: Deleting PersistentVolume "local-pv9sv4t"
STEP: Unmount tmpfs mount point on node "node1" at path "/tmp/local-volume-test-4400e93d-d916-4179-be4f-13f1a8cf4e34"
May 15 02:09:48.781: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-4400e93d-d916-4179-be4f-13f1a8cf4e34"] Namespace:persistent-local-volumes-test-2514 PodName:hostexec-node1-776qc ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true}
May 15 02:09:48.781: INFO: >>> kubeConfig: /root/.kube/config
STEP: Removing the test directory
May 15 02:09:48.918: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-4400e93d-d916-4179-be4f-13f1a8cf4e34] Namespace:persistent-local-volumes-test-2514 PodName:hostexec-node1-776qc ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true}
May 15 02:09:48.918: INFO: >>> kubeConfig: /root/.kube/config
STEP: Unmount tmpfs mount point on node "node1" at path "/tmp/local-volume-test-c9fdaa9c-81ab-40a8-9139-66caf72ee11e"
May 15 02:09:49.034: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-c9fdaa9c-81ab-40a8-9139-66caf72ee11e"] Namespace:persistent-local-volumes-test-2514 PodName:hostexec-node1-776qc ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true}
May 15 02:09:49.034: INFO: >>> kubeConfig: /root/.kube/config
STEP: Removing the test directory
May 15 02:09:49.169: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-c9fdaa9c-81ab-40a8-9139-66caf72ee11e] Namespace:persistent-local-volumes-test-2514 PodName:hostexec-node1-776qc ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true}
May 15 02:09:49.169: INFO: >>> kubeConfig: /root/.kube/config
STEP: Unmount tmpfs mount point on node "node1" at path "/tmp/local-volume-test-0accf70f-9dd1-404e-ac94-df401911dca8"
May 15 02:09:49.288: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-0accf70f-9dd1-404e-ac94-df401911dca8"] Namespace:persistent-local-volumes-test-2514 PodName:hostexec-node1-776qc ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true}
May 15 02:09:49.288: INFO: >>> kubeConfig: /root/.kube/config
STEP: Removing the test directory
May 15 02:09:49.413: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-0accf70f-9dd1-404e-ac94-df401911dca8] Namespace:persistent-local-volumes-test-2514 PodName:hostexec-node1-776qc ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true}
May 15 02:09:49.413: INFO: >>> kubeConfig: /root/.kube/config
STEP: Unmount tmpfs mount point on node "node1" at path "/tmp/local-volume-test-203f53c6-3171-44c1-be3b-eee76e8b6b89"
May 15 02:09:49.528: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-203f53c6-3171-44c1-be3b-eee76e8b6b89"] Namespace:persistent-local-volumes-test-2514 PodName:hostexec-node1-776qc ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true}
May 15 02:09:49.528: INFO: >>> kubeConfig: /root/.kube/config
STEP: Removing the test directory
May 15 02:09:49.883: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-203f53c6-3171-44c1-be3b-eee76e8b6b89] Namespace:persistent-local-volumes-test-2514 PodName:hostexec-node1-776qc ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true}
May 15 02:09:49.883: INFO: >>> kubeConfig: /root/.kube/config
STEP: Unmount tmpfs mount point on node "node1" at path "/tmp/local-volume-test-de91f70a-2e3e-41a2-817b-f9461a63e1eb"
May 15 02:09:50.206: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-de91f70a-2e3e-41a2-817b-f9461a63e1eb"] Namespace:persistent-local-volumes-test-2514 PodName:hostexec-node1-776qc ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true}
May 15 02:09:50.206: INFO: >>> kubeConfig: /root/.kube/config
STEP: Removing the test directory
May 15 02:09:50.353: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-de91f70a-2e3e-41a2-817b-f9461a63e1eb] Namespace:persistent-local-volumes-test-2514 PodName:hostexec-node1-776qc ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true}
May 15 02:09:50.353: INFO: >>> kubeConfig: /root/.kube/config
STEP: Unmount tmpfs mount point on node "node1" at path "/tmp/local-volume-test-10a487f4-565d-4b4d-b361-03874ed76389"
May 15 02:09:50.491: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-10a487f4-565d-4b4d-b361-03874ed76389"] Namespace:persistent-local-volumes-test-2514 PodName:hostexec-node1-776qc ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true}
May 15 02:09:50.491: INFO: >>> kubeConfig: /root/.kube/config
STEP: Removing the test directory
May 15 02:09:50.713: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-10a487f4-565d-4b4d-b361-03874ed76389] Namespace:persistent-local-volumes-test-2514 PodName:hostexec-node1-776qc ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true}
May 15 02:09:50.713: INFO: >>> kubeConfig: /root/.kube/config
STEP: Unmount tmpfs mount point on node "node1" at path "/tmp/local-volume-test-503918db-8834-4a8a-a23e-8a0ccf239a94"
May 15 02:09:50.855: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-503918db-8834-4a8a-a23e-8a0ccf239a94"] Namespace:persistent-local-volumes-test-2514 PodName:hostexec-node1-776qc ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true}
May 15 02:09:50.855: INFO: >>> kubeConfig: /root/.kube/config
STEP: Removing the test directory
May 15 02:09:50.975: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-503918db-8834-4a8a-a23e-8a0ccf239a94] Namespace:persistent-local-volumes-test-2514 PodName:hostexec-node1-776qc ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true}
May 15 02:09:50.975: INFO: >>> kubeConfig: /root/.kube/config
STEP: Unmount tmpfs mount point on node "node1" at path "/tmp/local-volume-test-1033eaba-6f8c-40b0-80a4-ea2585b850fb"
May 15 02:09:51.087: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-1033eaba-6f8c-40b0-80a4-ea2585b850fb"] Namespace:persistent-local-volumes-test-2514 PodName:hostexec-node1-776qc ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true}
May 15 02:09:51.087: INFO: >>> kubeConfig: /root/.kube/config
STEP: Removing the test directory
May 15 02:09:51.205: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-1033eaba-6f8c-40b0-80a4-ea2585b850fb] Namespace:persistent-local-volumes-test-2514 PodName:hostexec-node1-776qc ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true}
May 15 02:09:51.205: INFO: >>> kubeConfig: /root/.kube/config
STEP: Unmount tmpfs mount point on node "node1" at path "/tmp/local-volume-test-f0b8ac41-88a8-4051-9476-fe14e0430a14"
May 15 02:09:51.304: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-f0b8ac41-88a8-4051-9476-fe14e0430a14"] Namespace:persistent-local-volumes-test-2514 PodName:hostexec-node1-776qc ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true}
May 15 02:09:51.304: INFO: >>> kubeConfig: /root/.kube/config
STEP: Removing the test directory
May 15 02:09:51.433: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-f0b8ac41-88a8-4051-9476-fe14e0430a14] Namespace:persistent-local-volumes-test-2514 PodName:hostexec-node1-776qc ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true}
May 15 02:09:51.434: INFO: >>> kubeConfig: /root/.kube/config
STEP: Unmount tmpfs mount point on node "node1" at path "/tmp/local-volume-test-f93f3ec4-7ceb-47de-86e3-80e17ad49c2b"
May 15 02:09:51.536: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-f93f3ec4-7ceb-47de-86e3-80e17ad49c2b"] Namespace:persistent-local-volumes-test-2514 PodName:hostexec-node1-776qc ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true}
May 15 02:09:51.536: INFO: >>> kubeConfig: /root/.kube/config
STEP: Removing the test directory
May 15 02:09:51.654: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-f93f3ec4-7ceb-47de-86e3-80e17ad49c2b] Namespace:persistent-local-volumes-test-2514 PodName:hostexec-node1-776qc ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true}
May 15 02:09:51.654: INFO: >>> kubeConfig: /root/.kube/config
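Teardown reverses setup volume by volume: unmount the tmpfs, then delete the directory. A standalone Go sketch of the pair of logged commands, again assuming root on the node rather than the suite's nsenter/hostexec path:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // teardownTmpfs mirrors the two cleanup commands the log shows per
    // volume: umount the tmpfs, then remove the test directory.
    func teardownTmpfs(path string) error {
        for _, cmd := range []string{
            fmt.Sprintf("umount %q", path),
            fmt.Sprintf("rm -r %s", path),
        } {
            if out, err := exec.Command("sh", "-c", cmd).CombinedOutput(); err != nil {
                return fmt.Errorf("%s: %v: %s", cmd, err, out)
            }
        }
        return nil
    }

    func main() {
        // Hypothetical path, matching the setup sketch earlier.
        if err := teardownTmpfs("/tmp/local-volume-test-example"); err != nil {
            fmt.Println(err)
        }
    }

The same sequence now repeats for the ten volumes on "node2".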
02:09:51.786: INFO: pvc is nil May 15 02:09:51.786: INFO: Deleting PersistentVolume "local-pvl8lgs" STEP: Cleaning up PVC and PV May 15 02:09:51.790: INFO: pvc is nil May 15 02:09:51.790: INFO: Deleting PersistentVolume "local-pvxw95v" STEP: Cleaning up PVC and PV May 15 02:09:51.793: INFO: pvc is nil May 15 02:09:51.794: INFO: Deleting PersistentVolume "local-pvzqwkw" STEP: Cleaning up PVC and PV May 15 02:09:51.797: INFO: pvc is nil May 15 02:09:51.797: INFO: Deleting PersistentVolume "local-pvbdv87" STEP: Cleaning up PVC and PV May 15 02:09:51.800: INFO: pvc is nil May 15 02:09:51.800: INFO: Deleting PersistentVolume "local-pvg8dhh" STEP: Cleaning up PVC and PV May 15 02:09:51.803: INFO: pvc is nil May 15 02:09:51.803: INFO: Deleting PersistentVolume "local-pvnh7s9" STEP: Cleaning up PVC and PV May 15 02:09:51.807: INFO: pvc is nil May 15 02:09:51.807: INFO: Deleting PersistentVolume "local-pvcfwl2" STEP: Cleaning up PVC and PV May 15 02:09:51.810: INFO: pvc is nil May 15 02:09:51.810: INFO: Deleting PersistentVolume "local-pvcjck9" STEP: Cleaning up PVC and PV May 15 02:09:51.814: INFO: pvc is nil May 15 02:09:51.814: INFO: Deleting PersistentVolume "local-pvhcf8r" STEP: Unmount tmpfs mount point on node "node2" at path "/tmp/local-volume-test-e8b3a261-34b4-4590-929d-e0ea04d6eb26" May 15 02:09:51.818: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-e8b3a261-34b4-4590-929d-e0ea04d6eb26"] Namespace:persistent-local-volumes-test-2514 PodName:hostexec-node2-xp5pj ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true} May 15 02:09:51.818: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory May 15 02:09:51.947: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-e8b3a261-34b4-4590-929d-e0ea04d6eb26] Namespace:persistent-local-volumes-test-2514 PodName:hostexec-node2-xp5pj ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true} May 15 02:09:51.947: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "node2" at path "/tmp/local-volume-test-ede09820-5490-4177-9f51-b405e1e12064" May 15 02:09:52.067: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-ede09820-5490-4177-9f51-b405e1e12064"] Namespace:persistent-local-volumes-test-2514 PodName:hostexec-node2-xp5pj ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true} May 15 02:09:52.067: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory May 15 02:09:52.181: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-ede09820-5490-4177-9f51-b405e1e12064] Namespace:persistent-local-volumes-test-2514 PodName:hostexec-node2-xp5pj ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true} May 15 02:09:52.181: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "node2" at path "/tmp/local-volume-test-bbdf340c-7540-4f09-9e8f-7d569038ae0e" May 15 02:09:52.290: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-bbdf340c-7540-4f09-9e8f-7d569038ae0e"] Namespace:persistent-local-volumes-test-2514 PodName:hostexec-node2-xp5pj ContainerName:agnhost-container Stdin: CaptureStdout:true 
CaptureStderr:true PreserveWhitespace:true} May 15 02:09:52.290: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory May 15 02:09:52.408: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-bbdf340c-7540-4f09-9e8f-7d569038ae0e] Namespace:persistent-local-volumes-test-2514 PodName:hostexec-node2-xp5pj ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true} May 15 02:09:52.408: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "node2" at path "/tmp/local-volume-test-260a55a3-cd3c-46b3-838c-06499be7c934" May 15 02:09:52.513: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-260a55a3-cd3c-46b3-838c-06499be7c934"] Namespace:persistent-local-volumes-test-2514 PodName:hostexec-node2-xp5pj ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true} May 15 02:09:52.513: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory May 15 02:09:52.637: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-260a55a3-cd3c-46b3-838c-06499be7c934] Namespace:persistent-local-volumes-test-2514 PodName:hostexec-node2-xp5pj ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true} May 15 02:09:52.637: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "node2" at path "/tmp/local-volume-test-275f56d9-63d2-44e8-94d4-8209c2028fc9" May 15 02:09:52.745: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-275f56d9-63d2-44e8-94d4-8209c2028fc9"] Namespace:persistent-local-volumes-test-2514 PodName:hostexec-node2-xp5pj ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true} May 15 02:09:52.745: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory May 15 02:09:52.857: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-275f56d9-63d2-44e8-94d4-8209c2028fc9] Namespace:persistent-local-volumes-test-2514 PodName:hostexec-node2-xp5pj ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true} May 15 02:09:52.857: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "node2" at path "/tmp/local-volume-test-bd9022e1-d71a-4181-b0d8-e0f3dc0fbc4f" May 15 02:09:52.962: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-bd9022e1-d71a-4181-b0d8-e0f3dc0fbc4f"] Namespace:persistent-local-volumes-test-2514 PodName:hostexec-node2-xp5pj ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true} May 15 02:09:52.962: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory May 15 02:09:53.073: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-bd9022e1-d71a-4181-b0d8-e0f3dc0fbc4f] Namespace:persistent-local-volumes-test-2514 PodName:hostexec-node2-xp5pj ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true} May 15 02:09:53.073: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "node2" at path 
"/tmp/local-volume-test-7c31ab30-324f-4d05-b33b-014d3f9f6c99" May 15 02:09:53.176: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-7c31ab30-324f-4d05-b33b-014d3f9f6c99"] Namespace:persistent-local-volumes-test-2514 PodName:hostexec-node2-xp5pj ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true} May 15 02:09:53.176: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory May 15 02:09:53.290: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-7c31ab30-324f-4d05-b33b-014d3f9f6c99] Namespace:persistent-local-volumes-test-2514 PodName:hostexec-node2-xp5pj ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true} May 15 02:09:53.290: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "node2" at path "/tmp/local-volume-test-41b3dd74-362e-472f-bcac-cfa8ff0bf1ea" May 15 02:09:53.397: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-41b3dd74-362e-472f-bcac-cfa8ff0bf1ea"] Namespace:persistent-local-volumes-test-2514 PodName:hostexec-node2-xp5pj ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true} May 15 02:09:53.397: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory May 15 02:09:53.524: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-41b3dd74-362e-472f-bcac-cfa8ff0bf1ea] Namespace:persistent-local-volumes-test-2514 PodName:hostexec-node2-xp5pj ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true} May 15 02:09:53.524: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "node2" at path "/tmp/local-volume-test-b9bd5c1e-a645-41b5-bb5e-b87aeae8d46e" May 15 02:09:53.634: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-b9bd5c1e-a645-41b5-bb5e-b87aeae8d46e"] Namespace:persistent-local-volumes-test-2514 PodName:hostexec-node2-xp5pj ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true} May 15 02:09:53.634: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory May 15 02:09:53.748: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-b9bd5c1e-a645-41b5-bb5e-b87aeae8d46e] Namespace:persistent-local-volumes-test-2514 PodName:hostexec-node2-xp5pj ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true} May 15 02:09:53.749: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "node2" at path "/tmp/local-volume-test-62dac2c2-ff7f-4ff5-a3e4-534fa8078930" May 15 02:09:53.855: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-62dac2c2-ff7f-4ff5-a3e4-534fa8078930"] Namespace:persistent-local-volumes-test-2514 PodName:hostexec-node2-xp5pj ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true} May 15 02:09:53.855: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory May 15 02:09:53.965: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r 
/tmp/local-volume-test-62dac2c2-ff7f-4ff5-a3e4-534fa8078930] Namespace:persistent-local-volumes-test-2514 PodName:hostexec-node2-xp5pj ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true} May 15 02:09:53.965: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 STEP: Collecting events from namespace "persistent-local-volumes-test-2514". STEP: Found 71 events. May 15 02:09:54.094: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for hostexec-node1-776qc: { } Scheduled: Successfully assigned persistent-local-volumes-test-2514/hostexec-node1-776qc to node1 May 15 02:09:54.094: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for hostexec-node2-xp5pj: { } Scheduled: Successfully assigned persistent-local-volumes-test-2514/hostexec-node2-xp5pj to node2 May 15 02:09:54.094: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for pod-49a826c3-524a-4877-b67e-de43c6ab2cf5: { } Scheduled: Successfully assigned persistent-local-volumes-test-2514/pod-49a826c3-524a-4877-b67e-de43c6ab2cf5 to node2 May 15 02:09:54.094: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for pod-53ae386e-4922-4c5d-9f15-821f9964e4ce: { } Scheduled: Successfully assigned persistent-local-volumes-test-2514/pod-53ae386e-4922-4c5d-9f15-821f9964e4ce to node1 May 15 02:09:54.094: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for pod-59b81854-7d72-4913-be06-f1facbb75247: { } Scheduled: Successfully assigned persistent-local-volumes-test-2514/pod-59b81854-7d72-4913-be06-f1facbb75247 to node1 May 15 02:09:54.094: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for pod-81d9ece7-7790-4400-9782-0109c6720dd9: { } Scheduled: Successfully assigned persistent-local-volumes-test-2514/pod-81d9ece7-7790-4400-9782-0109c6720dd9 to node2 May 15 02:09:54.094: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for pod-9e5bece1-b84d-45ea-b86f-6be4730c37ba: { } Scheduled: Successfully assigned persistent-local-volumes-test-2514/pod-9e5bece1-b84d-45ea-b86f-6be4730c37ba to node2 May 15 02:09:54.094: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for pod-9e5bece1-b84d-45ea-b86f-6be4730c37ba: { } FailedScheduling: 0/5 nodes are available: 2 node(s) didn't find available persistent volumes to bind, 3 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate. May 15 02:09:54.094: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for pod-9e5bece1-b84d-45ea-b86f-6be4730c37ba: { } FailedScheduling: 0/5 nodes are available: 2 node(s) didn't find available persistent volumes to bind, 3 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate. 
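Note on the FailedScheduling events above: local PersistentVolumes are exercised here with delayed volume binding, so "didn't find available persistent volumes to bind" is expected until the scheduler has placed the consuming pod, and the three master nodes are additionally excluded by the node-role.kubernetes.io/master:NoSchedule taint visible in the node dumps below. A minimal sketch of the StorageClass semantics involved, using illustrative names rather than the exact objects the e2e framework creates:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage                        # illustrative name only
provisioner: kubernetes.io/no-provisioner    # static local PVs; nothing is provisioned dynamically
volumeBindingMode: WaitForFirstConsumer      # bind a PV only after a consuming pod is scheduled

With WaitForFirstConsumer (matching the WaitForFirstConsumer and WaitForPodScheduled events that follow), PVC binding and pod scheduling are resolved together, so transient FailedScheduling reports of this shape are normal while binding converges; the pods that ultimately stayed unready in this run failed on busybox image pulls, not on volume binding.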
May 15 02:09:54.094: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for pod-e260a2d3-d0c5-45c5-8300-39a4e6a40e0f: { } Scheduled: Successfully assigned persistent-local-volumes-test-2514/pod-e260a2d3-d0c5-45c5-8300-39a4e6a40e0f to node2
May 15 02:09:54.094: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for pod-ed322ced-117e-419e-99ed-1d765a51b554: { } Scheduled: Successfully assigned persistent-local-volumes-test-2514/pod-ed322ced-117e-419e-99ed-1d765a51b554 to node1
May 15 02:09:54.094: INFO: At 2021-05-15 02:04:38 +0000 UTC - event for hostexec-node1-776qc: {kubelet node1} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.20"
May 15 02:09:54.094: INFO: At 2021-05-15 02:04:39 +0000 UTC - event for hostexec-node1-776qc: {kubelet node1} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.20" in 496.900422ms
May 15 02:09:54.094: INFO: At 2021-05-15 02:04:39 +0000 UTC - event for hostexec-node1-776qc: {kubelet node1} Started: Started container agnhost-container
May 15 02:09:54.094: INFO: At 2021-05-15 02:04:39 +0000 UTC - event for hostexec-node1-776qc: {kubelet node1} Created: Created container agnhost-container
May 15 02:09:54.094: INFO: At 2021-05-15 02:04:43 +0000 UTC - event for hostexec-node2-xp5pj: {kubelet node2} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.20"
May 15 02:09:54.095: INFO: At 2021-05-15 02:04:44 +0000 UTC - event for hostexec-node2-xp5pj: {kubelet node2} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.20" in 507.546497ms
May 15 02:09:54.095: INFO: At 2021-05-15 02:04:44 +0000 UTC - event for hostexec-node2-xp5pj: {kubelet node2} Created: Created container agnhost-container
May 15 02:09:54.095: INFO: At 2021-05-15 02:04:44 +0000 UTC - event for hostexec-node2-xp5pj: {kubelet node2} Started: Started container agnhost-container
May 15 02:09:54.095: INFO: At 2021-05-15 02:04:48 +0000 UTC - event for pvc-4fp4z: {persistentvolume-controller } WaitForFirstConsumer: waiting for first consumer to be created before binding
May 15 02:09:54.095: INFO: At 2021-05-15 02:04:48 +0000 UTC - event for pvc-bc4j2: {persistentvolume-controller } WaitForFirstConsumer: waiting for first consumer to be created before binding
May 15 02:09:54.095: INFO: At 2021-05-15 02:04:48 +0000 UTC - event for pvc-j8hmk: {persistentvolume-controller } WaitForFirstConsumer: waiting for first consumer to be created before binding
May 15 02:09:54.095: INFO: At 2021-05-15 02:04:48 +0000 UTC - event for pvc-qmlwx: {persistentvolume-controller } WaitForFirstConsumer: waiting for first consumer to be created before binding
May 15 02:09:54.095: INFO: At 2021-05-15 02:04:48 +0000 UTC - event for pvc-qxjkl: {persistentvolume-controller } WaitForPodScheduled: waiting for pod pod-ed322ced-117e-419e-99ed-1d765a51b554 to be scheduled
May 15 02:09:54.095: INFO: At 2021-05-15 02:04:48 +0000 UTC - event for pvc-vcfk8: {persistentvolume-controller } WaitForFirstConsumer: waiting for first consumer to be created before binding
May 15 02:09:54.095: INFO: At 2021-05-15 02:04:50 +0000 UTC - event for pvc-97cw5: {persistentvolume-controller } WaitForPodScheduled: waiting for pod pod-9e5bece1-b84d-45ea-b86f-6be4730c37ba to be scheduled
May 15 02:09:54.095: INFO: At 2021-05-15 02:04:50 +0000 UTC - event for pvc-chm9l: {persistentvolume-controller } WaitForPodScheduled: waiting for pod pod-9e5bece1-b84d-45ea-b86f-6be4730c37ba to be scheduled
May 15 02:09:54.095: INFO: At 2021-05-15 02:04:50 +0000 UTC - event for pvc-dtjjt: {persistentvolume-controller } WaitForPodScheduled: waiting for pod pod-9e5bece1-b84d-45ea-b86f-6be4730c37ba to be scheduled
May 15 02:09:54.095: INFO: At 2021-05-15 02:04:51 +0000 UTC - event for pod-59b81854-7d72-4913-be06-f1facbb75247: {kubelet node1} Pulling: Pulling image "docker.io/library/busybox:1.29"
May 15 02:09:54.095: INFO: At 2021-05-15 02:04:51 +0000 UTC - event for pod-59b81854-7d72-4913-be06-f1facbb75247: {multus } AddedInterface: Add eth0 [10.244.3.208/24]
May 15 02:09:54.095: INFO: At 2021-05-15 02:04:51 +0000 UTC - event for pod-81d9ece7-7790-4400-9782-0109c6720dd9: {multus } AddedInterface: Add eth0 [10.244.4.244/24]
May 15 02:09:54.095: INFO: At 2021-05-15 02:04:51 +0000 UTC - event for pod-81d9ece7-7790-4400-9782-0109c6720dd9: {kubelet node2} Pulling: Pulling image "docker.io/library/busybox:1.29"
May 15 02:09:54.095: INFO: At 2021-05-15 02:04:52 +0000 UTC - event for pod-59b81854-7d72-4913-be06-f1facbb75247: {kubelet node1} Pulled: Successfully pulled image "docker.io/library/busybox:1.29" in 1.33790159s
May 15 02:09:54.095: INFO: At 2021-05-15 02:04:52 +0000 UTC - event for pod-81d9ece7-7790-4400-9782-0109c6720dd9: {kubelet node2} Created: Created container write-pod
May 15 02:09:54.095: INFO: At 2021-05-15 02:04:52 +0000 UTC - event for pod-81d9ece7-7790-4400-9782-0109c6720dd9: {kubelet node2} Pulled: Successfully pulled image "docker.io/library/busybox:1.29" in 1.324660026s
May 15 02:09:54.095: INFO: At 2021-05-15 02:04:52 +0000 UTC - event for pod-e260a2d3-d0c5-45c5-8300-39a4e6a40e0f: {multus } AddedInterface: Add eth0 [10.244.4.245/24]
May 15 02:09:54.095: INFO: At 2021-05-15 02:04:52 +0000 UTC - event for pod-e260a2d3-d0c5-45c5-8300-39a4e6a40e0f: {kubelet node2} Pulling: Pulling image "docker.io/library/busybox:1.29"
May 15 02:09:54.095: INFO: At 2021-05-15 02:04:53 +0000 UTC - event for pod-59b81854-7d72-4913-be06-f1facbb75247: {kubelet node1} Created: Created container write-pod
May 15 02:09:54.095: INFO: At 2021-05-15 02:04:53 +0000 UTC - event for pod-59b81854-7d72-4913-be06-f1facbb75247: {kubelet node1} Started: Started container write-pod
May 15 02:09:54.095: INFO: At 2021-05-15 02:04:53 +0000 UTC - event for pod-81d9ece7-7790-4400-9782-0109c6720dd9: {kubelet node2} Started: Started container write-pod
May 15 02:09:54.095: INFO: At 2021-05-15 02:04:53 +0000 UTC - event for pod-ed322ced-117e-419e-99ed-1d765a51b554: {kubelet node1} Pulling: Pulling image "docker.io/library/busybox:1.29"
May 15 02:09:54.095: INFO: At 2021-05-15 02:04:53 +0000 UTC - event for pod-ed322ced-117e-419e-99ed-1d765a51b554: {multus } AddedInterface: Add eth0 [10.244.3.209/24]
May 15 02:09:54.095: INFO: At 2021-05-15 02:04:54 +0000 UTC - event for pod-49a826c3-524a-4877-b67e-de43c6ab2cf5: {multus } AddedInterface: Add eth0 [10.244.4.246/24]
May 15 02:09:54.095: INFO: At 2021-05-15 02:04:54 +0000 UTC - event for pod-49a826c3-524a-4877-b67e-de43c6ab2cf5: {kubelet node2} Pulling: Pulling image "docker.io/library/busybox:1.29"
May 15 02:09:54.095: INFO: At 2021-05-15 02:04:54 +0000 UTC - event for pod-53ae386e-4922-4c5d-9f15-821f9964e4ce: {kubelet node1} Pulling: Pulling image "docker.io/library/busybox:1.29"
May 15 02:09:54.095: INFO: At 2021-05-15 02:04:54 +0000 UTC - event for pod-53ae386e-4922-4c5d-9f15-821f9964e4ce: {multus } AddedInterface: Add eth0 [10.244.3.210/24]
May 15 02:09:54.095: INFO: At 2021-05-15 02:04:54 +0000 UTC - event for pod-e260a2d3-d0c5-45c5-8300-39a4e6a40e0f: {kubelet node2} BackOff: Back-off pulling image "docker.io/library/busybox:1.29"
May 15 02:09:54.095: INFO: At 2021-05-15 02:04:54 +0000 UTC - event for pod-e260a2d3-d0c5-45c5-8300-39a4e6a40e0f: {kubelet node2} Failed: Error: ErrImagePull
May 15 02:09:54.095: INFO: At 2021-05-15 02:04:54 +0000 UTC - event for pod-e260a2d3-d0c5-45c5-8300-39a4e6a40e0f: {kubelet node2} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
May 15 02:09:54.095: INFO: At 2021-05-15 02:04:54 +0000 UTC - event for pod-e260a2d3-d0c5-45c5-8300-39a4e6a40e0f: {kubelet node2} Failed: Error: ImagePullBackOff
May 15 02:09:54.095: INFO: At 2021-05-15 02:04:54 +0000 UTC - event for pod-ed322ced-117e-419e-99ed-1d765a51b554: {kubelet node1} Failed: Error: ImagePullBackOff
May 15 02:09:54.095: INFO: At 2021-05-15 02:04:54 +0000 UTC - event for pod-ed322ced-117e-419e-99ed-1d765a51b554: {kubelet node1} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
May 15 02:09:54.095: INFO: At 2021-05-15 02:04:54 +0000 UTC - event for pod-ed322ced-117e-419e-99ed-1d765a51b554: {kubelet node1} Failed: Error: ErrImagePull
May 15 02:09:54.095: INFO: At 2021-05-15 02:04:54 +0000 UTC - event for pod-ed322ced-117e-419e-99ed-1d765a51b554: {kubelet node1} BackOff: Back-off pulling image "docker.io/library/busybox:1.29"
May 15 02:09:54.095: INFO: At 2021-05-15 02:04:55 +0000 UTC - event for pod-49a826c3-524a-4877-b67e-de43c6ab2cf5: {kubelet node2} BackOff: Back-off pulling image "docker.io/library/busybox:1.29"
May 15 02:09:54.095: INFO: At 2021-05-15 02:04:55 +0000 UTC - event for pod-49a826c3-524a-4877-b67e-de43c6ab2cf5: {kubelet node2} Failed: Error: ImagePullBackOff
May 15 02:09:54.095: INFO: At 2021-05-15 02:04:55 +0000 UTC - event for pod-49a826c3-524a-4877-b67e-de43c6ab2cf5: {kubelet node2} Failed: Error: ErrImagePull
May 15 02:09:54.095: INFO: At 2021-05-15 02:04:55 +0000 UTC - event for pod-49a826c3-524a-4877-b67e-de43c6ab2cf5: {kubelet node2} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
May 15 02:09:54.095: INFO: At 2021-05-15 02:04:56 +0000 UTC - event for pod-53ae386e-4922-4c5d-9f15-821f9964e4ce: {kubelet node1} Failed: Error: ErrImagePull
May 15 02:09:54.095: INFO: At 2021-05-15 02:04:56 +0000 UTC - event for pod-53ae386e-4922-4c5d-9f15-821f9964e4ce: {kubelet node1} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
May 15 02:09:54.095: INFO: At 2021-05-15 02:04:57 +0000 UTC - event for pod-53ae386e-4922-4c5d-9f15-821f9964e4ce: {kubelet node1} BackOff: Back-off pulling image "docker.io/library/busybox:1.29"
May 15 02:09:54.095: INFO: At 2021-05-15 02:04:57 +0000 UTC - event for pod-53ae386e-4922-4c5d-9f15-821f9964e4ce: {kubelet node1} Failed: Error: ImagePullBackOff
May 15 02:09:54.095: INFO: At 2021-05-15 02:05:00 +0000 UTC - event for pod-9e5bece1-b84d-45ea-b86f-6be4730c37ba: {kubelet node2} Pulling: Pulling image "docker.io/library/busybox:1.29"
May 15 02:09:54.095: INFO: At 2021-05-15 02:05:00 +0000 UTC - event for pod-9e5bece1-b84d-45ea-b86f-6be4730c37ba: {multus } AddedInterface: Add eth0 [10.244.4.247/24]
May 15 02:09:54.095: INFO: At 2021-05-15 02:05:01 +0000 UTC - event for pod-9e5bece1-b84d-45ea-b86f-6be4730c37ba: {kubelet node2} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
May 15 02:09:54.095: INFO: At 2021-05-15 02:05:01 +0000 UTC - event for pod-9e5bece1-b84d-45ea-b86f-6be4730c37ba: {kubelet node2} Failed: Error: ErrImagePull
May 15 02:09:54.095: INFO: At 2021-05-15 02:05:02 +0000 UTC - event for pod-9e5bece1-b84d-45ea-b86f-6be4730c37ba: {kubelet node2} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
May 15 02:09:54.095: INFO: At 2021-05-15 02:05:04 +0000 UTC - event for pod-9e5bece1-b84d-45ea-b86f-6be4730c37ba: {multus } AddedInterface: Add eth0 [10.244.4.248/24]
May 15 02:09:54.095: INFO: At 2021-05-15 02:05:04 +0000 UTC - event for pod-9e5bece1-b84d-45ea-b86f-6be4730c37ba: {kubelet node2} BackOff: Back-off pulling image "docker.io/library/busybox:1.29"
May 15 02:09:54.095: INFO: At 2021-05-15 02:05:04 +0000 UTC - event for pod-9e5bece1-b84d-45ea-b86f-6be4730c37ba: {kubelet node2} Failed: Error: ImagePullBackOff
May 15 02:09:54.095: INFO: At 2021-05-15 02:05:11 +0000 UTC - event for pod-e260a2d3-d0c5-45c5-8300-39a4e6a40e0f: {kubelet node2} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit.
You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit May 15 02:09:54.098: INFO: POD NODE PHASE GRACE CONDITIONS May 15 02:09:54.098: INFO: hostexec-node1-776qc node1 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-15 02:04:37 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-05-15 02:04:40 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-05-15 02:04:40 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-15 02:04:37 +0000 UTC }] May 15 02:09:54.098: INFO: hostexec-node2-xp5pj node2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-15 02:04:43 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-05-15 02:04:45 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-05-15 02:04:45 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-15 02:04:43 +0000 UTC }] May 15 02:09:54.098: INFO: pod-53ae386e-4922-4c5d-9f15-821f9964e4ce node1 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-15 02:04:51 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-15 02:04:51 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-15 02:04:51 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-15 02:04:51 +0000 UTC }] May 15 02:09:54.098: INFO: May 15 02:09:54.102: INFO: Logging node info for node master1 May 15 02:09:54.104: INFO: Node Info: &Node{ObjectMeta:{master1 /api/v1/nodes/master1 8e20012e-a811-456d-9add-2ea316e23700 166505 0 2021-05-14 19:56:35 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master1 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"a6:a3:7b:a0:c9:75"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.202 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/master.version:v0.7.0 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2021-05-14 19:56:34 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {kubeadm Update v1 2021-05-14 19:56:37 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/master":{}}}}} {flanneld Update v1 2021-05-14 19:59:03 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-05-14 19:59:09 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}} {nfd-master Update v1 2021-05-14 20:06:00 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/master.version":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234763776 0} {} 196518324Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324599808 0} {} 195629492Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-14 20:02:00 +0000 
UTC,LastTransitionTime:2021-05-14 20:02:00 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-15 02:09:53 +0000 UTC,LastTransitionTime:2021-05-14 19:56:32 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-15 02:09:53 +0000 UTC,LastTransitionTime:2021-05-14 19:56:32 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-15 02:09:53 +0000 UTC,LastTransitionTime:2021-05-14 19:56:32 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-15 02:09:53 +0000 UTC,LastTransitionTime:2021-05-14 19:59:09 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.202,},NodeAddress{Type:Hostname,Address:master1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:a88b162033bc4931ba0342c7f78a28b9,SystemUUID:00ACFB60-0631-E711-906E-0017A4403562,BootID:ba5ed4e5-a8ef-4986-946f-e7e2d91395d2,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:726663003,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f kubernetesui/dashboard-amd64:v2.1.0],SizeBytes:225733746,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[gcr.io/k8s-staging-nfd/node-feature-discovery@sha256:5d116c2c340be665a2c8adc9aca7f91396bd5cbde4add4fdc8dab95d8db43425 gcr.io/k8s-staging-nfd/node-feature-discovery:v0.7.0],SizeBytes:108309584,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b 
quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:73ca82b4ce829766d4f1f10947c3a338888f876fbed0540dc849c89ff256e90c k8s.gcr.io/coredns:1.7.0],SizeBytes:45227747,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[registry@sha256:1cd9409a311350c3072fe510b52046f104416376c126a479cef9a4dfe692cf57 registry:2.7.0],SizeBytes:24191168,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[localhost:30500/tas-controller@sha256:e3157cfba16d361ffec06306dd0154c7dca1931cbc4569e3c5822e30e311948b tas-controller:latest localhost:30500/tas-controller:0.1],SizeBytes:22922439,},ContainerImage{Names:[nginx@sha256:a97eb9ecc708c8aa715ccfb5e9338f5456e4b65575daf304f108301f3b497314 nginx:1.19.2-alpine],SizeBytes:22052669,},ContainerImage{Names:[localhost:30500/tas-extender@sha256:a43c7fdd150533238a300ad84ac906e551111f9b57273afcb8781ee675fd23b3 tas-extender:latest localhost:30500/tas-extender:0.1],SizeBytes:21320903,},ContainerImage{Names:[@ :],SizeBytes:5577654,},ContainerImage{Names:[alpine@sha256:c0e9560cda118f9ec63ddefb4a173a2b2a0347082d7dff7dc14272e7841a5b5a alpine:3.12.1],SizeBytes:5573013,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 15 02:09:54.105: INFO: Logging kubelet events for node master1 May 15 02:09:54.107: INFO: Logging pods the kubelet thinks is on node master1 May 15 02:09:54.123: INFO: kube-flannel-cx7s6 started at 2021-05-14 19:58:58 +0000 UTC (1+1 container statuses recorded) May 15 02:09:54.123: INFO: Init container install-cni ready: true, restart count 0 May 15 02:09:54.123: INFO: Container kube-flannel ready: true, restart count 1 May 15 02:09:54.123: INFO: node-exporter-nvrxr started at 2021-05-14 20:09:56 +0000 UTC (0+2 container statuses recorded) May 15 02:09:54.123: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 15 02:09:54.123: INFO: Container node-exporter ready: true, restart count 0 May 15 02:09:54.123: INFO: kube-scheduler-master1 started at 2021-05-14 19:57:39 +0000 UTC (0+1 container statuses recorded) May 15 02:09:54.123: INFO: Container kube-scheduler ready: true, restart count 0 May 15 02:09:54.123: INFO: kube-controller-manager-master1 started at 2021-05-14 20:01:22 +0000 UTC (0+1 container statuses recorded) May 15 02:09:54.123: INFO: Container kube-controller-manager ready: true, restart count 2 May 15 02:09:54.123: INFO: kubernetes-metrics-scraper-678c97765c-fswrn started at 2021-05-15 00:18:49 +0000 UTC (0+1 container statuses recorded) May 15 02:09:54.123: INFO: 
Container kubernetes-metrics-scraper ready: true, restart count 0 May 15 02:09:54.123: INFO: kube-proxy-v2c76 started at 2021-05-14 19:58:24 +0000 UTC (0+1 container statuses recorded) May 15 02:09:54.123: INFO: Container kube-proxy ready: true, restart count 2 May 15 02:09:54.123: INFO: kube-multus-ds-amd64-m54v2 started at 2021-05-14 19:59:07 +0000 UTC (0+1 container statuses recorded) May 15 02:09:54.123: INFO: Container kube-multus ready: true, restart count 1 May 15 02:09:54.123: INFO: docker-registry-docker-registry-56cbc7bc58-bjc5h started at 2021-05-14 20:02:43 +0000 UTC (0+2 container statuses recorded) May 15 02:09:54.123: INFO: Container docker-registry ready: true, restart count 0 May 15 02:09:54.123: INFO: Container nginx ready: true, restart count 0 May 15 02:09:54.123: INFO: node-feature-discovery-controller-5bf5c49849-27v77 started at 2021-05-14 20:05:52 +0000 UTC (0+1 container statuses recorded) May 15 02:09:54.123: INFO: Container nfd-controller ready: true, restart count 0 May 15 02:09:54.123: INFO: kube-apiserver-master1 started at 2021-05-14 19:57:39 +0000 UTC (0+1 container statuses recorded) May 15 02:09:54.123: INFO: Container kube-apiserver ready: true, restart count 0 W0515 02:09:54.135978 22 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. May 15 02:09:54.168: INFO: Latency metrics for node master1 May 15 02:09:54.168: INFO: Logging node info for node master2 May 15 02:09:54.170: INFO: Node Info: &Node{ObjectMeta:{master2 /api/v1/nodes/master2 e8f1881f-5ded-4c6c-b7e6-eb354b7962e2 166460 0 2021-05-14 19:57:04 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master2 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"0a:97:9a:eb:9d:a8"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.203 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2021-05-14 19:57:04 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {kubeadm Update v1 2021-05-14 19:57:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/master":{}}}}} {flanneld Update v1 2021-05-14 19:59:04 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-05-14 19:59:09 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234763776 0} {} 196518324Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324599808 0} {} 195629492Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-14 20:02:07 +0000 UTC,LastTransitionTime:2021-05-14 20:02:07 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this 
node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-15 02:09:50 +0000 UTC,LastTransitionTime:2021-05-14 19:57:04 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-15 02:09:50 +0000 UTC,LastTransitionTime:2021-05-14 19:57:04 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-15 02:09:50 +0000 UTC,LastTransitionTime:2021-05-14 19:57:04 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-15 02:09:50 +0000 UTC,LastTransitionTime:2021-05-14 19:59:09 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.203,},NodeAddress{Type:Hostname,Address:master2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:14c4cdd0613041bb923c5f9b84e0bcde,SystemUUID:00A0DE53-E51D-E711-906E-0017A4403562,BootID:9bdca68c-a5fc-48f7-b392-63d2c04d224d,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:726663003,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f kubernetesui/dashboard-amd64:v2.1.0],SizeBytes:225733746,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:73ca82b4ce829766d4f1f10947c3a338888f876fbed0540dc849c89ff256e90c 
k8s.gcr.io/coredns:1.7.0],SizeBytes:45227747,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:9d07c391aeb1a9d02eb4343c113ed01825227c70c32b3cae861711f90191b0fd quay.io/coreos/kube-rbac-proxy:v0.4.1],SizeBytes:41317870,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[quay.io/coreos/prometheus-operator@sha256:a54e806fb27d2fb0251da4f3b2a3bb5320759af63a54a755788304775f2384a7 quay.io/coreos/prometheus-operator:v0.40.0],SizeBytes:38238457,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 15 02:09:54.170: INFO: Logging kubelet events for node master2 May 15 02:09:54.173: INFO: Logging pods the kubelet thinks is on node master2 May 15 02:09:54.187: INFO: kube-proxy-qcgpm started at 2021-05-14 19:58:24 +0000 UTC (0+1 container statuses recorded) May 15 02:09:54.187: INFO: Container kube-proxy ready: true, restart count 2 May 15 02:09:54.187: INFO: kube-multus-ds-amd64-bt5kr started at 2021-05-14 19:59:07 +0000 UTC (0+1 container statuses recorded) May 15 02:09:54.187: INFO: Container kube-multus ready: true, restart count 1 May 15 02:09:54.187: INFO: coredns-7677f9bb54-96w24 started at 2021-05-15 01:12:26 +0000 UTC (0+1 container statuses recorded) May 15 02:09:54.187: INFO: Container coredns ready: true, restart count 0 May 15 02:09:54.187: INFO: prometheus-operator-5bb8cb9d8f-fqb87 started at 2021-05-15 00:18:49 +0000 UTC (0+2 container statuses recorded) May 15 02:09:54.187: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 15 02:09:54.187: INFO: Container prometheus-operator ready: true, restart count 0 May 15 02:09:54.187: INFO: kube-apiserver-master2 started at 2021-05-14 20:05:10 +0000 UTC (0+1 container statuses recorded) May 15 02:09:54.187: INFO: Container kube-apiserver ready: true, restart count 0 May 15 02:09:54.187: INFO: kube-controller-manager-master2 started at 2021-05-14 20:01:22 +0000 UTC (0+1 container statuses recorded) May 15 02:09:54.187: INFO: Container kube-controller-manager ready: true, restart count 2 May 15 02:09:54.187: INFO: kube-scheduler-master2 started at 2021-05-14 19:57:39 +0000 UTC (0+1 container statuses recorded) May 15 02:09:54.187: INFO: Container kube-scheduler ready: true, restart count 2 May 15 02:09:54.187: INFO: kube-flannel-fc4sf started at 2021-05-14 19:58:58 +0000 UTC (1+1 container statuses recorded) May 15 02:09:54.187: INFO: Init container install-cni ready: true, restart count 0 May 15 02:09:54.187: INFO: Container kube-flannel ready: true, restart count 1 May 15 02:09:54.187: INFO: dns-autoscaler-5b7b5c9b6f-fgzqp started at 2021-05-14 19:59:30 +0000 UTC (0+1 container statuses recorded) May 15 02:09:54.187: INFO: Container autoscaler 
ready: true, restart count 2 May 15 02:09:54.187: INFO: node-exporter-gjrtc started at 2021-05-14 20:09:56 +0000 UTC (0+2 container statuses recorded) May 15 02:09:54.187: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 15 02:09:54.187: INFO: Container node-exporter ready: true, restart count 0 W0515 02:09:54.201806 22 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. May 15 02:09:54.235: INFO: Latency metrics for node master2 May 15 02:09:54.235: INFO: Logging node info for node master3 May 15 02:09:54.238: INFO: Node Info: &Node{ObjectMeta:{master3 /api/v1/nodes/master3 29fd0a5d-1350-4e28-a4cb-b26dd82cd397 166436 0 2021-05-14 19:57:14 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master3 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"ae:27:37:b7:ad:a5"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.204 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2021-05-14 19:57:14 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {kubeadm Update v1 2021-05-14 19:57:15 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/master":{}}}}} {flanneld Update v1 2021-05-14 19:59:03 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-05-14 19:59:09 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}},"f:taints":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234767872 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324603904 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-14 20:01:03 +0000 UTC,LastTransitionTime:2021-05-14 20:01:03 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-15 02:09:49 +0000 UTC,LastTransitionTime:2021-05-14 19:57:14 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-15 02:09:49 +0000 UTC,LastTransitionTime:2021-05-14 19:57:14 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-15 02:09:49 +0000 UTC,LastTransitionTime:2021-05-14 19:57:14 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-15 02:09:49 +0000 UTC,LastTransitionTime:2021-05-14 20:00:41 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.204,},NodeAddress{Type:Hostname,Address:master3,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:f3fe601830d34e59967ed389af552f25,SystemUUID:008B1444-141E-E711-906E-0017A4403562,BootID:56dd60e2-98fe-4d87-81d9-95db820e7426,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:726663003,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 
nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f kubernetesui/dashboard-amd64:v2.1.0],SizeBytes:225733746,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:73ca82b4ce829766d4f1f10947c3a338888f876fbed0540dc849c89ff256e90c k8s.gcr.io/coredns:1.7.0],SizeBytes:45227747,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 15 02:09:54.238: INFO: Logging kubelet events for node master3 May 15 02:09:54.241: INFO: Logging pods the kubelet thinks is on node master3 May 15 02:09:54.257: INFO: kube-flannel-cl8jf started at 2021-05-14 19:58:58 +0000 UTC (1+1 container statuses recorded) May 15 02:09:54.257: INFO: Init container install-cni ready: true, restart count 0 May 15 02:09:54.257: INFO: Container kube-flannel ready: true, restart count 2 May 15 02:09:54.257: INFO: kube-multus-ds-amd64-hp6bp started at 2021-05-14 19:59:07 +0000 UTC (0+1 container statuses recorded) May 15 02:09:54.257: INFO: Container kube-multus ready: true, restart count 1 May 15 02:09:54.257: INFO: coredns-7677f9bb54-rpj8c started at 2021-05-15 01:12:26 +0000 UTC (0+1 container statuses recorded) May 15 02:09:54.257: INFO: Container coredns ready: true, restart 
count 0 May 15 02:09:54.257: INFO: node-exporter-4cgbq started at 2021-05-14 20:09:56 +0000 UTC (0+2 container statuses recorded) May 15 02:09:54.257: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 15 02:09:54.257: INFO: Container node-exporter ready: true, restart count 0 May 15 02:09:54.257: INFO: kube-controller-manager-master3 started at 2021-05-14 20:00:41 +0000 UTC (0+1 container statuses recorded) May 15 02:09:54.257: INFO: Container kube-controller-manager ready: true, restart count 3 May 15 02:09:54.257: INFO: kube-scheduler-master3 started at 2021-05-14 19:57:39 +0000 UTC (0+1 container statuses recorded) May 15 02:09:54.257: INFO: Container kube-scheduler ready: true, restart count 3 May 15 02:09:54.257: INFO: kube-apiserver-master3 started at 2021-05-14 19:57:39 +0000 UTC (0+1 container statuses recorded) May 15 02:09:54.257: INFO: Container kube-apiserver ready: true, restart count 0 May 15 02:09:54.257: INFO: kube-proxy-2crs2 started at 2021-05-14 19:58:24 +0000 UTC (0+1 container statuses recorded) May 15 02:09:54.257: INFO: Container kube-proxy ready: true, restart count 1 W0515 02:09:54.269389 22 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. May 15 02:09:54.297: INFO: Latency metrics for node master3 May 15 02:09:54.297: INFO: Logging node info for node node1 May 15 02:09:54.301: INFO: Node Info: &Node{ObjectMeta:{node1 /api/v1/nodes/node1 5e4c0fba-b5fa-4177-b834-f3e04c846ff3 166461 0 2021-05-14 19:58:21 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.25.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 
feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node1 kubernetes.io/os:linux] map[csi.volume.kubernetes.io/nodeid:{"csi-mock-csi-mock-volumes-1602":"csi-mock-csi-mock-volumes-1602","csi-mock-csi-mock-volumes-253":"csi-mock-csi-mock-volumes-253","csi-mock-csi-mock-volumes-2591":"csi-mock-csi-mock-volumes-2591","csi-mock-csi-mock-volumes-2993":"csi-mock-csi-mock-volumes-2993","csi-mock-csi-mock-volumes-6288":"csi-mock-csi-mock-volumes-6288","csi-mock-csi-mock-volumes-7734":"csi-mock-csi-mock-volumes-7734","csi-mock-csi-mock-volumes-895":"csi-mock-csi-mock-volumes-895","csi-mock-csi-mock-volumes-9232":"csi-mock-csi-mock-volumes-9232","csi-mock-csi-mock-volumes-9474":"csi-mock-csi-mock-volumes-9474","csi-mock-csi-mock-volumes-9728":"csi-mock-csi-mock-volumes-9728","csi-mock-csi-mock-volumes-9809":"csi-mock-csi-mock-volumes-9809"} flannel.alpha.coreos.com/backend-data:{"VtepMAC":"ba:ee:c6:a6:52:03"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.207 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-hardware_multithreading,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.7.0 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-05-14 19:58:22 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2021-05-14 19:59:03 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2021-05-14 20:06:02 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2021-05-14 20:08:24 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {e2e.test Update v1 2021-05-15 01:37:00 +0000 UTC FieldsV1 {"f:status":{"f:capacity":{"f:example.com/fakePTSRes":{}}}}} {kube-controller-manager Update v1 2021-05-15 01:52:47 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.3.0/24\"":{}}}}} {kubelet Update v1 2021-05-15 01:53:47 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:csi.volume.kubernetes.io/nodeid":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cmk.intel.com/exclusive-cores":{},"f:cpu":{},"f:ephemeral-storage":{},"f:example.com/fakePTSRes":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:intel.com/intel_sriov_netdevice":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:intel.com/intel_sriov_netdevice":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.3.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},example.com/fakePTSRes: {{10 0} {} 10 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269633024 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},example.com/fakePTSRes: {{10 0} {} 10 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884632576 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-14 20:00:44 +0000 UTC,LastTransitionTime:2021-05-14 20:00:44 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-15 02:09:51 +0000 UTC,LastTransitionTime:2021-05-14 19:58:21 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-15 02:09:51 +0000 UTC,LastTransitionTime:2021-05-14 19:58:21 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk 
pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-15 02:09:51 +0000 UTC,LastTransitionTime:2021-05-14 19:58:21 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-15 02:09:51 +0000 UTC,LastTransitionTime:2021-05-14 20:00:41 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.207,},NodeAddress{Type:Hostname,Address:node1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:4b96d01fdbcb4fadb4a59fca2e1bf368,SystemUUID:00CDA902-D022-E711-906E-0017A4403562,BootID:93c238b3-1895-423c-a1aa-193fbcf8b55f,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[localhost:30500/barometer-collectd@sha256:869f7b3516c269b43448f1227c57d536e8a4cf723eeef3b5f8b8e224ecbcfd8e localhost:30500/barometer-collectd:stable],SizeBytes:1464261626,},ContainerImage{Names:[@ :],SizeBytes:1002487751,},ContainerImage{Names:[opnfv/barometer-collectd@sha256:ed5c574f653e2a39e784ff322033a2319aafde7366c803a88f20f7a2a8bc1efb opnfv/barometer-collectd:stable],SizeBytes:825413035,},ContainerImage{Names:[localhost:30500/cmk@sha256:f759b012c29126f880575ac543d09301d45f0b2b9d0f5329849ea40e65017dde cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:726663003,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[golang@sha256:1636899c10870ab66c48d960a9df620f4f9e86a0c72fbacf36032d27404e7e6c golang:alpine3.12],SizeBytes:301156062,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2 k8s.gcr.io/etcd:3.4.13-0],SizeBytes:253392289,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:195659796,},ContainerImage{Names:[grafana/grafana@sha256:89304bc2335f4976618548d7b93d165ed67369d3a051d2f627fc4e0aa3d0aff1 grafana/grafana:7.1.0],SizeBytes:179601493,},ContainerImage{Names:[quay.io/prometheus/prometheus@sha256:d4ba4dd1a9ebb90916d0bfed3c204adcb118ed24546bf8dd2e6b30fc0fd2009e quay.io/prometheus/prometheus:v2.20.0],SizeBytes:144886595,},ContainerImage{Names:[nginx@sha256:df13abe416e37eb3db4722840dd479b00ba193ac6606e7902331dcea50f4f1f2 nginx:1.19],SizeBytes:133122553,},ContainerImage{Names:[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0 
k8s.gcr.io/e2e-test-images/agnhost:2.20],SizeBytes:113869866,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[gcr.io/k8s-staging-nfd/node-feature-discovery@sha256:5d116c2c340be665a2c8adc9aca7f91396bd5cbde4add4fdc8dab95d8db43425 gcr.io/k8s-staging-nfd/node-feature-discovery:v0.7.0],SizeBytes:108309584,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:ff02aacd9766d597883fabafc7ad604c719a57611db1bcc1564c69a45b000a55 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.17],SizeBytes:60684726,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[directxman12/k8s-prometheus-adapter-amd64@sha256:b63dc612e3cb73f79d2401a4516f794f9f0a83002600ca72e675e41baecff437 directxman12/k8s-prometheus-adapter-amd64:v0.6.0],SizeBytes:53267842,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:78e3393f5fd5ff6c1e5dada2478cfa456fb7164929e573cf9a87bf6532730679 k8s.gcr.io/sig-storage/csi-provisioner:v1.6.0],SizeBytes:48281550,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:2ffa647e8107cfd39e5f464e738dce014c9f5e51b108da36c3ab621048d0bbab k8s.gcr.io/sig-storage/csi-attacher:v2.2.0],SizeBytes:46131354,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:6c6a0332693a7c456378f6abd2bb40611826c1e1a733cadbdae2daab3125b71c k8s.gcr.io/sig-storage/csi-resizer:v0.5.0],SizeBytes:46041582,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:948a93bc3803d61dd66ab49f99d4cc657e87273682aec7dd5955a000fd17a7e5 nfvpe/sriov-device-plugin:latest localhost:30500/sriov-device-plugin:v3.3.1],SizeBytes:44392645,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonroot@sha256:4bd7ae247de5c988700233c5a4b55e804ffe90f8c66ae64853f1dae37b847213 gcr.io/kubernetes-e2e-test-images/nonroot:1.0],SizeBytes:42321438,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:9d07c391aeb1a9d02eb4343c113ed01825227c70c32b3cae861711f90191b0fd quay.io/coreos/kube-rbac-proxy:v0.4.1],SizeBytes:41317870,},ContainerImage{Names:[quay.io/coreos/prometheus-operator@sha256:a54e806fb27d2fb0251da4f3b2a3bb5320759af63a54a755788304775f2384a7 quay.io/coreos/prometheus-operator:v0.40.0],SizeBytes:38238457,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:0b4273abac4c241fa3d70aaf52b0d79a133d2737081f4a5c5dea4949f6c45dc3 
k8s.gcr.io/sig-storage/mock-driver:v3.1.0],SizeBytes:18687618,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:9622c6a6dac7499a055a382930f4de82905a3c5735c0753f7094115c9c871309 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v1.3.0],SizeBytes:16322467,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[quay.io/coreos/prometheus-config-reloader@sha256:c679a143b24b7731ad1577a9865aa3805426cbf1b25e30807b951dff68466ffd quay.io/coreos/prometheus-config-reloader:v0.40.0],SizeBytes:10131705,},ContainerImage{Names:[jimmidyson/configmap-reload@sha256:d107c7a235c266273b1c3502a391fec374430e5625539403d0de797fa9c556a2 jimmidyson/configmap-reload:v0.3.0],SizeBytes:9700438,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:6757579,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[alpine@sha256:36553b10a4947067b9fbb7d532951066293a68eae893beba1d9235f7d11a20ad alpine:3.12],SizeBytes:5581415,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:4753501,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 15 02:09:54.301: INFO: Logging kubelet events for node node1 May 15 02:09:54.304: INFO: Logging pods the kubelet thinks is on node node1 May 15 02:09:54.324: INFO: cmk-4s6dm started at 2021-05-15 00:18:54 +0000 UTC (0+2 container statuses recorded) May 15 02:09:54.324: INFO: Container nodereport ready: true, restart count 0 May 15 02:09:54.324: INFO: Container reconcile ready: true, restart count 0 May 15 02:09:54.324: INFO: kube-proxy-l7697 started at 2021-05-14 19:58:24 +0000 UTC (0+1 container statuses recorded) May 15 02:09:54.324: INFO: Container kube-proxy ready: true, restart count 2 May 15 02:09:54.324: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-4vxcc started at 2021-05-15 00:19:00 +0000 UTC (0+1 container statuses recorded) May 15 02:09:54.324: INFO: Container kube-sriovdp ready: true, restart count 0 May 15 02:09:54.324: INFO: nginx-proxy-node1 started at 2021-05-14 20:05:10 +0000 UTC (0+1 container statuses recorded) May 15 02:09:54.324: INFO: Container nginx-proxy ready: true, restart count 2 May 15 02:09:54.324: INFO: 
hostexec-node1-776qc started at 2021-05-15 02:04:37 +0000 UTC (0+1 container statuses recorded) May 15 02:09:54.324: INFO: Container agnhost-container ready: true, restart count 0 May 15 02:09:54.324: INFO: collectd-mrzps started at 2021-05-15 00:19:22 +0000 UTC (0+3 container statuses recorded) May 15 02:09:54.324: INFO: Container collectd ready: true, restart count 0 May 15 02:09:54.324: INFO: Container collectd-exporter ready: true, restart count 0 May 15 02:09:54.324: INFO: Container rbac-proxy ready: true, restart count 0 May 15 02:09:54.324: INFO: kube-flannel-hj8sj started at 2021-05-14 19:58:58 +0000 UTC (1+1 container statuses recorded) May 15 02:09:54.324: INFO: Init container install-cni ready: true, restart count 0 May 15 02:09:54.324: INFO: Container kube-flannel ready: true, restart count 1 May 15 02:09:54.324: INFO: node-feature-discovery-worker-bw8zg started at 2021-05-15 00:18:56 +0000 UTC (0+1 container statuses recorded) May 15 02:09:54.324: INFO: Container nfd-worker ready: true, restart count 0 May 15 02:09:54.324: INFO: pod-53ae386e-4922-4c5d-9f15-821f9964e4ce started at 2021-05-15 02:04:51 +0000 UTC (0+1 container statuses recorded) May 15 02:09:54.324: INFO: Container write-pod ready: false, restart count 0 May 15 02:09:54.324: INFO: kube-multus-ds-amd64-jhf4c started at 2021-05-14 19:59:07 +0000 UTC (0+1 container statuses recorded) May 15 02:09:54.324: INFO: Container kube-multus ready: true, restart count 1 May 15 02:09:54.324: INFO: prometheus-k8s-0 started at 2021-05-15 00:19:01 +0000 UTC (0+5 container statuses recorded) May 15 02:09:54.324: INFO: Container custom-metrics-apiserver ready: true, restart count 0 May 15 02:09:54.324: INFO: Container grafana ready: true, restart count 0 May 15 02:09:54.324: INFO: Container prometheus ready: true, restart count 26 May 15 02:09:54.324: INFO: Container prometheus-config-reloader ready: true, restart count 0 May 15 02:09:54.324: INFO: Container rules-configmap-reloader ready: true, restart count 0 May 15 02:09:54.324: INFO: node-exporter-flvqz started at 2021-05-15 00:18:55 +0000 UTC (0+2 container statuses recorded) May 15 02:09:54.324: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 15 02:09:54.324: INFO: Container node-exporter ready: true, restart count 0 W0515 02:09:54.336207 22 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. 
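Note: the "Node Info" entries above and below are single-line Go renderings of whole v1.Node objects; the parts that matter for these tests are Status.Conditions (NetworkUnavailable, MemoryPressure, DiskPressure, PIDPressure, Ready) and the Allocatable resource list. To re-check the same conditions outside the e2e run, a minimal client-go sketch could look like the following. The kubeconfig path is the one the suite itself logs; the program is an illustrative assumption, not part of the test framework.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Same kubeconfig the suite reports with ">>> kubeConfig:".
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(config)

	// List every node and print the condition set the log dumps inline.
	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		for _, c := range n.Status.Conditions {
			fmt.Printf("%s %s=%s (%s)\n", n.Name, c.Type, c.Status, c.Reason)
		}
	}
}

On the cluster state logged here, every node reports Ready=True, so the stress-test failures below are not explained by node conditions.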
May 15 02:09:54.388: INFO: Latency metrics for node node1 May 15 02:09:54.388: INFO: Logging node info for node node2 May 15 02:09:54.391: INFO: Node Info: &Node{ObjectMeta:{node2 /api/v1/nodes/node2 0bae98dc-2ebc-4849-b99e-7780a3bea71e 166390 0 2021-05-14 19:58:22 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.25.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:amd64 kubernetes.io/hostname:node2 kubernetes.io/os:linux] map[csi.volume.kubernetes.io/nodeid:{"csi-mock-csi-mock-volumes-1663":"csi-mock-csi-mock-volumes-1663","csi-mock-csi-mock-volumes-3052":"csi-mock-csi-mock-volumes-3052","csi-mock-csi-mock-volumes-5200":"csi-mock-csi-mock-volumes-5200","csi-mock-csi-mock-volumes-5678":"csi-mock-csi-mock-volumes-5678","csi-mock-csi-mock-volumes-8760":"csi-mock-csi-mock-volumes-8760","csi-mock-csi-mock-volumes-9624":"csi-mock-csi-mock-volumes-9624"} flannel.alpha.coreos.com/backend-data:{"VtepMAC":"c6:18:ed:95:bb:1a"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.208 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: 
nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-hardware_multithreading,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.7.0 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-05-14 19:58:22 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2021-05-14 19:59:03 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2021-05-14 20:06:00 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage
-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2021-05-14 20:08:46 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {e2e.test Update v1 2021-05-15 01:37:46 +0000 UTC FieldsV1 {"f:status":{"f:capacity":{"f:example.com/fakePTSRes":{},"f:example.com/fakecpu":{}}}}} {kube-controller-manager Update v1 2021-05-15 01:52:07 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.4.0/24\"":{}}}}} {kubelet Update v1 2021-05-15 01:52:21 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:csi.volume.kubernetes.io/nodeid":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:io.kubernetes.storage.mock/node":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cmk.intel.com/exclusive-cores":{},"f:cpu":{},"f:ephemeral-storage":{},"f:example.com/fakePTSRes":{},"f:example.com/fakecpu":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:intel.com/intel_sriov_netdevice":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:intel.com/intel_sriov_netdevice":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.4.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},example.com/fakePTSRes: {{10 0} {} 10 DecimalSI},example.com/fakecpu: {{1 3} {} 1k DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269633024 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},example.com/fakePTSRes: {{10 0} {} 10 DecimalSI},example.com/fakecpu: 
{{1 3} {} 1k DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884632576 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-14 20:01:27 +0000 UTC,LastTransitionTime:2021-05-14 20:01:27 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-15 02:09:46 +0000 UTC,LastTransitionTime:2021-05-14 19:58:22 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-15 02:09:46 +0000 UTC,LastTransitionTime:2021-05-14 19:58:22 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-15 02:09:46 +0000 UTC,LastTransitionTime:2021-05-14 19:58:22 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-15 02:09:46 +0000 UTC,LastTransitionTime:2021-05-14 19:59:04 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.208,},NodeAddress{Type:Hostname,Address:node2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:a3f22fbf9e534ba1819f7a549414a8a6,SystemUUID:80B3CD56-852F-E711-906E-0017A4403562,BootID:166b6e45-ba8b-4b89-80b0-befc9a0152b8,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[localhost:30500/barometer-collectd@sha256:869f7b3516c269b43448f1227c57d536e8a4cf723eeef3b5f8b8e224ecbcfd8e localhost:30500/barometer-collectd:stable],SizeBytes:1464261626,},ContainerImage{Names:[cmk:v1.5.1],SizeBytes:726663003,},ContainerImage{Names:[localhost:30500/cmk@sha256:f759b012c29126f880575ac543d09301d45f0b2b9d0f5329849ea40e65017dde localhost:30500/cmk:v1.5.1],SizeBytes:726663003,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[gluster/glusterdynamic-provisioner@sha256:90067cb05a7d217651e84576935934fd9dcff8e6bcdcbaa416bbf36fcd09dbd1 gluster/glusterdynamic-provisioner:v1.0],SizeBytes:373281573,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f kubernetesui/dashboard-amd64:v2.1.0],SizeBytes:225733746,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:195659796,},ContainerImage{Names:[nginx@sha256:df13abe416e37eb3db4722840dd479b00ba193ac6606e7902331dcea50f4f1f2 nginx:1.19],SizeBytes:133122553,},ContainerImage{Names:[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 
httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0 k8s.gcr.io/e2e-test-images/agnhost:2.20],SizeBytes:113869866,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[gcr.io/k8s-staging-nfd/node-feature-discovery@sha256:5d116c2c340be665a2c8adc9aca7f91396bd5cbde4add4fdc8dab95d8db43425 gcr.io/k8s-staging-nfd/node-feature-discovery:v0.7.0],SizeBytes:108309584,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:78e3393f5fd5ff6c1e5dada2478cfa456fb7164929e573cf9a87bf6532730679 k8s.gcr.io/sig-storage/csi-provisioner:v1.6.0],SizeBytes:48281550,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:2ffa647e8107cfd39e5f464e738dce014c9f5e51b108da36c3ab621048d0bbab k8s.gcr.io/sig-storage/csi-attacher:v2.2.0],SizeBytes:46131354,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:948a93bc3803d61dd66ab49f99d4cc657e87273682aec7dd5955a000fd17a7e5 localhost:30500/sriov-device-plugin:v3.3.1],SizeBytes:44392645,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonroot@sha256:4bd7ae247de5c988700233c5a4b55e804ffe90f8c66ae64853f1dae37b847213 gcr.io/kubernetes-e2e-test-images/nonroot:1.0],SizeBytes:42321438,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[localhost:30500/tas-controller@sha256:e3157cfba16d361ffec06306dd0154c7dca1931cbc4569e3c5822e30e311948b localhost:30500/tas-controller:0.1],SizeBytes:22922439,},ContainerImage{Names:[localhost:30500/tas-extender@sha256:a43c7fdd150533238a300ad84ac906e551111f9b57273afcb8781ee675fd23b3 localhost:30500/tas-extender:0.1],SizeBytes:21320903,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:0b4273abac4c241fa3d70aaf52b0d79a133d2737081f4a5c5dea4949f6c45dc3 k8s.gcr.io/sig-storage/mock-driver:v3.1.0],SizeBytes:18687618,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:9622c6a6dac7499a055a382930f4de82905a3c5735c0753f7094115c9c871309 
k8s.gcr.io/sig-storage/csi-node-driver-registrar:v1.3.0],SizeBytes:16322467,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:4753501,},ContainerImage{Names:[gcr.io/authenticated-image-pulling/alpine@sha256:7ff177862cb50c602bfe81f805969412e619c054a2bbead977d0c276988aa4a0 gcr.io/authenticated-image-pulling/alpine:3.7],SizeBytes:4206620,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 15 02:09:54.392: INFO: Logging kubelet events for node node2 May 15 02:09:54.395: INFO: Logging pods the kubelet thinks is on node node2 May 15 02:09:54.413: INFO: nginx-proxy-node2 started at 2021-05-14 20:05:10 +0000 UTC (0+1 container statuses recorded) May 15 02:09:54.413: INFO: Container nginx-proxy ready: true, restart count 2 May 15 02:09:54.413: INFO: kube-flannel-rqcwp started at 2021-05-14 19:58:58 +0000 UTC (1+1 container statuses recorded) May 15 02:09:54.413: INFO: Init container install-cni ready: true, restart count 1 May 15 02:09:54.413: INFO: Container kube-flannel ready: true, restart count 4 May 15 02:09:54.413: INFO: node-exporter-rnd5f started at 2021-05-14 20:09:56 +0000 UTC (0+2 container statuses recorded) May 15 02:09:54.413: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 15 02:09:54.413: INFO: Container node-exporter ready: true, restart count 0 May 15 02:09:54.413: INFO: tas-telemetry-aware-scheduling-575ccbc9d4-l5hlq started at 2021-05-14 20:12:48 +0000 UTC (0+2 container statuses recorded) May 15 02:09:54.413: INFO: Container tas-controller ready: true, restart count 0 May 15 02:09:54.413: INFO: Container tas-extender ready: true, restart count 0 May 15 02:09:54.413: INFO: collectd-xzrgs started at 2021-05-14 20:15:36 +0000 UTC (0+3 container statuses recorded) May 15 02:09:54.413: INFO: Container collectd ready: true, restart count 0 May 15 02:09:54.413: INFO: Container collectd-exporter ready: true, restart count 0 May 15 02:09:54.413: INFO: Container rbac-proxy ready: true, restart count 0 May 15 02:09:54.413: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-2c2pw started at 2021-05-14 20:06:38 +0000 UTC (0+1 container statuses recorded) May 15 02:09:54.413: INFO: Container kube-sriovdp ready: true, restart count 0 May 15 02:09:54.413: INFO: node-feature-discovery-worker-76m6b started at 2021-05-14 20:05:42 +0000 UTC (0+1 container statuses recorded) May 15 02:09:54.413: INFO: Container nfd-worker ready: true, restart count 0 May 15 02:09:54.413: INFO: cmk-d2qwf started at 2021-05-14 20:09:04 +0000 UTC 
(0+2 container statuses recorded) May 15 02:09:54.413: INFO: Container nodereport ready: true, restart count 0 May 15 02:09:54.413: INFO: Container reconcile ready: true, restart count 0 May 15 02:09:54.413: INFO: cmk-webhook-6c9d5f8578-pjgxh started at 2021-05-14 20:09:04 +0000 UTC (0+1 container statuses recorded) May 15 02:09:54.413: INFO: Container cmk-webhook ready: true, restart count 0 May 15 02:09:54.413: INFO: cmk-init-discover-node2-j75ff started at 2021-05-14 20:08:41 +0000 UTC (0+3 container statuses recorded) May 15 02:09:54.413: INFO: Container discover ready: false, restart count 0 May 15 02:09:54.413: INFO: Container init ready: false, restart count 0 May 15 02:09:54.413: INFO: Container install ready: false, restart count 0 May 15 02:09:54.413: INFO: hostexec-node2-xp5pj started at 2021-05-15 02:04:43 +0000 UTC (0+1 container statuses recorded) May 15 02:09:54.413: INFO: Container agnhost-container ready: true, restart count 0 May 15 02:09:54.413: INFO: kube-proxy-t524z started at 2021-05-14 19:58:24 +0000 UTC (0+1 container statuses recorded) May 15 02:09:54.413: INFO: Container kube-proxy ready: true, restart count 2 May 15 02:09:54.413: INFO: kube-multus-ds-amd64-n7cb2 started at 2021-05-14 19:59:07 +0000 UTC (0+1 container statuses recorded) May 15 02:09:54.413: INFO: Container kube-multus ready: true, restart count 1 May 15 02:09:54.413: INFO: kubernetes-dashboard-86c6f9df5b-ndntg started at 2021-05-14 19:59:31 +0000 UTC (0+1 container statuses recorded) May 15 02:09:54.413: INFO: Container kubernetes-dashboard ready: true, restart count 2 W0515 02:09:54.427361 22 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. May 15 02:09:54.466: INFO: Latency metrics for node node2 May 15 02:09:54.466: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-2514" for this suite. 
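Note: both failures that follow report "timed out waiting for the condition". That string is the generic wait.ErrWaitTimeout from k8s.io/apimachinery/pkg/util/wait: a polled condition, here "all test pods are Running", never returned true within the 5m0s budget, so the poll loop gave up without a more specific cause. A minimal sketch of such a loop is shown below; it is illustrative only. The helper name, the 2-second interval, and the hard-coded namespace are assumptions, not the framework's actual code.

package main

import (
	"context"
	"fmt"
	"time"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForPodsRunning polls until every pod in ns is Running or the timeout
// expires. On timeout, wait.PollImmediate returns wait.ErrWaitTimeout, whose
// message is exactly "timed out waiting for the condition".
func waitForPodsRunning(cs kubernetes.Interface, ns string, timeout time.Duration) error {
	return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		pods, err := cs.CoreV1().Pods(ns).List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			return false, err // hard error: stop polling immediately
		}
		for _, p := range pods.Items {
			if p.Status.Phase != v1.PodRunning {
				return false, nil // condition not met yet: poll again
			}
		}
		return true, nil
	})
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(config)
	// Namespace taken from the second failure below; 5m0s mirrors its budget.
	if err := waitForPodsRunning(cs, "persistent-local-volumes-test-8617", 5*time.Minute); err != nil {
		fmt.Println(err) // on timeout prints: timed out waiting for the condition
	}
}

Against a cluster where pods sharing one local PV stay Pending, a loop like this prints the same error text the suite records at persistent_volumes-local.go:683.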
• Failure [316.553 seconds]
[sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Stress with local volumes [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:427
    should be able to process many pods and reuse local volumes [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:517

    May 15 02:09:48.660: some pods failed to complete within 5m0s
    Unexpected error:
        <*errors.errorString | 0xc0002bc200>: {
            s: "timed out waiting for the condition",
        }
        timed out waiting for the condition
    occurred

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:610
------------------------------
{"msg":"FAILED [sig-storage] PersistentVolumes-local Stress with local volumes [Serial] should be able to process many pods and reuse local volumes","total":17,"completed":0,"skipped":3411,"failed":1,"failures":["[sig-storage] PersistentVolumes-local Stress with local volumes [Serial] should be able to process many pods and reuse local volumes"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] PersistentVolumes-local Pods sharing a single local PV [Serial]
  all pods should be running
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:642
[BeforeEach] [sig-storage] PersistentVolumes-local
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 15 02:09:54.475: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename persistent-local-volumes-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] PersistentVolumes-local
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:155
[BeforeEach] Pods sharing a single local PV [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:619
[It] all pods should be running
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:642
STEP: Create a PVC
STEP: Create 50 pods to use this PVC
STEP: Wait for all pods are running
May 15 02:14:55.042: FAIL: Some pods are not running within 5m0s
Unexpected error:
    <*errors.errorString | 0xc0002bc200>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/storage.glob..func20.7.3()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:683 +0x748
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc004283680)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x345
k8s.io/kubernetes/test/e2e.TestE2E(0xc004283680)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:145 +0x2b
testing.tRunner(0xc004283680, 0x4de37a0)
	/usr/local/go/src/testing/testing.go:1123 +0xef
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1168 +0x2b3
[AfterEach] Pods sharing a single local PV [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:633
STEP: Clean PV local-pvxh4dq
[AfterEach] [sig-storage] PersistentVolumes-local
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
STEP: Collecting events from namespace "persistent-local-volumes-test-8617".
STEP: Found 371 events.
May 15 02:14:55.063: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for pod-0269144a-4370-4704-a1c2-c0f407d98f0a: { } Scheduled: Successfully assigned persistent-local-volumes-test-8617/pod-0269144a-4370-4704-a1c2-c0f407d98f0a to node1
May 15 02:14:55.063: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for pod-06eab932-084d-490d-a3d9-59feee07adaf: { } Scheduled: Successfully assigned persistent-local-volumes-test-8617/pod-06eab932-084d-490d-a3d9-59feee07adaf to node1
May 15 02:14:55.063: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for pod-0780a016-d244-4bbb-8f47-2f297e59fd58: { } Scheduled: Successfully assigned persistent-local-volumes-test-8617/pod-0780a016-d244-4bbb-8f47-2f297e59fd58 to node1
May 15 02:14:55.063: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for pod-088478ea-3873-48e9-8e63-839f8c352771: { } Scheduled: Successfully assigned persistent-local-volumes-test-8617/pod-088478ea-3873-48e9-8e63-839f8c352771 to node1
May 15 02:14:55.063: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for pod-0a52f19d-bad1-4ee5-9a99-98f71ef763d7: { } Scheduled: Successfully assigned persistent-local-volumes-test-8617/pod-0a52f19d-bad1-4ee5-9a99-98f71ef763d7 to node1
May 15 02:14:55.063: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for pod-1380abd4-33aa-487c-aff8-089358ae5a39: { } Scheduled: Successfully assigned persistent-local-volumes-test-8617/pod-1380abd4-33aa-487c-aff8-089358ae5a39 to node1
May 15 02:14:55.063: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for pod-182bb37f-fbf1-482a-8aa8-03563d7aefd0: { } Scheduled: Successfully assigned persistent-local-volumes-test-8617/pod-182bb37f-fbf1-482a-8aa8-03563d7aefd0 to node1
May 15 02:14:55.063: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for pod-26feecc3-b990-46b9-9186-2f6cfea28725: { } Scheduled: Successfully assigned persistent-local-volumes-test-8617/pod-26feecc3-b990-46b9-9186-2f6cfea28725 to node1
May 15 02:14:55.063: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for pod-2f132e16-9b60-4044-8856-1c537d83e924: { } Scheduled: Successfully assigned persistent-local-volumes-test-8617/pod-2f132e16-9b60-4044-8856-1c537d83e924 to node1
May 15 02:14:55.063: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for pod-3b141899-67ee-47dd-a614-717a00fdc43e: { } Scheduled: Successfully assigned persistent-local-volumes-test-8617/pod-3b141899-67ee-47dd-a614-717a00fdc43e to node1
May 15 02:14:55.063: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for pod-3fd03fb4-184e-412a-b852-6979ed7a93d2: { } Scheduled: Successfully assigned persistent-local-volumes-test-8617/pod-3fd03fb4-184e-412a-b852-6979ed7a93d2 to node1
May 15 02:14:55.063: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for pod-46816fb0-59fd-4b80-86f6-e5de68239a20: { } Scheduled: Successfully assigned persistent-local-volumes-test-8617/pod-46816fb0-59fd-4b80-86f6-e5de68239a20 to node1
May 15 02:14:55.063: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for pod-57a1b9d5-6d9d-47e8-9b20-995213945154: { } Scheduled: Successfully assigned persistent-local-volumes-test-8617/pod-57a1b9d5-6d9d-47e8-9b20-995213945154 to node1
May 15 02:14:55.063: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for pod-5a5ebf17-19c6-430e-a7df-2dcbd0f4dfe8: { } Scheduled: Successfully assigned persistent-local-volumes-test-8617/pod-5a5ebf17-19c6-430e-a7df-2dcbd0f4dfe8 to node1
May 15 02:14:55.063: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for pod-5e0b2962-1beb-4f0e-b00d-3c3fb5ff4f67: { } Scheduled: Successfully assigned persistent-local-volumes-test-8617/pod-5e0b2962-1beb-4f0e-b00d-3c3fb5ff4f67 to node1
May 15 02:14:55.063: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for pod-6281ba5b-4d09-4cb6-9b6a-c82141ad2e06: { } Scheduled: Successfully assigned persistent-local-volumes-test-8617/pod-6281ba5b-4d09-4cb6-9b6a-c82141ad2e06 to node1
May 15 02:14:55.063: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for pod-63421429-f7ce-4603-ae23-470172db27ac: { } Scheduled: Successfully assigned persistent-local-volumes-test-8617/pod-63421429-f7ce-4603-ae23-470172db27ac to node1
May 15 02:14:55.063: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for pod-6345af86-d298-43b9-a648-e4bd833ab81c: { } Scheduled: Successfully assigned persistent-local-volumes-test-8617/pod-6345af86-d298-43b9-a648-e4bd833ab81c to node1
May 15 02:14:55.063: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for pod-64d96337-bd3e-4aca-8392-d465ee5a1ed7: { } Scheduled: Successfully assigned persistent-local-volumes-test-8617/pod-64d96337-bd3e-4aca-8392-d465ee5a1ed7 to node1
May 15 02:14:55.063: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for pod-659d7a78-330f-41c0-9363-c970094088ab: { } Scheduled: Successfully assigned persistent-local-volumes-test-8617/pod-659d7a78-330f-41c0-9363-c970094088ab to node1
May 15 02:14:55.063: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for pod-67841c2d-5cbe-4e06-9ae9-e4099e36c13d: { } Scheduled: Successfully assigned persistent-local-volumes-test-8617/pod-67841c2d-5cbe-4e06-9ae9-e4099e36c13d to node1
May 15 02:14:55.063: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for pod-68664722-e47b-48b2-b06e-91155e4f3594: { } Scheduled: Successfully assigned persistent-local-volumes-test-8617/pod-68664722-e47b-48b2-b06e-91155e4f3594 to node1
May 15 02:14:55.063: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for pod-6994c576-03e5-4748-87d6-f7d52695b943: { } Scheduled: Successfully assigned persistent-local-volumes-test-8617/pod-6994c576-03e5-4748-87d6-f7d52695b943 to node1
May 15 02:14:55.063: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for pod-6a079afc-c913-4898-aef2-69394c05288d: { } Scheduled: Successfully assigned persistent-local-volumes-test-8617/pod-6a079afc-c913-4898-aef2-69394c05288d to node1
May 15 02:14:55.063: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for pod-6f552dde-8b50-4c1b-9f75-29301f222304: { } Scheduled: Successfully assigned persistent-local-volumes-test-8617/pod-6f552dde-8b50-4c1b-9f75-29301f222304 to node1
May 15 02:14:55.063: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for pod-7b61d18a-43e9-478f-9427-9173e0e8b16b: { } Scheduled: Successfully assigned persistent-local-volumes-test-8617/pod-7b61d18a-43e9-478f-9427-9173e0e8b16b to node1
May 15 02:14:55.063: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for pod-7b8d60b1-8f1e-499c-b17f-816abbe56315: { } Scheduled: Successfully assigned persistent-local-volumes-test-8617/pod-7b8d60b1-8f1e-499c-b17f-816abbe56315 to node1
May 15 02:14:55.063: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for pod-7c3e1e12-24c7-4dbb-968a-c5a5ea611b09: { } Scheduled: Successfully assigned persistent-local-volumes-test-8617/pod-7c3e1e12-24c7-4dbb-968a-c5a5ea611b09 to node1
May 15 02:14:55.063: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for pod-7ff0a3c2-aa1f-43ee-95ba-bfe31b61df87: { } Scheduled: Successfully assigned persistent-local-volumes-test-8617/pod-7ff0a3c2-aa1f-43ee-95ba-bfe31b61df87 to node1
May 15 02:14:55.063: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for pod-85dcad4d-875a-461c-be23-212028dd9cc4: { } Scheduled: Successfully assigned persistent-local-volumes-test-8617/pod-85dcad4d-875a-461c-be23-212028dd9cc4 to node1
May 15 02:14:55.063: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for pod-8832974e-719a-4bac-990d-6917f5e29ff6: { } Scheduled: Successfully assigned persistent-local-volumes-test-8617/pod-8832974e-719a-4bac-990d-6917f5e29ff6 to node1
May 15 02:14:55.063: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for pod-8e6b4f06-4653-426d-8861-79606a1bc5ac: { } Scheduled: Successfully assigned persistent-local-volumes-test-8617/pod-8e6b4f06-4653-426d-8861-79606a1bc5ac to node1
May 15 02:14:55.063: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for pod-944e7ffd-8685-41e6-8af8-eff3f2685db3: { } Scheduled: Successfully assigned persistent-local-volumes-test-8617/pod-944e7ffd-8685-41e6-8af8-eff3f2685db3 to node1
May 15 02:14:55.063: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for pod-9741d55e-19d2-4907-9709-31aab1dc27a7: { } Scheduled: Successfully assigned persistent-local-volumes-test-8617/pod-9741d55e-19d2-4907-9709-31aab1dc27a7 to node1
May 15 02:14:55.063: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for pod-99a28b03-2a0e-4f4e-93a6-4f8f1349c374: { } Scheduled: Successfully assigned persistent-local-volumes-test-8617/pod-99a28b03-2a0e-4f4e-93a6-4f8f1349c374 to node1
May 15 02:14:55.063: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for pod-9d1c8f1b-b2c2-4604-9cae-7ccae3143396: { } Scheduled: Successfully assigned persistent-local-volumes-test-8617/pod-9d1c8f1b-b2c2-4604-9cae-7ccae3143396 to node1
May 15 02:14:55.063: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for pod-9d74f724-4c10-4c23-9a96-6aed362ad077: { } Scheduled: Successfully assigned persistent-local-volumes-test-8617/pod-9d74f724-4c10-4c23-9a96-6aed362ad077 to node1
May 15 02:14:55.063: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for pod-a53b4edb-f546-4ebf-b7aa-b0e273692384: { } Scheduled: Successfully assigned persistent-local-volumes-test-8617/pod-a53b4edb-f546-4ebf-b7aa-b0e273692384 to node1
May 15 02:14:55.063: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for pod-aa71f4e5-b094-4438-a0d0-11bf0093a83d: { } Scheduled: Successfully assigned persistent-local-volumes-test-8617/pod-aa71f4e5-b094-4438-a0d0-11bf0093a83d to node1
May 15 02:14:55.063: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for pod-adf20463-6227-4b36-9ec9-be2637a72af1: { } Scheduled: Successfully assigned persistent-local-volumes-test-8617/pod-adf20463-6227-4b36-9ec9-be2637a72af1 to node1
May 15 02:14:55.063: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for pod-baf0825d-cd75-468c-b208-e00c655b3984: { } Scheduled: Successfully assigned persistent-local-volumes-test-8617/pod-baf0825d-cd75-468c-b208-e00c655b3984 to node1
May 15 02:14:55.063: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for pod-cccb383e-37ae-496b-bc61-e84155c15470: { } Scheduled: Successfully assigned persistent-local-volumes-test-8617/pod-cccb383e-37ae-496b-bc61-e84155c15470 to node1
May 15 02:14:55.063: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for pod-d414a6bb-0f77-423b-bbe3-b10a586ff867: { } Scheduled: Successfully assigned persistent-local-volumes-test-8617/pod-d414a6bb-0f77-423b-bbe3-b10a586ff867 to node1
May 15 02:14:55.063: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for
pod-d9940773-2114-4c06-a526-c795dec59ce0: { } Scheduled: Successfully assigned persistent-local-volumes-test-8617/pod-d9940773-2114-4c06-a526-c795dec59ce0 to node1 May 15 02:14:55.063: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for pod-e5db5676-3fa1-43b8-99da-988e664767e5: { } Scheduled: Successfully assigned persistent-local-volumes-test-8617/pod-e5db5676-3fa1-43b8-99da-988e664767e5 to node1 May 15 02:14:55.063: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for pod-ec39befc-6a99-45df-8eb8-d594af0627d8: { } Scheduled: Successfully assigned persistent-local-volumes-test-8617/pod-ec39befc-6a99-45df-8eb8-d594af0627d8 to node1 May 15 02:14:55.063: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for pod-efd8f062-dcf7-4e99-8049-16a688458438: { } Scheduled: Successfully assigned persistent-local-volumes-test-8617/pod-efd8f062-dcf7-4e99-8049-16a688458438 to node1 May 15 02:14:55.063: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for pod-f5252834-4a86-4465-a5e9-5e8afac874af: { } Scheduled: Successfully assigned persistent-local-volumes-test-8617/pod-f5252834-4a86-4465-a5e9-5e8afac874af to node1 May 15 02:14:55.063: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for pod-f75db98e-2a08-4812-91d2-70d06c53eff2: { } Scheduled: Successfully assigned persistent-local-volumes-test-8617/pod-f75db98e-2a08-4812-91d2-70d06c53eff2 to node1 May 15 02:14:55.063: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for pod-fb25a31c-ab40-4711-bcf9-eff4f0db5a45: { } Scheduled: Successfully assigned persistent-local-volumes-test-8617/pod-fb25a31c-ab40-4711-bcf9-eff4f0db5a45 to node1 May 15 02:14:55.063: INFO: At 2021-05-15 02:09:57 +0000 UTC - event for pod-6a079afc-c913-4898-aef2-69394c05288d: {multus } AddedInterface: Add eth0 [10.244.3.212/24] May 15 02:14:55.063: INFO: At 2021-05-15 02:09:57 +0000 UTC - event for pod-6a079afc-c913-4898-aef2-69394c05288d: {kubelet node1} Pulling: Pulling image "docker.io/library/busybox:1.29" May 15 02:14:55.063: INFO: At 2021-05-15 02:09:57 +0000 UTC - event for pod-944e7ffd-8685-41e6-8af8-eff3f2685db3: {kubelet node1} Pulling: Pulling image "docker.io/library/busybox:1.29" May 15 02:14:55.063: INFO: At 2021-05-15 02:09:57 +0000 UTC - event for pod-944e7ffd-8685-41e6-8af8-eff3f2685db3: {multus } AddedInterface: Add eth0 [10.244.3.211/24] May 15 02:14:55.063: INFO: At 2021-05-15 02:09:58 +0000 UTC - event for pod-944e7ffd-8685-41e6-8af8-eff3f2685db3: {kubelet node1} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit May 15 02:14:55.063: INFO: At 2021-05-15 02:09:58 +0000 UTC - event for pod-944e7ffd-8685-41e6-8af8-eff3f2685db3: {kubelet node1} Failed: Error: ErrImagePull May 15 02:14:55.063: INFO: At 2021-05-15 02:09:59 +0000 UTC - event for pod-57a1b9d5-6d9d-47e8-9b20-995213945154: {multus } AddedInterface: Add eth0 [10.244.3.213/24] May 15 02:14:55.063: INFO: At 2021-05-15 02:09:59 +0000 UTC - event for pod-57a1b9d5-6d9d-47e8-9b20-995213945154: {kubelet node1} Pulling: Pulling image "docker.io/library/busybox:1.29" May 15 02:14:55.063: INFO: At 2021-05-15 02:09:59 +0000 UTC - event for pod-6a079afc-c913-4898-aef2-69394c05288d: {kubelet node1} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. 
You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit May 15 02:14:55.063: INFO: At 2021-05-15 02:09:59 +0000 UTC - event for pod-6a079afc-c913-4898-aef2-69394c05288d: {kubelet node1} Failed: Error: ErrImagePull May 15 02:14:55.063: INFO: At 2021-05-15 02:09:59 +0000 UTC - event for pod-944e7ffd-8685-41e6-8af8-eff3f2685db3: {kubelet node1} Failed: Error: ImagePullBackOff May 15 02:14:55.063: INFO: At 2021-05-15 02:09:59 +0000 UTC - event for pod-944e7ffd-8685-41e6-8af8-eff3f2685db3: {kubelet node1} BackOff: Back-off pulling image "docker.io/library/busybox:1.29" May 15 02:14:55.063: INFO: At 2021-05-15 02:09:59 +0000 UTC - event for pod-d414a6bb-0f77-423b-bbe3-b10a586ff867: {kubelet node1} Pulling: Pulling image "docker.io/library/busybox:1.29" May 15 02:14:55.063: INFO: At 2021-05-15 02:09:59 +0000 UTC - event for pod-d414a6bb-0f77-423b-bbe3-b10a586ff867: {multus } AddedInterface: Add eth0 [10.244.3.214/24] May 15 02:14:55.063: INFO: At 2021-05-15 02:10:00 +0000 UTC - event for pod-57a1b9d5-6d9d-47e8-9b20-995213945154: {kubelet node1} Failed: Error: ErrImagePull May 15 02:14:55.063: INFO: At 2021-05-15 02:10:00 +0000 UTC - event for pod-57a1b9d5-6d9d-47e8-9b20-995213945154: {kubelet node1} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit May 15 02:14:55.063: INFO: At 2021-05-15 02:10:01 +0000 UTC - event for pod-6a079afc-c913-4898-aef2-69394c05288d: {kubelet node1} SandboxChanged: Pod sandbox changed, it will be killed and re-created. May 15 02:14:55.063: INFO: At 2021-05-15 02:10:01 +0000 UTC - event for pod-d414a6bb-0f77-423b-bbe3-b10a586ff867: {kubelet node1} Failed: Error: ErrImagePull May 15 02:14:55.063: INFO: At 2021-05-15 02:10:01 +0000 UTC - event for pod-d414a6bb-0f77-423b-bbe3-b10a586ff867: {kubelet node1} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit May 15 02:14:55.063: INFO: At 2021-05-15 02:10:01 +0000 UTC - event for pod-d9940773-2114-4c06-a526-c795dec59ce0: {multus } AddedInterface: Add eth0 [10.244.3.215/24] May 15 02:14:55.063: INFO: At 2021-05-15 02:10:01 +0000 UTC - event for pod-d9940773-2114-4c06-a526-c795dec59ce0: {kubelet node1} Pulling: Pulling image "docker.io/library/busybox:1.29" May 15 02:14:55.063: INFO: At 2021-05-15 02:10:02 +0000 UTC - event for pod-7ff0a3c2-aa1f-43ee-95ba-bfe31b61df87: {kubelet node1} Pulling: Pulling image "docker.io/library/busybox:1.29" May 15 02:14:55.063: INFO: At 2021-05-15 02:10:02 +0000 UTC - event for pod-7ff0a3c2-aa1f-43ee-95ba-bfe31b61df87: {multus } AddedInterface: Add eth0 [10.244.3.216/24] May 15 02:14:55.063: INFO: At 2021-05-15 02:10:02 +0000 UTC - event for pod-d9940773-2114-4c06-a526-c795dec59ce0: {kubelet node1} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. 
You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit May 15 02:14:55.063: INFO: At 2021-05-15 02:10:02 +0000 UTC - event for pod-d9940773-2114-4c06-a526-c795dec59ce0: {kubelet node1} Failed: Error: ErrImagePull May 15 02:14:55.063: INFO: At 2021-05-15 02:10:03 +0000 UTC - event for pod-57a1b9d5-6d9d-47e8-9b20-995213945154: {kubelet node1} SandboxChanged: Pod sandbox changed, it will be killed and re-created. May 15 02:14:55.063: INFO: At 2021-05-15 02:10:03 +0000 UTC - event for pod-7ff0a3c2-aa1f-43ee-95ba-bfe31b61df87: {kubelet node1} Failed: Error: ImagePullBackOff May 15 02:14:55.063: INFO: At 2021-05-15 02:10:03 +0000 UTC - event for pod-7ff0a3c2-aa1f-43ee-95ba-bfe31b61df87: {kubelet node1} BackOff: Back-off pulling image "docker.io/library/busybox:1.29" May 15 02:14:55.063: INFO: At 2021-05-15 02:10:03 +0000 UTC - event for pod-7ff0a3c2-aa1f-43ee-95ba-bfe31b61df87: {kubelet node1} Failed: Error: ErrImagePull May 15 02:14:55.063: INFO: At 2021-05-15 02:10:03 +0000 UTC - event for pod-7ff0a3c2-aa1f-43ee-95ba-bfe31b61df87: {kubelet node1} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit May 15 02:14:55.063: INFO: At 2021-05-15 02:10:03 +0000 UTC - event for pod-9d1c8f1b-b2c2-4604-9cae-7ccae3143396: {multus } AddedInterface: Add eth0 [10.244.3.217/24] May 15 02:14:55.063: INFO: At 2021-05-15 02:10:03 +0000 UTC - event for pod-9d1c8f1b-b2c2-4604-9cae-7ccae3143396: {kubelet node1} Pulling: Pulling image "docker.io/library/busybox:1.29" May 15 02:14:55.064: INFO: At 2021-05-15 02:10:03 +0000 UTC - event for pod-d414a6bb-0f77-423b-bbe3-b10a586ff867: {kubelet node1} Failed: Error: ImagePullBackOff May 15 02:14:55.064: INFO: At 2021-05-15 02:10:03 +0000 UTC - event for pod-d414a6bb-0f77-423b-bbe3-b10a586ff867: {kubelet node1} BackOff: Back-off pulling image "docker.io/library/busybox:1.29" May 15 02:14:55.064: INFO: At 2021-05-15 02:10:03 +0000 UTC - event for pod-d9940773-2114-4c06-a526-c795dec59ce0: {kubelet node1} BackOff: Back-off pulling image "docker.io/library/busybox:1.29" May 15 02:14:55.064: INFO: At 2021-05-15 02:10:03 +0000 UTC - event for pod-d9940773-2114-4c06-a526-c795dec59ce0: {kubelet node1} Failed: Error: ImagePullBackOff May 15 02:14:55.064: INFO: At 2021-05-15 02:10:04 +0000 UTC - event for pod-26feecc3-b990-46b9-9186-2f6cfea28725: {kubelet node1} Pulling: Pulling image "docker.io/library/busybox:1.29" May 15 02:14:55.064: INFO: At 2021-05-15 02:10:04 +0000 UTC - event for pod-26feecc3-b990-46b9-9186-2f6cfea28725: {multus } AddedInterface: Add eth0 [10.244.3.218/24] May 15 02:14:55.064: INFO: At 2021-05-15 02:10:04 +0000 UTC - event for pod-9d1c8f1b-b2c2-4604-9cae-7ccae3143396: {kubelet node1} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. 
You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit May 15 02:14:55.064: INFO: At 2021-05-15 02:10:04 +0000 UTC - event for pod-9d1c8f1b-b2c2-4604-9cae-7ccae3143396: {kubelet node1} Failed: Error: ErrImagePull May 15 02:14:55.064: INFO: At 2021-05-15 02:10:05 +0000 UTC - event for pod-26feecc3-b990-46b9-9186-2f6cfea28725: {kubelet node1} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit May 15 02:14:55.064: INFO: At 2021-05-15 02:10:05 +0000 UTC - event for pod-26feecc3-b990-46b9-9186-2f6cfea28725: {kubelet node1} Failed: Error: ErrImagePull May 15 02:14:55.064: INFO: At 2021-05-15 02:10:05 +0000 UTC - event for pod-9d1c8f1b-b2c2-4604-9cae-7ccae3143396: {kubelet node1} BackOff: Back-off pulling image "docker.io/library/busybox:1.29" May 15 02:14:55.064: INFO: At 2021-05-15 02:10:05 +0000 UTC - event for pod-9d1c8f1b-b2c2-4604-9cae-7ccae3143396: {kubelet node1} Failed: Error: ImagePullBackOff May 15 02:14:55.064: INFO: At 2021-05-15 02:10:06 +0000 UTC - event for pod-1380abd4-33aa-487c-aff8-089358ae5a39: {multus } AddedInterface: Add eth0 [10.244.3.219/24] May 15 02:14:55.064: INFO: At 2021-05-15 02:10:06 +0000 UTC - event for pod-1380abd4-33aa-487c-aff8-089358ae5a39: {kubelet node1} Pulling: Pulling image "docker.io/library/busybox:1.29" May 15 02:14:55.064: INFO: At 2021-05-15 02:10:06 +0000 UTC - event for pod-6a079afc-c913-4898-aef2-69394c05288d: {kubelet node1} Failed: Error: ImagePullBackOff May 15 02:14:55.064: INFO: At 2021-05-15 02:10:06 +0000 UTC - event for pod-6a079afc-c913-4898-aef2-69394c05288d: {kubelet node1} BackOff: Back-off pulling image "docker.io/library/busybox:1.29" May 15 02:14:55.064: INFO: At 2021-05-15 02:10:06 +0000 UTC - event for pod-6a079afc-c913-4898-aef2-69394c05288d: {multus } AddedInterface: Add eth0 [10.244.3.220/24] May 15 02:14:55.064: INFO: At 2021-05-15 02:10:07 +0000 UTC - event for pod-1380abd4-33aa-487c-aff8-089358ae5a39: {kubelet node1} Failed: Error: ErrImagePull May 15 02:14:55.064: INFO: At 2021-05-15 02:10:07 +0000 UTC - event for pod-1380abd4-33aa-487c-aff8-089358ae5a39: {kubelet node1} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. 
You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit May 15 02:14:55.064: INFO: At 2021-05-15 02:10:08 +0000 UTC - event for pod-1380abd4-33aa-487c-aff8-089358ae5a39: {kubelet node1} BackOff: Back-off pulling image "docker.io/library/busybox:1.29" May 15 02:14:55.064: INFO: At 2021-05-15 02:10:08 +0000 UTC - event for pod-1380abd4-33aa-487c-aff8-089358ae5a39: {kubelet node1} Failed: Error: ImagePullBackOff May 15 02:14:55.064: INFO: At 2021-05-15 02:10:08 +0000 UTC - event for pod-26feecc3-b990-46b9-9186-2f6cfea28725: {kubelet node1} BackOff: Back-off pulling image "docker.io/library/busybox:1.29" May 15 02:14:55.064: INFO: At 2021-05-15 02:10:08 +0000 UTC - event for pod-26feecc3-b990-46b9-9186-2f6cfea28725: {kubelet node1} Failed: Error: ImagePullBackOff May 15 02:14:55.064: INFO: At 2021-05-15 02:10:09 +0000 UTC - event for pod-088478ea-3873-48e9-8e63-839f8c352771: {multus } AddedInterface: Add eth0 [10.244.3.223/24] May 15 02:14:55.064: INFO: At 2021-05-15 02:10:09 +0000 UTC - event for pod-182bb37f-fbf1-482a-8aa8-03563d7aefd0: {multus } AddedInterface: Add eth0 [10.244.3.221/24] May 15 02:14:55.064: INFO: At 2021-05-15 02:10:09 +0000 UTC - event for pod-57a1b9d5-6d9d-47e8-9b20-995213945154: {multus } AddedInterface: Add eth0 [10.244.3.222/24] May 15 02:14:55.064: INFO: At 2021-05-15 02:10:10 +0000 UTC - event for pod-088478ea-3873-48e9-8e63-839f8c352771: {kubelet node1} Pulling: Pulling image "docker.io/library/busybox:1.29" May 15 02:14:55.064: INFO: At 2021-05-15 02:10:10 +0000 UTC - event for pod-182bb37f-fbf1-482a-8aa8-03563d7aefd0: {kubelet node1} Pulling: Pulling image "docker.io/library/busybox:1.29" May 15 02:14:55.064: INFO: At 2021-05-15 02:10:10 +0000 UTC - event for pod-57a1b9d5-6d9d-47e8-9b20-995213945154: {kubelet node1} BackOff: Back-off pulling image "docker.io/library/busybox:1.29" May 15 02:14:55.064: INFO: At 2021-05-15 02:10:10 +0000 UTC - event for pod-57a1b9d5-6d9d-47e8-9b20-995213945154: {kubelet node1} Failed: Error: ImagePullBackOff May 15 02:14:55.064: INFO: At 2021-05-15 02:10:10 +0000 UTC - event for pod-6345af86-d298-43b9-a648-e4bd833ab81c: {kubelet node1} Pulling: Pulling image "docker.io/library/busybox:1.29" May 15 02:14:55.064: INFO: At 2021-05-15 02:10:10 +0000 UTC - event for pod-6345af86-d298-43b9-a648-e4bd833ab81c: {multus } AddedInterface: Add eth0 [10.244.3.224/24] May 15 02:14:55.064: INFO: At 2021-05-15 02:10:11 +0000 UTC - event for pod-088478ea-3873-48e9-8e63-839f8c352771: {kubelet node1} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. 
You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit May 15 02:14:55.064: INFO: At 2021-05-15 02:10:11 +0000 UTC - event for pod-088478ea-3873-48e9-8e63-839f8c352771: {kubelet node1} Failed: Error: ErrImagePull May 15 02:14:55.064: INFO: At 2021-05-15 02:10:11 +0000 UTC - event for pod-088478ea-3873-48e9-8e63-839f8c352771: {kubelet node1} Failed: Error: ImagePullBackOff May 15 02:14:55.064: INFO: At 2021-05-15 02:10:11 +0000 UTC - event for pod-088478ea-3873-48e9-8e63-839f8c352771: {kubelet node1} BackOff: Back-off pulling image "docker.io/library/busybox:1.29" May 15 02:14:55.064: INFO: At 2021-05-15 02:10:12 +0000 UTC - event for pod-06eab932-084d-490d-a3d9-59feee07adaf: {multus } AddedInterface: Add eth0 [10.244.3.226/24] May 15 02:14:55.064: INFO: At 2021-05-15 02:10:12 +0000 UTC - event for pod-06eab932-084d-490d-a3d9-59feee07adaf: {kubelet node1} Pulling: Pulling image "docker.io/library/busybox:1.29" May 15 02:14:55.064: INFO: At 2021-05-15 02:10:12 +0000 UTC - event for pod-182bb37f-fbf1-482a-8aa8-03563d7aefd0: {kubelet node1} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit May 15 02:14:55.064: INFO: At 2021-05-15 02:10:12 +0000 UTC - event for pod-182bb37f-fbf1-482a-8aa8-03563d7aefd0: {kubelet node1} Failed: Error: ErrImagePull May 15 02:14:55.064: INFO: At 2021-05-15 02:10:12 +0000 UTC - event for pod-67841c2d-5cbe-4e06-9ae9-e4099e36c13d: {kubelet node1} Pulling: Pulling image "docker.io/library/busybox:1.29" May 15 02:14:55.064: INFO: At 2021-05-15 02:10:12 +0000 UTC - event for pod-67841c2d-5cbe-4e06-9ae9-e4099e36c13d: {multus } AddedInterface: Add eth0 [10.244.3.225/24] May 15 02:14:55.064: INFO: At 2021-05-15 02:10:12 +0000 UTC - event for pod-7b8d60b1-8f1e-499c-b17f-816abbe56315: {multus } AddedInterface: Add eth0 [10.244.3.227/24] May 15 02:14:55.064: INFO: At 2021-05-15 02:10:12 +0000 UTC - event for pod-7b8d60b1-8f1e-499c-b17f-816abbe56315: {kubelet node1} Pulling: Pulling image "docker.io/library/busybox:1.29" May 15 02:14:55.064: INFO: At 2021-05-15 02:10:13 +0000 UTC - event for pod-944e7ffd-8685-41e6-8af8-eff3f2685db3: {kubelet node1} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit May 15 02:14:55.064: INFO: At 2021-05-15 02:10:13 +0000 UTC - event for pod-a53b4edb-f546-4ebf-b7aa-b0e273692384: {kubelet node1} Pulling: Pulling image "docker.io/library/busybox:1.29" May 15 02:14:55.064: INFO: At 2021-05-15 02:10:13 +0000 UTC - event for pod-a53b4edb-f546-4ebf-b7aa-b0e273692384: {multus } AddedInterface: Add eth0 [10.244.3.228/24] May 15 02:14:55.064: INFO: At 2021-05-15 02:10:14 +0000 UTC - event for pod-6345af86-d298-43b9-a648-e4bd833ab81c: {kubelet node1} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. 
You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit May 15 02:14:55.064: INFO: At 2021-05-15 02:10:14 +0000 UTC - event for pod-6345af86-d298-43b9-a648-e4bd833ab81c: {kubelet node1} Failed: Error: ErrImagePull May 15 02:14:55.064: INFO: At 2021-05-15 02:10:14 +0000 UTC - event for pod-7b61d18a-43e9-478f-9427-9173e0e8b16b: {multus } AddedInterface: Add eth0 [10.244.3.229/24] May 15 02:14:55.064: INFO: At 2021-05-15 02:10:15 +0000 UTC - event for pod-7b61d18a-43e9-478f-9427-9173e0e8b16b: {kubelet node1} Pulling: Pulling image "docker.io/library/busybox:1.29" May 15 02:14:55.064: INFO: At 2021-05-15 02:10:16 +0000 UTC - event for pod-182bb37f-fbf1-482a-8aa8-03563d7aefd0: {kubelet node1} BackOff: Back-off pulling image "docker.io/library/busybox:1.29" May 15 02:14:55.064: INFO: At 2021-05-15 02:10:16 +0000 UTC - event for pod-182bb37f-fbf1-482a-8aa8-03563d7aefd0: {kubelet node1} Failed: Error: ImagePullBackOff May 15 02:14:55.064: INFO: At 2021-05-15 02:10:16 +0000 UTC - event for pod-6345af86-d298-43b9-a648-e4bd833ab81c: {kubelet node1} Failed: Error: ImagePullBackOff May 15 02:14:55.064: INFO: At 2021-05-15 02:10:16 +0000 UTC - event for pod-6345af86-d298-43b9-a648-e4bd833ab81c: {kubelet node1} BackOff: Back-off pulling image "docker.io/library/busybox:1.29" May 15 02:14:55.064: INFO: At 2021-05-15 02:10:16 +0000 UTC - event for pod-ec39befc-6a99-45df-8eb8-d594af0627d8: {multus } AddedInterface: Add eth0 [10.244.3.230/24] May 15 02:14:55.064: INFO: At 2021-05-15 02:10:16 +0000 UTC - event for pod-ec39befc-6a99-45df-8eb8-d594af0627d8: {kubelet node1} Pulling: Pulling image "docker.io/library/busybox:1.29" May 15 02:14:55.064: INFO: At 2021-05-15 02:10:17 +0000 UTC - event for pod-0269144a-4370-4704-a1c2-c0f407d98f0a: {multus } AddedInterface: Add eth0 [10.244.3.231/24] May 15 02:14:55.064: INFO: At 2021-05-15 02:10:17 +0000 UTC - event for pod-0269144a-4370-4704-a1c2-c0f407d98f0a: {kubelet node1} Pulling: Pulling image "docker.io/library/busybox:1.29" May 15 02:14:55.064: INFO: At 2021-05-15 02:10:17 +0000 UTC - event for pod-64d96337-bd3e-4aca-8392-d465ee5a1ed7: {multus } AddedInterface: Add eth0 [10.244.3.232/24] May 15 02:14:55.064: INFO: At 2021-05-15 02:10:17 +0000 UTC - event for pod-64d96337-bd3e-4aca-8392-d465ee5a1ed7: {kubelet node1} Pulling: Pulling image "docker.io/library/busybox:1.29" May 15 02:14:55.064: INFO: At 2021-05-15 02:10:18 +0000 UTC - event for pod-06eab932-084d-490d-a3d9-59feee07adaf: {kubelet node1} Failed: Error: ErrImagePull May 15 02:14:55.064: INFO: At 2021-05-15 02:10:18 +0000 UTC - event for pod-06eab932-084d-490d-a3d9-59feee07adaf: {kubelet node1} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. 
You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit May 15 02:14:55.064: INFO: At 2021-05-15 02:10:18 +0000 UTC - event for pod-5a5ebf17-19c6-430e-a7df-2dcbd0f4dfe8: {multus } AddedInterface: Add eth0 [10.244.3.233/24] May 15 02:14:55.064: INFO: At 2021-05-15 02:10:18 +0000 UTC - event for pod-5a5ebf17-19c6-430e-a7df-2dcbd0f4dfe8: {kubelet node1} Pulling: Pulling image "docker.io/library/busybox:1.29" May 15 02:14:55.064: INFO: At 2021-05-15 02:10:19 +0000 UTC - event for pod-67841c2d-5cbe-4e06-9ae9-e4099e36c13d: {kubelet node1} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit May 15 02:14:55.064: INFO: At 2021-05-15 02:10:19 +0000 UTC - event for pod-67841c2d-5cbe-4e06-9ae9-e4099e36c13d: {kubelet node1} Failed: Error: ErrImagePull May 15 02:14:55.064: INFO: At 2021-05-15 02:10:19 +0000 UTC - event for pod-6f552dde-8b50-4c1b-9f75-29301f222304: {kubelet node1} Pulling: Pulling image "docker.io/library/busybox:1.29" May 15 02:14:55.064: INFO: At 2021-05-15 02:10:19 +0000 UTC - event for pod-6f552dde-8b50-4c1b-9f75-29301f222304: {multus } AddedInterface: Add eth0 [10.244.3.234/24] May 15 02:14:55.064: INFO: At 2021-05-15 02:10:20 +0000 UTC - event for pod-06eab932-084d-490d-a3d9-59feee07adaf: {kubelet node1} SandboxChanged: Pod sandbox changed, it will be killed and re-created. May 15 02:14:55.064: INFO: At 2021-05-15 02:10:20 +0000 UTC - event for pod-7b8d60b1-8f1e-499c-b17f-816abbe56315: {kubelet node1} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit May 15 02:14:55.064: INFO: At 2021-05-15 02:10:20 +0000 UTC - event for pod-7b8d60b1-8f1e-499c-b17f-816abbe56315: {kubelet node1} Failed: Error: ErrImagePull May 15 02:14:55.064: INFO: At 2021-05-15 02:10:20 +0000 UTC - event for pod-aa71f4e5-b094-4438-a0d0-11bf0093a83d: {multus } AddedInterface: Add eth0 [10.244.3.235/24] May 15 02:14:55.064: INFO: At 2021-05-15 02:10:20 +0000 UTC - event for pod-aa71f4e5-b094-4438-a0d0-11bf0093a83d: {kubelet node1} Pulling: Pulling image "docker.io/library/busybox:1.29" May 15 02:14:55.064: INFO: At 2021-05-15 02:10:21 +0000 UTC - event for pod-9d74f724-4c10-4c23-9a96-6aed362ad077: {multus } AddedInterface: Add eth0 [10.244.3.236/24] May 15 02:14:55.064: INFO: At 2021-05-15 02:10:21 +0000 UTC - event for pod-9d74f724-4c10-4c23-9a96-6aed362ad077: {kubelet node1} Pulling: Pulling image "docker.io/library/busybox:1.29" May 15 02:14:55.064: INFO: At 2021-05-15 02:10:21 +0000 UTC - event for pod-a53b4edb-f546-4ebf-b7aa-b0e273692384: {kubelet node1} Failed: Error: ErrImagePull May 15 02:14:55.064: INFO: At 2021-05-15 02:10:21 +0000 UTC - event for pod-a53b4edb-f546-4ebf-b7aa-b0e273692384: {kubelet node1} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. 
You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit May 15 02:14:55.064: INFO: At 2021-05-15 02:10:22 +0000 UTC - event for pod-67841c2d-5cbe-4e06-9ae9-e4099e36c13d: {kubelet node1} Failed: Error: ImagePullBackOff May 15 02:14:55.064: INFO: At 2021-05-15 02:10:22 +0000 UTC - event for pod-67841c2d-5cbe-4e06-9ae9-e4099e36c13d: {kubelet node1} BackOff: Back-off pulling image "docker.io/library/busybox:1.29" May 15 02:14:55.064: INFO: At 2021-05-15 02:10:22 +0000 UTC - event for pod-7b61d18a-43e9-478f-9427-9173e0e8b16b: {kubelet node1} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit May 15 02:14:55.064: INFO: At 2021-05-15 02:10:22 +0000 UTC - event for pod-7b61d18a-43e9-478f-9427-9173e0e8b16b: {kubelet node1} Failed: Error: ErrImagePull May 15 02:14:55.064: INFO: At 2021-05-15 02:10:22 +0000 UTC - event for pod-7b8d60b1-8f1e-499c-b17f-816abbe56315: {kubelet node1} Failed: Error: ImagePullBackOff May 15 02:14:55.064: INFO: At 2021-05-15 02:10:22 +0000 UTC - event for pod-7b8d60b1-8f1e-499c-b17f-816abbe56315: {kubelet node1} BackOff: Back-off pulling image "docker.io/library/busybox:1.29" May 15 02:14:55.064: INFO: At 2021-05-15 02:10:22 +0000 UTC - event for pod-8e6b4f06-4653-426d-8861-79606a1bc5ac: {kubelet node1} Pulling: Pulling image "docker.io/library/busybox:1.29" May 15 02:14:55.064: INFO: At 2021-05-15 02:10:22 +0000 UTC - event for pod-8e6b4f06-4653-426d-8861-79606a1bc5ac: {multus } AddedInterface: Add eth0 [10.244.3.237/24] May 15 02:14:55.064: INFO: At 2021-05-15 02:10:24 +0000 UTC - event for pod-06eab932-084d-490d-a3d9-59feee07adaf: {multus } AddedInterface: Add eth0 [10.244.3.239/24] May 15 02:14:55.064: INFO: At 2021-05-15 02:10:24 +0000 UTC - event for pod-06eab932-084d-490d-a3d9-59feee07adaf: {kubelet node1} Failed: Error: ImagePullBackOff May 15 02:14:55.064: INFO: At 2021-05-15 02:10:24 +0000 UTC - event for pod-06eab932-084d-490d-a3d9-59feee07adaf: {kubelet node1} BackOff: Back-off pulling image "docker.io/library/busybox:1.29" May 15 02:14:55.064: INFO: At 2021-05-15 02:10:24 +0000 UTC - event for pod-68664722-e47b-48b2-b06e-91155e4f3594: {kubelet node1} Pulling: Pulling image "docker.io/library/busybox:1.29" May 15 02:14:55.065: INFO: At 2021-05-15 02:10:24 +0000 UTC - event for pod-68664722-e47b-48b2-b06e-91155e4f3594: {multus } AddedInterface: Add eth0 [10.244.3.238/24] May 15 02:14:55.065: INFO: At 2021-05-15 02:10:24 +0000 UTC - event for pod-7b61d18a-43e9-478f-9427-9173e0e8b16b: {kubelet node1} Failed: Error: ImagePullBackOff May 15 02:14:55.065: INFO: At 2021-05-15 02:10:24 +0000 UTC - event for pod-7b61d18a-43e9-478f-9427-9173e0e8b16b: {kubelet node1} BackOff: Back-off pulling image "docker.io/library/busybox:1.29" May 15 02:14:55.065: INFO: At 2021-05-15 02:10:24 +0000 UTC - event for pod-a53b4edb-f546-4ebf-b7aa-b0e273692384: {kubelet node1} Failed: Error: ImagePullBackOff May 15 02:14:55.065: INFO: At 2021-05-15 02:10:24 +0000 UTC - event for pod-a53b4edb-f546-4ebf-b7aa-b0e273692384: {kubelet node1} BackOff: Back-off pulling image "docker.io/library/busybox:1.29" May 15 02:14:55.065: INFO: At 2021-05-15 02:10:24 +0000 UTC - event for pod-ec39befc-6a99-45df-8eb8-d594af0627d8: {kubelet node1} Failed: Error: ErrImagePull May 15 02:14:55.065: INFO: At 2021-05-15 02:10:24 +0000 UTC - 
event for pod-ec39befc-6a99-45df-8eb8-d594af0627d8: {kubelet node1} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit May 15 02:14:55.065: INFO: At 2021-05-15 02:10:25 +0000 UTC - event for pod-3b141899-67ee-47dd-a614-717a00fdc43e: {multus } AddedInterface: Add eth0 [10.244.3.240/24] May 15 02:14:55.065: INFO: At 2021-05-15 02:10:26 +0000 UTC - event for pod-0269144a-4370-4704-a1c2-c0f407d98f0a: {kubelet node1} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit May 15 02:14:55.065: INFO: At 2021-05-15 02:10:26 +0000 UTC - event for pod-0269144a-4370-4704-a1c2-c0f407d98f0a: {kubelet node1} Failed: Error: ErrImagePull May 15 02:14:55.065: INFO: At 2021-05-15 02:10:26 +0000 UTC - event for pod-3b141899-67ee-47dd-a614-717a00fdc43e: {kubelet node1} Pulling: Pulling image "docker.io/library/busybox:1.29" May 15 02:14:55.065: INFO: At 2021-05-15 02:10:26 +0000 UTC - event for pod-f5252834-4a86-4465-a5e9-5e8afac874af: {multus } AddedInterface: Add eth0 [10.244.3.241/24] May 15 02:14:55.065: INFO: At 2021-05-15 02:10:26 +0000 UTC - event for pod-f5252834-4a86-4465-a5e9-5e8afac874af: {kubelet node1} Pulling: Pulling image "docker.io/library/busybox:1.29" May 15 02:14:55.065: INFO: At 2021-05-15 02:10:27 +0000 UTC - event for pod-ec39befc-6a99-45df-8eb8-d594af0627d8: {kubelet node1} BackOff: Back-off pulling image "docker.io/library/busybox:1.29" May 15 02:14:55.065: INFO: At 2021-05-15 02:10:27 +0000 UTC - event for pod-ec39befc-6a99-45df-8eb8-d594af0627d8: {kubelet node1} Failed: Error: ImagePullBackOff May 15 02:14:55.065: INFO: At 2021-05-15 02:10:28 +0000 UTC - event for pod-baf0825d-cd75-468c-b208-e00c655b3984: {multus } AddedInterface: Add eth0 [10.244.3.242/24] May 15 02:14:55.065: INFO: At 2021-05-15 02:10:28 +0000 UTC - event for pod-baf0825d-cd75-468c-b208-e00c655b3984: {kubelet node1} Pulling: Pulling image "docker.io/library/busybox:1.29" May 15 02:14:55.065: INFO: At 2021-05-15 02:10:29 +0000 UTC - event for pod-0269144a-4370-4704-a1c2-c0f407d98f0a: {kubelet node1} Failed: Error: ImagePullBackOff May 15 02:14:55.065: INFO: At 2021-05-15 02:10:29 +0000 UTC - event for pod-0269144a-4370-4704-a1c2-c0f407d98f0a: {kubelet node1} BackOff: Back-off pulling image "docker.io/library/busybox:1.29" May 15 02:14:55.065: INFO: At 2021-05-15 02:10:29 +0000 UTC - event for pod-64d96337-bd3e-4aca-8392-d465ee5a1ed7: {kubelet node1} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. 
You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit May 15 02:14:55.065: INFO: At 2021-05-15 02:10:29 +0000 UTC - event for pod-64d96337-bd3e-4aca-8392-d465ee5a1ed7: {kubelet node1} Failed: Error: ErrImagePull May 15 02:14:55.065: INFO: At 2021-05-15 02:10:29 +0000 UTC - event for pod-7c3e1e12-24c7-4dbb-968a-c5a5ea611b09: {kubelet node1} Pulling: Pulling image "docker.io/library/busybox:1.29" May 15 02:14:55.065: INFO: At 2021-05-15 02:10:29 +0000 UTC - event for pod-7c3e1e12-24c7-4dbb-968a-c5a5ea611b09: {multus } AddedInterface: Add eth0 [10.244.3.244/24] May 15 02:14:55.065: INFO: At 2021-05-15 02:10:29 +0000 UTC - event for pod-85dcad4d-875a-461c-be23-212028dd9cc4: {multus } AddedInterface: Add eth0 [10.244.3.243/24] May 15 02:14:55.065: INFO: At 2021-05-15 02:10:29 +0000 UTC - event for pod-85dcad4d-875a-461c-be23-212028dd9cc4: {kubelet node1} Pulling: Pulling image "docker.io/library/busybox:1.29" May 15 02:14:55.065: INFO: At 2021-05-15 02:10:30 +0000 UTC - event for pod-5a5ebf17-19c6-430e-a7df-2dcbd0f4dfe8: {kubelet node1} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit May 15 02:14:55.065: INFO: At 2021-05-15 02:10:30 +0000 UTC - event for pod-5a5ebf17-19c6-430e-a7df-2dcbd0f4dfe8: {kubelet node1} Failed: Error: ErrImagePull May 15 02:14:55.065: INFO: At 2021-05-15 02:10:30 +0000 UTC - event for pod-e5db5676-3fa1-43b8-99da-988e664767e5: {kubelet node1} Pulling: Pulling image "docker.io/library/busybox:1.29" May 15 02:14:55.065: INFO: At 2021-05-15 02:10:30 +0000 UTC - event for pod-e5db5676-3fa1-43b8-99da-988e664767e5: {multus } AddedInterface: Add eth0 [10.244.3.245/24] May 15 02:14:55.065: INFO: At 2021-05-15 02:10:31 +0000 UTC - event for pod-5a5ebf17-19c6-430e-a7df-2dcbd0f4dfe8: {kubelet node1} Failed: Error: ImagePullBackOff May 15 02:14:55.065: INFO: At 2021-05-15 02:10:31 +0000 UTC - event for pod-5a5ebf17-19c6-430e-a7df-2dcbd0f4dfe8: {kubelet node1} BackOff: Back-off pulling image "docker.io/library/busybox:1.29" May 15 02:14:55.065: INFO: At 2021-05-15 02:10:31 +0000 UTC - event for pod-64d96337-bd3e-4aca-8392-d465ee5a1ed7: {kubelet node1} BackOff: Back-off pulling image "docker.io/library/busybox:1.29" May 15 02:14:55.065: INFO: At 2021-05-15 02:10:31 +0000 UTC - event for pod-64d96337-bd3e-4aca-8392-d465ee5a1ed7: {kubelet node1} Failed: Error: ImagePullBackOff May 15 02:14:55.065: INFO: At 2021-05-15 02:10:31 +0000 UTC - event for pod-cccb383e-37ae-496b-bc61-e84155c15470: {kubelet node1} Pulling: Pulling image "docker.io/library/busybox:1.29" May 15 02:14:55.065: INFO: At 2021-05-15 02:10:31 +0000 UTC - event for pod-cccb383e-37ae-496b-bc61-e84155c15470: {multus } AddedInterface: Add eth0 [10.244.3.246/24] May 15 02:14:55.065: INFO: At 2021-05-15 02:10:32 +0000 UTC - event for pod-46816fb0-59fd-4b80-86f6-e5de68239a20: {kubelet node1} Pulling: Pulling image "docker.io/library/busybox:1.29" May 15 02:14:55.065: INFO: At 2021-05-15 02:10:32 +0000 UTC - event for pod-46816fb0-59fd-4b80-86f6-e5de68239a20: {multus } AddedInterface: Add eth0 [10.244.3.247/24] May 15 02:14:55.065: INFO: At 2021-05-15 02:10:32 +0000 UTC - event for pod-6f552dde-8b50-4c1b-9f75-29301f222304: {kubelet node1} Failed: Error: ErrImagePull May 15 02:14:55.065: INFO: At 2021-05-15 02:10:32 +0000 UTC - event for 
pod-6f552dde-8b50-4c1b-9f75-29301f222304: {kubelet node1} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit May 15 02:14:55.065: INFO: At 2021-05-15 02:10:33 +0000 UTC - event for pod-99a28b03-2a0e-4f4e-93a6-4f8f1349c374: {multus } AddedInterface: Add eth0 [10.244.3.248/24] May 15 02:14:55.065: INFO: At 2021-05-15 02:10:33 +0000 UTC - event for pod-99a28b03-2a0e-4f4e-93a6-4f8f1349c374: {kubelet node1} Pulling: Pulling image "docker.io/library/busybox:1.29" May 15 02:14:55.065: INFO: At 2021-05-15 02:10:33 +0000 UTC - event for pod-aa71f4e5-b094-4438-a0d0-11bf0093a83d: {kubelet node1} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit May 15 02:14:55.065: INFO: At 2021-05-15 02:10:33 +0000 UTC - event for pod-aa71f4e5-b094-4438-a0d0-11bf0093a83d: {kubelet node1} Failed: Error: ErrImagePull May 15 02:14:55.065: INFO: At 2021-05-15 02:10:34 +0000 UTC - event for pod-6f552dde-8b50-4c1b-9f75-29301f222304: {kubelet node1} SandboxChanged: Pod sandbox changed, it will be killed and re-created. May 15 02:14:55.065: INFO: At 2021-05-15 02:10:34 +0000 UTC - event for pod-8e6b4f06-4653-426d-8861-79606a1bc5ac: {kubelet node1} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit May 15 02:14:55.065: INFO: At 2021-05-15 02:10:34 +0000 UTC - event for pod-8e6b4f06-4653-426d-8861-79606a1bc5ac: {kubelet node1} Failed: Error: ErrImagePull May 15 02:14:55.065: INFO: At 2021-05-15 02:10:34 +0000 UTC - event for pod-9d74f724-4c10-4c23-9a96-6aed362ad077: {kubelet node1} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. 
You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit May 15 02:14:55.065: INFO: At 2021-05-15 02:10:34 +0000 UTC - event for pod-9d74f724-4c10-4c23-9a96-6aed362ad077: {kubelet node1} Failed: Error: ErrImagePull May 15 02:14:55.065: INFO: At 2021-05-15 02:10:34 +0000 UTC - event for pod-efd8f062-dcf7-4e99-8049-16a688458438: {kubelet node1} Pulling: Pulling image "docker.io/library/busybox:1.29" May 15 02:14:55.065: INFO: At 2021-05-15 02:10:34 +0000 UTC - event for pod-efd8f062-dcf7-4e99-8049-16a688458438: {multus } AddedInterface: Add eth0 [10.244.3.249/24] May 15 02:14:55.065: INFO: At 2021-05-15 02:10:35 +0000 UTC - event for pod-5e0b2962-1beb-4f0e-b00d-3c3fb5ff4f67: {kubelet node1} Pulling: Pulling image "docker.io/library/busybox:1.29" May 15 02:14:55.065: INFO: At 2021-05-15 02:10:35 +0000 UTC - event for pod-5e0b2962-1beb-4f0e-b00d-3c3fb5ff4f67: {multus } AddedInterface: Add eth0 [10.244.3.251/24] May 15 02:14:55.065: INFO: At 2021-05-15 02:10:35 +0000 UTC - event for pod-659d7a78-330f-41c0-9363-c970094088ab: {kubelet node1} Pulling: Pulling image "docker.io/library/busybox:1.29" May 15 02:14:55.065: INFO: At 2021-05-15 02:10:35 +0000 UTC - event for pod-659d7a78-330f-41c0-9363-c970094088ab: {multus } AddedInterface: Add eth0 [10.244.3.250/24] May 15 02:14:55.065: INFO: At 2021-05-15 02:10:35 +0000 UTC - event for pod-68664722-e47b-48b2-b06e-91155e4f3594: {kubelet node1} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit May 15 02:14:55.065: INFO: At 2021-05-15 02:10:35 +0000 UTC - event for pod-68664722-e47b-48b2-b06e-91155e4f3594: {kubelet node1} Failed: Error: ErrImagePull May 15 02:14:55.065: INFO: At 2021-05-15 02:10:36 +0000 UTC - event for pod-6994c576-03e5-4748-87d6-f7d52695b943: {kubelet node1} Pulling: Pulling image "docker.io/library/busybox:1.29" May 15 02:14:55.065: INFO: At 2021-05-15 02:10:36 +0000 UTC - event for pod-6994c576-03e5-4748-87d6-f7d52695b943: {multus } AddedInterface: Add eth0 [10.244.3.252/24] May 15 02:14:55.065: INFO: At 2021-05-15 02:10:37 +0000 UTC - event for pod-3b141899-67ee-47dd-a614-717a00fdc43e: {kubelet node1} Failed: Error: ErrImagePull May 15 02:14:55.065: INFO: At 2021-05-15 02:10:37 +0000 UTC - event for pod-3b141899-67ee-47dd-a614-717a00fdc43e: {kubelet node1} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. 
You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit May 15 02:14:55.065: INFO: At 2021-05-15 02:10:37 +0000 UTC - event for pod-8e6b4f06-4653-426d-8861-79606a1bc5ac: {kubelet node1} Failed: Error: ImagePullBackOff May 15 02:14:55.065: INFO: At 2021-05-15 02:10:37 +0000 UTC - event for pod-8e6b4f06-4653-426d-8861-79606a1bc5ac: {kubelet node1} BackOff: Back-off pulling image "docker.io/library/busybox:1.29" May 15 02:14:55.065: INFO: At 2021-05-15 02:10:37 +0000 UTC - event for pod-9d74f724-4c10-4c23-9a96-6aed362ad077: {kubelet node1} Failed: Error: ImagePullBackOff May 15 02:14:55.065: INFO: At 2021-05-15 02:10:37 +0000 UTC - event for pod-9d74f724-4c10-4c23-9a96-6aed362ad077: {kubelet node1} BackOff: Back-off pulling image "docker.io/library/busybox:1.29" May 15 02:14:55.065: INFO: At 2021-05-15 02:10:37 +0000 UTC - event for pod-aa71f4e5-b094-4438-a0d0-11bf0093a83d: {kubelet node1} BackOff: Back-off pulling image "docker.io/library/busybox:1.29" May 15 02:14:55.065: INFO: At 2021-05-15 02:10:37 +0000 UTC - event for pod-aa71f4e5-b094-4438-a0d0-11bf0093a83d: {kubelet node1} Failed: Error: ImagePullBackOff May 15 02:14:55.065: INFO: At 2021-05-15 02:10:37 +0000 UTC - event for pod-fb25a31c-ab40-4711-bcf9-eff4f0db5a45: {multus } AddedInterface: Add eth0 [10.244.3.253/24] May 15 02:14:55.065: INFO: At 2021-05-15 02:10:37 +0000 UTC - event for pod-fb25a31c-ab40-4711-bcf9-eff4f0db5a45: {kubelet node1} Pulling: Pulling image "docker.io/library/busybox:1.29" May 15 02:14:55.065: INFO: At 2021-05-15 02:10:38 +0000 UTC - event for pod-8832974e-719a-4bac-990d-6917f5e29ff6: {multus } AddedInterface: Add eth0 [10.244.3.254/24] May 15 02:14:55.065: INFO: At 2021-05-15 02:10:38 +0000 UTC - event for pod-8832974e-719a-4bac-990d-6917f5e29ff6: {kubelet node1} Pulling: Pulling image "docker.io/library/busybox:1.29" May 15 02:14:55.065: INFO: At 2021-05-15 02:10:39 +0000 UTC - event for pod-f75db98e-2a08-4812-91d2-70d06c53eff2: {multus } AddedInterface: Add eth0 [10.244.3.2/24] May 15 02:14:55.065: INFO: At 2021-05-15 02:10:39 +0000 UTC - event for pod-f75db98e-2a08-4812-91d2-70d06c53eff2: {kubelet node1} Pulling: Pulling image "docker.io/library/busybox:1.29" May 15 02:14:55.065: INFO: At 2021-05-15 02:10:40 +0000 UTC - event for pod-3b141899-67ee-47dd-a614-717a00fdc43e: {kubelet node1} Failed: Error: ImagePullBackOff May 15 02:14:55.065: INFO: At 2021-05-15 02:10:40 +0000 UTC - event for pod-3b141899-67ee-47dd-a614-717a00fdc43e: {kubelet node1} BackOff: Back-off pulling image "docker.io/library/busybox:1.29" May 15 02:14:55.065: INFO: At 2021-05-15 02:10:40 +0000 UTC - event for pod-68664722-e47b-48b2-b06e-91155e4f3594: {kubelet node1} Failed: Error: ImagePullBackOff May 15 02:14:55.065: INFO: At 2021-05-15 02:10:40 +0000 UTC - event for pod-68664722-e47b-48b2-b06e-91155e4f3594: {kubelet node1} BackOff: Back-off pulling image "docker.io/library/busybox:1.29" May 15 02:14:55.065: INFO: At 2021-05-15 02:10:40 +0000 UTC - event for pod-f5252834-4a86-4465-a5e9-5e8afac874af: {kubelet node1} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. 
You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit May 15 02:14:55.065: INFO: At 2021-05-15 02:10:40 +0000 UTC - event for pod-f5252834-4a86-4465-a5e9-5e8afac874af: {kubelet node1} Failed: Error: ErrImagePull May 15 02:14:55.065: INFO: At 2021-05-15 02:10:41 +0000 UTC - event for pod-3fd03fb4-184e-412a-b852-6979ed7a93d2: {multus } AddedInterface: Add eth0 [10.244.3.4/24] May 15 02:14:55.065: INFO: At 2021-05-15 02:10:41 +0000 UTC - event for pod-3fd03fb4-184e-412a-b852-6979ed7a93d2: {kubelet node1} Pulling: Pulling image "docker.io/library/busybox:1.29" May 15 02:14:55.065: INFO: At 2021-05-15 02:10:41 +0000 UTC - event for pod-6f552dde-8b50-4c1b-9f75-29301f222304: {kubelet node1} Failed: Error: ImagePullBackOff May 15 02:14:55.065: INFO: At 2021-05-15 02:10:41 +0000 UTC - event for pod-6f552dde-8b50-4c1b-9f75-29301f222304: {kubelet node1} BackOff: Back-off pulling image "docker.io/library/busybox:1.29" May 15 02:14:55.065: INFO: At 2021-05-15 02:10:41 +0000 UTC - event for pod-6f552dde-8b50-4c1b-9f75-29301f222304: {multus } AddedInterface: Add eth0 [10.244.3.3/24] May 15 02:14:55.065: INFO: At 2021-05-15 02:10:42 +0000 UTC - event for pod-0780a016-d244-4bbb-8f47-2f297e59fd58: {multus } AddedInterface: Add eth0 [10.244.3.5/24] May 15 02:14:55.065: INFO: At 2021-05-15 02:10:42 +0000 UTC - event for pod-0780a016-d244-4bbb-8f47-2f297e59fd58: {kubelet node1} Pulling: Pulling image "docker.io/library/busybox:1.29" May 15 02:14:55.065: INFO: At 2021-05-15 02:10:43 +0000 UTC - event for pod-63421429-f7ce-4603-ae23-470172db27ac: {multus } AddedInterface: Add eth0 [10.244.3.6/24] May 15 02:14:55.065: INFO: At 2021-05-15 02:10:43 +0000 UTC - event for pod-63421429-f7ce-4603-ae23-470172db27ac: {kubelet node1} Pulling: Pulling image "docker.io/library/busybox:1.29" May 15 02:14:55.065: INFO: At 2021-05-15 02:10:43 +0000 UTC - event for pod-baf0825d-cd75-468c-b208-e00c655b3984: {kubelet node1} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit May 15 02:14:55.065: INFO: At 2021-05-15 02:10:43 +0000 UTC - event for pod-baf0825d-cd75-468c-b208-e00c655b3984: {kubelet node1} Failed: Error: ErrImagePull May 15 02:14:55.065: INFO: At 2021-05-15 02:10:43 +0000 UTC - event for pod-f5252834-4a86-4465-a5e9-5e8afac874af: {kubelet node1} Failed: Error: ImagePullBackOff May 15 02:14:55.065: INFO: At 2021-05-15 02:10:43 +0000 UTC - event for pod-f5252834-4a86-4465-a5e9-5e8afac874af: {kubelet node1} BackOff: Back-off pulling image "docker.io/library/busybox:1.29" May 15 02:14:55.065: INFO: At 2021-05-15 02:10:44 +0000 UTC - event for pod-6281ba5b-4d09-4cb6-9b6a-c82141ad2e06: {kubelet node1} Pulling: Pulling image "docker.io/library/busybox:1.29" May 15 02:14:55.065: INFO: At 2021-05-15 02:10:44 +0000 UTC - event for pod-6281ba5b-4d09-4cb6-9b6a-c82141ad2e06: {multus } AddedInterface: Add eth0 [10.244.3.7/24] May 15 02:14:55.065: INFO: At 2021-05-15 02:10:44 +0000 UTC - event for pod-85dcad4d-875a-461c-be23-212028dd9cc4: {kubelet node1} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. 
You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
May 15 02:14:55.066: INFO: At 2021-05-15 02:10:44 +0000 UTC - event for pod-85dcad4d-875a-461c-be23-212028dd9cc4: {kubelet node1} Failed: Error: ErrImagePull
May 15 02:14:55.066: INFO: At 2021-05-15 02:10:45 +0000 UTC - event for pod-0a52f19d-bad1-4ee5-9a99-98f71ef763d7: {multus } AddedInterface: Add eth0 [10.244.3.8/24]
May 15 02:14:55.066: INFO: At 2021-05-15 02:10:45 +0000 UTC - event for pod-0a52f19d-bad1-4ee5-9a99-98f71ef763d7: {kubelet node1} Pulling: Pulling image "docker.io/library/busybox:1.29"
May 15 02:14:55.066: INFO: At 2021-05-15 02:10:45 +0000 UTC - event for pod-7c3e1e12-24c7-4dbb-968a-c5a5ea611b09: {kubelet node1} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
May 15 02:14:55.066: INFO: At 2021-05-15 02:10:45 +0000 UTC - event for pod-7c3e1e12-24c7-4dbb-968a-c5a5ea611b09: {kubelet node1} Failed: Error: ErrImagePull
May 15 02:14:55.066: INFO: At 2021-05-15 02:10:45 +0000 UTC - event for pod-baf0825d-cd75-468c-b208-e00c655b3984: {kubelet node1} Failed: Error: ImagePullBackOff
May 15 02:14:55.066: INFO: At 2021-05-15 02:10:45 +0000 UTC - event for pod-baf0825d-cd75-468c-b208-e00c655b3984: {kubelet node1} BackOff: Back-off pulling image "docker.io/library/busybox:1.29"
May 15 02:14:55.066: INFO: At 2021-05-15 02:10:46 +0000 UTC - event for pod-2f132e16-9b60-4044-8856-1c537d83e924: {multus } AddedInterface: Add eth0 [10.244.3.9/24]
May 15 02:14:55.066: INFO: At 2021-05-15 02:10:46 +0000 UTC - event for pod-2f132e16-9b60-4044-8856-1c537d83e924: {kubelet node1} Pulling: Pulling image "docker.io/library/busybox:1.29"
May 15 02:14:55.066: INFO: At 2021-05-15 02:10:46 +0000 UTC - event for pod-7c3e1e12-24c7-4dbb-968a-c5a5ea611b09: {kubelet node1} Failed: Error: ImagePullBackOff
May 15 02:14:55.066: INFO: At 2021-05-15 02:10:46 +0000 UTC - event for pod-7c3e1e12-24c7-4dbb-968a-c5a5ea611b09: {kubelet node1} BackOff: Back-off pulling image "docker.io/library/busybox:1.29"
May 15 02:14:55.066: INFO: At 2021-05-15 02:10:46 +0000 UTC - event for pod-85dcad4d-875a-461c-be23-212028dd9cc4: {kubelet node1} Failed: Error: ImagePullBackOff
May 15 02:14:55.066: INFO: At 2021-05-15 02:10:46 +0000 UTC - event for pod-85dcad4d-875a-461c-be23-212028dd9cc4: {kubelet node1} BackOff: Back-off pulling image "docker.io/library/busybox:1.29"
May 15 02:14:55.066: INFO: At 2021-05-15 02:10:46 +0000 UTC - event for pod-9741d55e-19d2-4907-9709-31aab1dc27a7: {multus } AddedInterface: Add eth0 [10.244.3.10/24]
May 15 02:14:55.066: INFO: At 2021-05-15 02:10:46 +0000 UTC - event for pod-9741d55e-19d2-4907-9709-31aab1dc27a7: {kubelet node1} Pulling: Pulling image "docker.io/library/busybox:1.29"
May 15 02:14:55.066: INFO: At 2021-05-15 02:10:47 +0000 UTC - event for pod-adf20463-6227-4b36-9ec9-be2637a72af1: {kubelet node1} Pulling: Pulling image "docker.io/library/busybox:1.29"
May 15 02:14:55.066: INFO: At 2021-05-15 02:10:47 +0000 UTC - event for pod-adf20463-6227-4b36-9ec9-be2637a72af1: {multus } AddedInterface: Add eth0 [10.244.3.11/24]
May 15 02:14:55.066: INFO: At 2021-05-15 02:10:47 +0000 UTC - event for pod-e5db5676-3fa1-43b8-99da-988e664767e5: {kubelet node1} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
May 15 02:14:55.066: INFO: At 2021-05-15 02:10:47 +0000 UTC - event for pod-e5db5676-3fa1-43b8-99da-988e664767e5: {kubelet node1} Failed: Error: ErrImagePull
May 15 02:14:55.066: INFO: At 2021-05-15 02:10:48 +0000 UTC - event for pod-cccb383e-37ae-496b-bc61-e84155c15470: {kubelet node1} Failed: Error: ErrImagePull
May 15 02:14:55.066: INFO: At 2021-05-15 02:10:48 +0000 UTC - event for pod-cccb383e-37ae-496b-bc61-e84155c15470: {kubelet node1} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
May 15 02:14:55.066: INFO: At 2021-05-15 02:10:48 +0000 UTC - event for pod-e5db5676-3fa1-43b8-99da-988e664767e5: {kubelet node1} BackOff: Back-off pulling image "docker.io/library/busybox:1.29"
May 15 02:14:55.066: INFO: At 2021-05-15 02:10:48 +0000 UTC - event for pod-e5db5676-3fa1-43b8-99da-988e664767e5: {kubelet node1} Failed: Error: ImagePullBackOff
May 15 02:14:55.066: INFO: At 2021-05-15 02:10:49 +0000 UTC - event for pod-46816fb0-59fd-4b80-86f6-e5de68239a20: {kubelet node1} Failed: Error: ErrImagePull
May 15 02:14:55.066: INFO: At 2021-05-15 02:10:49 +0000 UTC - event for pod-46816fb0-59fd-4b80-86f6-e5de68239a20: {kubelet node1} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
May 15 02:14:55.066: INFO: At 2021-05-15 02:10:49 +0000 UTC - event for pod-cccb383e-37ae-496b-bc61-e84155c15470: {kubelet node1} BackOff: Back-off pulling image "docker.io/library/busybox:1.29"
May 15 02:14:55.066: INFO: At 2021-05-15 02:10:49 +0000 UTC - event for pod-cccb383e-37ae-496b-bc61-e84155c15470: {kubelet node1} Failed: Error: ImagePullBackOff
May 15 02:14:55.066: INFO: At 2021-05-15 02:10:50 +0000 UTC - event for pod-46816fb0-59fd-4b80-86f6-e5de68239a20: {kubelet node1} BackOff: Back-off pulling image "docker.io/library/busybox:1.29"
May 15 02:14:55.066: INFO: At 2021-05-15 02:10:50 +0000 UTC - event for pod-46816fb0-59fd-4b80-86f6-e5de68239a20: {kubelet node1} Failed: Error: ImagePullBackOff
May 15 02:14:55.066: INFO: At 2021-05-15 02:10:50 +0000 UTC - event for pod-99a28b03-2a0e-4f4e-93a6-4f8f1349c374: {kubelet node1} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
May 15 02:14:55.066: INFO: At 2021-05-15 02:10:50 +0000 UTC - event for pod-99a28b03-2a0e-4f4e-93a6-4f8f1349c374: {kubelet node1} Failed: Error: ErrImagePull
May 15 02:14:55.066: INFO: At 2021-05-15 02:10:51 +0000 UTC - event for pod-99a28b03-2a0e-4f4e-93a6-4f8f1349c374: {kubelet node1} Failed: Error: ImagePullBackOff
May 15 02:14:55.066: INFO: At 2021-05-15 02:10:51 +0000 UTC - event for pod-99a28b03-2a0e-4f4e-93a6-4f8f1349c374: {kubelet node1} BackOff: Back-off pulling image "docker.io/library/busybox:1.29"
May 15 02:14:55.066: INFO: At 2021-05-15 02:10:51 +0000 UTC - event for pod-efd8f062-dcf7-4e99-8049-16a688458438: {kubelet node1} BackOff: Back-off pulling image "docker.io/library/busybox:1.29"
May 15 02:14:55.066: INFO: At 2021-05-15 02:10:51 +0000 UTC - event for pod-efd8f062-dcf7-4e99-8049-16a688458438: {kubelet node1} Failed: Error: ErrImagePull
May 15 02:14:55.066: INFO: At 2021-05-15 02:10:51 +0000 UTC - event for pod-efd8f062-dcf7-4e99-8049-16a688458438: {kubelet node1} Failed: Error: ImagePullBackOff
May 15 02:14:55.066: INFO: At 2021-05-15 02:10:51 +0000 UTC - event for pod-efd8f062-dcf7-4e99-8049-16a688458438: {kubelet node1} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
May 15 02:14:55.066: INFO: At 2021-05-15 02:10:54 +0000 UTC - event for pod-659d7a78-330f-41c0-9363-c970094088ab: {kubelet node1} Failed: Error: ImagePullBackOff
May 15 02:14:55.066: INFO: At 2021-05-15 02:10:54 +0000 UTC - event for pod-659d7a78-330f-41c0-9363-c970094088ab: {kubelet node1} BackOff: Back-off pulling image "docker.io/library/busybox:1.29"
May 15 02:14:55.066: INFO: At 2021-05-15 02:10:54 +0000 UTC - event for pod-659d7a78-330f-41c0-9363-c970094088ab: {kubelet node1} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
May 15 02:14:55.066: INFO: At 2021-05-15 02:10:54 +0000 UTC - event for pod-659d7a78-330f-41c0-9363-c970094088ab: {kubelet node1} Failed: Error: ErrImagePull
May 15 02:14:55.066: INFO: At 2021-05-15 02:10:56 +0000 UTC - event for pod-5e0b2962-1beb-4f0e-b00d-3c3fb5ff4f67: {kubelet node1} Failed: Error: ErrImagePull
May 15 02:14:55.066: INFO: At 2021-05-15 02:10:56 +0000 UTC - event for pod-5e0b2962-1beb-4f0e-b00d-3c3fb5ff4f67: {kubelet node1} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
May 15 02:14:55.066: INFO: At 2021-05-15 02:10:57 +0000 UTC - event for pod-5e0b2962-1beb-4f0e-b00d-3c3fb5ff4f67: {kubelet node1} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
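Every Failed event above has the same root cause: node1 is pulling docker.io/library/busybox:1.29 anonymously, Docker Hub's pull rate limit rejects the pull, and the kubelet cycles each write-pod through ErrImagePull into ImagePullBackOff. When triaging a run like this it can be easier to tally the failure events per pod than to read them one by one. The following is a minimal client-go sketch of that; the namespace name and kubeconfig path are illustrative placeholders, not values taken from this log.

package main

import (
	"context"
	"fmt"
	"path/filepath"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

func main() {
	// Hypothetical kubeconfig location; the e2e framework reads its own config.
	cfg, err := clientcmd.BuildConfigFromFlags("", filepath.Join(homedir.HomeDir(), ".kube", "config"))
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// "persistent-local-volumes-test-0000" is a placeholder test namespace.
	evs, err := cs.CoreV1().Events("persistent-local-volumes-test-0000").
		List(context.TODO(), metav1.ListOptions{FieldSelector: "reason=Failed"})
	if err != nil {
		panic(err)
	}
	pullFailures := map[string]int{}
	for _, e := range evs.Items {
		pullFailures[e.InvolvedObject.Name]++ // one bucket per failing pod
	}
	for pod, n := range pullFailures {
		fmt.Printf("%s: %d Failed events\n", pod, n)
	}
}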
May 15 02:14:55.066: INFO: At 2021-05-15 02:10:57 +0000 UTC - event for pod-6994c576-03e5-4748-87d6-f7d52695b943: {kubelet node1} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
May 15 02:14:55.066: INFO: At 2021-05-15 02:10:57 +0000 UTC - event for pod-6994c576-03e5-4748-87d6-f7d52695b943: {kubelet node1} Failed: Error: ErrImagePull
May 15 02:14:55.066: INFO: At 2021-05-15 02:10:58 +0000 UTC - event for pod-6994c576-03e5-4748-87d6-f7d52695b943: {kubelet node1} BackOff: Back-off pulling image "docker.io/library/busybox:1.29"
May 15 02:14:55.066: INFO: At 2021-05-15 02:10:58 +0000 UTC - event for pod-6994c576-03e5-4748-87d6-f7d52695b943: {kubelet node1} Failed: Error: ImagePullBackOff
May 15 02:14:55.066: INFO: At 2021-05-15 02:10:59 +0000 UTC - event for pod-5e0b2962-1beb-4f0e-b00d-3c3fb5ff4f67: {kubelet node1} BackOff: Back-off pulling image "docker.io/library/busybox:1.29"
May 15 02:14:55.066: INFO: At 2021-05-15 02:10:59 +0000 UTC - event for pod-5e0b2962-1beb-4f0e-b00d-3c3fb5ff4f67: {kubelet node1} Failed: Error: ImagePullBackOff
May 15 02:14:55.066: INFO: At 2021-05-15 02:10:59 +0000 UTC - event for pod-5e0b2962-1beb-4f0e-b00d-3c3fb5ff4f67: {multus } AddedInterface: Add eth0 [10.244.3.12/24]
May 15 02:14:55.066: INFO: At 2021-05-15 02:10:59 +0000 UTC - event for pod-fb25a31c-ab40-4711-bcf9-eff4f0db5a45: {kubelet node1} Failed: Error: ErrImagePull
May 15 02:14:55.066: INFO: At 2021-05-15 02:10:59 +0000 UTC - event for pod-fb25a31c-ab40-4711-bcf9-eff4f0db5a45: {kubelet node1} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
May 15 02:14:55.066: INFO: At 2021-05-15 02:11:00 +0000 UTC - event for pod-8832974e-719a-4bac-990d-6917f5e29ff6: {kubelet node1} Failed: Error: ErrImagePull
May 15 02:14:55.066: INFO: At 2021-05-15 02:11:00 +0000 UTC - event for pod-8832974e-719a-4bac-990d-6917f5e29ff6: {kubelet node1} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
May 15 02:14:55.066: INFO: At 2021-05-15 02:11:00 +0000 UTC - event for pod-fb25a31c-ab40-4711-bcf9-eff4f0db5a45: {kubelet node1} Failed: Error: ImagePullBackOff
May 15 02:14:55.066: INFO: At 2021-05-15 02:11:00 +0000 UTC - event for pod-fb25a31c-ab40-4711-bcf9-eff4f0db5a45: {kubelet node1} BackOff: Back-off pulling image "docker.io/library/busybox:1.29"
May 15 02:14:55.066: INFO: At 2021-05-15 02:11:01 +0000 UTC - event for pod-8832974e-719a-4bac-990d-6917f5e29ff6: {kubelet node1} Failed: Error: ImagePullBackOff
May 15 02:14:55.066: INFO: At 2021-05-15 02:11:01 +0000 UTC - event for pod-8832974e-719a-4bac-990d-6917f5e29ff6: {kubelet node1} BackOff: Back-off pulling image "docker.io/library/busybox:1.29"
May 15 02:14:55.066: INFO: At 2021-05-15 02:11:01 +0000 UTC - event for pod-f75db98e-2a08-4812-91d2-70d06c53eff2: {kubelet node1} Failed: Error: ErrImagePull
May 15 02:14:55.066: INFO: At 2021-05-15 02:11:01 +0000 UTC - event for pod-f75db98e-2a08-4812-91d2-70d06c53eff2: {kubelet node1} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
May 15 02:14:55.066: INFO: At 2021-05-15 02:11:02 +0000 UTC - event for pod-f75db98e-2a08-4812-91d2-70d06c53eff2: {kubelet node1} BackOff: Back-off pulling image "docker.io/library/busybox:1.29"
May 15 02:14:55.066: INFO: At 2021-05-15 02:11:02 +0000 UTC - event for pod-f75db98e-2a08-4812-91d2-70d06c53eff2: {kubelet node1} Failed: Error: ImagePullBackOff
May 15 02:14:55.066: INFO: At 2021-05-15 02:11:03 +0000 UTC - event for pod-3fd03fb4-184e-412a-b852-6979ed7a93d2: {kubelet node1} Failed: Error: ErrImagePull
May 15 02:14:55.066: INFO: At 2021-05-15 02:11:03 +0000 UTC - event for pod-3fd03fb4-184e-412a-b852-6979ed7a93d2: {kubelet node1} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
May 15 02:14:55.066: INFO: At 2021-05-15 02:11:04 +0000 UTC - event for pod-3fd03fb4-184e-412a-b852-6979ed7a93d2: {kubelet node1} Failed: Error: ImagePullBackOff
May 15 02:14:55.066: INFO: At 2021-05-15 02:11:04 +0000 UTC - event for pod-3fd03fb4-184e-412a-b852-6979ed7a93d2: {kubelet node1} BackOff: Back-off pulling image "docker.io/library/busybox:1.29"
May 15 02:14:55.066: INFO: At 2021-05-15 02:11:06 +0000 UTC - event for pod-0780a016-d244-4bbb-8f47-2f297e59fd58: {kubelet node1} BackOff: Back-off pulling image "docker.io/library/busybox:1.29"
May 15 02:14:55.066: INFO: At 2021-05-15 02:11:06 +0000 UTC - event for pod-0780a016-d244-4bbb-8f47-2f297e59fd58: {kubelet node1} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
May 15 02:14:55.066: INFO: At 2021-05-15 02:11:06 +0000 UTC - event for pod-0780a016-d244-4bbb-8f47-2f297e59fd58: {kubelet node1} Failed: Error: ImagePullBackOff
May 15 02:14:55.066: INFO: At 2021-05-15 02:11:06 +0000 UTC - event for pod-0780a016-d244-4bbb-8f47-2f297e59fd58: {kubelet node1} Failed: Error: ErrImagePull
May 15 02:14:55.066: INFO: At 2021-05-15 02:11:10 +0000 UTC - event for pod-63421429-f7ce-4603-ae23-470172db27ac: {kubelet node1} Failed: Error: ImagePullBackOff
May 15 02:14:55.066: INFO: At 2021-05-15 02:11:10 +0000 UTC - event for pod-63421429-f7ce-4603-ae23-470172db27ac: {kubelet node1} BackOff: Back-off pulling image "docker.io/library/busybox:1.29"
May 15 02:14:55.066: INFO: At 2021-05-15 02:11:10 +0000 UTC - event for pod-63421429-f7ce-4603-ae23-470172db27ac: {kubelet node1} Failed: Error: ErrImagePull
May 15 02:14:55.066: INFO: At 2021-05-15 02:11:10 +0000 UTC - event for pod-63421429-f7ce-4603-ae23-470172db27ac: {kubelet node1} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
May 15 02:14:55.066: INFO: At 2021-05-15 02:11:11 +0000 UTC - event for pod-6281ba5b-4d09-4cb6-9b6a-c82141ad2e06: {kubelet node1} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
May 15 02:14:55.066: INFO: At 2021-05-15 02:11:11 +0000 UTC - event for pod-6281ba5b-4d09-4cb6-9b6a-c82141ad2e06: {kubelet node1} Failed: Error: ImagePullBackOff
May 15 02:14:55.066: INFO: At 2021-05-15 02:11:11 +0000 UTC - event for pod-6281ba5b-4d09-4cb6-9b6a-c82141ad2e06: {kubelet node1} Failed: Error: ErrImagePull
May 15 02:14:55.066: INFO: At 2021-05-15 02:11:11 +0000 UTC - event for pod-6281ba5b-4d09-4cb6-9b6a-c82141ad2e06: {kubelet node1} BackOff: Back-off pulling image "docker.io/library/busybox:1.29"
May 15 02:14:55.066: INFO: At 2021-05-15 02:11:14 +0000 UTC - event for pod-0a52f19d-bad1-4ee5-9a99-98f71ef763d7: {kubelet node1} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
May 15 02:14:55.066: INFO: At 2021-05-15 02:11:14 +0000 UTC - event for pod-0a52f19d-bad1-4ee5-9a99-98f71ef763d7: {kubelet node1} Failed: Error: ErrImagePull
May 15 02:14:55.066: INFO: At 2021-05-15 02:11:14 +0000 UTC - event for pod-0a52f19d-bad1-4ee5-9a99-98f71ef763d7: {kubelet node1} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
May 15 02:14:55.066: INFO: At 2021-05-15 02:11:15 +0000 UTC - event for pod-2f132e16-9b60-4044-8856-1c537d83e924: {kubelet node1} Failed: Error: ErrImagePull
May 15 02:14:55.066: INFO: At 2021-05-15 02:11:15 +0000 UTC - event for pod-2f132e16-9b60-4044-8856-1c537d83e924: {kubelet node1} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
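The SandboxChanged events mixed into the stream show the kubelet tearing down and re-creating pod sandboxes while the pulls keep failing, which is also why multus keeps adding fresh eth0 addresses for the same pods. Programmatically, this state is visible as a container stuck in a Waiting state whose Reason is ErrImagePull or ImagePullBackOff. A polling helper along those lines might look like the sketch below, reusing the clientset from the previous sketch; the 2s interval and 5m timeout are arbitrary choices, not framework values.

package e2edebug // hypothetical helper package

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitOutImagePull polls until the pod's containers stop waiting on an image
// pull, or the timeout expires. The reasons match the events in the log above.
func waitOutImagePull(cs kubernetes.Interface, ns, name string) error {
	return wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, st := range pod.Status.ContainerStatuses {
			if w := st.State.Waiting; w != nil &&
				(w.Reason == "ErrImagePull" || w.Reason == "ImagePullBackOff") {
				return false, nil // still rate-limited; keep polling
			}
		}
		// Mirrors the conditions dump below: a pod only turns Ready once
		// its write-pod container actually starts running.
		return pod.Status.Phase == corev1.PodRunning, nil
	})
}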
May 15 02:14:55.066: INFO: At 2021-05-15 02:11:15 +0000 UTC - event for pod-2f132e16-9b60-4044-8856-1c537d83e924: {kubelet node1} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
May 15 02:14:55.066: INFO: At 2021-05-15 02:11:15 +0000 UTC - event for pod-9741d55e-19d2-4907-9709-31aab1dc27a7: {kubelet node1} Failed: Error: ErrImagePull
May 15 02:14:55.066: INFO: At 2021-05-15 02:11:15 +0000 UTC - event for pod-9741d55e-19d2-4907-9709-31aab1dc27a7: {kubelet node1} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
May 15 02:14:55.066: INFO: At 2021-05-15 02:11:16 +0000 UTC - event for pod-0a52f19d-bad1-4ee5-9a99-98f71ef763d7: {kubelet node1} Failed: Error: ImagePullBackOff
May 15 02:14:55.067: INFO: At 2021-05-15 02:11:16 +0000 UTC - event for pod-0a52f19d-bad1-4ee5-9a99-98f71ef763d7: {kubelet node1} BackOff: Back-off pulling image "docker.io/library/busybox:1.29"
May 15 02:14:55.067: INFO: At 2021-05-15 02:11:16 +0000 UTC - event for pod-0a52f19d-bad1-4ee5-9a99-98f71ef763d7: {multus } AddedInterface: Add eth0 [10.244.3.13/24]
May 15 02:14:55.067: INFO: At 2021-05-15 02:11:16 +0000 UTC - event for pod-9741d55e-19d2-4907-9709-31aab1dc27a7: {kubelet node1} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
May 15 02:14:55.067: INFO: At 2021-05-15 02:11:17 +0000 UTC - event for pod-adf20463-6227-4b36-9ec9-be2637a72af1: {kubelet node1} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
May 15 02:14:55.067: INFO: At 2021-05-15 02:11:17 +0000 UTC - event for pod-adf20463-6227-4b36-9ec9-be2637a72af1: {kubelet node1} Failed: Error: ErrImagePull
May 15 02:14:55.067: INFO: At 2021-05-15 02:11:18 +0000 UTC - event for pod-adf20463-6227-4b36-9ec9-be2637a72af1: {kubelet node1} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
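The usual ways out of this failure mode are to mirror the image into a local registry (this cluster already runs one, per the localhost:30500 images and the docker-registry pod listed further down) or to authenticate the pulls so the anonymous per-IP limit no longer applies. A hedged sketch of the second option follows, wiring a Docker Hub credential into a namespace's default service account with client-go; the secret name and credential value are placeholders, not anything this suite actually does.

package e2edebug

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// addHubCredentials creates a dockerconfigjson secret and attaches it to the
// namespace's default service account so later busybox pulls authenticate.
func addHubCredentials(cs kubernetes.Interface, ns, authB64 string) error {
	secret := &corev1.Secret{
		ObjectMeta: metav1.ObjectMeta{Name: "hub-creds", Namespace: ns}, // placeholder name
		Type:       corev1.SecretTypeDockerConfigJson,
		StringData: map[string]string{
			// authB64 is base64("user:token") for a Docker Hub account.
			corev1.DockerConfigJsonKey: `{"auths":{"https://index.docker.io/v1/":{"auth":"` + authB64 + `"}}}`,
		},
	}
	if _, err := cs.CoreV1().Secrets(ns).Create(context.TODO(), secret, metav1.CreateOptions{}); err != nil {
		return err
	}
	sa, err := cs.CoreV1().ServiceAccounts(ns).Get(context.TODO(), "default", metav1.GetOptions{})
	if err != nil {
		return err
	}
	sa.ImagePullSecrets = append(sa.ImagePullSecrets, corev1.LocalObjectReference{Name: "hub-creds"})
	_, err = cs.CoreV1().ServiceAccounts(ns).Update(context.TODO(), sa, metav1.UpdateOptions{})
	return err
}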
May 15 02:14:55.067: INFO: At 2021-05-15 02:11:20 +0000 UTC - event for pod-2f132e16-9b60-4044-8856-1c537d83e924: {multus } AddedInterface: Add eth0 [10.244.3.14/24]
May 15 02:14:55.067: INFO: At 2021-05-15 02:11:20 +0000 UTC - event for pod-2f132e16-9b60-4044-8856-1c537d83e924: {kubelet node1} BackOff: Back-off pulling image "docker.io/library/busybox:1.29"
May 15 02:14:55.067: INFO: At 2021-05-15 02:11:20 +0000 UTC - event for pod-2f132e16-9b60-4044-8856-1c537d83e924: {kubelet node1} Failed: Error: ImagePullBackOff
May 15 02:14:55.067: INFO: At 2021-05-15 02:11:20 +0000 UTC - event for pod-9741d55e-19d2-4907-9709-31aab1dc27a7: {kubelet node1} BackOff: Back-off pulling image "docker.io/library/busybox:1.29"
May 15 02:14:55.067: INFO: At 2021-05-15 02:11:20 +0000 UTC - event for pod-9741d55e-19d2-4907-9709-31aab1dc27a7: {kubelet node1} Failed: Error: ImagePullBackOff
May 15 02:14:55.067: INFO: At 2021-05-15 02:11:21 +0000 UTC - event for pod-adf20463-6227-4b36-9ec9-be2637a72af1: {multus } AddedInterface: Add eth0 [10.244.3.16/24]
May 15 02:14:55.067: INFO: At 2021-05-15 02:11:21 +0000 UTC - event for pod-adf20463-6227-4b36-9ec9-be2637a72af1: {kubelet node1} Failed: Error: ImagePullBackOff
May 15 02:14:55.067: INFO: At 2021-05-15 02:11:21 +0000 UTC - event for pod-adf20463-6227-4b36-9ec9-be2637a72af1: {kubelet node1} BackOff: Back-off pulling image "docker.io/library/busybox:1.29"
May 15 02:14:55.067: INFO: At 2021-05-15 02:11:24 +0000 UTC - event for pod-9741d55e-19d2-4907-9709-31aab1dc27a7: {multus } AddedInterface: Add eth0 [10.244.3.17/24]
May 15 02:14:55.067: INFO: At 2021-05-15 02:11:26 +0000 UTC - event for pod-3b141899-67ee-47dd-a614-717a00fdc43e: {kubelet node1} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
May 15 02:14:55.067: INFO: At 2021-05-15 02:13:13 +0000 UTC - event for pod-659d7a78-330f-41c0-9363-c970094088ab: {kubelet node1} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = toomanyrequests: You have reached your pull rate limit.
You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit May 15 02:14:55.075: INFO: POD NODE PHASE GRACE CONDITIONS May 15 02:14:55.075: INFO: pod-0269144a-4370-4704-a1c2-c0f407d98f0a node1 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-15 02:09:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-15 02:09:54 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-15 02:09:54 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-15 02:09:54 +0000 UTC }] May 15 02:14:55.075: INFO: pod-06eab932-084d-490d-a3d9-59feee07adaf node1 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-15 02:09:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-15 02:09:54 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-15 02:09:54 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-15 02:09:54 +0000 UTC }] May 15 02:14:55.075: INFO: pod-0780a016-d244-4bbb-8f47-2f297e59fd58 node1 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-15 02:09:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-15 02:09:54 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-15 02:09:54 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-15 02:09:54 +0000 UTC }] May 15 02:14:55.075: INFO: pod-088478ea-3873-48e9-8e63-839f8c352771 node1 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-15 02:09:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-15 02:09:54 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-15 02:09:54 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-15 02:09:54 +0000 UTC }] May 15 02:14:55.075: INFO: pod-0a52f19d-bad1-4ee5-9a99-98f71ef763d7 node1 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-15 02:09:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-15 02:09:54 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-15 02:09:54 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-15 02:09:54 +0000 UTC }] May 15 02:14:55.075: INFO: pod-1380abd4-33aa-487c-aff8-089358ae5a39 node1 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-15 02:09:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-15 02:09:54 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-15 02:09:54 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-15 02:09:54 +0000 UTC }] May 15 02:14:55.075: INFO: pod-182bb37f-fbf1-482a-8aa8-03563d7aefd0 node1 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-15 02:09:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-15 
02:09:54 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-15 02:09:54 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-15 02:09:54 +0000 UTC }] May 15 02:14:55.075: INFO: pod-26feecc3-b990-46b9-9186-2f6cfea28725 node1 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-15 02:09:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-15 02:09:54 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-15 02:09:54 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-15 02:09:54 +0000 UTC }] May 15 02:14:55.075: INFO: pod-2f132e16-9b60-4044-8856-1c537d83e924 node1 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-15 02:09:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-15 02:09:54 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-15 02:09:54 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-15 02:09:54 +0000 UTC }] May 15 02:14:55.075: INFO: pod-3b141899-67ee-47dd-a614-717a00fdc43e node1 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-15 02:09:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-15 02:09:54 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-15 02:09:54 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-15 02:09:54 +0000 UTC }] May 15 02:14:55.075: INFO: pod-3fd03fb4-184e-412a-b852-6979ed7a93d2 node1 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-15 02:09:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-15 02:09:54 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-15 02:09:54 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-15 02:09:54 +0000 UTC }] May 15 02:14:55.075: INFO: pod-46816fb0-59fd-4b80-86f6-e5de68239a20 node1 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-15 02:09:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-15 02:09:54 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-15 02:09:54 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-15 02:09:54 +0000 UTC }] May 15 02:14:55.075: INFO: pod-57a1b9d5-6d9d-47e8-9b20-995213945154 node1 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-15 02:09:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-15 02:09:54 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-15 02:09:54 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-15 02:09:54 +0000 UTC }] May 15 02:14:55.075: INFO: pod-5a5ebf17-19c6-430e-a7df-2dcbd0f4dfe8 
node1 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-15 02:09:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-15 02:09:54 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-15 02:09:54 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-15 02:09:54 +0000 UTC }] May 15 02:14:55.075: INFO: pod-5e0b2962-1beb-4f0e-b00d-3c3fb5ff4f67 node1 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-15 02:09:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-15 02:09:54 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-15 02:09:54 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-15 02:09:54 +0000 UTC }] May 15 02:14:55.075: INFO: pod-6281ba5b-4d09-4cb6-9b6a-c82141ad2e06 node1 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-15 02:09:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-15 02:09:54 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-15 02:09:54 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-15 02:09:54 +0000 UTC }] May 15 02:14:55.075: INFO: pod-63421429-f7ce-4603-ae23-470172db27ac node1 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-15 02:09:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-15 02:09:54 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-15 02:09:54 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-15 02:09:54 +0000 UTC }] May 15 02:14:55.075: INFO: pod-6345af86-d298-43b9-a648-e4bd833ab81c node1 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-15 02:09:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-15 02:09:54 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-15 02:09:54 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-15 02:09:54 +0000 UTC }] May 15 02:14:55.075: INFO: pod-64d96337-bd3e-4aca-8392-d465ee5a1ed7 node1 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-15 02:09:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-15 02:09:55 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-15 02:09:55 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-15 02:09:55 +0000 UTC }] May 15 02:14:55.076: INFO: pod-659d7a78-330f-41c0-9363-c970094088ab node1 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-15 02:09:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-15 02:09:54 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-15 02:09:54 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} 
{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-15 02:09:54 +0000 UTC }] May 15 02:14:55.076: INFO: pod-67841c2d-5cbe-4e06-9ae9-e4099e36c13d node1 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-15 02:09:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-15 02:09:54 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-15 02:09:54 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-15 02:09:54 +0000 UTC }] May 15 02:14:55.076: INFO: pod-68664722-e47b-48b2-b06e-91155e4f3594 node1 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-15 02:09:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-15 02:09:54 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-15 02:09:54 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-15 02:09:54 +0000 UTC }] May 15 02:14:55.076: INFO: pod-6994c576-03e5-4748-87d6-f7d52695b943 node1 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-15 02:09:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-15 02:09:54 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-15 02:09:54 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-15 02:09:54 +0000 UTC }] May 15 02:14:55.076: INFO: pod-6a079afc-c913-4898-aef2-69394c05288d node1 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-15 02:09:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-15 02:09:54 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-15 02:09:54 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-15 02:09:54 +0000 UTC }] May 15 02:14:55.076: INFO: pod-6f552dde-8b50-4c1b-9f75-29301f222304 node1 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-15 02:09:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-15 02:09:54 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-15 02:09:54 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-15 02:09:54 +0000 UTC }] May 15 02:14:55.076: INFO: pod-7b61d18a-43e9-478f-9427-9173e0e8b16b node1 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-15 02:09:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-15 02:09:54 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-15 02:09:54 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-15 02:09:54 +0000 UTC }] May 15 02:14:55.076: INFO: pod-7b8d60b1-8f1e-499c-b17f-816abbe56315 node1 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-15 02:09:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-15 02:09:54 +0000 UTC ContainersNotReady containers with unready status: 
[write-pod]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-15 02:09:54 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-15 02:09:54 +0000 UTC }] May 15 02:14:55.076: INFO: pod-7c3e1e12-24c7-4dbb-968a-c5a5ea611b09 node1 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-15 02:09:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-15 02:09:54 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-15 02:09:54 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-15 02:09:54 +0000 UTC }] May 15 02:14:55.076: INFO: pod-7ff0a3c2-aa1f-43ee-95ba-bfe31b61df87 node1 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-15 02:09:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-15 02:09:54 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-15 02:09:54 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-15 02:09:54 +0000 UTC }] May 15 02:14:55.076: INFO: pod-85dcad4d-875a-461c-be23-212028dd9cc4 node1 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-15 02:09:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-15 02:09:54 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-15 02:09:54 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-15 02:09:54 +0000 UTC }] May 15 02:14:55.076: INFO: pod-8832974e-719a-4bac-990d-6917f5e29ff6 node1 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-15 02:09:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-15 02:09:54 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-15 02:09:54 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-15 02:09:54 +0000 UTC }] May 15 02:14:55.076: INFO: pod-8e6b4f06-4653-426d-8861-79606a1bc5ac node1 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-15 02:09:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-15 02:09:54 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-15 02:09:54 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-15 02:09:54 +0000 UTC }] May 15 02:14:55.076: INFO: pod-944e7ffd-8685-41e6-8af8-eff3f2685db3 node1 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-15 02:09:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-15 02:09:54 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-15 02:09:54 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-15 02:09:54 +0000 UTC }] May 15 02:14:55.076: INFO: pod-9741d55e-19d2-4907-9709-31aab1dc27a7 node1 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 
2021-05-15 02:09:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-15 02:09:54 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-15 02:09:54 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-15 02:09:54 +0000 UTC }] May 15 02:14:55.076: INFO: pod-99a28b03-2a0e-4f4e-93a6-4f8f1349c374 node1 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-15 02:09:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-15 02:09:54 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-15 02:09:54 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-15 02:09:54 +0000 UTC }] May 15 02:14:55.076: INFO: pod-9d1c8f1b-b2c2-4604-9cae-7ccae3143396 node1 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-15 02:09:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-15 02:09:54 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-15 02:09:54 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-15 02:09:54 +0000 UTC }] May 15 02:14:55.076: INFO: pod-9d74f724-4c10-4c23-9a96-6aed362ad077 node1 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-15 02:09:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-15 02:09:54 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-15 02:09:54 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-15 02:09:54 +0000 UTC }] May 15 02:14:55.076: INFO: pod-a53b4edb-f546-4ebf-b7aa-b0e273692384 node1 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-15 02:09:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-15 02:09:54 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-15 02:09:54 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-15 02:09:54 +0000 UTC }] May 15 02:14:55.076: INFO: pod-aa71f4e5-b094-4438-a0d0-11bf0093a83d node1 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-15 02:09:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-15 02:09:54 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-15 02:09:54 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-15 02:09:54 +0000 UTC }] May 15 02:14:55.076: INFO: pod-adf20463-6227-4b36-9ec9-be2637a72af1 node1 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-15 02:09:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-15 02:09:54 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-15 02:09:54 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-15 
02:09:54 +0000 UTC }] May 15 02:14:55.076: INFO: pod-baf0825d-cd75-468c-b208-e00c655b3984 node1 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-15 02:09:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-15 02:09:54 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-15 02:09:54 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-15 02:09:54 +0000 UTC }] May 15 02:14:55.076: INFO: pod-cccb383e-37ae-496b-bc61-e84155c15470 node1 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-15 02:09:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-15 02:09:54 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-15 02:09:54 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-15 02:09:54 +0000 UTC }] May 15 02:14:55.076: INFO: pod-d414a6bb-0f77-423b-bbe3-b10a586ff867 node1 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-15 02:09:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-15 02:09:54 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-15 02:09:54 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-15 02:09:54 +0000 UTC }] May 15 02:14:55.076: INFO: pod-d9940773-2114-4c06-a526-c795dec59ce0 node1 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-15 02:09:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-15 02:09:54 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-15 02:09:54 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-15 02:09:54 +0000 UTC }] May 15 02:14:55.076: INFO: pod-e5db5676-3fa1-43b8-99da-988e664767e5 node1 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-15 02:09:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-15 02:09:54 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-15 02:09:54 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-15 02:09:54 +0000 UTC }] May 15 02:14:55.076: INFO: pod-ec39befc-6a99-45df-8eb8-d594af0627d8 node1 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-15 02:09:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-15 02:09:54 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-15 02:09:54 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-15 02:09:54 +0000 UTC }] May 15 02:14:55.076: INFO: pod-efd8f062-dcf7-4e99-8049-16a688458438 node1 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-15 02:09:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-15 02:09:54 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 
2021-05-15 02:09:54 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-15 02:09:54 +0000 UTC }] May 15 02:14:55.076: INFO: pod-f5252834-4a86-4465-a5e9-5e8afac874af node1 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-15 02:09:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-15 02:09:55 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-15 02:09:55 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-15 02:09:55 +0000 UTC }] May 15 02:14:55.076: INFO: pod-f75db98e-2a08-4812-91d2-70d06c53eff2 node1 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-15 02:09:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-15 02:09:54 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-15 02:09:54 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-15 02:09:54 +0000 UTC }] May 15 02:14:55.076: INFO: pod-fb25a31c-ab40-4711-bcf9-eff4f0db5a45 node1 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-15 02:09:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-15 02:09:54 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-15 02:09:54 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-15 02:09:54 +0000 UTC }] May 15 02:14:55.076: INFO: May 15 02:14:55.080: INFO: Logging node info for node master1 May 15 02:14:55.082: INFO: Node Info: &Node{ObjectMeta:{master1 /api/v1/nodes/master1 8e20012e-a811-456d-9add-2ea316e23700 169520 0 2021-05-14 19:56:35 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master1 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"a6:a3:7b:a0:c9:75"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.202 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/master.version:v0.7.0 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2021-05-14 19:56:34 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {kubeadm Update v1 2021-05-14 19:56:37 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/master":{}}}}} {flanneld Update v1 2021-05-14 19:59:03 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-05-14 19:59:09 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}} {nfd-master Update v1 2021-05-14 20:06:00 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/master.version":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234763776 0} {} 196518324Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324599808 0} {} 195629492Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-14 20:02:00 +0000 
UTC,LastTransitionTime:2021-05-14 20:02:00 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-15 02:14:54 +0000 UTC,LastTransitionTime:2021-05-14 19:56:32 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-15 02:14:54 +0000 UTC,LastTransitionTime:2021-05-14 19:56:32 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-15 02:14:54 +0000 UTC,LastTransitionTime:2021-05-14 19:56:32 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-15 02:14:54 +0000 UTC,LastTransitionTime:2021-05-14 19:59:09 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.202,},NodeAddress{Type:Hostname,Address:master1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:a88b162033bc4931ba0342c7f78a28b9,SystemUUID:00ACFB60-0631-E711-906E-0017A4403562,BootID:ba5ed4e5-a8ef-4986-946f-e7e2d91395d2,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:726663003,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f kubernetesui/dashboard-amd64:v2.1.0],SizeBytes:225733746,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[gcr.io/k8s-staging-nfd/node-feature-discovery@sha256:5d116c2c340be665a2c8adc9aca7f91396bd5cbde4add4fdc8dab95d8db43425 gcr.io/k8s-staging-nfd/node-feature-discovery:v0.7.0],SizeBytes:108309584,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b 
quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:73ca82b4ce829766d4f1f10947c3a338888f876fbed0540dc849c89ff256e90c k8s.gcr.io/coredns:1.7.0],SizeBytes:45227747,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[registry@sha256:1cd9409a311350c3072fe510b52046f104416376c126a479cef9a4dfe692cf57 registry:2.7.0],SizeBytes:24191168,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[localhost:30500/tas-controller@sha256:e3157cfba16d361ffec06306dd0154c7dca1931cbc4569e3c5822e30e311948b tas-controller:latest localhost:30500/tas-controller:0.1],SizeBytes:22922439,},ContainerImage{Names:[nginx@sha256:a97eb9ecc708c8aa715ccfb5e9338f5456e4b65575daf304f108301f3b497314 nginx:1.19.2-alpine],SizeBytes:22052669,},ContainerImage{Names:[localhost:30500/tas-extender@sha256:a43c7fdd150533238a300ad84ac906e551111f9b57273afcb8781ee675fd23b3 tas-extender:latest localhost:30500/tas-extender:0.1],SizeBytes:21320903,},ContainerImage{Names:[@ :],SizeBytes:5577654,},ContainerImage{Names:[alpine@sha256:c0e9560cda118f9ec63ddefb4a173a2b2a0347082d7dff7dc14272e7841a5b5a alpine:3.12.1],SizeBytes:5573013,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 15 02:14:55.083: INFO: Logging kubelet events for node master1 May 15 02:14:55.086: INFO: Logging pods the kubelet thinks is on node master1 May 15 02:14:55.101: INFO: kube-apiserver-master1 started at 2021-05-14 19:57:39 +0000 UTC (0+1 container statuses recorded) May 15 02:14:55.101: INFO: Container kube-apiserver ready: true, restart count 0 May 15 02:14:55.101: INFO: kubernetes-metrics-scraper-678c97765c-fswrn started at 2021-05-15 00:18:49 +0000 UTC (0+1 container statuses recorded) May 15 02:14:55.101: INFO: Container kubernetes-metrics-scraper ready: true, restart count 0 May 15 02:14:55.101: INFO: kube-proxy-v2c76 started at 2021-05-14 19:58:24 +0000 UTC (0+1 container statuses recorded) May 15 02:14:55.101: INFO: Container kube-proxy ready: true, restart count 2 May 15 02:14:55.101: INFO: kube-multus-ds-amd64-m54v2 started at 2021-05-14 19:59:07 +0000 UTC (0+1 container statuses recorded) May 15 02:14:55.101: INFO: Container kube-multus ready: true, restart count 1 May 15 02:14:55.101: INFO: docker-registry-docker-registry-56cbc7bc58-bjc5h started at 2021-05-14 20:02:43 +0000 UTC (0+2 container statuses recorded) May 15 02:14:55.101: INFO: Container docker-registry ready: true, restart count 0 May 15 02:14:55.101: INFO: Container nginx ready: true, restart count 0 May 15 02:14:55.101: 
INFO: node-feature-discovery-controller-5bf5c49849-27v77 started at 2021-05-14 20:05:52 +0000 UTC (0+1 container statuses recorded) May 15 02:14:55.101: INFO: Container nfd-controller ready: true, restart count 0 May 15 02:14:55.101: INFO: kube-controller-manager-master1 started at 2021-05-14 20:01:22 +0000 UTC (0+1 container statuses recorded) May 15 02:14:55.101: INFO: Container kube-controller-manager ready: true, restart count 2 May 15 02:14:55.101: INFO: kube-flannel-cx7s6 started at 2021-05-14 19:58:58 +0000 UTC (1+1 container statuses recorded) May 15 02:14:55.101: INFO: Init container install-cni ready: true, restart count 0 May 15 02:14:55.101: INFO: Container kube-flannel ready: true, restart count 1 May 15 02:14:55.101: INFO: node-exporter-nvrxr started at 2021-05-14 20:09:56 +0000 UTC (0+2 container statuses recorded) May 15 02:14:55.101: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 15 02:14:55.101: INFO: Container node-exporter ready: true, restart count 0 May 15 02:14:55.101: INFO: kube-scheduler-master1 started at 2021-05-14 19:57:39 +0000 UTC (0+1 container statuses recorded) May 15 02:14:55.101: INFO: Container kube-scheduler ready: true, restart count 0 W0515 02:14:55.112805 22 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. May 15 02:14:55.145: INFO: Latency metrics for node master1 May 15 02:14:55.145: INFO: Logging node info for node master2 May 15 02:14:55.147: INFO: Node Info: &Node{ObjectMeta:{master2 /api/v1/nodes/master2 e8f1881f-5ded-4c6c-b7e6-eb354b7962e2 169507 0 2021-05-14 19:57:04 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master2 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"0a:97:9a:eb:9d:a8"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.203 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2021-05-14 19:57:04 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {kubeadm Update v1 2021-05-14 19:57:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/master":{}}}}} {flanneld Update v1 2021-05-14 19:59:04 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-05-14 19:59:09 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234763776 0} {} 196518324Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324599808 0} {} 195629492Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-14 20:02:07 +0000 UTC,LastTransitionTime:2021-05-14 20:02:07 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this 
node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-15 02:14:52 +0000 UTC,LastTransitionTime:2021-05-14 19:57:04 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-15 02:14:52 +0000 UTC,LastTransitionTime:2021-05-14 19:57:04 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-15 02:14:52 +0000 UTC,LastTransitionTime:2021-05-14 19:57:04 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-15 02:14:52 +0000 UTC,LastTransitionTime:2021-05-14 19:59:09 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.203,},NodeAddress{Type:Hostname,Address:master2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:14c4cdd0613041bb923c5f9b84e0bcde,SystemUUID:00A0DE53-E51D-E711-906E-0017A4403562,BootID:9bdca68c-a5fc-48f7-b392-63d2c04d224d,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:726663003,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f kubernetesui/dashboard-amd64:v2.1.0],SizeBytes:225733746,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:73ca82b4ce829766d4f1f10947c3a338888f876fbed0540dc849c89ff256e90c 
k8s.gcr.io/coredns:1.7.0],SizeBytes:45227747,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:9d07c391aeb1a9d02eb4343c113ed01825227c70c32b3cae861711f90191b0fd quay.io/coreos/kube-rbac-proxy:v0.4.1],SizeBytes:41317870,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[quay.io/coreos/prometheus-operator@sha256:a54e806fb27d2fb0251da4f3b2a3bb5320759af63a54a755788304775f2384a7 quay.io/coreos/prometheus-operator:v0.40.0],SizeBytes:38238457,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 15 02:14:55.147: INFO: Logging kubelet events for node master2 May 15 02:14:55.149: INFO: Logging pods the kubelet thinks is on node master2 May 15 02:14:55.163: INFO: kube-proxy-qcgpm started at 2021-05-14 19:58:24 +0000 UTC (0+1 container statuses recorded) May 15 02:14:55.163: INFO: Container kube-proxy ready: true, restart count 2 May 15 02:14:55.163: INFO: kube-multus-ds-amd64-bt5kr started at 2021-05-14 19:59:07 +0000 UTC (0+1 container statuses recorded) May 15 02:14:55.163: INFO: Container kube-multus ready: true, restart count 1 May 15 02:14:55.163: INFO: coredns-7677f9bb54-96w24 started at 2021-05-15 01:12:26 +0000 UTC (0+1 container statuses recorded) May 15 02:14:55.163: INFO: Container coredns ready: true, restart count 0 May 15 02:14:55.163: INFO: kube-apiserver-master2 started at 2021-05-14 20:05:10 +0000 UTC (0+1 container statuses recorded) May 15 02:14:55.163: INFO: Container kube-apiserver ready: true, restart count 0 May 15 02:14:55.163: INFO: kube-controller-manager-master2 started at 2021-05-14 20:01:22 +0000 UTC (0+1 container statuses recorded) May 15 02:14:55.163: INFO: Container kube-controller-manager ready: true, restart count 2 May 15 02:14:55.163: INFO: kube-scheduler-master2 started at 2021-05-14 19:57:39 +0000 UTC (0+1 container statuses recorded) May 15 02:14:55.163: INFO: Container kube-scheduler ready: true, restart count 2 May 15 02:14:55.163: INFO: kube-flannel-fc4sf started at 2021-05-14 19:58:58 +0000 UTC (1+1 container statuses recorded) May 15 02:14:55.163: INFO: Init container install-cni ready: true, restart count 0 May 15 02:14:55.163: INFO: Container kube-flannel ready: true, restart count 1 May 15 02:14:55.163: INFO: dns-autoscaler-5b7b5c9b6f-fgzqp started at 2021-05-14 19:59:30 +0000 UTC (0+1 container statuses recorded) May 15 02:14:55.163: INFO: Container autoscaler ready: true, restart count 2 May 15 02:14:55.163: INFO: node-exporter-gjrtc started at 2021-05-14 20:09:56 +0000 UTC (0+2 container statuses recorded) May 15 02:14:55.163: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 15 02:14:55.163: INFO: Container node-exporter ready: true, restart 
count 0 May 15 02:14:55.163: INFO: prometheus-operator-5bb8cb9d8f-fqb87 started at 2021-05-15 00:18:49 +0000 UTC (0+2 container statuses recorded) May 15 02:14:55.163: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 15 02:14:55.163: INFO: Container prometheus-operator ready: true, restart count 0 W0515 02:14:55.176655 22 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. May 15 02:14:55.201: INFO: Latency metrics for node master2 May 15 02:14:55.201: INFO: Logging node info for node master3 May 15 02:14:55.203: INFO: Node Info: &Node{ObjectMeta:{master3 /api/v1/nodes/master3 29fd0a5d-1350-4e28-a4cb-b26dd82cd397 169499 0 2021-05-14 19:57:14 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master3 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"ae:27:37:b7:ad:a5"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.204 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2021-05-14 19:57:14 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {kubeadm Update v1 2021-05-14 19:57:15 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/master":{}}}}} {flanneld Update v1 2021-05-14 19:59:03 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-05-14 19:59:09 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}},"f:taints":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234767872 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324603904 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-14 20:01:03 +0000 UTC,LastTransitionTime:2021-05-14 20:01:03 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-15 02:14:50 +0000 UTC,LastTransitionTime:2021-05-14 19:57:14 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-15 02:14:50 +0000 UTC,LastTransitionTime:2021-05-14 19:57:14 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-15 02:14:50 +0000 UTC,LastTransitionTime:2021-05-14 19:57:14 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-15 02:14:50 +0000 UTC,LastTransitionTime:2021-05-14 20:00:41 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.204,},NodeAddress{Type:Hostname,Address:master3,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:f3fe601830d34e59967ed389af552f25,SystemUUID:008B1444-141E-E711-906E-0017A4403562,BootID:56dd60e2-98fe-4d87-81d9-95db820e7426,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:726663003,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 
nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f kubernetesui/dashboard-amd64:v2.1.0],SizeBytes:225733746,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:73ca82b4ce829766d4f1f10947c3a338888f876fbed0540dc849c89ff256e90c k8s.gcr.io/coredns:1.7.0],SizeBytes:45227747,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 15 02:14:55.203: INFO: Logging kubelet events for node master3 May 15 02:14:55.205: INFO: Logging pods the kubelet thinks is on node master3 May 15 02:14:55.220: INFO: coredns-7677f9bb54-rpj8c started at 2021-05-15 01:12:26 +0000 UTC (0+1 container statuses recorded) May 15 02:14:55.221: INFO: Container coredns ready: true, restart count 0 May 15 02:14:55.221: INFO: node-exporter-4cgbq started at 2021-05-14 20:09:56 +0000 UTC (0+2 container statuses recorded) May 15 02:14:55.221: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 15 02:14:55.221: INFO: Container node-exporter ready: true, restart count 0 May 15 02:14:55.221: INFO: kube-controller-manager-master3 started at 2021-05-14 20:00:41 +0000 UTC (0+1 container statuses recorded) May 15 02:14:55.221: INFO: Container kube-controller-manager 
ready: true, restart count 3 May 15 02:14:55.221: INFO: kube-scheduler-master3 started at 2021-05-14 19:57:39 +0000 UTC (0+1 container statuses recorded) May 15 02:14:55.221: INFO: Container kube-scheduler ready: true, restart count 3 May 15 02:14:55.221: INFO: kube-apiserver-master3 started at 2021-05-14 19:57:39 +0000 UTC (0+1 container statuses recorded) May 15 02:14:55.221: INFO: Container kube-apiserver ready: true, restart count 0 May 15 02:14:55.221: INFO: kube-proxy-2crs2 started at 2021-05-14 19:58:24 +0000 UTC (0+1 container statuses recorded) May 15 02:14:55.221: INFO: Container kube-proxy ready: true, restart count 1 May 15 02:14:55.221: INFO: kube-flannel-cl8jf started at 2021-05-14 19:58:58 +0000 UTC (1+1 container statuses recorded) May 15 02:14:55.221: INFO: Init container install-cni ready: true, restart count 0 May 15 02:14:55.221: INFO: Container kube-flannel ready: true, restart count 2 May 15 02:14:55.221: INFO: kube-multus-ds-amd64-hp6bp started at 2021-05-14 19:59:07 +0000 UTC (0+1 container statuses recorded) May 15 02:14:55.221: INFO: Container kube-multus ready: true, restart count 1 W0515 02:14:55.233792 22 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. May 15 02:14:55.268: INFO: Latency metrics for node master3 May 15 02:14:55.268: INFO: Logging node info for node node1 May 15 02:14:55.271: INFO: Node Info: &Node{ObjectMeta:{node1 /api/v1/nodes/node1 5e4c0fba-b5fa-4177-b834-f3e04c846ff3 169481 0 2021-05-14 19:58:21 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.25.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 
feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node1 kubernetes.io/os:linux] map[csi.volume.kubernetes.io/nodeid:{"csi-mock-csi-mock-volumes-1602":"csi-mock-csi-mock-volumes-1602","csi-mock-csi-mock-volumes-253":"csi-mock-csi-mock-volumes-253","csi-mock-csi-mock-volumes-2591":"csi-mock-csi-mock-volumes-2591","csi-mock-csi-mock-volumes-2993":"csi-mock-csi-mock-volumes-2993","csi-mock-csi-mock-volumes-6288":"csi-mock-csi-mock-volumes-6288","csi-mock-csi-mock-volumes-7734":"csi-mock-csi-mock-volumes-7734","csi-mock-csi-mock-volumes-895":"csi-mock-csi-mock-volumes-895","csi-mock-csi-mock-volumes-9232":"csi-mock-csi-mock-volumes-9232","csi-mock-csi-mock-volumes-9474":"csi-mock-csi-mock-volumes-9474","csi-mock-csi-mock-volumes-9728":"csi-mock-csi-mock-volumes-9728","csi-mock-csi-mock-volumes-9809":"csi-mock-csi-mock-volumes-9809"} flannel.alpha.coreos.com/backend-data:{"VtepMAC":"ba:ee:c6:a6:52:03"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.207 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-hardware_multithreading,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.7.0 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-05-14 19:58:22 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2021-05-14 19:59:03 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2021-05-14 20:06:02 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2021-05-14 20:08:24 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {e2e.test Update v1 2021-05-15 01:37:00 +0000 UTC FieldsV1 {"f:status":{"f:capacity":{"f:example.com/fakePTSRes":{}}}}} {kube-controller-manager Update v1 2021-05-15 01:52:47 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.3.0/24\"":{}}}}} {kubelet Update v1 2021-05-15 01:53:47 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:csi.volume.kubernetes.io/nodeid":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cmk.intel.com/exclusive-cores":{},"f:cpu":{},"f:ephemeral-storage":{},"f:example.com/fakePTSRes":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:intel.com/intel_sriov_netdevice":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:intel.com/intel_sriov_netdevice":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.3.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},example.com/fakePTSRes: {{10 0} {} 10 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269633024 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},example.com/fakePTSRes: {{10 0} {} 10 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884632576 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-14 20:00:44 +0000 UTC,LastTransitionTime:2021-05-14 20:00:44 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-15 02:14:46 +0000 UTC,LastTransitionTime:2021-05-14 19:58:21 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-15 02:14:46 +0000 UTC,LastTransitionTime:2021-05-14 19:58:21 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk 
pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-15 02:14:46 +0000 UTC,LastTransitionTime:2021-05-14 19:58:21 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-15 02:14:46 +0000 UTC,LastTransitionTime:2021-05-14 20:00:41 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.207,},NodeAddress{Type:Hostname,Address:node1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:4b96d01fdbcb4fadb4a59fca2e1bf368,SystemUUID:00CDA902-D022-E711-906E-0017A4403562,BootID:93c238b3-1895-423c-a1aa-193fbcf8b55f,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[localhost:30500/barometer-collectd@sha256:869f7b3516c269b43448f1227c57d536e8a4cf723eeef3b5f8b8e224ecbcfd8e localhost:30500/barometer-collectd:stable],SizeBytes:1464261626,},ContainerImage{Names:[@ :],SizeBytes:1002487751,},ContainerImage{Names:[opnfv/barometer-collectd@sha256:ed5c574f653e2a39e784ff322033a2319aafde7366c803a88f20f7a2a8bc1efb opnfv/barometer-collectd:stable],SizeBytes:825413035,},ContainerImage{Names:[localhost:30500/cmk@sha256:f759b012c29126f880575ac543d09301d45f0b2b9d0f5329849ea40e65017dde cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:726663003,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[golang@sha256:1636899c10870ab66c48d960a9df620f4f9e86a0c72fbacf36032d27404e7e6c golang:alpine3.12],SizeBytes:301156062,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2 k8s.gcr.io/etcd:3.4.13-0],SizeBytes:253392289,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:195659796,},ContainerImage{Names:[grafana/grafana@sha256:89304bc2335f4976618548d7b93d165ed67369d3a051d2f627fc4e0aa3d0aff1 grafana/grafana:7.1.0],SizeBytes:179601493,},ContainerImage{Names:[quay.io/prometheus/prometheus@sha256:d4ba4dd1a9ebb90916d0bfed3c204adcb118ed24546bf8dd2e6b30fc0fd2009e quay.io/prometheus/prometheus:v2.20.0],SizeBytes:144886595,},ContainerImage{Names:[nginx@sha256:df13abe416e37eb3db4722840dd479b00ba193ac6606e7902331dcea50f4f1f2 nginx:1.19],SizeBytes:133122553,},ContainerImage{Names:[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0 
k8s.gcr.io/e2e-test-images/agnhost:2.20],SizeBytes:113869866,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[gcr.io/k8s-staging-nfd/node-feature-discovery@sha256:5d116c2c340be665a2c8adc9aca7f91396bd5cbde4add4fdc8dab95d8db43425 gcr.io/k8s-staging-nfd/node-feature-discovery:v0.7.0],SizeBytes:108309584,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:ff02aacd9766d597883fabafc7ad604c719a57611db1bcc1564c69a45b000a55 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.17],SizeBytes:60684726,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[directxman12/k8s-prometheus-adapter-amd64@sha256:b63dc612e3cb73f79d2401a4516f794f9f0a83002600ca72e675e41baecff437 directxman12/k8s-prometheus-adapter-amd64:v0.6.0],SizeBytes:53267842,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:78e3393f5fd5ff6c1e5dada2478cfa456fb7164929e573cf9a87bf6532730679 k8s.gcr.io/sig-storage/csi-provisioner:v1.6.0],SizeBytes:48281550,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:2ffa647e8107cfd39e5f464e738dce014c9f5e51b108da36c3ab621048d0bbab k8s.gcr.io/sig-storage/csi-attacher:v2.2.0],SizeBytes:46131354,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:6c6a0332693a7c456378f6abd2bb40611826c1e1a733cadbdae2daab3125b71c k8s.gcr.io/sig-storage/csi-resizer:v0.5.0],SizeBytes:46041582,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:948a93bc3803d61dd66ab49f99d4cc657e87273682aec7dd5955a000fd17a7e5 nfvpe/sriov-device-plugin:latest localhost:30500/sriov-device-plugin:v3.3.1],SizeBytes:44392645,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonroot@sha256:4bd7ae247de5c988700233c5a4b55e804ffe90f8c66ae64853f1dae37b847213 gcr.io/kubernetes-e2e-test-images/nonroot:1.0],SizeBytes:42321438,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:9d07c391aeb1a9d02eb4343c113ed01825227c70c32b3cae861711f90191b0fd quay.io/coreos/kube-rbac-proxy:v0.4.1],SizeBytes:41317870,},ContainerImage{Names:[quay.io/coreos/prometheus-operator@sha256:a54e806fb27d2fb0251da4f3b2a3bb5320759af63a54a755788304775f2384a7 quay.io/coreos/prometheus-operator:v0.40.0],SizeBytes:38238457,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:0b4273abac4c241fa3d70aaf52b0d79a133d2737081f4a5c5dea4949f6c45dc3 
k8s.gcr.io/sig-storage/mock-driver:v3.1.0],SizeBytes:18687618,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:9622c6a6dac7499a055a382930f4de82905a3c5735c0753f7094115c9c871309 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v1.3.0],SizeBytes:16322467,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[quay.io/coreos/prometheus-config-reloader@sha256:c679a143b24b7731ad1577a9865aa3805426cbf1b25e30807b951dff68466ffd quay.io/coreos/prometheus-config-reloader:v0.40.0],SizeBytes:10131705,},ContainerImage{Names:[jimmidyson/configmap-reload@sha256:d107c7a235c266273b1c3502a391fec374430e5625539403d0de797fa9c556a2 jimmidyson/configmap-reload:v0.3.0],SizeBytes:9700438,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:6757579,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[alpine@sha256:36553b10a4947067b9fbb7d532951066293a68eae893beba1d9235f7d11a20ad alpine:3.12],SizeBytes:5581415,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:4753501,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 15 02:14:55.272: INFO: Logging kubelet events for node node1 May 15 02:14:55.274: INFO: Logging pods the kubelet thinks is on node node1 May 15 02:14:55.736: INFO: pod-7b8d60b1-8f1e-499c-b17f-816abbe56315 started at 2021-05-15 02:09:54 +0000 UTC (0+1 container statuses recorded) May 15 02:14:55.736: INFO: Container write-pod ready: false, restart count 0 May 15 02:14:55.736: INFO: pod-182bb37f-fbf1-482a-8aa8-03563d7aefd0 started at 2021-05-15 02:09:54 +0000 UTC (0+1 container statuses recorded) May 15 02:14:55.736: INFO: Container write-pod ready: false, restart count 0 May 15 02:14:55.736: INFO: pod-06eab932-084d-490d-a3d9-59feee07adaf started at 2021-05-15 02:09:54 +0000 UTC (0+1 container statuses recorded) May 15 02:14:55.736: INFO: Container write-pod ready: false, restart count 0 May 15 02:14:55.736: INFO: pod-6345af86-d298-43b9-a648-e4bd833ab81c started at 2021-05-15 02:09:54 +0000 UTC (0+1 container statuses recorded) May 15 02:14:55.736: INFO: Container write-pod ready: false, restart count 0 May 15 02:14:55.736: INFO: 
pod-a53b4edb-f546-4ebf-b7aa-b0e273692384 started at 2021-05-15 02:09:54 +0000 UTC (0+1 container statuses recorded) May 15 02:14:55.736: INFO: Container write-pod ready: false, restart count 0 May 15 02:14:55.736: INFO: pod-67841c2d-5cbe-4e06-9ae9-e4099e36c13d started at 2021-05-15 02:09:54 +0000 UTC (0+1 container statuses recorded) May 15 02:14:55.736: INFO: Container write-pod ready: false, restart count 0 May 15 02:14:55.736: INFO: node-exporter-flvqz started at 2021-05-15 00:18:55 +0000 UTC (0+2 container statuses recorded) May 15 02:14:55.736: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 15 02:14:55.736: INFO: Container node-exporter ready: true, restart count 0 May 15 02:14:55.736: INFO: pod-944e7ffd-8685-41e6-8af8-eff3f2685db3 started at 2021-05-15 02:09:54 +0000 UTC (0+1 container statuses recorded) May 15 02:14:55.736: INFO: Container write-pod ready: false, restart count 0 May 15 02:14:55.736: INFO: pod-57a1b9d5-6d9d-47e8-9b20-995213945154 started at 2021-05-15 02:09:54 +0000 UTC (0+1 container statuses recorded) May 15 02:14:55.736: INFO: Container write-pod ready: false, restart count 0 May 15 02:14:55.736: INFO: pod-6a079afc-c913-4898-aef2-69394c05288d started at 2021-05-15 02:09:54 +0000 UTC (0+1 container statuses recorded) May 15 02:14:55.736: INFO: Container write-pod ready: false, restart count 0 May 15 02:14:55.736: INFO: pod-d414a6bb-0f77-423b-bbe3-b10a586ff867 started at 2021-05-15 02:09:54 +0000 UTC (0+1 container statuses recorded) May 15 02:14:55.736: INFO: Container write-pod ready: false, restart count 0 May 15 02:14:55.736: INFO: cmk-4s6dm started at 2021-05-15 00:18:54 +0000 UTC (0+2 container statuses recorded) May 15 02:14:55.736: INFO: Container nodereport ready: true, restart count 0 May 15 02:14:55.736: INFO: Container reconcile ready: true, restart count 0 May 15 02:14:55.736: INFO: pod-7b61d18a-43e9-478f-9427-9173e0e8b16b started at 2021-05-15 02:09:54 +0000 UTC (0+1 container statuses recorded) May 15 02:14:55.736: INFO: Container write-pod ready: false, restart count 0 May 15 02:14:55.736: INFO: pod-ec39befc-6a99-45df-8eb8-d594af0627d8 started at 2021-05-15 02:09:54 +0000 UTC (0+1 container statuses recorded) May 15 02:14:55.736: INFO: Container write-pod ready: false, restart count 0 May 15 02:14:55.736: INFO: pod-0269144a-4370-4704-a1c2-c0f407d98f0a started at 2021-05-15 02:09:54 +0000 UTC (0+1 container statuses recorded) May 15 02:14:55.736: INFO: Container write-pod ready: false, restart count 0 May 15 02:14:55.736: INFO: pod-9d1c8f1b-b2c2-4604-9cae-7ccae3143396 started at 2021-05-15 02:09:54 +0000 UTC (0+1 container statuses recorded) May 15 02:14:55.736: INFO: Container write-pod ready: false, restart count 0 May 15 02:14:55.736: INFO: pod-64d96337-bd3e-4aca-8392-d465ee5a1ed7 started at 2021-05-15 02:09:55 +0000 UTC (0+1 container statuses recorded) May 15 02:14:55.736: INFO: Container write-pod ready: false, restart count 0 May 15 02:14:55.736: INFO: pod-aa71f4e5-b094-4438-a0d0-11bf0093a83d started at 2021-05-15 02:09:54 +0000 UTC (0+1 container statuses recorded) May 15 02:14:55.736: INFO: Container write-pod ready: false, restart count 0 May 15 02:14:55.736: INFO: pod-1380abd4-33aa-487c-aff8-089358ae5a39 started at 2021-05-15 02:09:54 +0000 UTC (0+1 container statuses recorded) May 15 02:14:55.736: INFO: Container write-pod ready: false, restart count 0 May 15 02:14:55.736: INFO: pod-5a5ebf17-19c6-430e-a7df-2dcbd0f4dfe8 started at 2021-05-15 02:09:54 +0000 UTC (0+1 container statuses recorded) May 15 02:14:55.736: INFO: 
Container write-pod ready: false, restart count 0 May 15 02:14:55.737: INFO: pod-6f552dde-8b50-4c1b-9f75-29301f222304 started at 2021-05-15 02:09:54 +0000 UTC (0+1 container statuses recorded) May 15 02:14:55.737: INFO: Container write-pod ready: false, restart count 0 May 15 02:14:55.737: INFO: pod-f5252834-4a86-4465-a5e9-5e8afac874af started at 2021-05-15 02:09:55 +0000 UTC (0+1 container statuses recorded) May 15 02:14:55.737: INFO: Container write-pod ready: false, restart count 0 May 15 02:14:55.737: INFO: pod-9d74f724-4c10-4c23-9a96-6aed362ad077 started at 2021-05-15 02:09:54 +0000 UTC (0+1 container statuses recorded) May 15 02:14:55.737: INFO: Container write-pod ready: false, restart count 0 May 15 02:14:55.737: INFO: pod-8e6b4f06-4653-426d-8861-79606a1bc5ac started at 2021-05-15 02:09:54 +0000 UTC (0+1 container statuses recorded) May 15 02:14:55.737: INFO: Container write-pod ready: false, restart count 0 May 15 02:14:55.737: INFO: pod-68664722-e47b-48b2-b06e-91155e4f3594 started at 2021-05-15 02:09:54 +0000 UTC (0+1 container statuses recorded) May 15 02:14:55.737: INFO: Container write-pod ready: false, restart count 0 May 15 02:14:55.737: INFO: pod-3b141899-67ee-47dd-a614-717a00fdc43e started at 2021-05-15 02:09:54 +0000 UTC (0+1 container statuses recorded) May 15 02:14:55.737: INFO: Container write-pod ready: false, restart count 0 May 15 02:14:55.737: INFO: pod-7c3e1e12-24c7-4dbb-968a-c5a5ea611b09 started at 2021-05-15 02:09:54 +0000 UTC (0+1 container statuses recorded) May 15 02:14:55.737: INFO: Container write-pod ready: false, restart count 0 May 15 02:14:55.737: INFO: pod-baf0825d-cd75-468c-b208-e00c655b3984 started at 2021-05-15 02:09:54 +0000 UTC (0+1 container statuses recorded) May 15 02:14:55.737: INFO: Container write-pod ready: false, restart count 0 May 15 02:14:55.737: INFO: node-feature-discovery-worker-bw8zg started at 2021-05-15 00:18:56 +0000 UTC (0+1 container statuses recorded) May 15 02:14:55.737: INFO: Container nfd-worker ready: true, restart count 0 May 15 02:14:55.737: INFO: pod-85dcad4d-875a-461c-be23-212028dd9cc4 started at 2021-05-15 02:09:54 +0000 UTC (0+1 container statuses recorded) May 15 02:14:55.737: INFO: Container write-pod ready: false, restart count 0 May 15 02:14:55.737: INFO: pod-659d7a78-330f-41c0-9363-c970094088ab started at 2021-05-15 02:09:54 +0000 UTC (0+1 container statuses recorded) May 15 02:14:55.737: INFO: Container write-pod ready: false, restart count 0 May 15 02:14:55.737: INFO: pod-e5db5676-3fa1-43b8-99da-988e664767e5 started at 2021-05-15 02:09:54 +0000 UTC (0+1 container statuses recorded) May 15 02:14:55.737: INFO: Container write-pod ready: false, restart count 0 May 15 02:14:55.737: INFO: kube-multus-ds-amd64-jhf4c started at 2021-05-14 19:59:07 +0000 UTC (0+1 container statuses recorded) May 15 02:14:55.737: INFO: Container kube-multus ready: true, restart count 1 May 15 02:14:55.737: INFO: prometheus-k8s-0 started at 2021-05-15 00:19:01 +0000 UTC (0+5 container statuses recorded) May 15 02:14:55.737: INFO: Container custom-metrics-apiserver ready: true, restart count 0 May 15 02:14:55.737: INFO: Container grafana ready: true, restart count 0 May 15 02:14:55.737: INFO: Container prometheus ready: true, restart count 26 May 15 02:14:55.737: INFO: Container prometheus-config-reloader ready: true, restart count 0 May 15 02:14:55.737: INFO: Container rules-configmap-reloader ready: true, restart count 0 May 15 02:14:55.737: INFO: pod-46816fb0-59fd-4b80-86f6-e5de68239a20 started at 2021-05-15 02:09:54 +0000 UTC (0+1 
container statuses recorded) May 15 02:14:55.737: INFO: Container write-pod ready: false, restart count 0 May 15 02:14:55.737: INFO: pod-cccb383e-37ae-496b-bc61-e84155c15470 started at 2021-05-15 02:09:54 +0000 UTC (0+1 container statuses recorded) May 15 02:14:55.737: INFO: Container write-pod ready: false, restart count 0 May 15 02:14:55.737: INFO: pod-efd8f062-dcf7-4e99-8049-16a688458438 started at 2021-05-15 02:09:54 +0000 UTC (0+1 container statuses recorded) May 15 02:14:55.737: INFO: Container write-pod ready: false, restart count 0 May 15 02:14:55.737: INFO: pod-99a28b03-2a0e-4f4e-93a6-4f8f1349c374 started at 2021-05-15 02:09:54 +0000 UTC (0+1 container statuses recorded) May 15 02:14:55.737: INFO: Container write-pod ready: false, restart count 0 May 15 02:14:55.737: INFO: pod-fb25a31c-ab40-4711-bcf9-eff4f0db5a45 started at 2021-05-15 02:09:54 +0000 UTC (0+1 container statuses recorded) May 15 02:14:55.737: INFO: Container write-pod ready: false, restart count 0 May 15 02:14:55.737: INFO: pod-d9940773-2114-4c06-a526-c795dec59ce0 started at 2021-05-15 02:09:54 +0000 UTC (0+1 container statuses recorded) May 15 02:14:55.737: INFO: Container write-pod ready: false, restart count 0 May 15 02:14:55.737: INFO: pod-6994c576-03e5-4748-87d6-f7d52695b943 started at 2021-05-15 02:09:54 +0000 UTC (0+1 container statuses recorded) May 15 02:14:55.737: INFO: Container write-pod ready: false, restart count 0 May 15 02:14:55.737: INFO: pod-5e0b2962-1beb-4f0e-b00d-3c3fb5ff4f67 started at 2021-05-15 02:09:54 +0000 UTC (0+1 container statuses recorded) May 15 02:14:55.737: INFO: Container write-pod ready: false, restart count 0 May 15 02:14:55.737: INFO: pod-f75db98e-2a08-4812-91d2-70d06c53eff2 started at 2021-05-15 02:09:54 +0000 UTC (0+1 container statuses recorded) May 15 02:14:55.737: INFO: Container write-pod ready: false, restart count 0 May 15 02:14:55.737: INFO: pod-0780a016-d244-4bbb-8f47-2f297e59fd58 started at 2021-05-15 02:09:54 +0000 UTC (0+1 container statuses recorded) May 15 02:14:55.737: INFO: Container write-pod ready: false, restart count 0 May 15 02:14:55.737: INFO: pod-8832974e-719a-4bac-990d-6917f5e29ff6 started at 2021-05-15 02:09:54 +0000 UTC (0+1 container statuses recorded) May 15 02:14:55.737: INFO: Container write-pod ready: false, restart count 0 May 15 02:14:55.737: INFO: kube-proxy-l7697 started at 2021-05-14 19:58:24 +0000 UTC (0+1 container statuses recorded) May 15 02:14:55.737: INFO: Container kube-proxy ready: true, restart count 2 May 15 02:14:55.737: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-4vxcc started at 2021-05-15 00:19:00 +0000 UTC (0+1 container statuses recorded) May 15 02:14:55.737: INFO: Container kube-sriovdp ready: true, restart count 0 May 15 02:14:55.737: INFO: pod-7ff0a3c2-aa1f-43ee-95ba-bfe31b61df87 started at 2021-05-15 02:09:54 +0000 UTC (0+1 container statuses recorded) May 15 02:14:55.737: INFO: Container write-pod ready: false, restart count 0 May 15 02:14:55.737: INFO: pod-3fd03fb4-184e-412a-b852-6979ed7a93d2 started at 2021-05-15 02:09:54 +0000 UTC (0+1 container statuses recorded) May 15 02:14:55.737: INFO: Container write-pod ready: false, restart count 0 May 15 02:14:55.737: INFO: nginx-proxy-node1 started at 2021-05-14 20:05:10 +0000 UTC (0+1 container statuses recorded) May 15 02:14:55.737: INFO: Container nginx-proxy ready: true, restart count 2 May 15 02:14:55.737: INFO: pod-26feecc3-b990-46b9-9186-2f6cfea28725 started at 2021-05-15 02:09:54 +0000 UTC (0+1 container statuses recorded) May 15 02:14:55.737: INFO: Container 
write-pod ready: false, restart count 0 May 15 02:14:55.737: INFO: pod-63421429-f7ce-4603-ae23-470172db27ac started at 2021-05-15 02:09:54 +0000 UTC (0+1 container statuses recorded) May 15 02:14:55.737: INFO: Container write-pod ready: false, restart count 0 May 15 02:14:55.737: INFO: pod-6281ba5b-4d09-4cb6-9b6a-c82141ad2e06 started at 2021-05-15 02:09:54 +0000 UTC (0+1 container statuses recorded) May 15 02:14:55.737: INFO: Container write-pod ready: false, restart count 0 May 15 02:14:55.737: INFO: pod-088478ea-3873-48e9-8e63-839f8c352771 started at 2021-05-15 02:09:54 +0000 UTC (0+1 container statuses recorded) May 15 02:14:55.737: INFO: Container write-pod ready: false, restart count 0 May 15 02:14:55.737: INFO: pod-0a52f19d-bad1-4ee5-9a99-98f71ef763d7 started at 2021-05-15 02:09:54 +0000 UTC (0+1 container statuses recorded) May 15 02:14:55.737: INFO: Container write-pod ready: false, restart count 0 May 15 02:14:55.737: INFO: pod-2f132e16-9b60-4044-8856-1c537d83e924 started at 2021-05-15 02:09:54 +0000 UTC (0+1 container statuses recorded) May 15 02:14:55.737: INFO: Container write-pod ready: false, restart count 0 May 15 02:14:55.737: INFO: kube-flannel-hj8sj started at 2021-05-14 19:58:58 +0000 UTC (1+1 container statuses recorded) May 15 02:14:55.737: INFO: Init container install-cni ready: true, restart count 0 May 15 02:14:55.737: INFO: Container kube-flannel ready: true, restart count 1 May 15 02:14:55.737: INFO: collectd-mrzps started at 2021-05-15 00:19:22 +0000 UTC (0+3 container statuses recorded) May 15 02:14:55.737: INFO: Container collectd ready: true, restart count 0 May 15 02:14:55.737: INFO: Container collectd-exporter ready: true, restart count 0 May 15 02:14:55.737: INFO: Container rbac-proxy ready: true, restart count 0 May 15 02:14:55.737: INFO: pod-9741d55e-19d2-4907-9709-31aab1dc27a7 started at 2021-05-15 02:09:54 +0000 UTC (0+1 container statuses recorded) May 15 02:14:55.737: INFO: Container write-pod ready: false, restart count 0 May 15 02:14:55.737: INFO: pod-adf20463-6227-4b36-9ec9-be2637a72af1 started at 2021-05-15 02:09:54 +0000 UTC (0+1 container statuses recorded) May 15 02:14:55.737: INFO: Container write-pod ready: false, restart count 0 W0515 02:14:55.748685 22 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. 
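[Editor's note] The per-node dumps above (NodeCondition entries, kubelet pod listings) are what the e2e framework logs on failure. As a minimal sketch of how the same condition summary can be pulled outside the framework with client-go: this is an illustrative addition, not part of the test suite, and it assumes a reachable kubeconfig at the default location (the run above uses /root/.kube/config).

    // nodeconditions.go — illustrative sketch, not from the e2e framework.
    package main

    import (
        "context"
        "fmt"
        "os"
        "path/filepath"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Load the kubeconfig, as the e2e run does with /root/.kube/config.
        home, _ := os.UserHomeDir()
        cfg, err := clientcmd.BuildConfigFromFlags("", filepath.Join(home, ".kube", "config"))
        if err != nil {
            panic(err)
        }
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }

        // List all nodes and print the same fields the framework dumps per
        // NodeCondition above: Type, Status, and Reason.
        nodes, err := client.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, n := range nodes.Items {
            fmt.Printf("node %s:\n", n.Name)
            for _, c := range n.Status.Conditions {
                fmt.Printf("  %s=%s (%s)\n", c.Type, c.Status, c.Reason)
            }
        }
    }

The same information is available interactively with `kubectl describe node node1`; the framework simply inlines the full Node object (conditions, addresses, images, managed fields) into the log, which is why each node dump above is so large.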
May 15 02:14:56.379: INFO: Latency metrics for node node1 May 15 02:14:56.379: INFO: Logging node info for node node2 May 15 02:14:56.382: INFO: Node Info: &Node{ObjectMeta:{node2 /api/v1/nodes/node2 0bae98dc-2ebc-4849-b99e-7780a3bea71e 169488 0 2021-05-14 19:58:22 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.25.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:amd64 kubernetes.io/hostname:node2 kubernetes.io/os:linux] map[csi.volume.kubernetes.io/nodeid:{"csi-mock-csi-mock-volumes-1663":"csi-mock-csi-mock-volumes-1663","csi-mock-csi-mock-volumes-3052":"csi-mock-csi-mock-volumes-3052","csi-mock-csi-mock-volumes-5200":"csi-mock-csi-mock-volumes-5200","csi-mock-csi-mock-volumes-5678":"csi-mock-csi-mock-volumes-5678","csi-mock-csi-mock-volumes-8760":"csi-mock-csi-mock-volumes-8760","csi-mock-csi-mock-volumes-9624":"csi-mock-csi-mock-volumes-9624"} flannel.alpha.coreos.com/backend-data:{"VtepMAC":"c6:18:ed:95:bb:1a"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.208 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: 
nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-hardware_multithreading,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.7.0 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-05-14 19:58:22 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2021-05-14 19:59:03 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2021-05-14 20:06:00 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage
-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2021-05-14 20:08:46 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {e2e.test Update v1 2021-05-15 01:37:46 +0000 UTC FieldsV1 {"f:status":{"f:capacity":{"f:example.com/fakePTSRes":{},"f:example.com/fakecpu":{}}}}} {kube-controller-manager Update v1 2021-05-15 01:52:07 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.4.0/24\"":{}}}}} {kubelet Update v1 2021-05-15 01:52:21 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:csi.volume.kubernetes.io/nodeid":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:io.kubernetes.storage.mock/node":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cmk.intel.com/exclusive-cores":{},"f:cpu":{},"f:ephemeral-storage":{},"f:example.com/fakePTSRes":{},"f:example.com/fakecpu":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:intel.com/intel_sriov_netdevice":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:intel.com/intel_sriov_netdevice":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.4.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},example.com/fakePTSRes: {{10 0} {} 10 DecimalSI},example.com/fakecpu: {{1 3} {} 1k DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269633024 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},example.com/fakePTSRes: {{10 0} {} 10 DecimalSI},example.com/fakecpu: 
{{1 3} {} 1k DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884632576 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-14 20:01:27 +0000 UTC,LastTransitionTime:2021-05-14 20:01:27 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-15 02:14:47 +0000 UTC,LastTransitionTime:2021-05-14 19:58:22 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-15 02:14:47 +0000 UTC,LastTransitionTime:2021-05-14 19:58:22 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-15 02:14:47 +0000 UTC,LastTransitionTime:2021-05-14 19:58:22 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-15 02:14:47 +0000 UTC,LastTransitionTime:2021-05-14 19:59:04 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.208,},NodeAddress{Type:Hostname,Address:node2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:a3f22fbf9e534ba1819f7a549414a8a6,SystemUUID:80B3CD56-852F-E711-906E-0017A4403562,BootID:166b6e45-ba8b-4b89-80b0-befc9a0152b8,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[localhost:30500/barometer-collectd@sha256:869f7b3516c269b43448f1227c57d536e8a4cf723eeef3b5f8b8e224ecbcfd8e localhost:30500/barometer-collectd:stable],SizeBytes:1464261626,},ContainerImage{Names:[cmk:v1.5.1],SizeBytes:726663003,},ContainerImage{Names:[localhost:30500/cmk@sha256:f759b012c29126f880575ac543d09301d45f0b2b9d0f5329849ea40e65017dde localhost:30500/cmk:v1.5.1],SizeBytes:726663003,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[gluster/glusterdynamic-provisioner@sha256:90067cb05a7d217651e84576935934fd9dcff8e6bcdcbaa416bbf36fcd09dbd1 gluster/glusterdynamic-provisioner:v1.0],SizeBytes:373281573,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f kubernetesui/dashboard-amd64:v2.1.0],SizeBytes:225733746,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:195659796,},ContainerImage{Names:[nginx@sha256:df13abe416e37eb3db4722840dd479b00ba193ac6606e7902331dcea50f4f1f2 nginx:1.19],SizeBytes:133122553,},ContainerImage{Names:[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 
httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0 k8s.gcr.io/e2e-test-images/agnhost:2.20],SizeBytes:113869866,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[gcr.io/k8s-staging-nfd/node-feature-discovery@sha256:5d116c2c340be665a2c8adc9aca7f91396bd5cbde4add4fdc8dab95d8db43425 gcr.io/k8s-staging-nfd/node-feature-discovery:v0.7.0],SizeBytes:108309584,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:78e3393f5fd5ff6c1e5dada2478cfa456fb7164929e573cf9a87bf6532730679 k8s.gcr.io/sig-storage/csi-provisioner:v1.6.0],SizeBytes:48281550,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:2ffa647e8107cfd39e5f464e738dce014c9f5e51b108da36c3ab621048d0bbab k8s.gcr.io/sig-storage/csi-attacher:v2.2.0],SizeBytes:46131354,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:948a93bc3803d61dd66ab49f99d4cc657e87273682aec7dd5955a000fd17a7e5 localhost:30500/sriov-device-plugin:v3.3.1],SizeBytes:44392645,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonroot@sha256:4bd7ae247de5c988700233c5a4b55e804ffe90f8c66ae64853f1dae37b847213 gcr.io/kubernetes-e2e-test-images/nonroot:1.0],SizeBytes:42321438,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[localhost:30500/tas-controller@sha256:e3157cfba16d361ffec06306dd0154c7dca1931cbc4569e3c5822e30e311948b localhost:30500/tas-controller:0.1],SizeBytes:22922439,},ContainerImage{Names:[localhost:30500/tas-extender@sha256:a43c7fdd150533238a300ad84ac906e551111f9b57273afcb8781ee675fd23b3 localhost:30500/tas-extender:0.1],SizeBytes:21320903,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:0b4273abac4c241fa3d70aaf52b0d79a133d2737081f4a5c5dea4949f6c45dc3 k8s.gcr.io/sig-storage/mock-driver:v3.1.0],SizeBytes:18687618,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:9622c6a6dac7499a055a382930f4de82905a3c5735c0753f7094115c9c871309 
k8s.gcr.io/sig-storage/csi-node-driver-registrar:v1.3.0],SizeBytes:16322467,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:4753501,},ContainerImage{Names:[gcr.io/authenticated-image-pulling/alpine@sha256:7ff177862cb50c602bfe81f805969412e619c054a2bbead977d0c276988aa4a0 gcr.io/authenticated-image-pulling/alpine:3.7],SizeBytes:4206620,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 15 02:14:56.382: INFO: Logging kubelet events for node node2 May 15 02:14:56.384: INFO: Logging pods the kubelet thinks is on node node2 May 15 02:14:56.401: INFO: kube-proxy-t524z started at 2021-05-14 19:58:24 +0000 UTC (0+1 container statuses recorded) May 15 02:14:56.401: INFO: Container kube-proxy ready: true, restart count 2 May 15 02:14:56.401: INFO: kube-multus-ds-amd64-n7cb2 started at 2021-05-14 19:59:07 +0000 UTC (0+1 container statuses recorded) May 15 02:14:56.401: INFO: Container kube-multus ready: true, restart count 1 May 15 02:14:56.401: INFO: kubernetes-dashboard-86c6f9df5b-ndntg started at 2021-05-14 19:59:31 +0000 UTC (0+1 container statuses recorded) May 15 02:14:56.401: INFO: Container kubernetes-dashboard ready: true, restart count 2 May 15 02:14:56.401: INFO: nginx-proxy-node2 started at 2021-05-14 20:05:10 +0000 UTC (0+1 container statuses recorded) May 15 02:14:56.401: INFO: Container nginx-proxy ready: true, restart count 2 May 15 02:14:56.401: INFO: kube-flannel-rqcwp started at 2021-05-14 19:58:58 +0000 UTC (1+1 container statuses recorded) May 15 02:14:56.401: INFO: Init container install-cni ready: true, restart count 1 May 15 02:14:56.401: INFO: Container kube-flannel ready: true, restart count 4 May 15 02:14:56.401: INFO: node-exporter-rnd5f started at 2021-05-14 20:09:56 +0000 UTC (0+2 container statuses recorded) May 15 02:14:56.401: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 15 02:14:56.401: INFO: Container node-exporter ready: true, restart count 0 May 15 02:14:56.401: INFO: tas-telemetry-aware-scheduling-575ccbc9d4-l5hlq started at 2021-05-14 20:12:48 +0000 UTC (0+2 container statuses recorded) May 15 02:14:56.401: INFO: Container tas-controller ready: true, restart count 0 May 15 02:14:56.401: INFO: Container tas-extender ready: true, restart count 0 May 15 02:14:56.401: INFO: collectd-xzrgs started at 2021-05-14 20:15:36 +0000 UTC (0+3 container statuses recorded) May 15 02:14:56.401: INFO: Container collectd ready: true, restart count 0 May 15 02:14:56.401: INFO: Container collectd-exporter 
ready: true, restart count 0 May 15 02:14:56.401: INFO: Container rbac-proxy ready: true, restart count 0 May 15 02:14:56.401: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-2c2pw started at 2021-05-14 20:06:38 +0000 UTC (0+1 container statuses recorded) May 15 02:14:56.401: INFO: Container kube-sriovdp ready: true, restart count 0 May 15 02:14:56.401: INFO: node-feature-discovery-worker-76m6b started at 2021-05-14 20:05:42 +0000 UTC (0+1 container statuses recorded) May 15 02:14:56.401: INFO: Container nfd-worker ready: true, restart count 0 May 15 02:14:56.401: INFO: cmk-d2qwf started at 2021-05-14 20:09:04 +0000 UTC (0+2 container statuses recorded) May 15 02:14:56.401: INFO: Container nodereport ready: true, restart count 0 May 15 02:14:56.401: INFO: Container reconcile ready: true, restart count 0 May 15 02:14:56.401: INFO: cmk-webhook-6c9d5f8578-pjgxh started at 2021-05-14 20:09:04 +0000 UTC (0+1 container statuses recorded) May 15 02:14:56.401: INFO: Container cmk-webhook ready: true, restart count 0 May 15 02:14:56.401: INFO: cmk-init-discover-node2-j75ff started at 2021-05-14 20:08:41 +0000 UTC (0+3 container statuses recorded) May 15 02:14:56.401: INFO: Container discover ready: false, restart count 0 May 15 02:14:56.401: INFO: Container init ready: false, restart count 0 May 15 02:14:56.401: INFO: Container install ready: false, restart count 0 W0515 02:14:56.413852 22 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. May 15 02:14:56.456: INFO: Latency metrics for node node2 May 15 02:14:56.456: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-8617" for this suite. • Failure [301.989 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Pods sharing a single local PV [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:614 all pods should be running [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:642 May 15 02:14:55.042: Some pods are not running within 5m0s Unexpected error: <*errors.errorString | 0xc0002bc200>: { s: "timed out waiting for the condition", } timed out waiting for the condition occurred /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:683 ------------------------------ {"msg":"FAILED [sig-storage] PersistentVolumes-local Pods sharing a single local PV [Serial] all pods should be running","total":17,"completed":0,"skipped":3535,"failed":2,"failures":["[sig-storage] PersistentVolumes-local Stress with local volumes [Serial] should be able to process many pods and reuse local volumes","[sig-storage] PersistentVolumes-local Pods sharing a single local PV [Serial] all pods should be running"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Pod Disks [Serial] attach on previously attached volumes should work /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:457 [BeforeEach] [sig-storage] Pod Disks 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 02:14:56.468: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-disks STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Pod Disks /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:75 [It] [Serial] attach on previously attached volumes should work /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:457 May 15 02:14:56.497: INFO: Only supported for providers [gce gke aws] (not skeleton) [AfterEach] [sig-storage] Pod Disks /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 02:14:56.498: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-disks-4913" for this suite. S [SKIPPING] [0.040 seconds] [sig-storage] Pod Disks /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Serial] attach on previously attached volumes should work [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:457 Only supported for providers [gce gke aws] (not skeleton) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:458 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] [Serial] Volume metrics should create volume metrics in Volume Manager /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:291 [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 02:14:56.515: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pv STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:56 May 15 02:14:56.540: INFO: Only supported for providers [gce gke aws] (not skeleton) [AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 02:14:56.542: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pv-8838" for this suite. 
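The Pod Disks skip above, and the Volume metrics skips around it, are all gated on the cloud provider the e2e binary was launched with; this run used the provider-agnostic "skeleton", so every spec restricted to [gce gke aws] is skipped in BeforeEach. A hedged sketch of how the same binary would be pointed at a GCE cluster so those specs actually run (project and zone values are illustrative):

  ./e2e.test --provider=gce \
    --gce-project=my-project --gce-zone=us-central1-b \
    --kubeconfig=/root/.kube/config \
    --ginkgo.focus='\[sig-storage\].*\[Serial\]'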
[AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:81 S [SKIPPING] in Spec Setup (BeforeEach) [0.035 seconds] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should create volume metrics in Volume Manager [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:291 Only supported for providers [gce gke aws] (not skeleton) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:60 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] [Serial] Volume metrics should create metrics for total number of volumes in A/D Controller /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:321 [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 02:14:56.554: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pv STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:56 May 15 02:14:56.571: INFO: Only supported for providers [gce gke aws] (not skeleton) [AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 02:14:56.573: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pv-3008" for this suite. 
[AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:81 S [SKIPPING] in Spec Setup (BeforeEach) [0.027 seconds] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should create metrics for total number of volumes in A/D Controller [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:321 Only supported for providers [gce gke aws] (not skeleton) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:60 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] PersistentVolumes-local [Volume type: gce-localssd-scsi-fs] [Serial] Set fsGroup for local volume should set different fsGroup for second pod if first pod is deleted /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:282 [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 02:14:56.582: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:155 [BeforeEach] [Volume type: gce-localssd-scsi-fs] [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:191 May 15 02:15:48.626: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l] Namespace:persistent-local-volumes-test-3317 PodName:hostexec-node1-7ffvw ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true} May 15 02:15:48.626: INFO: >>> kubeConfig: /root/.kube/config May 15 02:15:49.013: INFO: exec node1: command: ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l May 15 02:15:49.013: INFO: exec node1: stdout: "0\n" May 15 02:15:49.013: INFO: exec node1: stderr: "ls: cannot access /mnt/disks/by-uuid/google-local-ssds-scsi-fs/: No such file or directory\n" May 15 02:15:49.013: INFO: exec node1: exit code: 0 May 15 02:15:49.013: INFO: Requires at least 1 scsi fs localSSD [AfterEach] [Volume type: gce-localssd-scsi-fs] [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:200 STEP: Cleaning up PVC and PV [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 02:15:49.016: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-3317" for this suite. 
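Two details are worth noting in the check above: the nsenter pipeline reports exit code 0 even though ls fails, because a pipeline's status is that of its last command (wc -l), so the test keys off the "0" count and stderr rather than a non-zero exit; and the probed path /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ is where GCE tooling conventionally mounts SCSI local SSDs by filesystem UUID. A rough sketch of satisfying the precondition by hand on a GCE node with a local SSD attached (device path and filesystem choice are assumptions, not taken from this run):

  # Format the first SCSI local SSD and mount it under the path the test scans
  sudo mkfs.ext4 -F /dev/disk/by-id/google-local-ssd-0
  uuid=$(sudo blkid -s UUID -o value /dev/disk/by-id/google-local-ssd-0)
  sudo mkdir -p "/mnt/disks/by-uuid/google-local-ssds-scsi-fs/$uuid"
  sudo mount /dev/disk/by-id/google-local-ssd-0 "/mnt/disks/by-uuid/google-local-ssds-scsi-fs/$uuid"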
S [SKIPPING] in Spec Setup (BeforeEach) [52.445 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: gce-localssd-scsi-fs] [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:188 Set fsGroup for local volume [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:256 should set different fsGroup for second pod if first pod is deleted /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:282 Requires at least 1 scsi fs localSSD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:1241 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] [Serial] Volume metrics should create volume metrics with the correct PVC ref /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:203 [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 02:15:49.030: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pv STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:56 May 15 02:15:49.052: INFO: Only supported for providers [gce gke aws] (not skeleton) [AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 02:15:49.054: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pv-9953" for this suite. 
[AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:81 S [SKIPPING] in Spec Setup (BeforeEach) [0.032 seconds] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should create volume metrics with the correct PVC ref [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:203 Only supported for providers [gce gke aws] (not skeleton) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:60 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] [Serial] Volume metrics PVController should create none metrics for pvc controller before creating any PV or PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:477 [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 02:15:49.062: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pv STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:56 May 15 02:15:49.088: INFO: Only supported for providers [gce gke aws] (not skeleton) [AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 02:15:49.090: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pv-1245" for this suite. 
[AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:81 S [SKIPPING] in Spec Setup (BeforeEach) [0.036 seconds] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 PVController [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:382 should create none metrics for pvc controller before creating any PV or PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:477 Only supported for providers [gce gke aws] (not skeleton) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:60 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] PersistentVolumes-local [Volume type: gce-localssd-scsi-fs] [Serial] One pod requesting one prebound PVC should be able to mount volume and write from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:234 [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 02:15:49.108: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:155 [BeforeEach] [Volume type: gce-localssd-scsi-fs] [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:191 May 15 02:15:55.154: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l] Namespace:persistent-local-volumes-test-4158 PodName:hostexec-node1-cgsq6 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true} May 15 02:15:55.154: INFO: >>> kubeConfig: /root/.kube/config May 15 02:15:55.274: INFO: exec node1: command: ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l May 15 02:15:55.274: INFO: exec node1: stdout: "0\n" May 15 02:15:55.274: INFO: exec node1: stderr: "ls: cannot access /mnt/disks/by-uuid/google-local-ssds-scsi-fs/: No such file or directory\n" May 15 02:15:55.274: INFO: exec node1: exit code: 0 May 15 02:15:55.274: INFO: Requires at least 1 scsi fs localSSD 
[AfterEach] [Volume type: gce-localssd-scsi-fs] [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:200
STEP: Cleaning up PVC and PV
[AfterEach] [sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 15 02:15:55.276: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "persistent-local-volumes-test-4158" for this suite.
S [SKIPPING] in Spec Setup (BeforeEach) [6.177 seconds]
[sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Volume type: gce-localssd-scsi-fs] [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:188
    One pod requesting one prebound PVC [BeforeEach]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:205
      should be able to mount volume and write from pod1
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:234

      Requires at least 1 scsi fs localSSD
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:1241
------------------------------
SSS
May 15 02:15:55.285: INFO: Running AfterSuite actions on all nodes
May 15 02:15:55.285: INFO: Running AfterSuite actions on node 1
May 15 02:15:55.285: INFO: Skipping dumping logs from cluster

JUnit report was created: /home/opnfv/functest/results/sig_storage_serial/junit_01.xml
{"msg":"Test Suite completed","total":17,"completed":0,"skipped":5482,"failed":2,"failures":["[sig-storage] PersistentVolumes-local Stress with local volumes [Serial] should be able to process many pods and reuse local volumes","[sig-storage] PersistentVolumes-local Pods sharing a single local PV [Serial] all pods should be running"]}

Summarizing 2 Failures:

[Fail] [sig-storage] PersistentVolumes-local Stress with local volumes [Serial] [It] should be able to process many pods and reuse local volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:610

[Fail] [sig-storage] PersistentVolumes-local Pods sharing a single local PV [Serial] [It] all pods should be running
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:683

Ran 2 of 5484 Specs in 686.368 seconds
FAIL! -- 0 Passed | 2 Failed | 0 Pending | 5482 Skipped
--- FAIL: TestE2E (686.51s)
FAIL

Ginkgo ran 1 suite in 11m27.68249731s
Test Suite Failed
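With the suite over, the two failures can be iterated on without repeating the full 11-minute serial pass by focusing Ginkgo on just those specs; a sketch using the standard e2e/Ginkgo flags (the focus regex is illustrative):

  ./e2e.test --kubeconfig=/root/.kube/config \
    --ginkgo.focus='PersistentVolumes-local.*(Stress with local volumes|Pods sharing a single local PV)'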