I0828 03:18:46.006380 23 test_context.go:429] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready
I0828 03:18:46.006502 23 e2e.go:129] Starting e2e run "76cdccf5-5f0c-4a6d-a4c1-99914f2a77e7" on Ginkgo node 1
{"msg":"Test Suite starting","total":20,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1630120724 - Will randomize all specs
Will run 20 of 5484 specs

Aug 28 03:18:46.110: INFO: >>> kubeConfig: /root/.kube/config
Aug 28 03:18:46.115: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Aug 28 03:18:46.145: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Aug 28 03:18:46.208: INFO: The status of Pod cmk-init-discover-node1-spg26 is Succeeded, skipping waiting
Aug 28 03:18:46.208: INFO: The status of Pod cmk-init-discover-node2-l9qjd is Succeeded, skipping waiting
Aug 28 03:18:46.208: INFO: 40 / 42 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Aug 28 03:18:46.208: INFO: expected 8 pod replicas in namespace 'kube-system', 8 are Running and Ready.
Aug 28 03:18:46.208: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Aug 28 03:18:46.225: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'cmk' (0 seconds elapsed)
Aug 28 03:18:46.225: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-flannel' (0 seconds elapsed)
Aug 28 03:18:46.225: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-arm' (0 seconds elapsed)
Aug 28 03:18:46.225: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-arm64' (0 seconds elapsed)
Aug 28 03:18:46.225: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-ppc64le' (0 seconds elapsed)
Aug 28 03:18:46.225: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-s390x' (0 seconds elapsed)
Aug 28 03:18:46.225: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-multus-ds-amd64' (0 seconds elapsed)
Aug 28 03:18:46.225: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Aug 28 03:18:46.225: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'node-feature-discovery-worker' (0 seconds elapsed)
Aug 28 03:18:46.225: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'sriov-net-dp-kube-sriov-device-plugin-amd64' (0 seconds elapsed)
Aug 28 03:18:46.225: INFO: e2e test version: v1.19.14
Aug 28 03:18:46.226: INFO: kube-apiserver version: v1.19.8
Aug 28 03:18:46.226: INFO: >>> kubeConfig: /root/.kube/config
Aug 28 03:18:46.232: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] [Serial] Volume metrics PVController
  should create none metrics for pvc controller before creating any PV or PVC
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:477
[BeforeEach] [sig-storage] [Serial] Volume metrics
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 28 03:18:46.233: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pv
Aug 28 03:18:46.254: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Aug 28 03:18:46.257: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] [Serial] Volume metrics
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:56
Aug 28 03:18:46.259: INFO: Only supported for providers [gce gke aws] (not skeleton)
[AfterEach] [sig-storage] [Serial] Volume metrics
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 28 03:18:46.260: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pv-9362" for this suite.
[AfterEach] [sig-storage] [Serial] Volume metrics
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:81
S [SKIPPING] in Spec Setup (BeforeEach) [0.037 seconds]
[sig-storage] [Serial] Volume metrics
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  PVController [BeforeEach]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:382
    should create none metrics for pvc controller before creating any PV or PVC
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:477

    Only supported for providers [gce gke aws] (not skeleton)

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:60
------------------------------
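Note: this skip, and every other "[Serial] Volume metrics" skip in this run, comes from a provider gate in the suite's BeforeEach (volume_metrics.go:56): the run was started with the "skeleton" provider, while the metrics specs require a cloud provider. In the e2e framework this appears to be the SkipUnlessProviderIs helper; the following is a minimal standalone Go sketch of the same gating logic, with illustrative names rather than the framework's actual symbols.

    // Sketch only: mirrors the provider gate that prints
    // "Only supported for providers [gce gke aws] (not skeleton)".
    package metrics_test

    import "testing"

    // provider stands in for the framework's --provider flag value
    // (TestContext.Provider); this run used "skeleton".
    var provider = "skeleton"

    func skipUnlessProviderIs(t *testing.T, supported ...string) {
        for _, p := range supported {
            if provider == p {
                return // supported cloud: let the spec run
            }
        }
        // Produces the same reason string seen throughout this log.
        t.Skipf("Only supported for providers %v (not %s)", supported, provider)
    }

    func TestVolumeMetrics(t *testing.T) {
        skipUnlessProviderIs(t, "gce", "gke", "aws") // skips under "skeleton"
        // metrics assertions would only run on the supported providers
    }
------------------------------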
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] [Serial] Volume metrics
  should create volume metrics with the correct PVC ref
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:203
[BeforeEach] [sig-storage] [Serial] Volume metrics
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 28 03:18:46.272: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pv
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] [Serial] Volume metrics
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:56
Aug 28 03:18:46.291: INFO: Only supported for providers [gce gke aws] (not skeleton)
[AfterEach] [sig-storage] [Serial] Volume metrics
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 28 03:18:46.293: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pv-3286" for this suite.
[AfterEach] [sig-storage] [Serial] Volume metrics
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:81
S [SKIPPING] in Spec Setup (BeforeEach) [0.028 seconds]
[sig-storage] [Serial] Volume metrics
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  should create volume metrics with the correct PVC ref [BeforeEach]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:203

  Only supported for providers [gce gke aws] (not skeleton)

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:60
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] [Serial] Volume metrics
  should create volume metrics in Volume Manager
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:291
[BeforeEach] [sig-storage] [Serial] Volume metrics
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 28 03:18:46.302: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pv
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] [Serial] Volume metrics
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:56
Aug 28 03:18:46.320: INFO: Only supported for providers [gce gke aws] (not skeleton)
[AfterEach] [sig-storage] [Serial] Volume metrics
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 28 03:18:46.322: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pv-551" for this suite.
[AfterEach] [sig-storage] [Serial] Volume metrics
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:81
S [SKIPPING] in Spec Setup (BeforeEach) [0.028 seconds]
[sig-storage] [Serial] Volume metrics
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  should create volume metrics in Volume Manager [BeforeEach]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:291

  Only supported for providers [gce gke aws] (not skeleton)

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:60
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] [Serial] Volume metrics PVController
  should create unbound pvc count metrics for pvc controller after creating pvc only
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:490
[BeforeEach] [sig-storage] [Serial] Volume metrics
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 28 03:18:46.335: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pv
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] [Serial] Volume metrics
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:56
Aug 28 03:18:46.353: INFO: Only supported for providers [gce gke aws] (not skeleton)
[AfterEach] [sig-storage] [Serial] Volume metrics
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 28 03:18:46.354: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pv-225" for this suite.
[AfterEach] [sig-storage] [Serial] Volume metrics
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:81
S [SKIPPING] in Spec Setup (BeforeEach) [0.028 seconds]
[sig-storage] [Serial] Volume metrics
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  PVController [BeforeEach]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:382
    should create unbound pvc count metrics for pvc controller after creating pvc only
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:490

    Only supported for providers [gce gke aws] (not skeleton)

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:60
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] PersistentVolumes-local [Volume type: gce-localssd-scsi-fs] [Serial] Set fsGroup for local volume
  should set fsGroup for one pod [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:263
[BeforeEach] [sig-storage] PersistentVolumes-local
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 28 03:18:46.367: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename persistent-local-volumes-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] PersistentVolumes-local
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:155
[BeforeEach] [Volume type: gce-localssd-scsi-fs] [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:191
Aug 28 03:18:48.412: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l] Namespace:persistent-local-volumes-test-3280 PodName:hostexec-node1-6blnl ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true}
Aug 28 03:18:48.412: INFO: >>> kubeConfig: /root/.kube/config
Aug 28 03:18:48.553: INFO: exec node1: command: ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l
Aug 28 03:18:48.553: INFO: exec node1: stdout: "0\n"
Aug 28 03:18:48.553: INFO: exec node1: stderr: "ls: cannot access /mnt/disks/by-uuid/google-local-ssds-scsi-fs/: No such file or directory\n"
Aug 28 03:18:48.553: INFO: exec node1: exit code: 0
Aug 28 03:18:48.553: INFO: Requires at least 1 scsi fs localSSD
[AfterEach] [Volume type: gce-localssd-scsi-fs] [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:200
STEP: Cleaning up PVC and PV
[AfterEach] [sig-storage] PersistentVolumes-local
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 28 03:18:48.555: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "persistent-local-volumes-test-3280" for this suite.
S [SKIPPING] in Spec Setup (BeforeEach) [2.198 seconds]
[sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Volume type: gce-localssd-scsi-fs] [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:188
    Set fsGroup for local volume [BeforeEach]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:256
      should set fsGroup for one pod [Slow]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:263

      Requires at least 1 scsi fs localSSD

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:1241
------------------------------
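Note: the "[Volume type: gce-localssd-scsi-fs]" specs in this run all skip the same way. The BeforeEach at persistent_volumes-local.go:191 runs `ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l` through a hostexec pod and skips unless the count is at least 1; on this bare-metal node the directory does not exist. (The reported exit code is 0 even though ls failed because the shell pipeline's status comes from `wc`, which is why the check reads stdout, "0\n", rather than the exit code.) A minimal standalone Go sketch of the same discovery check, run directly on a node instead of through a hostexec pod; the directory path is the one from the log, everything else is illustrative.

    // Sketch only: counts candidate local SSD mounts the way the
    // BeforeEach does, and skips when none are found.
    package main

    import (
        "fmt"
        "os"
    )

    const ssdDir = "/mnt/disks/by-uuid/google-local-ssds-scsi-fs/"

    func main() {
        entries, err := os.ReadDir(ssdDir)
        if err != nil || len(entries) < 1 {
            // Same skip reason as the log; on GCE this directory is
            // populated when startup scripts format and mount local SSDs.
            fmt.Println("SKIP: Requires at least 1 scsi fs localSSD")
            return
        }
        fmt.Printf("found %d scsi fs localSSD(s)\n", len(entries))
    }
------------------------------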
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] [Serial] Volume metrics
  should create prometheus metrics for volume provisioning and attach/detach
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:100
[BeforeEach] [sig-storage] [Serial] Volume metrics
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 28 03:18:48.568: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pv
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] [Serial] Volume metrics
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:56
Aug 28 03:18:48.588: INFO: Only supported for providers [gce gke aws] (not skeleton)
[AfterEach] [sig-storage] [Serial] Volume metrics
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 28 03:18:48.589: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pv-6923" for this suite.
[AfterEach] [sig-storage] [Serial] Volume metrics
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:81
S [SKIPPING] in Spec Setup (BeforeEach) [0.030 seconds]
[sig-storage] [Serial] Volume metrics
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  should create prometheus metrics for volume provisioning and attach/detach [BeforeEach]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:100

  Only supported for providers [gce gke aws] (not skeleton)

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:60
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] PersistentVolumes-local [Volume type: gce-localssd-scsi-fs] [Serial] Set fsGroup for local volume
  should set different fsGroup for second pod if first pod is deleted
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:282
[BeforeEach] [sig-storage] PersistentVolumes-local
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 28 03:18:48.609: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename persistent-local-volumes-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] PersistentVolumes-local
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:155
[BeforeEach] [Volume type: gce-localssd-scsi-fs] [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:191
Aug 28 03:18:50.663: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l] Namespace:persistent-local-volumes-test-6422 PodName:hostexec-node1-knn7z ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true}
Aug 28 03:18:50.663: INFO: >>> kubeConfig: /root/.kube/config
Aug 28 03:18:50.781: INFO: exec node1: command: ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l
Aug 28 03:18:50.781: INFO: exec node1: stdout: "0\n"
Aug 28 03:18:50.781: INFO: exec node1: stderr: "ls: cannot access /mnt/disks/by-uuid/google-local-ssds-scsi-fs/: No such file or directory\n"
Aug 28 03:18:50.781: INFO: exec node1: exit code: 0
Aug 28 03:18:50.781: INFO: Requires at least 1 scsi fs localSSD
[AfterEach] [Volume type: gce-localssd-scsi-fs] [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:200
STEP: Cleaning up PVC and PV
[AfterEach] [sig-storage] PersistentVolumes-local
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 28 03:18:50.783: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "persistent-local-volumes-test-6422" for this suite.
S [SKIPPING] in Spec Setup (BeforeEach) [2.182 seconds]
[sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Volume type: gce-localssd-scsi-fs] [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:188
    Set fsGroup for local volume [BeforeEach]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:256
      should set different fsGroup for second pod if first pod is deleted
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:282

      Requires at least 1 scsi fs localSSD

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:1241
------------------------------
[sig-storage] PersistentVolumes-local [Volume type: gce-localssd-scsi-fs] [Serial] Set fsGroup for local volume
  should set same fsGroup for two pods simultaneously [Slow]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:270
[BeforeEach] [sig-storage] PersistentVolumes-local
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 28 03:18:50.792: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename persistent-local-volumes-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] PersistentVolumes-local
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:155
[BeforeEach] [Volume type: gce-localssd-scsi-fs] [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:191
Aug 28 03:18:54.836: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l] Namespace:persistent-local-volumes-test-2397 PodName:hostexec-node1-l4xdk ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true}
Aug 28 03:18:54.836: INFO: >>> kubeConfig: /root/.kube/config
Aug 28 03:18:54.951: INFO: exec node1: command: ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l
Aug 28 03:18:54.951: INFO: exec node1: stdout: "0\n"
Aug 28 03:18:54.951: INFO: exec node1: stderr: "ls: cannot access /mnt/disks/by-uuid/google-local-ssds-scsi-fs/: No such file or directory\n"
Aug 28 03:18:54.951: INFO: exec node1: exit code: 0
Aug 28 03:18:54.951: INFO: Requires at least 1 scsi fs localSSD
[AfterEach] [Volume type: gce-localssd-scsi-fs] [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:200
STEP: Cleaning up PVC and PV
[AfterEach] [sig-storage] PersistentVolumes-local
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 28 03:18:54.953: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "persistent-local-volumes-test-2397" for this suite.
S [SKIPPING] in Spec Setup (BeforeEach) [4.171 seconds]
[sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Volume type: gce-localssd-scsi-fs] [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:188
    Set fsGroup for local volume [BeforeEach]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:256
      should set same fsGroup for two pods simultaneously [Slow]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:270

      Requires at least 1 scsi fs localSSD

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:1241
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] PersistentVolumes-local [Volume type: gce-localssd-scsi-fs] [Serial] Two pods mounting a local volume at the same time
  should be able to write from pod1 and read from pod2
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:245
[BeforeEach] [sig-storage] PersistentVolumes-local
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 28 03:18:54.966: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename persistent-local-volumes-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] PersistentVolumes-local
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:155
[BeforeEach] [Volume type: gce-localssd-scsi-fs] [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:191
Aug 28 03:18:59.022: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l] Namespace:persistent-local-volumes-test-1499 PodName:hostexec-node1-wlwfs ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true}
Aug 28 03:18:59.022: INFO: >>> kubeConfig: /root/.kube/config
Aug 28 03:18:59.142: INFO: exec node1: command: ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l
Aug 28 03:18:59.142: INFO: exec node1: stdout: "0\n"
Aug 28 03:18:59.142: INFO: exec node1: stderr: "ls: cannot access /mnt/disks/by-uuid/google-local-ssds-scsi-fs/: No such file or directory\n"
Aug 28 03:18:59.142: INFO: exec node1: exit code: 0
Aug 28 03:18:59.142: INFO: Requires at least 1 scsi fs localSSD
[AfterEach] [Volume type: gce-localssd-scsi-fs] [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:200
STEP: Cleaning up PVC and PV
[AfterEach] [sig-storage] PersistentVolumes-local
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 28 03:18:59.144: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "persistent-local-volumes-test-1499" for this suite.
S [SKIPPING] in Spec Setup (BeforeEach) [4.187 seconds]
[sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Volume type: gce-localssd-scsi-fs] [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:188
    Two pods mounting a local volume at the same time [BeforeEach]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:244
      should be able to write from pod1 and read from pod2
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:245

      Requires at least 1 scsi fs localSSD

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:1241
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] [Serial] Volume metrics PVController
  should create unbound pv count metrics for pvc controller after creating pv only
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:481
[BeforeEach] [sig-storage] [Serial] Volume metrics
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 28 03:18:59.159: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pv
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] [Serial] Volume metrics
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:56
Aug 28 03:18:59.180: INFO: Only supported for providers [gce gke aws] (not skeleton)
[AfterEach] [sig-storage] [Serial] Volume metrics
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 28 03:18:59.181: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pv-3139" for this suite.
[AfterEach] [sig-storage] [Serial] Volume metrics
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:81
S [SKIPPING] in Spec Setup (BeforeEach) [0.031 seconds]
[sig-storage] [Serial] Volume metrics
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  PVController [BeforeEach]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:382
    should create unbound pv count metrics for pvc controller after creating pv only
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:481

    Only supported for providers [gce gke aws] (not skeleton)

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:60
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] [Serial] Volume metrics
  should create metrics for total number of volumes in A/D Controller
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:321
[BeforeEach] [sig-storage] [Serial] Volume metrics
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 28 03:18:59.199: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pv
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] [Serial] Volume metrics
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:56
Aug 28 03:18:59.218: INFO: Only supported for providers [gce gke aws] (not skeleton)
[AfterEach] [sig-storage] [Serial] Volume metrics
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 28 03:18:59.220: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pv-5166" for this suite.
[AfterEach] [sig-storage] [Serial] Volume metrics
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:81
S [SKIPPING] in Spec Setup (BeforeEach) [0.029 seconds]
[sig-storage] [Serial] Volume metrics
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  should create metrics for total number of volumes in A/D Controller [BeforeEach]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:321

  Only supported for providers [gce gke aws] (not skeleton)

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:60
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] PersistentVolumes-local Pods sharing a single local PV [Serial]
  all pods should be running
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:642
[BeforeEach] [sig-storage] PersistentVolumes-local
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 28 03:18:59.229: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename persistent-local-volumes-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] PersistentVolumes-local
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:155
[BeforeEach] Pods sharing a single local PV [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:619
[It] all pods should be running
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:642
STEP: Create a PVC
STEP: Create 50 pods to use this PVC
STEP: Wait for all pods are running
Aug 28 03:23:59.771: FAIL: Some pods are not running within 5m0s
Unexpected error:
    <*errors.errorString | 0xc0002bc200>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/storage.glob..func20.7.3()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:683 +0x748
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc002f9fb00)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x345
k8s.io/kubernetes/test/e2e.TestE2E(0xc002f9fb00)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:145 +0x2b
testing.tRunner(0xc002f9fb00, 0x4dec428)
	/usr/local/go/src/testing/testing.go:1123 +0xef
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1168 +0x2b3
[AfterEach] Pods sharing a single local PV [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:633
STEP: Clean PV local-pvgmj49
[AfterEach] [sig-storage] PersistentVolumes-local
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
STEP: Collecting events from namespace "persistent-local-volumes-test-6444".
STEP: Found 362 events.
Aug 28 03:23:59.797: INFO: At 2021-08-28 03:18:59 +0000 UTC - event for pod-0060bc15-11ee-4849-99cd-c6890c1dd7ca: {default-scheduler } Scheduled: Successfully assigned persistent-local-volumes-test-6444/pod-0060bc15-11ee-4849-99cd-c6890c1dd7ca to node1
Aug 28 03:23:59.797: INFO: At 2021-08-28 03:18:59 +0000 UTC - event for pod-040d1c3d-c9a6-43d6-b3a5-c038d898c4f6: {default-scheduler } Scheduled: Successfully assigned persistent-local-volumes-test-6444/pod-040d1c3d-c9a6-43d6-b3a5-c038d898c4f6 to node1
Aug 28 03:23:59.797: INFO: At 2021-08-28 03:18:59 +0000 UTC - event for pod-0d413a9e-b1b5-42e9-908b-bbaab02a64f5: {default-scheduler } Scheduled: Successfully assigned persistent-local-volumes-test-6444/pod-0d413a9e-b1b5-42e9-908b-bbaab02a64f5 to node1
Aug 28 03:23:59.797: INFO: At 2021-08-28 03:18:59 +0000 UTC - event for pod-12081747-e86d-4f67-92b7-87d3bbd5ee2b: {default-scheduler } Scheduled: Successfully assigned persistent-local-volumes-test-6444/pod-12081747-e86d-4f67-92b7-87d3bbd5ee2b to node1
Aug 28 03:23:59.797: INFO: At 2021-08-28 03:18:59 +0000 UTC - event for pod-191a05a0-fc3e-4acd-8404-02e7f1ec1073: {default-scheduler } Scheduled: Successfully assigned persistent-local-volumes-test-6444/pod-191a05a0-fc3e-4acd-8404-02e7f1ec1073 to node1
Aug 28 03:23:59.797: INFO: At 2021-08-28 03:18:59 +0000 UTC - event for pod-1f68adb8-907d-462d-83c0-81a62875a18d: {default-scheduler } Scheduled: Successfully assigned persistent-local-volumes-test-6444/pod-1f68adb8-907d-462d-83c0-81a62875a18d to node1
Aug 28 03:23:59.797: INFO: At 2021-08-28 03:18:59 +0000 UTC - event for pod-20a68870-7734-44e5-8250-ed0ff242396e: {default-scheduler } Scheduled: Successfully assigned persistent-local-volumes-test-6444/pod-20a68870-7734-44e5-8250-ed0ff242396e to node1
Aug 28 03:23:59.797: INFO: At 2021-08-28 03:18:59 +0000 UTC - event for pod-218eadb1-6012-4f8c-9774-8589a6a8dc9e: {default-scheduler } Scheduled: Successfully assigned persistent-local-volumes-test-6444/pod-218eadb1-6012-4f8c-9774-8589a6a8dc9e to node1
Aug 28 03:23:59.798: INFO: At 2021-08-28 03:18:59 +0000 UTC - event for pod-288eaa76-ebb1-46fc-8e8f-988380fe6721: {default-scheduler } Scheduled: Successfully assigned persistent-local-volumes-test-6444/pod-288eaa76-ebb1-46fc-8e8f-988380fe6721 to node1
Aug 28 03:23:59.798: INFO: At 2021-08-28 03:18:59 +0000 UTC - event for pod-2ae1600f-b24a-43d9-b495-0983881d2cfb: {default-scheduler } Scheduled: Successfully assigned persistent-local-volumes-test-6444/pod-2ae1600f-b24a-43d9-b495-0983881d2cfb to node1
Aug 28 03:23:59.798: INFO: At 2021-08-28 03:18:59 +0000 UTC - event for pod-2be1822f-e531-4d53-9b6d-02d7fb262b65: {default-scheduler } Scheduled: Successfully assigned persistent-local-volumes-test-6444/pod-2be1822f-e531-4d53-9b6d-02d7fb262b65 to node1
Aug 28 03:23:59.798: INFO: At 2021-08-28 03:18:59 +0000 UTC - event for pod-40b6d641-9a1e-4d20-a39e-22884ceeff53: {default-scheduler } Scheduled: Successfully assigned persistent-local-volumes-test-6444/pod-40b6d641-9a1e-4d20-a39e-22884ceeff53 to node1
Aug 28 03:23:59.798: INFO: At 2021-08-28 03:18:59 +0000 UTC - event for pod-43aad80b-6645-4b9f-8fc8-57460ad3a112: {default-scheduler } Scheduled: Successfully assigned persistent-local-volumes-test-6444/pod-43aad80b-6645-4b9f-8fc8-57460ad3a112 to node1
Aug 28 03:23:59.798: INFO: At 2021-08-28 03:18:59 +0000 UTC - event for pod-4c261d29-ebf6-4faf-91c3-02b125f83e60: {default-scheduler } Scheduled: Successfully assigned persistent-local-volumes-test-6444/pod-4c261d29-ebf6-4faf-91c3-02b125f83e60 to node1
Aug 28 03:23:59.798: INFO: At 2021-08-28 03:18:59 +0000 UTC - event for pod-4dfe96ed-dd96-400f-89ce-3db1557c2dae: {default-scheduler } Scheduled: Successfully assigned persistent-local-volumes-test-6444/pod-4dfe96ed-dd96-400f-89ce-3db1557c2dae to node1
Aug 28 03:23:59.798: INFO: At 2021-08-28 03:18:59 +0000 UTC - event for pod-51996145-ecbd-4979-986e-a2566b79da14: {default-scheduler } Scheduled: Successfully assigned persistent-local-volumes-test-6444/pod-51996145-ecbd-4979-986e-a2566b79da14 to node1
Aug 28 03:23:59.798: INFO: At 2021-08-28 03:18:59 +0000 UTC - event for pod-53b2c341-b919-49f9-95a2-a09a7a40e6d3: {default-scheduler } Scheduled: Successfully assigned persistent-local-volumes-test-6444/pod-53b2c341-b919-49f9-95a2-a09a7a40e6d3 to node1
Aug 28 03:23:59.798: INFO: At 2021-08-28 03:18:59 +0000 UTC - event for pod-55ea1243-4891-4255-b8a2-0b46337d8af8: {default-scheduler } Scheduled: Successfully assigned persistent-local-volumes-test-6444/pod-55ea1243-4891-4255-b8a2-0b46337d8af8 to node1
Aug 28 03:23:59.798: INFO: At 2021-08-28 03:18:59 +0000 UTC - event for pod-59151975-c844-4b23-bbdf-d83ce085296c: {default-scheduler } Scheduled: Successfully assigned persistent-local-volumes-test-6444/pod-59151975-c844-4b23-bbdf-d83ce085296c to node1
Aug 28 03:23:59.798: INFO: At 2021-08-28 03:18:59 +0000 UTC - event for pod-5b56657d-f3b1-4ad6-8213-9d96ccd953f1: {default-scheduler } Scheduled: Successfully assigned persistent-local-volumes-test-6444/pod-5b56657d-f3b1-4ad6-8213-9d96ccd953f1 to node1
Aug 28 03:23:59.798: INFO: At 2021-08-28 03:18:59 +0000 UTC - event for pod-5e6bc330-443b-4cc9-9ab2-87a195e7710c: {default-scheduler } Scheduled: Successfully assigned persistent-local-volumes-test-6444/pod-5e6bc330-443b-4cc9-9ab2-87a195e7710c to node1
Aug 28 03:23:59.798: INFO: At 2021-08-28 03:18:59 +0000 UTC - event for pod-5e9afa04-0b49-4697-afe0-338f12fb3988: {default-scheduler } Scheduled: Successfully assigned persistent-local-volumes-test-6444/pod-5e9afa04-0b49-4697-afe0-338f12fb3988 to node1
Aug 28 03:23:59.798: INFO: At 2021-08-28 03:18:59 +0000 UTC - event for pod-5ea2304e-d792-4a89-aea4-870a0fa30bc5: {default-scheduler } Scheduled: Successfully assigned persistent-local-volumes-test-6444/pod-5ea2304e-d792-4a89-aea4-870a0fa30bc5 to node1
Aug 28 03:23:59.798: INFO: At 2021-08-28 03:18:59 +0000 UTC - event for pod-5fac7398-60e9-45fa-b0df-b15ab9e417cd: {default-scheduler } Scheduled: Successfully assigned persistent-local-volumes-test-6444/pod-5fac7398-60e9-45fa-b0df-b15ab9e417cd to node1
Aug 28 03:23:59.798: INFO: At 2021-08-28 03:18:59 +0000 UTC - event for pod-66755853-0db4-499d-803e-6c1f031c3599: {default-scheduler } Scheduled: Successfully assigned persistent-local-volumes-test-6444/pod-66755853-0db4-499d-803e-6c1f031c3599 to node1
Aug 28 03:23:59.798: INFO: At 2021-08-28 03:18:59 +0000 UTC - event for pod-6773c694-d791-4f26-b07e-696d2e05173a: {default-scheduler } Scheduled: Successfully assigned persistent-local-volumes-test-6444/pod-6773c694-d791-4f26-b07e-696d2e05173a to node1
Aug 28 03:23:59.798: INFO: At 2021-08-28 03:18:59 +0000 UTC - event for pod-7b0ea0bb-9d30-431d-8f7e-775e42fc55f6: {default-scheduler } Scheduled: Successfully assigned persistent-local-volumes-test-6444/pod-7b0ea0bb-9d30-431d-8f7e-775e42fc55f6 to node1
Aug 28 03:23:59.798: INFO: At 2021-08-28 03:18:59 +0000 UTC - event for pod-896b36c5-e368-4ca9-a7e8-d8270fe997ee: {default-scheduler } Scheduled: Successfully assigned persistent-local-volumes-test-6444/pod-896b36c5-e368-4ca9-a7e8-d8270fe997ee to node1
Aug 28 03:23:59.798: INFO: At 2021-08-28 03:18:59 +0000 UTC - event for pod-924019d9-0795-4ada-97f0-a0e57615146f: {default-scheduler } Scheduled: Successfully assigned persistent-local-volumes-test-6444/pod-924019d9-0795-4ada-97f0-a0e57615146f to node1
Aug 28 03:23:59.798: INFO: At 2021-08-28 03:18:59 +0000 UTC - event for pod-937811e1-044d-4f1c-8190-5334572e7730: {default-scheduler } Scheduled: Successfully assigned persistent-local-volumes-test-6444/pod-937811e1-044d-4f1c-8190-5334572e7730 to node1
Aug 28 03:23:59.798: INFO: At 2021-08-28 03:18:59 +0000 UTC - event for pod-995e1cf4-a34e-4b81-855d-47943a293de3: {default-scheduler } Scheduled: Successfully assigned persistent-local-volumes-test-6444/pod-995e1cf4-a34e-4b81-855d-47943a293de3 to node1
Aug 28 03:23:59.798: INFO: At 2021-08-28 03:18:59 +0000 UTC - event for pod-a047018f-fb61-4df6-b9e7-7bfa79737f38: {default-scheduler } Scheduled: Successfully assigned persistent-local-volumes-test-6444/pod-a047018f-fb61-4df6-b9e7-7bfa79737f38 to node1
Aug 28 03:23:59.798: INFO: At 2021-08-28 03:18:59 +0000 UTC - event for pod-a5a3b3da-4552-4ff6-bc9f-e75390846609: {default-scheduler } Scheduled: Successfully assigned persistent-local-volumes-test-6444/pod-a5a3b3da-4552-4ff6-bc9f-e75390846609 to node1
Aug 28 03:23:59.798: INFO: At 2021-08-28 03:18:59 +0000 UTC - event for pod-ae2f271e-271c-45d6-86a2-7ce0689b4804: {default-scheduler } Scheduled: Successfully assigned persistent-local-volumes-test-6444/pod-ae2f271e-271c-45d6-86a2-7ce0689b4804 to node1
Aug 28 03:23:59.798: INFO: At 2021-08-28 03:18:59 +0000 UTC - event for pod-b01a9e1f-17de-4787-8291-e347f7207c04: {default-scheduler } Scheduled: Successfully assigned persistent-local-volumes-test-6444/pod-b01a9e1f-17de-4787-8291-e347f7207c04 to node1
Aug 28 03:23:59.798: INFO: At 2021-08-28 03:18:59 +0000 UTC - event for pod-b13afcd6-8433-47ff-9e72-b57988ec6003: {default-scheduler } Scheduled: Successfully assigned persistent-local-volumes-test-6444/pod-b13afcd6-8433-47ff-9e72-b57988ec6003 to node1
Aug 28 03:23:59.798: INFO: At 2021-08-28 03:18:59 +0000 UTC - event for pod-b4ee6f2f-11af-4876-8546-d54074074034: {default-scheduler } Scheduled: Successfully assigned persistent-local-volumes-test-6444/pod-b4ee6f2f-11af-4876-8546-d54074074034 to node1
Aug 28 03:23:59.798: INFO: At 2021-08-28 03:18:59 +0000 UTC - event for pod-b6a94036-c85c-4d64-89e4-f7311c941a43: {default-scheduler } Scheduled: Successfully assigned persistent-local-volumes-test-6444/pod-b6a94036-c85c-4d64-89e4-f7311c941a43 to node1
Aug 28 03:23:59.798: INFO: At 2021-08-28 03:18:59 +0000 UTC - event for pod-b6bfdd59-6d87-4899-91e9-ceddda7efb04: {default-scheduler } Scheduled: Successfully assigned persistent-local-volumes-test-6444/pod-b6bfdd59-6d87-4899-91e9-ceddda7efb04 to node1
Aug 28 03:23:59.798: INFO: At 2021-08-28 03:18:59 +0000 UTC - event for pod-b7e65917-0844-4e05-a20d-dc417d72619b: {default-scheduler } Scheduled: Successfully assigned persistent-local-volumes-test-6444/pod-b7e65917-0844-4e05-a20d-dc417d72619b to node1
Aug 28 03:23:59.798: INFO: At 2021-08-28 03:18:59 +0000 UTC - event for pod-ba671871-0ee2-47f2-a7b3-65a00e132522: {default-scheduler } Scheduled: Successfully assigned persistent-local-volumes-test-6444/pod-ba671871-0ee2-47f2-a7b3-65a00e132522 to node1
Aug 28 03:23:59.798: INFO: At 2021-08-28 03:18:59 +0000 UTC - event for pod-c52ec868-e9e2-4829-9426-3970f11ddac2: {default-scheduler } Scheduled: Successfully assigned persistent-local-volumes-test-6444/pod-c52ec868-e9e2-4829-9426-3970f11ddac2 to node1
Aug 28 03:23:59.798: INFO: At 2021-08-28 03:18:59 +0000 UTC - event for pod-c69a9389-323c-42cb-b323-b34e1025ff51: {default-scheduler } Scheduled: Successfully assigned persistent-local-volumes-test-6444/pod-c69a9389-323c-42cb-b323-b34e1025ff51 to node1
Aug 28 03:23:59.798: INFO: At 2021-08-28 03:18:59 +0000 UTC - event for pod-cf12187e-5863-4f10-87e5-f48b90a7f2b0: {default-scheduler } Scheduled: Successfully assigned persistent-local-volumes-test-6444/pod-cf12187e-5863-4f10-87e5-f48b90a7f2b0 to node1
Aug 28 03:23:59.798: INFO: At 2021-08-28 03:18:59 +0000 UTC - event for pod-d476e108-c35b-43ef-8fa7-2513c6fc7006: {default-scheduler } Scheduled: Successfully assigned persistent-local-volumes-test-6444/pod-d476e108-c35b-43ef-8fa7-2513c6fc7006 to node1
Aug 28 03:23:59.798: INFO: At 2021-08-28 03:18:59 +0000 UTC - event for pod-df39cbf5-9b17-4568-9cbe-843e91275e44: {default-scheduler } Scheduled: Successfully assigned persistent-local-volumes-test-6444/pod-df39cbf5-9b17-4568-9cbe-843e91275e44 to node1
Aug 28 03:23:59.798: INFO: At 2021-08-28 03:18:59 +0000 UTC - event for pod-ebf66acd-613a-4ed7-8603-9dc6a5dbd69c: {default-scheduler } Scheduled: Successfully assigned persistent-local-volumes-test-6444/pod-ebf66acd-613a-4ed7-8603-9dc6a5dbd69c to node1
Aug 28 03:23:59.798: INFO: At 2021-08-28 03:18:59 +0000 UTC - event for pod-ec474e3c-90de-4ce7-9ad5-aa7c30e86f9f: {default-scheduler } Scheduled: Successfully assigned persistent-local-volumes-test-6444/pod-ec474e3c-90de-4ce7-9ad5-aa7c30e86f9f to node1
Aug 28 03:23:59.798: INFO: At 2021-08-28 03:18:59 +0000 UTC - event for pod-ff33ff1c-3196-454f-a95c-7de0a1c57900: {default-scheduler } Scheduled: Successfully assigned persistent-local-volumes-test-6444/pod-ff33ff1c-3196-454f-a95c-7de0a1c57900 to node1
Aug 28 03:23:59.798: INFO: At 2021-08-28 03:18:59 +0000 UTC - event for pod-ffb71798-1721-46a4-a60d-5a3f16b3e467: {default-scheduler } Scheduled: Successfully assigned persistent-local-volumes-test-6444/pod-ffb71798-1721-46a4-a60d-5a3f16b3e467 to node1
Aug 28 03:23:59.798: INFO: At 2021-08-28 03:19:01 +0000 UTC - event for pod-218eadb1-6012-4f8c-9774-8589a6a8dc9e: {kubelet node1} Pulling: Pulling image "docker.io/library/busybox:1.29"
Aug 28 03:23:59.798: INFO: At 2021-08-28 03:19:01 +0000 UTC - event for pod-218eadb1-6012-4f8c-9774-8589a6a8dc9e: {multus } AddedInterface: Add eth0 [10.244.3.241/24]
Aug 28 03:23:59.798: INFO: At 2021-08-28 03:19:03 +0000 UTC - event for pod-218eadb1-6012-4f8c-9774-8589a6a8dc9e: {kubelet node1} Pulled: Successfully pulled image "docker.io/library/busybox:1.29" in 1.415974236s
Aug 28 03:23:59.798: INFO: At 2021-08-28 03:19:03 +0000 UTC - event for pod-218eadb1-6012-4f8c-9774-8589a6a8dc9e: {kubelet node1} Created: Created container write-pod
Aug 28 03:23:59.798: INFO: At 2021-08-28 03:19:03 +0000 UTC - event for pod-b13afcd6-8433-47ff-9e72-b57988ec6003: {multus } AddedInterface: Add eth0 [10.244.3.242/24]
Aug 28 03:23:59.798: INFO: At 2021-08-28 03:19:03 +0000 UTC - event for pod-b13afcd6-8433-47ff-9e72-b57988ec6003: {kubelet node1} Pulling: Pulling image "docker.io/library/busybox:1.29"
Aug 28 03:23:59.798: INFO: At 2021-08-28 03:19:04 +0000 UTC - event for pod-218eadb1-6012-4f8c-9774-8589a6a8dc9e: {kubelet node1} Started: Started container write-pod
Aug 28 03:23:59.798: INFO: At 2021-08-28 03:19:04 +0000 UTC - event for pod-288eaa76-ebb1-46fc-8e8f-988380fe6721: {kubelet node1} Pulling: Pulling image "docker.io/library/busybox:1.29"
Aug 28 03:23:59.798: INFO: At 2021-08-28 03:19:04 +0000 UTC - event for pod-288eaa76-ebb1-46fc-8e8f-988380fe6721: {multus } AddedInterface: Add eth0 [10.244.3.243/24]
Aug 28 03:23:59.798: INFO: At 2021-08-28 03:19:04 +0000 UTC - event for pod-b13afcd6-8433-47ff-9e72-b57988ec6003: {kubelet node1} Pulled: Successfully pulled image "docker.io/library/busybox:1.29" in 1.61114506s
Aug 28 03:23:59.798: INFO: At 2021-08-28 03:19:05 +0000 UTC - event for pod-1f68adb8-907d-462d-83c0-81a62875a18d: {kubelet node1} Pulling: Pulling image "docker.io/library/busybox:1.29"
Aug 28 03:23:59.798: INFO: At 2021-08-28 03:19:05 +0000 UTC - event for pod-1f68adb8-907d-462d-83c0-81a62875a18d: {multus } AddedInterface: Add eth0 [10.244.3.245/24]
Aug 28 03:23:59.798: INFO: At 2021-08-28 03:19:05 +0000 UTC - event for pod-2be1822f-e531-4d53-9b6d-02d7fb262b65: {kubelet node1} Pulling: Pulling image "docker.io/library/busybox:1.29"
Aug 28 03:23:59.798: INFO: At 2021-08-28 03:19:05 +0000 UTC - event for pod-2be1822f-e531-4d53-9b6d-02d7fb262b65: {multus } AddedInterface: Add eth0 [10.244.3.244/24]
Aug 28 03:23:59.798: INFO: At 2021-08-28 03:19:05 +0000 UTC - event for pod-b13afcd6-8433-47ff-9e72-b57988ec6003: {kubelet node1} Created: Created container write-pod
Aug 28 03:23:59.798: INFO: At 2021-08-28 03:19:06 +0000 UTC - event for pod-288eaa76-ebb1-46fc-8e8f-988380fe6721: {kubelet node1} Created: Created container write-pod
Aug 28 03:23:59.798: INFO: At 2021-08-28 03:19:06 +0000 UTC - event for pod-288eaa76-ebb1-46fc-8e8f-988380fe6721: {kubelet node1} Pulled: Successfully pulled image "docker.io/library/busybox:1.29" in 1.923245631s
Aug 28 03:23:59.798: INFO: At 2021-08-28 03:19:07 +0000 UTC - event for pod-288eaa76-ebb1-46fc-8e8f-988380fe6721: {kubelet node1} Started: Started container write-pod
Aug 28 03:23:59.798: INFO: At 2021-08-28 03:19:07 +0000 UTC - event for pod-2be1822f-e531-4d53-9b6d-02d7fb262b65: {kubelet node1} Pulled: Successfully pulled image "docker.io/library/busybox:1.29" in 2.218834881s
Aug 28 03:23:59.798: INFO: At 2021-08-28 03:19:07 +0000 UTC - event for pod-b13afcd6-8433-47ff-9e72-b57988ec6003: {kubelet node1} Started: Started container write-pod
Aug 28 03:23:59.798: INFO: At 2021-08-28 03:19:07 +0000 UTC - event for pod-ebf66acd-613a-4ed7-8603-9dc6a5dbd69c: {kubelet node1} Pulling: Pulling image "docker.io/library/busybox:1.29"
Aug 28 03:23:59.798: INFO: At 2021-08-28 03:19:07 +0000 UTC - event for pod-ebf66acd-613a-4ed7-8603-9dc6a5dbd69c: {multus } AddedInterface: Add eth0 [10.244.3.246/24]
Aug 28 03:23:59.798: INFO: At 2021-08-28 03:19:08 +0000 UTC - event for pod-1f68adb8-907d-462d-83c0-81a62875a18d: {kubelet node1} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
Aug 28 03:23:59.798: INFO: At 2021-08-28 03:19:08 +0000 UTC - event for pod-1f68adb8-907d-462d-83c0-81a62875a18d: {kubelet node1} Failed: Error: ErrImagePull
Aug 28 03:23:59.798: INFO: At 2021-08-28 03:19:08 +0000 UTC - event for pod-2be1822f-e531-4d53-9b6d-02d7fb262b65: {kubelet node1} Created: Created container write-pod
Aug 28 03:23:59.798: INFO: At 2021-08-28 03:19:08 +0000 UTC - event for pod-c69a9389-323c-42cb-b323-b34e1025ff51: {multus } AddedInterface: Add eth0 [10.244.3.247/24]
Aug 28 03:23:59.798: INFO: At 2021-08-28 03:19:08 +0000 UTC - event for pod-c69a9389-323c-42cb-b323-b34e1025ff51: {kubelet node1} Pulling: Pulling image "docker.io/library/busybox:1.29"
Aug 28 03:23:59.798: INFO: At 2021-08-28 03:19:08 +0000 UTC - event for pod-ffb71798-1721-46a4-a60d-5a3f16b3e467: {multus } AddedInterface: Add eth0 [10.244.3.248/24]
Aug 28 03:23:59.798: INFO: At 2021-08-28 03:19:08 +0000 UTC - event for pod-ffb71798-1721-46a4-a60d-5a3f16b3e467: {kubelet node1} Pulling: Pulling image "docker.io/library/busybox:1.29"
Aug 28 03:23:59.798: INFO: At 2021-08-28 03:19:09 +0000 UTC - event for pod-0060bc15-11ee-4849-99cd-c6890c1dd7ca: {kubelet node1} Pulling: Pulling image "docker.io/library/busybox:1.29"
Aug 28 03:23:59.798: INFO: At 2021-08-28 03:19:09 +0000 UTC - event for pod-0060bc15-11ee-4849-99cd-c6890c1dd7ca: {multus } AddedInterface: Add eth0 [10.244.3.249/24]
Aug 28 03:23:59.798: INFO: At 2021-08-28 03:19:09 +0000 UTC - event for pod-2be1822f-e531-4d53-9b6d-02d7fb262b65: {kubelet node1} Started: Started container write-pod
Aug 28 03:23:59.799: INFO: At 2021-08-28 03:19:09 +0000 UTC - event for pod-ebf66acd-613a-4ed7-8603-9dc6a5dbd69c: {kubelet node1} Failed: Error: ErrImagePull
Aug 28 03:23:59.799: INFO: At 2021-08-28 03:19:09 +0000 UTC - event for pod-ebf66acd-613a-4ed7-8603-9dc6a5dbd69c: {kubelet node1} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
Aug 28 03:23:59.799: INFO: At 2021-08-28 03:19:10 +0000 UTC - event for pod-1f68adb8-907d-462d-83c0-81a62875a18d: {kubelet node1} Failed: Error: ImagePullBackOff
Aug 28 03:23:59.799: INFO: At 2021-08-28 03:19:10 +0000 UTC - event for pod-1f68adb8-907d-462d-83c0-81a62875a18d: {kubelet node1} BackOff: Back-off pulling image "docker.io/library/busybox:1.29"
Aug 28 03:23:59.799: INFO: At 2021-08-28 03:19:10 +0000 UTC - event for pod-c69a9389-323c-42cb-b323-b34e1025ff51: {kubelet node1} Failed: Error: ErrImagePull
Aug 28 03:23:59.799: INFO: At 2021-08-28 03:19:10 +0000 UTC - event for pod-c69a9389-323c-42cb-b323-b34e1025ff51: {kubelet node1} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
Aug 28 03:23:59.799: INFO: At 2021-08-28 03:19:10 +0000 UTC - event for pod-ff33ff1c-3196-454f-a95c-7de0a1c57900: {multus } AddedInterface: Add eth0 [10.244.3.250/24]
Aug 28 03:23:59.799: INFO: At 2021-08-28 03:19:10 +0000 UTC - event for pod-ff33ff1c-3196-454f-a95c-7de0a1c57900: {kubelet node1} Pulling: Pulling image "docker.io/library/busybox:1.29"
Aug 28 03:23:59.799: INFO: At 2021-08-28 03:19:11 +0000 UTC - event for pod-ae2f271e-271c-45d6-86a2-7ce0689b4804: {multus } AddedInterface: Add eth0 [10.244.3.251/24]
Aug 28 03:23:59.799: INFO: At 2021-08-28 03:19:11 +0000 UTC - event for pod-ae2f271e-271c-45d6-86a2-7ce0689b4804: {kubelet node1} Pulling: Pulling image "docker.io/library/busybox:1.29"
Aug 28 03:23:59.799: INFO: At 2021-08-28 03:19:12 +0000 UTC - event for pod-5e6bc330-443b-4cc9-9ab2-87a195e7710c: {kubelet node1} Pulling: Pulling image "docker.io/library/busybox:1.29"
Aug 28 03:23:59.799: INFO: At 2021-08-28 03:19:12 +0000 UTC - event for pod-5e6bc330-443b-4cc9-9ab2-87a195e7710c: {multus } AddedInterface: Add eth0 [10.244.3.252/24]
Aug 28 03:23:59.799: INFO: At 2021-08-28 03:19:12 +0000 UTC - event for pod-ffb71798-1721-46a4-a60d-5a3f16b3e467: {kubelet node1} Failed: Error: ErrImagePull
Aug 28 03:23:59.799: INFO: At 2021-08-28 03:19:12 +0000 UTC - event for pod-ffb71798-1721-46a4-a60d-5a3f16b3e467: {kubelet node1} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
Aug 28 03:23:59.799: INFO: At 2021-08-28 03:19:13 +0000 UTC - event for pod-0060bc15-11ee-4849-99cd-c6890c1dd7ca: {kubelet node1} Failed: Error: ErrImagePull
Aug 28 03:23:59.799: INFO: At 2021-08-28 03:19:13 +0000 UTC - event for pod-0060bc15-11ee-4849-99cd-c6890c1dd7ca: {kubelet node1} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
Aug 28 03:23:59.799: INFO: At 2021-08-28 03:19:13 +0000 UTC - event for pod-5b56657d-f3b1-4ad6-8213-9d96ccd953f1: {multus } AddedInterface: Add eth0 [10.244.3.253/24]
Aug 28 03:23:59.799: INFO: At 2021-08-28 03:19:13 +0000 UTC - event for pod-5b56657d-f3b1-4ad6-8213-9d96ccd953f1: {kubelet node1} Pulling: Pulling image "docker.io/library/busybox:1.29"
Aug 28 03:23:59.799: INFO: At 2021-08-28 03:19:13 +0000 UTC - event for pod-c69a9389-323c-42cb-b323-b34e1025ff51: {kubelet node1} BackOff: Back-off pulling image "docker.io/library/busybox:1.29"
Aug 28 03:23:59.799: INFO: At 2021-08-28 03:19:13 +0000 UTC - event for pod-c69a9389-323c-42cb-b323-b34e1025ff51: {kubelet node1} Failed: Error: ImagePullBackOff
Aug 28 03:23:59.799: INFO: At 2021-08-28 03:19:13 +0000 UTC - event for pod-ebf66acd-613a-4ed7-8603-9dc6a5dbd69c: {kubelet node1} Failed: Error: ImagePullBackOff
Aug 28 03:23:59.799: INFO: At 2021-08-28 03:19:13 +0000 UTC - event for pod-ebf66acd-613a-4ed7-8603-9dc6a5dbd69c: {kubelet node1} BackOff: Back-off pulling image "docker.io/library/busybox:1.29"
Aug 28 03:23:59.799: INFO: At 2021-08-28 03:19:14 +0000 UTC - event for pod-924019d9-0795-4ada-97f0-a0e57615146f: {kubelet node1} Pulling: Pulling image "docker.io/library/busybox:1.29"
Aug 28 03:23:59.799: INFO: At 2021-08-28 03:19:14 +0000 UTC - event for pod-924019d9-0795-4ada-97f0-a0e57615146f: {multus } AddedInterface: Add eth0 [10.244.3.254/24]
Aug 28 03:23:59.799: INFO: At 2021-08-28 03:19:14 +0000 UTC - event for pod-ff33ff1c-3196-454f-a95c-7de0a1c57900: {kubelet node1} Failed: Error: ErrImagePull
Aug 28 03:23:59.799: INFO: At 2021-08-28 03:19:14 +0000 UTC - event for pod-ff33ff1c-3196-454f-a95c-7de0a1c57900: {kubelet node1} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
Aug 28 03:23:59.799: INFO: At 2021-08-28 03:19:15 +0000 UTC - event for pod-0060bc15-11ee-4849-99cd-c6890c1dd7ca: {kubelet node1} Failed: Error: ImagePullBackOff
Aug 28 03:23:59.799: INFO: At 2021-08-28 03:19:15 +0000 UTC - event for pod-0060bc15-11ee-4849-99cd-c6890c1dd7ca: {kubelet node1} BackOff: Back-off pulling image "docker.io/library/busybox:1.29"
Aug 28 03:23:59.799: INFO: At 2021-08-28 03:19:15 +0000 UTC - event for pod-a5a3b3da-4552-4ff6-bc9f-e75390846609: {kubelet node1} Pulling: Pulling image "docker.io/library/busybox:1.29"
Aug 28 03:23:59.799: INFO: At 2021-08-28 03:19:15 +0000 UTC - event for pod-a5a3b3da-4552-4ff6-bc9f-e75390846609: {multus } AddedInterface: Add eth0 [10.244.3.2/24]
Aug 28 03:23:59.799: INFO: At 2021-08-28 03:19:15 +0000 UTC - event for pod-ae2f271e-271c-45d6-86a2-7ce0689b4804: {kubelet node1} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
Aug 28 03:23:59.799: INFO: At 2021-08-28 03:19:15 +0000 UTC - event for pod-ae2f271e-271c-45d6-86a2-7ce0689b4804: {kubelet node1} Failed: Error: ErrImagePull
Aug 28 03:23:59.799: INFO: At 2021-08-28 03:19:15 +0000 UTC - event for pod-ff33ff1c-3196-454f-a95c-7de0a1c57900: {kubelet node1} BackOff: Back-off pulling image "docker.io/library/busybox:1.29"
Aug 28 03:23:59.799: INFO: At 2021-08-28 03:19:15 +0000 UTC - event for pod-ff33ff1c-3196-454f-a95c-7de0a1c57900: {kubelet node1} Failed: Error: ImagePullBackOff
Aug 28 03:23:59.799: INFO: At 2021-08-28 03:19:15 +0000 UTC - event for pod-ffb71798-1721-46a4-a60d-5a3f16b3e467: {kubelet node1} Failed: Error: ImagePullBackOff
Aug 28 03:23:59.799: INFO: At 2021-08-28 03:19:15 +0000 UTC - event for pod-ffb71798-1721-46a4-a60d-5a3f16b3e467: {kubelet node1} BackOff: Back-off pulling image "docker.io/library/busybox:1.29"
Aug 28 03:23:59.799: INFO: At 2021-08-28 03:19:16 +0000 UTC - event for pod-5e6bc330-443b-4cc9-9ab2-87a195e7710c: {kubelet node1} Failed: Error: ErrImagePull
Aug 28 03:23:59.799: INFO: At 2021-08-28 03:19:16 +0000 UTC - event for pod-5e6bc330-443b-4cc9-9ab2-87a195e7710c: {kubelet node1} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
Aug 28 03:23:59.799: INFO: At 2021-08-28 03:19:16 +0000 UTC - event for pod-b6bfdd59-6d87-4899-91e9-ceddda7efb04: {multus } AddedInterface: Add eth0 [10.244.3.3/24]
Aug 28 03:23:59.799: INFO: At 2021-08-28 03:19:16 +0000 UTC - event for pod-b6bfdd59-6d87-4899-91e9-ceddda7efb04: {kubelet node1} Pulling: Pulling image "docker.io/library/busybox:1.29"
Aug 28 03:23:59.799: INFO: At 2021-08-28 03:19:17 +0000 UTC - event for pod-040d1c3d-c9a6-43d6-b3a5-c038d898c4f6: {multus } AddedInterface: Add eth0 [10.244.3.6/24]
Aug 28 03:23:59.799: INFO: At 2021-08-28 03:19:17 +0000 UTC - event for pod-040d1c3d-c9a6-43d6-b3a5-c038d898c4f6: {kubelet node1} Pulling: Pulling image "docker.io/library/busybox:1.29"
Aug 28 03:23:59.799: INFO: At 2021-08-28 03:19:17 +0000 UTC - event for pod-191a05a0-fc3e-4acd-8404-02e7f1ec1073: {kubelet node1} Pulling: Pulling image "docker.io/library/busybox:1.29"
Aug 28 03:23:59.799: INFO: At 2021-08-28 03:19:17 +0000 UTC - event for pod-191a05a0-fc3e-4acd-8404-02e7f1ec1073: {multus } AddedInterface: Add eth0 [10.244.3.9/24]
Aug 28 03:23:59.799: INFO: At 2021-08-28 03:19:17 +0000 UTC - event for pod-5b56657d-f3b1-4ad6-8213-9d96ccd953f1: {kubelet node1} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit.
You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit Aug 28 03:23:59.799: INFO: At 2021-08-28 03:19:17 +0000 UTC - event for pod-5b56657d-f3b1-4ad6-8213-9d96ccd953f1: {kubelet node1} Failed: Error: ErrImagePull Aug 28 03:23:59.799: INFO: At 2021-08-28 03:19:18 +0000 UTC - event for pod-5e6bc330-443b-4cc9-9ab2-87a195e7710c: {kubelet node1} Failed: Error: ImagePullBackOff Aug 28 03:23:59.799: INFO: At 2021-08-28 03:19:18 +0000 UTC - event for pod-5e6bc330-443b-4cc9-9ab2-87a195e7710c: {kubelet node1} BackOff: Back-off pulling image "docker.io/library/busybox:1.29" Aug 28 03:23:59.799: INFO: At 2021-08-28 03:19:18 +0000 UTC - event for pod-7b0ea0bb-9d30-431d-8f7e-775e42fc55f6: {multus } AddedInterface: Add eth0 [10.244.3.10/24] Aug 28 03:23:59.799: INFO: At 2021-08-28 03:19:18 +0000 UTC - event for pod-7b0ea0bb-9d30-431d-8f7e-775e42fc55f6: {kubelet node1} Pulling: Pulling image "docker.io/library/busybox:1.29" Aug 28 03:23:59.799: INFO: At 2021-08-28 03:19:18 +0000 UTC - event for pod-924019d9-0795-4ada-97f0-a0e57615146f: {kubelet node1} Failed: Error: ErrImagePull Aug 28 03:23:59.799: INFO: At 2021-08-28 03:19:18 +0000 UTC - event for pod-924019d9-0795-4ada-97f0-a0e57615146f: {kubelet node1} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit Aug 28 03:23:59.799: INFO: At 2021-08-28 03:19:18 +0000 UTC - event for pod-ae2f271e-271c-45d6-86a2-7ce0689b4804: {kubelet node1} Failed: Error: ImagePullBackOff Aug 28 03:23:59.799: INFO: At 2021-08-28 03:19:18 +0000 UTC - event for pod-ae2f271e-271c-45d6-86a2-7ce0689b4804: {kubelet node1} BackOff: Back-off pulling image "docker.io/library/busybox:1.29" Aug 28 03:23:59.799: INFO: At 2021-08-28 03:19:19 +0000 UTC - event for pod-937811e1-044d-4f1c-8190-5334572e7730: {kubelet node1} Pulling: Pulling image "docker.io/library/busybox:1.29" Aug 28 03:23:59.799: INFO: At 2021-08-28 03:19:19 +0000 UTC - event for pod-937811e1-044d-4f1c-8190-5334572e7730: {multus } AddedInterface: Add eth0 [10.244.3.11/24] Aug 28 03:23:59.799: INFO: At 2021-08-28 03:19:19 +0000 UTC - event for pod-a5a3b3da-4552-4ff6-bc9f-e75390846609: {kubelet node1} Failed: Error: ErrImagePull Aug 28 03:23:59.799: INFO: At 2021-08-28 03:19:19 +0000 UTC - event for pod-a5a3b3da-4552-4ff6-bc9f-e75390846609: {kubelet node1} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. 
You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit Aug 28 03:23:59.799: INFO: At 2021-08-28 03:19:20 +0000 UTC - event for pod-5b56657d-f3b1-4ad6-8213-9d96ccd953f1: {kubelet node1} BackOff: Back-off pulling image "docker.io/library/busybox:1.29" Aug 28 03:23:59.799: INFO: At 2021-08-28 03:19:20 +0000 UTC - event for pod-5b56657d-f3b1-4ad6-8213-9d96ccd953f1: {kubelet node1} Failed: Error: ImagePullBackOff Aug 28 03:23:59.799: INFO: At 2021-08-28 03:19:20 +0000 UTC - event for pod-924019d9-0795-4ada-97f0-a0e57615146f: {kubelet node1} Failed: Error: ImagePullBackOff Aug 28 03:23:59.799: INFO: At 2021-08-28 03:19:20 +0000 UTC - event for pod-924019d9-0795-4ada-97f0-a0e57615146f: {kubelet node1} BackOff: Back-off pulling image "docker.io/library/busybox:1.29" Aug 28 03:23:59.799: INFO: At 2021-08-28 03:19:20 +0000 UTC - event for pod-b6bfdd59-6d87-4899-91e9-ceddda7efb04: {kubelet node1} Failed: Error: ErrImagePull Aug 28 03:23:59.799: INFO: At 2021-08-28 03:19:20 +0000 UTC - event for pod-b6bfdd59-6d87-4899-91e9-ceddda7efb04: {kubelet node1} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit Aug 28 03:23:59.799: INFO: At 2021-08-28 03:19:21 +0000 UTC - event for pod-040d1c3d-c9a6-43d6-b3a5-c038d898c4f6: {kubelet node1} Failed: Error: ErrImagePull Aug 28 03:23:59.799: INFO: At 2021-08-28 03:19:21 +0000 UTC - event for pod-040d1c3d-c9a6-43d6-b3a5-c038d898c4f6: {kubelet node1} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit Aug 28 03:23:59.799: INFO: At 2021-08-28 03:19:21 +0000 UTC - event for pod-0d413a9e-b1b5-42e9-908b-bbaab02a64f5: {multus } AddedInterface: Add eth0 [10.244.3.13/24] Aug 28 03:23:59.799: INFO: At 2021-08-28 03:19:21 +0000 UTC - event for pod-0d413a9e-b1b5-42e9-908b-bbaab02a64f5: {kubelet node1} Pulling: Pulling image "docker.io/library/busybox:1.29" Aug 28 03:23:59.799: INFO: At 2021-08-28 03:19:21 +0000 UTC - event for pod-59151975-c844-4b23-bbdf-d83ce085296c: {multus } AddedInterface: Add eth0 [10.244.3.12/24] Aug 28 03:23:59.799: INFO: At 2021-08-28 03:19:21 +0000 UTC - event for pod-59151975-c844-4b23-bbdf-d83ce085296c: {kubelet node1} Pulling: Pulling image "docker.io/library/busybox:1.29" Aug 28 03:23:59.799: INFO: At 2021-08-28 03:19:22 +0000 UTC - event for pod-191a05a0-fc3e-4acd-8404-02e7f1ec1073: {kubelet node1} Failed: Error: ErrImagePull Aug 28 03:23:59.799: INFO: At 2021-08-28 03:19:22 +0000 UTC - event for pod-191a05a0-fc3e-4acd-8404-02e7f1ec1073: {kubelet node1} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. 
You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit Aug 28 03:23:59.799: INFO: At 2021-08-28 03:19:22 +0000 UTC - event for pod-a5a3b3da-4552-4ff6-bc9f-e75390846609: {kubelet node1} BackOff: Back-off pulling image "docker.io/library/busybox:1.29" Aug 28 03:23:59.799: INFO: At 2021-08-28 03:19:22 +0000 UTC - event for pod-a5a3b3da-4552-4ff6-bc9f-e75390846609: {kubelet node1} Failed: Error: ImagePullBackOff Aug 28 03:23:59.799: INFO: At 2021-08-28 03:19:22 +0000 UTC - event for pod-b6bfdd59-6d87-4899-91e9-ceddda7efb04: {kubelet node1} BackOff: Back-off pulling image "docker.io/library/busybox:1.29" Aug 28 03:23:59.799: INFO: At 2021-08-28 03:19:22 +0000 UTC - event for pod-b6bfdd59-6d87-4899-91e9-ceddda7efb04: {kubelet node1} Failed: Error: ImagePullBackOff Aug 28 03:23:59.799: INFO: At 2021-08-28 03:19:22 +0000 UTC - event for pod-c52ec868-e9e2-4829-9426-3970f11ddac2: {multus } AddedInterface: Add eth0 [10.244.3.14/24] Aug 28 03:23:59.799: INFO: At 2021-08-28 03:19:22 +0000 UTC - event for pod-c52ec868-e9e2-4829-9426-3970f11ddac2: {kubelet node1} Pulling: Pulling image "docker.io/library/busybox:1.29" Aug 28 03:23:59.799: INFO: At 2021-08-28 03:19:23 +0000 UTC - event for pod-4c261d29-ebf6-4faf-91c3-02b125f83e60: {multus } AddedInterface: Add eth0 [10.244.3.15/24] Aug 28 03:23:59.800: INFO: At 2021-08-28 03:19:23 +0000 UTC - event for pod-4c261d29-ebf6-4faf-91c3-02b125f83e60: {kubelet node1} Pulling: Pulling image "docker.io/library/busybox:1.29" Aug 28 03:23:59.800: INFO: At 2021-08-28 03:19:23 +0000 UTC - event for pod-7b0ea0bb-9d30-431d-8f7e-775e42fc55f6: {kubelet node1} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. 
You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit Aug 28 03:23:59.800: INFO: At 2021-08-28 03:19:23 +0000 UTC - event for pod-7b0ea0bb-9d30-431d-8f7e-775e42fc55f6: {kubelet node1} Failed: Error: ErrImagePull Aug 28 03:23:59.800: INFO: At 2021-08-28 03:19:24 +0000 UTC - event for pod-040d1c3d-c9a6-43d6-b3a5-c038d898c4f6: {kubelet node1} BackOff: Back-off pulling image "docker.io/library/busybox:1.29" Aug 28 03:23:59.800: INFO: At 2021-08-28 03:19:24 +0000 UTC - event for pod-040d1c3d-c9a6-43d6-b3a5-c038d898c4f6: {kubelet node1} Failed: Error: ImagePullBackOff Aug 28 03:23:59.800: INFO: At 2021-08-28 03:19:24 +0000 UTC - event for pod-191a05a0-fc3e-4acd-8404-02e7f1ec1073: {kubelet node1} BackOff: Back-off pulling image "docker.io/library/busybox:1.29" Aug 28 03:23:59.800: INFO: At 2021-08-28 03:19:24 +0000 UTC - event for pod-191a05a0-fc3e-4acd-8404-02e7f1ec1073: {kubelet node1} Failed: Error: ImagePullBackOff Aug 28 03:23:59.800: INFO: At 2021-08-28 03:19:24 +0000 UTC - event for pod-53b2c341-b919-49f9-95a2-a09a7a40e6d3: {kubelet node1} Pulling: Pulling image "docker.io/library/busybox:1.29" Aug 28 03:23:59.800: INFO: At 2021-08-28 03:19:24 +0000 UTC - event for pod-53b2c341-b919-49f9-95a2-a09a7a40e6d3: {multus } AddedInterface: Add eth0 [10.244.3.16/24] Aug 28 03:23:59.800: INFO: At 2021-08-28 03:19:24 +0000 UTC - event for pod-7b0ea0bb-9d30-431d-8f7e-775e42fc55f6: {kubelet node1} Failed: Error: ImagePullBackOff Aug 28 03:23:59.800: INFO: At 2021-08-28 03:19:24 +0000 UTC - event for pod-7b0ea0bb-9d30-431d-8f7e-775e42fc55f6: {kubelet node1} BackOff: Back-off pulling image "docker.io/library/busybox:1.29" Aug 28 03:23:59.800: INFO: At 2021-08-28 03:19:24 +0000 UTC - event for pod-937811e1-044d-4f1c-8190-5334572e7730: {kubelet node1} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit Aug 28 03:23:59.800: INFO: At 2021-08-28 03:19:24 +0000 UTC - event for pod-937811e1-044d-4f1c-8190-5334572e7730: {kubelet node1} Failed: Error: ErrImagePull Aug 28 03:23:59.800: INFO: At 2021-08-28 03:19:25 +0000 UTC - event for pod-5ea2304e-d792-4a89-aea4-870a0fa30bc5: {kubelet node1} Pulling: Pulling image "docker.io/library/busybox:1.29" Aug 28 03:23:59.800: INFO: At 2021-08-28 03:19:25 +0000 UTC - event for pod-5ea2304e-d792-4a89-aea4-870a0fa30bc5: {multus } AddedInterface: Add eth0 [10.244.3.17/24] Aug 28 03:23:59.800: INFO: At 2021-08-28 03:19:26 +0000 UTC - event for pod-59151975-c844-4b23-bbdf-d83ce085296c: {kubelet node1} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit Aug 28 03:23:59.800: INFO: At 2021-08-28 03:19:26 +0000 UTC - event for pod-59151975-c844-4b23-bbdf-d83ce085296c: {kubelet node1} Failed: Error: ErrImagePull Aug 28 03:23:59.800: INFO: At 2021-08-28 03:19:27 +0000 UTC - event for pod-0d413a9e-b1b5-42e9-908b-bbaab02a64f5: {kubelet node1} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. 
You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit Aug 28 03:23:59.800: INFO: At 2021-08-28 03:19:27 +0000 UTC - event for pod-0d413a9e-b1b5-42e9-908b-bbaab02a64f5: {kubelet node1} Failed: Error: ErrImagePull Aug 28 03:23:59.800: INFO: At 2021-08-28 03:19:29 +0000 UTC - event for pod-4dfe96ed-dd96-400f-89ce-3db1557c2dae: {multus } AddedInterface: Add eth0 [10.244.3.19/24] Aug 28 03:23:59.800: INFO: At 2021-08-28 03:19:29 +0000 UTC - event for pod-4dfe96ed-dd96-400f-89ce-3db1557c2dae: {kubelet node1} Pulling: Pulling image "docker.io/library/busybox:1.29" Aug 28 03:23:59.800: INFO: At 2021-08-28 03:19:29 +0000 UTC - event for pod-896b36c5-e368-4ca9-a7e8-d8270fe997ee: {multus } AddedInterface: Add eth0 [10.244.3.18/24] Aug 28 03:23:59.800: INFO: At 2021-08-28 03:19:29 +0000 UTC - event for pod-896b36c5-e368-4ca9-a7e8-d8270fe997ee: {kubelet node1} Pulling: Pulling image "docker.io/library/busybox:1.29" Aug 28 03:23:59.800: INFO: At 2021-08-28 03:19:29 +0000 UTC - event for pod-c52ec868-e9e2-4829-9426-3970f11ddac2: {kubelet node1} Failed: Error: ErrImagePull Aug 28 03:23:59.800: INFO: At 2021-08-28 03:19:29 +0000 UTC - event for pod-c52ec868-e9e2-4829-9426-3970f11ddac2: {kubelet node1} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit Aug 28 03:23:59.800: INFO: At 2021-08-28 03:19:30 +0000 UTC - event for pod-4c261d29-ebf6-4faf-91c3-02b125f83e60: {kubelet node1} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit Aug 28 03:23:59.800: INFO: At 2021-08-28 03:19:30 +0000 UTC - event for pod-4c261d29-ebf6-4faf-91c3-02b125f83e60: {kubelet node1} Failed: Error: ErrImagePull Aug 28 03:23:59.800: INFO: At 2021-08-28 03:19:30 +0000 UTC - event for pod-937811e1-044d-4f1c-8190-5334572e7730: {kubelet node1} Failed: Error: ImagePullBackOff Aug 28 03:23:59.800: INFO: At 2021-08-28 03:19:30 +0000 UTC - event for pod-937811e1-044d-4f1c-8190-5334572e7730: {kubelet node1} BackOff: Back-off pulling image "docker.io/library/busybox:1.29" Aug 28 03:23:59.800: INFO: At 2021-08-28 03:19:31 +0000 UTC - event for pod-53b2c341-b919-49f9-95a2-a09a7a40e6d3: {kubelet node1} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. 
You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit Aug 28 03:23:59.800: INFO: At 2021-08-28 03:19:31 +0000 UTC - event for pod-53b2c341-b919-49f9-95a2-a09a7a40e6d3: {kubelet node1} Failed: Error: ErrImagePull Aug 28 03:23:59.800: INFO: At 2021-08-28 03:19:31 +0000 UTC - event for pod-d476e108-c35b-43ef-8fa7-2513c6fc7006: {kubelet node1} Pulling: Pulling image "docker.io/library/busybox:1.29" Aug 28 03:23:59.800: INFO: At 2021-08-28 03:19:31 +0000 UTC - event for pod-d476e108-c35b-43ef-8fa7-2513c6fc7006: {multus } AddedInterface: Add eth0 [10.244.3.20/24] Aug 28 03:23:59.800: INFO: At 2021-08-28 03:19:32 +0000 UTC - event for pod-5ea2304e-d792-4a89-aea4-870a0fa30bc5: {kubelet node1} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit Aug 28 03:23:59.800: INFO: At 2021-08-28 03:19:32 +0000 UTC - event for pod-5ea2304e-d792-4a89-aea4-870a0fa30bc5: {kubelet node1} Failed: Error: ErrImagePull Aug 28 03:23:59.800: INFO: At 2021-08-28 03:19:32 +0000 UTC - event for pod-66755853-0db4-499d-803e-6c1f031c3599: {multus } AddedInterface: Add eth0 [10.244.3.21/24] Aug 28 03:23:59.800: INFO: At 2021-08-28 03:19:32 +0000 UTC - event for pod-66755853-0db4-499d-803e-6c1f031c3599: {kubelet node1} Pulling: Pulling image "docker.io/library/busybox:1.29" Aug 28 03:23:59.800: INFO: At 2021-08-28 03:19:32 +0000 UTC - event for pod-6773c694-d791-4f26-b07e-696d2e05173a: {multus } AddedInterface: Add eth0 [10.244.3.22/24] Aug 28 03:23:59.800: INFO: At 2021-08-28 03:19:33 +0000 UTC - event for pod-6773c694-d791-4f26-b07e-696d2e05173a: {kubelet node1} Pulling: Pulling image "docker.io/library/busybox:1.29" Aug 28 03:23:59.800: INFO: At 2021-08-28 03:19:33 +0000 UTC - event for pod-ec474e3c-90de-4ce7-9ad5-aa7c30e86f9f: {multus } AddedInterface: Add eth0 [10.244.3.23/24] Aug 28 03:23:59.800: INFO: At 2021-08-28 03:19:33 +0000 UTC - event for pod-ec474e3c-90de-4ce7-9ad5-aa7c30e86f9f: {kubelet node1} Pulling: Pulling image "docker.io/library/busybox:1.29" Aug 28 03:23:59.800: INFO: At 2021-08-28 03:19:35 +0000 UTC - event for pod-0d413a9e-b1b5-42e9-908b-bbaab02a64f5: {kubelet node1} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Aug 28 03:23:59.800: INFO: At 2021-08-28 03:19:35 +0000 UTC - event for pod-4c261d29-ebf6-4faf-91c3-02b125f83e60: {kubelet node1} BackOff: Back-off pulling image "docker.io/library/busybox:1.29"
Aug 28 03:23:59.800: INFO: At 2021-08-28 03:19:35 +0000 UTC - event for pod-4c261d29-ebf6-4faf-91c3-02b125f83e60: {kubelet node1} Failed: Error: ImagePullBackOff
Aug 28 03:23:59.800: INFO: At 2021-08-28 03:19:35 +0000 UTC - event for pod-59151975-c844-4b23-bbdf-d83ce085296c: {kubelet node1} Failed: Error: ImagePullBackOff
Aug 28 03:23:59.800: INFO: At 2021-08-28 03:19:35 +0000 UTC - event for pod-59151975-c844-4b23-bbdf-d83ce085296c: {kubelet node1} BackOff: Back-off pulling image "docker.io/library/busybox:1.29"
Aug 28 03:23:59.800: INFO: At 2021-08-28 03:19:35 +0000 UTC - event for pod-b6a94036-c85c-4d64-89e4-f7311c941a43: {kubelet node1} Pulling: Pulling image "docker.io/library/busybox:1.29"
Aug 28 03:23:59.800: INFO: At 2021-08-28 03:19:35 +0000 UTC - event for pod-b6a94036-c85c-4d64-89e4-f7311c941a43: {multus } AddedInterface: Add eth0 [10.244.3.24/24]
Aug 28 03:23:59.800: INFO: At 2021-08-28 03:19:35 +0000 UTC - event for pod-c52ec868-e9e2-4829-9426-3970f11ddac2: {kubelet node1} BackOff: Back-off pulling image "docker.io/library/busybox:1.29"
Aug 28 03:23:59.800: INFO: At 2021-08-28 03:19:35 +0000 UTC - event for pod-c52ec868-e9e2-4829-9426-3970f11ddac2: {kubelet node1} Failed: Error: ImagePullBackOff
Aug 28 03:23:59.800: INFO: At 2021-08-28 03:19:36 +0000 UTC - event for pod-a047018f-fb61-4df6-b9e7-7bfa79737f38: {kubelet node1} Pulling: Pulling image "docker.io/library/busybox:1.29"
Aug 28 03:23:59.800: INFO: At 2021-08-28 03:19:36 +0000 UTC - event for pod-a047018f-fb61-4df6-b9e7-7bfa79737f38: {multus } AddedInterface: Add eth0 [10.244.3.25/24]
Aug 28 03:23:59.800: INFO: At 2021-08-28 03:19:37 +0000 UTC - event for pod-5fac7398-60e9-45fa-b0df-b15ab9e417cd: {multus } AddedInterface: Add eth0 [10.244.3.26/24]
Aug 28 03:23:59.800: INFO: At 2021-08-28 03:19:37 +0000 UTC - event for pod-cf12187e-5863-4f10-87e5-f48b90a7f2b0: {multus } AddedInterface: Add eth0 [10.244.3.27/24]
Aug 28 03:23:59.800: INFO: At 2021-08-28 03:19:38 +0000 UTC - event for pod-2ae1600f-b24a-43d9-b495-0983881d2cfb: {multus } AddedInterface: Add eth0 [10.244.3.28/24]
Aug 28 03:23:59.800: INFO: At 2021-08-28 03:19:38 +0000 UTC - event for pod-2ae1600f-b24a-43d9-b495-0983881d2cfb: {kubelet node1} Pulling: Pulling image "docker.io/library/busybox:1.29"
Aug 28 03:23:59.800: INFO: At 2021-08-28 03:19:38 +0000 UTC - event for pod-53b2c341-b919-49f9-95a2-a09a7a40e6d3: {kubelet node1} BackOff: Back-off pulling image "docker.io/library/busybox:1.29"
Aug 28 03:23:59.800: INFO: At 2021-08-28 03:19:38 +0000 UTC - event for pod-53b2c341-b919-49f9-95a2-a09a7a40e6d3: {kubelet node1} Failed: Error: ImagePullBackOff
Aug 28 03:23:59.800: INFO: At 2021-08-28 03:19:38 +0000 UTC - event for pod-5ea2304e-d792-4a89-aea4-870a0fa30bc5: {kubelet node1} Failed: Error: ImagePullBackOff
Aug 28 03:23:59.800: INFO: At 2021-08-28 03:19:38 +0000 UTC - event for pod-5ea2304e-d792-4a89-aea4-870a0fa30bc5: {kubelet node1} BackOff: Back-off pulling image "docker.io/library/busybox:1.29"
Aug 28 03:23:59.800: INFO: At 2021-08-28 03:19:38 +0000 UTC - event for pod-5fac7398-60e9-45fa-b0df-b15ab9e417cd: {kubelet node1} Pulling: Pulling image "docker.io/library/busybox:1.29"
Aug 28 03:23:59.800: INFO: At 2021-08-28 03:19:38 +0000 UTC - event for pod-896b36c5-e368-4ca9-a7e8-d8270fe997ee: {kubelet node1} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
Aug 28 03:23:59.800: INFO: At 2021-08-28 03:19:38 +0000 UTC - event for pod-896b36c5-e368-4ca9-a7e8-d8270fe997ee: {kubelet node1} Failed: Error: ErrImagePull
Aug 28 03:23:59.800: INFO: At 2021-08-28 03:19:38 +0000 UTC - event for pod-cf12187e-5863-4f10-87e5-f48b90a7f2b0: {kubelet node1} Pulling: Pulling image "docker.io/library/busybox:1.29"
Aug 28 03:23:59.800: INFO: At 2021-08-28 03:19:39 +0000 UTC - event for pod-40b6d641-9a1e-4d20-a39e-22884ceeff53: {multus } AddedInterface: Add eth0 [10.244.3.30/24]
Aug 28 03:23:59.800: INFO: At 2021-08-28 03:19:39 +0000 UTC - event for pod-40b6d641-9a1e-4d20-a39e-22884ceeff53: {kubelet node1} Pulling: Pulling image "docker.io/library/busybox:1.29"
Aug 28 03:23:59.800: INFO: At 2021-08-28 03:19:39 +0000 UTC - event for pod-43aad80b-6645-4b9f-8fc8-57460ad3a112: {kubelet node1} Pulling: Pulling image "docker.io/library/busybox:1.29"
Aug 28 03:23:59.800: INFO: At 2021-08-28 03:19:39 +0000 UTC - event for pod-43aad80b-6645-4b9f-8fc8-57460ad3a112: {multus } AddedInterface: Add eth0 [10.244.3.29/24]
Aug 28 03:23:59.800: INFO: At 2021-08-28 03:19:40 +0000 UTC - event for pod-4dfe96ed-dd96-400f-89ce-3db1557c2dae: {kubelet node1} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
Aug 28 03:23:59.800: INFO: At 2021-08-28 03:19:40 +0000 UTC - event for pod-4dfe96ed-dd96-400f-89ce-3db1557c2dae: {kubelet node1} Failed: Error: ErrImagePull
Aug 28 03:23:59.800: INFO: At 2021-08-28 03:19:40 +0000 UTC - event for pod-896b36c5-e368-4ca9-a7e8-d8270fe997ee: {kubelet node1} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Aug 28 03:23:59.800: INFO: At 2021-08-28 03:19:42 +0000 UTC - event for pod-0d413a9e-b1b5-42e9-908b-bbaab02a64f5: {multus } AddedInterface: Add eth0 [10.244.3.31/24]
Aug 28 03:23:59.800: INFO: At 2021-08-28 03:19:42 +0000 UTC - event for pod-20a68870-7734-44e5-8250-ed0ff242396e: {multus } AddedInterface: Add eth0 [10.244.3.33/24]
Aug 28 03:23:59.800: INFO: At 2021-08-28 03:19:42 +0000 UTC - event for pod-20a68870-7734-44e5-8250-ed0ff242396e: {kubelet node1} Pulling: Pulling image "docker.io/library/busybox:1.29"
Aug 28 03:23:59.800: INFO: At 2021-08-28 03:19:42 +0000 UTC - event for pod-4dfe96ed-dd96-400f-89ce-3db1557c2dae: {kubelet node1} Failed: Error: ImagePullBackOff
Aug 28 03:23:59.800: INFO: At 2021-08-28 03:19:42 +0000 UTC - event for pod-4dfe96ed-dd96-400f-89ce-3db1557c2dae: {kubelet node1} BackOff: Back-off pulling image "docker.io/library/busybox:1.29"
Aug 28 03:23:59.800: INFO: At 2021-08-28 03:19:42 +0000 UTC - event for pod-51996145-ecbd-4979-986e-a2566b79da14: {kubelet node1} Pulling: Pulling image "docker.io/library/busybox:1.29"
Aug 28 03:23:59.800: INFO: At 2021-08-28 03:19:42 +0000 UTC - event for pod-51996145-ecbd-4979-986e-a2566b79da14: {multus } AddedInterface: Add eth0 [10.244.3.32/24]
Aug 28 03:23:59.800: INFO: At 2021-08-28 03:19:42 +0000 UTC - event for pod-d476e108-c35b-43ef-8fa7-2513c6fc7006: {kubelet node1} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
Aug 28 03:23:59.800: INFO: At 2021-08-28 03:19:42 +0000 UTC - event for pod-d476e108-c35b-43ef-8fa7-2513c6fc7006: {kubelet node1} Failed: Error: ErrImagePull
Aug 28 03:23:59.801: INFO: At 2021-08-28 03:19:43 +0000 UTC - event for pod-b01a9e1f-17de-4787-8291-e347f7207c04: {kubelet node1} Pulling: Pulling image "docker.io/library/busybox:1.29"
Aug 28 03:23:59.801: INFO: At 2021-08-28 03:19:43 +0000 UTC - event for pod-b01a9e1f-17de-4787-8291-e347f7207c04: {multus } AddedInterface: Add eth0 [10.244.3.34/24]
Aug 28 03:23:59.801: INFO: At 2021-08-28 03:19:44 +0000 UTC - event for pod-66755853-0db4-499d-803e-6c1f031c3599: {kubelet node1} Failed: Error: ErrImagePull
Aug 28 03:23:59.801: INFO: At 2021-08-28 03:19:44 +0000 UTC - event for pod-66755853-0db4-499d-803e-6c1f031c3599: {kubelet node1} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
Aug 28 03:23:59.801: INFO: At 2021-08-28 03:19:44 +0000 UTC - event for pod-d476e108-c35b-43ef-8fa7-2513c6fc7006: {kubelet node1} Failed: Error: ImagePullBackOff
Aug 28 03:23:59.801: INFO: At 2021-08-28 03:19:44 +0000 UTC - event for pod-d476e108-c35b-43ef-8fa7-2513c6fc7006: {kubelet node1} BackOff: Back-off pulling image "docker.io/library/busybox:1.29"
Aug 28 03:23:59.801: INFO: At 2021-08-28 03:19:45 +0000 UTC - event for pod-6773c694-d791-4f26-b07e-696d2e05173a: {kubelet node1} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
Aug 28 03:23:59.801: INFO: At 2021-08-28 03:19:45 +0000 UTC - event for pod-6773c694-d791-4f26-b07e-696d2e05173a: {kubelet node1} Failed: Error: ErrImagePull
Aug 28 03:23:59.801: INFO: At 2021-08-28 03:19:46 +0000 UTC - event for pod-12081747-e86d-4f67-92b7-87d3bbd5ee2b: {kubelet node1} Pulling: Pulling image "docker.io/library/busybox:1.29"
Aug 28 03:23:59.801: INFO: At 2021-08-28 03:19:46 +0000 UTC - event for pod-12081747-e86d-4f67-92b7-87d3bbd5ee2b: {multus } AddedInterface: Add eth0 [10.244.3.36/24]
Aug 28 03:23:59.801: INFO: At 2021-08-28 03:19:46 +0000 UTC - event for pod-896b36c5-e368-4ca9-a7e8-d8270fe997ee: {multus } AddedInterface: Add eth0 [10.244.3.35/24]
Aug 28 03:23:59.801: INFO: At 2021-08-28 03:19:46 +0000 UTC - event for pod-896b36c5-e368-4ca9-a7e8-d8270fe997ee: {kubelet node1} BackOff: Back-off pulling image "docker.io/library/busybox:1.29"
Aug 28 03:23:59.801: INFO: At 2021-08-28 03:19:46 +0000 UTC - event for pod-896b36c5-e368-4ca9-a7e8-d8270fe997ee: {kubelet node1} Failed: Error: ImagePullBackOff
Aug 28 03:23:59.801: INFO: At 2021-08-28 03:19:46 +0000 UTC - event for pod-b7e65917-0844-4e05-a20d-dc417d72619b: {multus } AddedInterface: Add eth0 [10.244.3.37/24]
Aug 28 03:23:59.801: INFO: At 2021-08-28 03:19:46 +0000 UTC - event for pod-b7e65917-0844-4e05-a20d-dc417d72619b: {kubelet node1} Pulling: Pulling image "docker.io/library/busybox:1.29"
Aug 28 03:23:59.801: INFO: At 2021-08-28 03:19:46 +0000 UTC - event for pod-ec474e3c-90de-4ce7-9ad5-aa7c30e86f9f: {kubelet node1} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
Aug 28 03:23:59.801: INFO: At 2021-08-28 03:19:46 +0000 UTC - event for pod-ec474e3c-90de-4ce7-9ad5-aa7c30e86f9f: {kubelet node1} Failed: Error: ErrImagePull
Aug 28 03:23:59.801: INFO: At 2021-08-28 03:19:47 +0000 UTC - event for pod-66755853-0db4-499d-803e-6c1f031c3599: {kubelet node1} Failed: Error: ImagePullBackOff
Aug 28 03:23:59.801: INFO: At 2021-08-28 03:19:47 +0000 UTC - event for pod-66755853-0db4-499d-803e-6c1f031c3599: {kubelet node1} BackOff: Back-off pulling image "docker.io/library/busybox:1.29"
Aug 28 03:23:59.801: INFO: At 2021-08-28 03:19:47 +0000 UTC - event for pod-6773c694-d791-4f26-b07e-696d2e05173a: {kubelet node1} Failed: Error: ImagePullBackOff
Aug 28 03:23:59.801: INFO: At 2021-08-28 03:19:47 +0000 UTC - event for pod-6773c694-d791-4f26-b07e-696d2e05173a: {kubelet node1} BackOff: Back-off pulling image "docker.io/library/busybox:1.29"
Aug 28 03:23:59.801: INFO: At 2021-08-28 03:19:49 +0000 UTC - event for pod-5e9afa04-0b49-4697-afe0-338f12fb3988: {multus } AddedInterface: Add eth0 [10.244.3.38/24]
Aug 28 03:23:59.801: INFO: At 2021-08-28 03:19:49 +0000 UTC - event for pod-5e9afa04-0b49-4697-afe0-338f12fb3988: {kubelet node1} Pulling: Pulling image "docker.io/library/busybox:1.29"
Aug 28 03:23:59.801: INFO: At 2021-08-28 03:19:49 +0000 UTC - event for pod-ba671871-0ee2-47f2-a7b3-65a00e132522: {multus } AddedInterface: Add eth0 [10.244.3.39/24]
Aug 28 03:23:59.801: INFO: At 2021-08-28 03:19:49 +0000 UTC - event for pod-ba671871-0ee2-47f2-a7b3-65a00e132522: {kubelet node1} Pulling: Pulling image "docker.io/library/busybox:1.29"
Aug 28 03:23:59.801: INFO: At 2021-08-28 03:19:50 +0000 UTC - event for pod-995e1cf4-a34e-4b81-855d-47943a293de3: {multus } AddedInterface: Add eth0 [10.244.3.40/24]
Aug 28 03:23:59.801: INFO: At 2021-08-28 03:19:50 +0000 UTC - event for pod-995e1cf4-a34e-4b81-855d-47943a293de3: {kubelet node1} Pulling: Pulling image "docker.io/library/busybox:1.29"
Aug 28 03:23:59.801: INFO: At 2021-08-28 03:19:50 +0000 UTC - event for pod-ec474e3c-90de-4ce7-9ad5-aa7c30e86f9f: {kubelet node1} BackOff: Back-off pulling image "docker.io/library/busybox:1.29"
Aug 28 03:23:59.801: INFO: At 2021-08-28 03:19:50 +0000 UTC - event for pod-ec474e3c-90de-4ce7-9ad5-aa7c30e86f9f: {kubelet node1} Failed: Error: ImagePullBackOff
Aug 28 03:23:59.801: INFO: At 2021-08-28 03:19:51 +0000 UTC - event for pod-df39cbf5-9b17-4568-9cbe-843e91275e44: {multus } AddedInterface: Add eth0 [10.244.3.41/24]
Aug 28 03:23:59.801: INFO: At 2021-08-28 03:19:51 +0000 UTC - event for pod-df39cbf5-9b17-4568-9cbe-843e91275e44: {kubelet node1} Pulling: Pulling image "docker.io/library/busybox:1.29"
Aug 28 03:23:59.801: INFO: At 2021-08-28 03:19:52 +0000 UTC - event for pod-55ea1243-4891-4255-b8a2-0b46337d8af8: {multus } AddedInterface: Add eth0 [10.244.3.42/24]
Aug 28 03:23:59.801: INFO: At 2021-08-28 03:19:52 +0000 UTC - event for pod-55ea1243-4891-4255-b8a2-0b46337d8af8: {kubelet node1} Pulling: Pulling image "docker.io/library/busybox:1.29"
Aug 28 03:23:59.801: INFO: At 2021-08-28 03:19:52 +0000 UTC - event for pod-b4ee6f2f-11af-4876-8546-d54074074034: {multus } AddedInterface: Add eth0 [10.244.3.43/24]
Aug 28 03:23:59.801: INFO: At 2021-08-28 03:19:52 +0000 UTC - event for pod-b4ee6f2f-11af-4876-8546-d54074074034: {kubelet node1} Pulling: Pulling image "docker.io/library/busybox:1.29"
Aug 28 03:23:59.801: INFO: At 2021-08-28 03:19:53 +0000 UTC - event for pod-b6a94036-c85c-4d64-89e4-f7311c941a43: {kubelet node1} Failed: Error: ErrImagePull
Aug 28 03:23:59.801: INFO: At 2021-08-28 03:19:53 +0000 UTC - event for pod-b6a94036-c85c-4d64-89e4-f7311c941a43: {kubelet node1} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
Aug 28 03:23:59.801: INFO: At 2021-08-28 03:19:54 +0000 UTC - event for pod-a047018f-fb61-4df6-b9e7-7bfa79737f38: {kubelet node1} Failed: Error: ErrImagePull
Aug 28 03:23:59.801: INFO: At 2021-08-28 03:19:54 +0000 UTC - event for pod-a047018f-fb61-4df6-b9e7-7bfa79737f38: {kubelet node1} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
Aug 28 03:23:59.801: INFO: At 2021-08-28 03:19:54 +0000 UTC - event for pod-b6a94036-c85c-4d64-89e4-f7311c941a43: {kubelet node1} Failed: Error: ImagePullBackOff
Aug 28 03:23:59.801: INFO: At 2021-08-28 03:19:54 +0000 UTC - event for pod-b6a94036-c85c-4d64-89e4-f7311c941a43: {kubelet node1} BackOff: Back-off pulling image "docker.io/library/busybox:1.29"
Aug 28 03:23:59.801: INFO: At 2021-08-28 03:19:55 +0000 UTC - event for pod-a047018f-fb61-4df6-b9e7-7bfa79737f38: {kubelet node1} Failed: Error: ImagePullBackOff
Aug 28 03:23:59.801: INFO: At 2021-08-28 03:19:55 +0000 UTC - event for pod-a047018f-fb61-4df6-b9e7-7bfa79737f38: {kubelet node1} BackOff: Back-off pulling image "docker.io/library/busybox:1.29"
Aug 28 03:23:59.801: INFO: At 2021-08-28 03:19:55 +0000 UTC - event for pod-cf12187e-5863-4f10-87e5-f48b90a7f2b0: {kubelet node1} Failed: Error: ErrImagePull
Aug 28 03:23:59.801: INFO: At 2021-08-28 03:19:55 +0000 UTC - event for pod-cf12187e-5863-4f10-87e5-f48b90a7f2b0: {kubelet node1} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
Aug 28 03:23:59.801: INFO: At 2021-08-28 03:19:56 +0000 UTC - event for pod-cf12187e-5863-4f10-87e5-f48b90a7f2b0: {kubelet node1} Failed: Error: ImagePullBackOff
Aug 28 03:23:59.801: INFO: At 2021-08-28 03:19:56 +0000 UTC - event for pod-cf12187e-5863-4f10-87e5-f48b90a7f2b0: {kubelet node1} BackOff: Back-off pulling image "docker.io/library/busybox:1.29"
Aug 28 03:23:59.801: INFO: At 2021-08-28 03:19:57 +0000 UTC - event for pod-5fac7398-60e9-45fa-b0df-b15ab9e417cd: {kubelet node1} Failed: Error: ErrImagePull
Aug 28 03:23:59.801: INFO: At 2021-08-28 03:19:57 +0000 UTC - event for pod-5fac7398-60e9-45fa-b0df-b15ab9e417cd: {kubelet node1} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
Aug 28 03:23:59.801: INFO: At 2021-08-28 03:19:57 +0000 UTC - event for pod-5fac7398-60e9-45fa-b0df-b15ab9e417cd: {kubelet node1} BackOff: Back-off pulling image "docker.io/library/busybox:1.29"
Aug 28 03:23:59.801: INFO: At 2021-08-28 03:19:57 +0000 UTC - event for pod-5fac7398-60e9-45fa-b0df-b15ab9e417cd: {kubelet node1} Failed: Error: ImagePullBackOff
Aug 28 03:23:59.801: INFO: At 2021-08-28 03:19:58 +0000 UTC - event for pod-2ae1600f-b24a-43d9-b495-0983881d2cfb: {kubelet node1} Failed: Error: ImagePullBackOff
Aug 28 03:23:59.801: INFO: At 2021-08-28 03:19:58 +0000 UTC - event for pod-2ae1600f-b24a-43d9-b495-0983881d2cfb: {kubelet node1} Failed: Error: ErrImagePull
Aug 28 03:23:59.801: INFO: At 2021-08-28 03:19:58 +0000 UTC - event for pod-2ae1600f-b24a-43d9-b495-0983881d2cfb: {kubelet node1} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
Aug 28 03:23:59.801: INFO: At 2021-08-28 03:19:58 +0000 UTC - event for pod-2ae1600f-b24a-43d9-b495-0983881d2cfb: {kubelet node1} BackOff: Back-off pulling image "docker.io/library/busybox:1.29"
Aug 28 03:23:59.801: INFO: At 2021-08-28 03:19:59 +0000 UTC - event for pod-43aad80b-6645-4b9f-8fc8-57460ad3a112: {kubelet node1} Failed: Error: ErrImagePull
Aug 28 03:23:59.801: INFO: At 2021-08-28 03:19:59 +0000 UTC - event for pod-43aad80b-6645-4b9f-8fc8-57460ad3a112: {kubelet node1} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
Aug 28 03:23:59.801: INFO: At 2021-08-28 03:20:00 +0000 UTC - event for pod-40b6d641-9a1e-4d20-a39e-22884ceeff53: {kubelet node1} Failed: Error: ErrImagePull
Aug 28 03:23:59.801: INFO: At 2021-08-28 03:20:00 +0000 UTC - event for pod-40b6d641-9a1e-4d20-a39e-22884ceeff53: {kubelet node1} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
Aug 28 03:23:59.801: INFO: At 2021-08-28 03:20:00 +0000 UTC - event for pod-43aad80b-6645-4b9f-8fc8-57460ad3a112: {kubelet node1} Failed: Error: ImagePullBackOff
Aug 28 03:23:59.801: INFO: At 2021-08-28 03:20:00 +0000 UTC - event for pod-43aad80b-6645-4b9f-8fc8-57460ad3a112: {kubelet node1} BackOff: Back-off pulling image "docker.io/library/busybox:1.29"
Aug 28 03:23:59.801: INFO: At 2021-08-28 03:20:01 +0000 UTC - event for pod-40b6d641-9a1e-4d20-a39e-22884ceeff53: {kubelet node1} BackOff: Back-off pulling image "docker.io/library/busybox:1.29"
Aug 28 03:23:59.801: INFO: At 2021-08-28 03:20:01 +0000 UTC - event for pod-40b6d641-9a1e-4d20-a39e-22884ceeff53: {kubelet node1} Failed: Error: ImagePullBackOff
Aug 28 03:23:59.801: INFO: At 2021-08-28 03:20:02 +0000 UTC - event for pod-51996145-ecbd-4979-986e-a2566b79da14: {kubelet node1} Failed: Error: ErrImagePull
Aug 28 03:23:59.801: INFO: At 2021-08-28 03:20:02 +0000 UTC - event for pod-51996145-ecbd-4979-986e-a2566b79da14: {kubelet node1} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
Aug 28 03:23:59.801: INFO: At 2021-08-28 03:20:03 +0000 UTC - event for pod-20a68870-7734-44e5-8250-ed0ff242396e: {kubelet node1} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
Aug 28 03:23:59.801: INFO: At 2021-08-28 03:20:03 +0000 UTC - event for pod-20a68870-7734-44e5-8250-ed0ff242396e: {kubelet node1} Failed: Error: ErrImagePull
Aug 28 03:23:59.801: INFO: At 2021-08-28 03:20:04 +0000 UTC - event for pod-20a68870-7734-44e5-8250-ed0ff242396e: {kubelet node1} Failed: Error: ImagePullBackOff
Aug 28 03:23:59.801: INFO: At 2021-08-28 03:20:04 +0000 UTC - event for pod-20a68870-7734-44e5-8250-ed0ff242396e: {kubelet node1} BackOff: Back-off pulling image "docker.io/library/busybox:1.29"
Aug 28 03:23:59.801: INFO: At 2021-08-28 03:20:04 +0000 UTC - event for pod-51996145-ecbd-4979-986e-a2566b79da14: {kubelet node1} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Aug 28 03:23:59.801: INFO: At 2021-08-28 03:20:04 +0000 UTC - event for pod-b01a9e1f-17de-4787-8291-e347f7207c04: {kubelet node1} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
Aug 28 03:23:59.801: INFO: At 2021-08-28 03:20:04 +0000 UTC - event for pod-b01a9e1f-17de-4787-8291-e347f7207c04: {kubelet node1} Failed: Error: ErrImagePull
Aug 28 03:23:59.801: INFO: At 2021-08-28 03:20:05 +0000 UTC - event for pod-0d413a9e-b1b5-42e9-908b-bbaab02a64f5: {multus } AddedInterface: Add eth0 [10.244.3.44/24]
Aug 28 03:23:59.801: INFO: At 2021-08-28 03:20:05 +0000 UTC - event for pod-0d413a9e-b1b5-42e9-908b-bbaab02a64f5: {kubelet node1} Failed: Error: ImagePullBackOff
Aug 28 03:23:59.801: INFO: At 2021-08-28 03:20:05 +0000 UTC - event for pod-0d413a9e-b1b5-42e9-908b-bbaab02a64f5: {kubelet node1} BackOff: Back-off pulling image "docker.io/library/busybox:1.29"
Aug 28 03:23:59.801: INFO: At 2021-08-28 03:20:05 +0000 UTC - event for pod-b01a9e1f-17de-4787-8291-e347f7207c04: {kubelet node1} BackOff: Back-off pulling image "docker.io/library/busybox:1.29"
Aug 28 03:23:59.801: INFO: At 2021-08-28 03:20:05 +0000 UTC - event for pod-b01a9e1f-17de-4787-8291-e347f7207c04: {kubelet node1} Failed: Error: ImagePullBackOff
Aug 28 03:23:59.801: INFO: At 2021-08-28 03:20:06 +0000 UTC - event for pod-51996145-ecbd-4979-986e-a2566b79da14: {kubelet node1} BackOff: Back-off pulling image "docker.io/library/busybox:1.29"
Aug 28 03:23:59.801: INFO: At 2021-08-28 03:20:06 +0000 UTC - event for pod-51996145-ecbd-4979-986e-a2566b79da14: {kubelet node1} Failed: Error: ImagePullBackOff
Aug 28 03:23:59.801: INFO: At 2021-08-28 03:20:06 +0000 UTC - event for pod-51996145-ecbd-4979-986e-a2566b79da14: {multus } AddedInterface: Add eth0 [10.244.3.45/24]
Aug 28 03:23:59.802: INFO: At 2021-08-28 03:20:06 +0000 UTC - event for pod-b7e65917-0844-4e05-a20d-dc417d72619b: {kubelet node1} Failed: Error: ErrImagePull
Aug 28 03:23:59.802: INFO: At 2021-08-28 03:20:06 +0000 UTC - event for pod-b7e65917-0844-4e05-a20d-dc417d72619b: {kubelet node1} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
Aug 28 03:23:59.802: INFO: At 2021-08-28 03:20:07 +0000 UTC - event for pod-12081747-e86d-4f67-92b7-87d3bbd5ee2b: {kubelet node1} Failed: Error: ErrImagePull
Aug 28 03:23:59.802: INFO: At 2021-08-28 03:20:07 +0000 UTC - event for pod-12081747-e86d-4f67-92b7-87d3bbd5ee2b: {kubelet node1} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
Aug 28 03:23:59.802: INFO: At 2021-08-28 03:20:07 +0000 UTC - event for pod-b7e65917-0844-4e05-a20d-dc417d72619b: {kubelet node1} Failed: Error: ImagePullBackOff
Aug 28 03:23:59.802: INFO: At 2021-08-28 03:20:07 +0000 UTC - event for pod-b7e65917-0844-4e05-a20d-dc417d72619b: {kubelet node1} BackOff: Back-off pulling image "docker.io/library/busybox:1.29"
Aug 28 03:23:59.802: INFO: At 2021-08-28 03:20:08 +0000 UTC - event for pod-12081747-e86d-4f67-92b7-87d3bbd5ee2b: {kubelet node1} Failed: Error: ImagePullBackOff
Aug 28 03:23:59.802: INFO: At 2021-08-28 03:20:08 +0000 UTC - event for pod-12081747-e86d-4f67-92b7-87d3bbd5ee2b: {kubelet node1} BackOff: Back-off pulling image "docker.io/library/busybox:1.29"
Aug 28 03:23:59.802: INFO: At 2021-08-28 03:20:08 +0000 UTC - event for pod-51996145-ecbd-4979-986e-a2566b79da14: {multus } AddedInterface: Add eth0 [10.244.3.46/24]
Aug 28 03:23:59.802: INFO: At 2021-08-28 03:20:08 +0000 UTC - event for pod-ba671871-0ee2-47f2-a7b3-65a00e132522: {kubelet node1} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
Aug 28 03:23:59.802: INFO: At 2021-08-28 03:20:08 +0000 UTC - event for pod-ba671871-0ee2-47f2-a7b3-65a00e132522: {kubelet node1} Failed: Error: ErrImagePull
Aug 28 03:23:59.802: INFO: At 2021-08-28 03:20:09 +0000 UTC - event for pod-5e9afa04-0b49-4697-afe0-338f12fb3988: {kubelet node1} Failed: Error: ErrImagePull
Aug 28 03:23:59.802: INFO: At 2021-08-28 03:20:09 +0000 UTC - event for pod-5e9afa04-0b49-4697-afe0-338f12fb3988: {kubelet node1} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
Aug 28 03:23:59.802: INFO: At 2021-08-28 03:20:09 +0000 UTC - event for pod-ba671871-0ee2-47f2-a7b3-65a00e132522: {kubelet node1} Failed: Error: ImagePullBackOff
Aug 28 03:23:59.802: INFO: At 2021-08-28 03:20:09 +0000 UTC - event for pod-ba671871-0ee2-47f2-a7b3-65a00e132522: {kubelet node1} BackOff: Back-off pulling image "docker.io/library/busybox:1.29"
Aug 28 03:23:59.802: INFO: At 2021-08-28 03:20:10 +0000 UTC - event for pod-5e9afa04-0b49-4697-afe0-338f12fb3988: {kubelet node1} BackOff: Back-off pulling image "docker.io/library/busybox:1.29"
Aug 28 03:23:59.802: INFO: At 2021-08-28 03:20:10 +0000 UTC - event for pod-5e9afa04-0b49-4697-afe0-338f12fb3988: {kubelet node1} Failed: Error: ImagePullBackOff
Aug 28 03:23:59.802: INFO: At 2021-08-28 03:20:13 +0000 UTC - event for pod-995e1cf4-a34e-4b81-855d-47943a293de3: {kubelet node1} Failed: Error: ErrImagePull
Aug 28 03:23:59.802: INFO: At 2021-08-28 03:20:13 +0000 UTC - event for pod-995e1cf4-a34e-4b81-855d-47943a293de3: {kubelet node1} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
Aug 28 03:23:59.802: INFO: At 2021-08-28 03:20:14 +0000 UTC - event for pod-995e1cf4-a34e-4b81-855d-47943a293de3: {kubelet node1} BackOff: Back-off pulling image "docker.io/library/busybox:1.29"
Aug 28 03:23:59.802: INFO: At 2021-08-28 03:20:14 +0000 UTC - event for pod-995e1cf4-a34e-4b81-855d-47943a293de3: {kubelet node1} Failed: Error: ImagePullBackOff
Aug 28 03:23:59.802: INFO: At 2021-08-28 03:20:16 +0000 UTC - event for pod-df39cbf5-9b17-4568-9cbe-843e91275e44: {kubelet node1} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
Aug 28 03:23:59.802: INFO: At 2021-08-28 03:20:16 +0000 UTC - event for pod-df39cbf5-9b17-4568-9cbe-843e91275e44: {kubelet node1} Failed: Error: ErrImagePull
Aug 28 03:23:59.802: INFO: At 2021-08-28 03:20:17 +0000 UTC - event for pod-b4ee6f2f-11af-4876-8546-d54074074034: {kubelet node1} Failed: Error: ErrImagePull
Aug 28 03:23:59.802: INFO: At 2021-08-28 03:20:17 +0000 UTC - event for pod-b4ee6f2f-11af-4876-8546-d54074074034: {kubelet node1} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
Aug 28 03:23:59.802: INFO: At 2021-08-28 03:20:17 +0000 UTC - event for pod-b4ee6f2f-11af-4876-8546-d54074074034: {kubelet node1} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Aug 28 03:23:59.802: INFO: At 2021-08-28 03:20:17 +0000 UTC - event for pod-df39cbf5-9b17-4568-9cbe-843e91275e44: {kubelet node1} Failed: Error: ImagePullBackOff
Aug 28 03:23:59.802: INFO: At 2021-08-28 03:20:17 +0000 UTC - event for pod-df39cbf5-9b17-4568-9cbe-843e91275e44: {kubelet node1} BackOff: Back-off pulling image "docker.io/library/busybox:1.29"
Aug 28 03:23:59.802: INFO: At 2021-08-28 03:20:18 +0000 UTC - event for pod-55ea1243-4891-4255-b8a2-0b46337d8af8: {kubelet node1} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Aug 28 03:23:59.802: INFO: At 2021-08-28 03:20:18 +0000 UTC - event for pod-55ea1243-4891-4255-b8a2-0b46337d8af8: {kubelet node1} Failed: Error: ErrImagePull
Aug 28 03:23:59.802: INFO: At 2021-08-28 03:20:18 +0000 UTC - event for pod-55ea1243-4891-4255-b8a2-0b46337d8af8: {kubelet node1} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
Aug 28 03:23:59.802: INFO: At 2021-08-28 03:20:19 +0000 UTC - event for pod-b4ee6f2f-11af-4876-8546-d54074074034: {kubelet node1} BackOff: Back-off pulling image "docker.io/library/busybox:1.29"
Aug 28 03:23:59.802: INFO: At 2021-08-28 03:20:19 +0000 UTC - event for pod-b4ee6f2f-11af-4876-8546-d54074074034: {kubelet node1} Failed: Error: ImagePullBackOff
Aug 28 03:23:59.802: INFO: At 2021-08-28 03:20:19 +0000 UTC - event for pod-b4ee6f2f-11af-4876-8546-d54074074034: {multus } AddedInterface: Add eth0 [10.244.3.47/24]
Aug 28 03:23:59.802: INFO: At 2021-08-28 03:20:20 +0000 UTC - event for pod-55ea1243-4891-4255-b8a2-0b46337d8af8: {kubelet node1} Failed: Error: ImagePullBackOff
Aug 28 03:23:59.802: INFO: At 2021-08-28 03:20:20 +0000 UTC - event for pod-55ea1243-4891-4255-b8a2-0b46337d8af8: {kubelet node1} BackOff: Back-off pulling image "docker.io/library/busybox:1.29"
Aug 28 03:23:59.802: INFO: At 2021-08-28 03:20:20 +0000 UTC - event for pod-55ea1243-4891-4255-b8a2-0b46337d8af8: {multus } AddedInterface: Add eth0 [10.244.3.48/24]
Aug 28 03:23:59.802: INFO: At 2021-08-28 03:20:23 +0000 UTC - event for pod-55ea1243-4891-4255-b8a2-0b46337d8af8: {multus } AddedInterface: Add eth0 [10.244.3.49/24]
Aug 28 03:23:59.802: INFO: At 2021-08-28 03:20:32 +0000 UTC - event for pod-b6a94036-c85c-4d64-89e4-f7311c941a43: {kubelet node1} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
Aug 28 03:23:59.802: INFO: At 2021-08-28 03:20:46 +0000 UTC - event for pod-7b0ea0bb-9d30-431d-8f7e-775e42fc55f6: {kubelet node1} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
Aug 28 03:23:59.802: INFO: At 2021-08-28 03:21:40 +0000 UTC - event for pod-a5a3b3da-4552-4ff6-bc9f-e75390846609: {kubelet node1} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
Aug 28 03:23:59.812: INFO: POD NODE PHASE GRACE CONDITIONS
Aug 28 03:23:59.812: INFO: pod-0060bc15-11ee-4849-99cd-c6890c1dd7ca node1 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-08-28 03:18:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-08-28 03:18:59 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-08-28 03:18:59 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-08-28 03:18:59 +0000 UTC }]
Aug 28 03:23:59.812: INFO: pod-040d1c3d-c9a6-43d6-b3a5-c038d898c4f6 node1 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-08-28 03:18:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-08-28 03:18:59 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-08-28 03:18:59 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-08-28 03:18:59 +0000 UTC }]
Aug 28 03:23:59.812: INFO: pod-0d413a9e-b1b5-42e9-908b-bbaab02a64f5 node1 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-08-28 03:18:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-08-28 03:18:59 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-08-28 03:18:59 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-08-28 03:18:59 +0000 UTC }]
Aug 28 03:23:59.812: INFO: pod-12081747-e86d-4f67-92b7-87d3bbd5ee2b node1 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-08-28 03:18:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-08-28 03:18:59 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-08-28 03:18:59 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-08-28 03:18:59 +0000 UTC }]
Aug 28 03:23:59.812: INFO: pod-191a05a0-fc3e-4acd-8404-02e7f1ec1073 node1 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-08-28 03:18:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-08-28 03:18:59 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-08-28 03:18:59 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-08-28 03:18:59 +0000 UTC }]
Aug 28 03:23:59.812: INFO: pod-1f68adb8-907d-462d-83c0-81a62875a18d node1 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-08-28 03:18:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-08-28 03:18:59 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-08-28 03:18:59 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-08-28 03:18:59 +0000 UTC }]
Aug 28 03:23:59.812: INFO: pod-20a68870-7734-44e5-8250-ed0ff242396e node1 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-08-28 03:18:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-08-28 03:18:59 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-08-28 03:18:59 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-08-28 03:18:59 +0000 UTC }]
Aug 28 03:23:59.812: INFO: pod-218eadb1-6012-4f8c-9774-8589a6a8dc9e node1 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-08-28 03:18:59 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-08-28 03:19:04 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-08-28 03:19:04 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-08-28 03:18:59 +0000 UTC }]
Aug 28 03:23:59.812: INFO: pod-288eaa76-ebb1-46fc-8e8f-988380fe6721 node1 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-08-28 03:18:59 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-08-28 03:19:08 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-08-28 03:19:08 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-08-28 03:18:59 +0000 UTC }]
Aug 28 03:23:59.812: INFO: pod-2ae1600f-b24a-43d9-b495-0983881d2cfb node1 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-08-28 03:18:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-08-28 03:18:59 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-08-28 03:18:59 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-08-28 03:18:59 +0000 UTC }]
Aug 28 03:23:59.812: INFO: pod-2be1822f-e531-4d53-9b6d-02d7fb262b65 node1 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-08-28 03:18:59 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-08-28 03:19:10 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-08-28 03:19:10 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-08-28 03:18:59 +0000 UTC }]
Aug 28 03:23:59.812: INFO: pod-40b6d641-9a1e-4d20-a39e-22884ceeff53 node1 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-08-28 03:18:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-08-28 03:18:59 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-08-28 03:18:59 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-08-28 03:18:59 +0000 UTC }]
Aug 28 03:23:59.812: INFO: pod-43aad80b-6645-4b9f-8fc8-57460ad3a112 node1 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-08-28 03:18:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-08-28 03:18:59 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-08-28 03:18:59 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-08-28 03:18:59 +0000 UTC }]
Aug 28 03:23:59.812: INFO: pod-4c261d29-ebf6-4faf-91c3-02b125f83e60 node1 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-08-28 03:18:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-08-28 03:18:59 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-08-28 03:18:59 +0000 UTC ContainersNotReady containers with unready status: [write-pod]}
{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-08-28 03:18:59 +0000 UTC }] Aug 28 03:23:59.812: INFO: pod-4dfe96ed-dd96-400f-89ce-3db1557c2dae node1 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-08-28 03:18:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-08-28 03:18:59 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-08-28 03:18:59 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-08-28 03:18:59 +0000 UTC }] Aug 28 03:23:59.812: INFO: pod-51996145-ecbd-4979-986e-a2566b79da14 node1 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-08-28 03:18:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-08-28 03:18:59 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-08-28 03:18:59 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-08-28 03:18:59 +0000 UTC }] Aug 28 03:23:59.812: INFO: pod-53b2c341-b919-49f9-95a2-a09a7a40e6d3 node1 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-08-28 03:18:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-08-28 03:18:59 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-08-28 03:18:59 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-08-28 03:18:59 +0000 UTC }] Aug 28 03:23:59.812: INFO: pod-55ea1243-4891-4255-b8a2-0b46337d8af8 node1 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-08-28 03:18:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-08-28 03:18:59 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-08-28 03:18:59 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-08-28 03:18:59 +0000 UTC }] Aug 28 03:23:59.812: INFO: pod-59151975-c844-4b23-bbdf-d83ce085296c node1 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-08-28 03:18:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-08-28 03:18:59 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-08-28 03:18:59 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-08-28 03:18:59 +0000 UTC }] Aug 28 03:23:59.812: INFO: pod-5b56657d-f3b1-4ad6-8213-9d96ccd953f1 node1 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-08-28 03:18:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-08-28 03:18:59 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-08-28 03:18:59 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-08-28 03:18:59 +0000 UTC }] Aug 28 03:23:59.813: INFO: pod-5e6bc330-443b-4cc9-9ab2-87a195e7710c node1 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-08-28 03:18:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-08-28 03:18:59 +0000 UTC ContainersNotReady containers with unready status: 
[write-pod]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-08-28 03:18:59 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-08-28 03:18:59 +0000 UTC }] Aug 28 03:23:59.813: INFO: pod-5e9afa04-0b49-4697-afe0-338f12fb3988 node1 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-08-28 03:18:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-08-28 03:18:59 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-08-28 03:18:59 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-08-28 03:18:59 +0000 UTC }] Aug 28 03:23:59.813: INFO: pod-5ea2304e-d792-4a89-aea4-870a0fa30bc5 node1 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-08-28 03:18:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-08-28 03:18:59 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-08-28 03:18:59 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-08-28 03:18:59 +0000 UTC }] Aug 28 03:23:59.813: INFO: pod-5fac7398-60e9-45fa-b0df-b15ab9e417cd node1 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-08-28 03:18:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-08-28 03:18:59 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-08-28 03:18:59 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-08-28 03:18:59 +0000 UTC }] Aug 28 03:23:59.813: INFO: pod-66755853-0db4-499d-803e-6c1f031c3599 node1 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-08-28 03:18:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-08-28 03:18:59 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-08-28 03:18:59 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-08-28 03:18:59 +0000 UTC }] Aug 28 03:23:59.813: INFO: pod-6773c694-d791-4f26-b07e-696d2e05173a node1 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-08-28 03:18:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-08-28 03:18:59 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-08-28 03:18:59 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-08-28 03:18:59 +0000 UTC }] Aug 28 03:23:59.813: INFO: pod-7b0ea0bb-9d30-431d-8f7e-775e42fc55f6 node1 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-08-28 03:18:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-08-28 03:18:59 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-08-28 03:18:59 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-08-28 03:18:59 +0000 UTC }] Aug 28 03:23:59.813: INFO: pod-896b36c5-e368-4ca9-a7e8-d8270fe997ee node1 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 
2021-08-28 03:18:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-08-28 03:18:59 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-08-28 03:18:59 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-08-28 03:18:59 +0000 UTC }] Aug 28 03:23:59.813: INFO: pod-924019d9-0795-4ada-97f0-a0e57615146f node1 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-08-28 03:18:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-08-28 03:18:59 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-08-28 03:18:59 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-08-28 03:18:59 +0000 UTC }] Aug 28 03:23:59.813: INFO: pod-937811e1-044d-4f1c-8190-5334572e7730 node1 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-08-28 03:18:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-08-28 03:18:59 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-08-28 03:18:59 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-08-28 03:18:59 +0000 UTC }] Aug 28 03:23:59.813: INFO: pod-995e1cf4-a34e-4b81-855d-47943a293de3 node1 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-08-28 03:18:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-08-28 03:18:59 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-08-28 03:18:59 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-08-28 03:18:59 +0000 UTC }] Aug 28 03:23:59.813: INFO: pod-a047018f-fb61-4df6-b9e7-7bfa79737f38 node1 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-08-28 03:18:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-08-28 03:18:59 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-08-28 03:18:59 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-08-28 03:18:59 +0000 UTC }] Aug 28 03:23:59.813: INFO: pod-a5a3b3da-4552-4ff6-bc9f-e75390846609 node1 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-08-28 03:18:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-08-28 03:18:59 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-08-28 03:18:59 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-08-28 03:18:59 +0000 UTC }] Aug 28 03:23:59.813: INFO: pod-ae2f271e-271c-45d6-86a2-7ce0689b4804 node1 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-08-28 03:18:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-08-28 03:18:59 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-08-28 03:18:59 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-08-28 
03:18:59 +0000 UTC }] Aug 28 03:23:59.813: INFO: pod-b01a9e1f-17de-4787-8291-e347f7207c04 node1 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-08-28 03:18:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-08-28 03:18:59 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-08-28 03:18:59 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-08-28 03:18:59 +0000 UTC }] Aug 28 03:23:59.813: INFO: pod-b13afcd6-8433-47ff-9e72-b57988ec6003 node1 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-08-28 03:18:59 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-08-28 03:19:07 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-08-28 03:19:07 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-08-28 03:18:59 +0000 UTC }] Aug 28 03:23:59.813: INFO: pod-b4ee6f2f-11af-4876-8546-d54074074034 node1 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-08-28 03:18:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-08-28 03:18:59 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-08-28 03:18:59 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-08-28 03:18:59 +0000 UTC }] Aug 28 03:23:59.813: INFO: pod-b6a94036-c85c-4d64-89e4-f7311c941a43 node1 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-08-28 03:18:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-08-28 03:18:59 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-08-28 03:18:59 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-08-28 03:18:59 +0000 UTC }] Aug 28 03:23:59.813: INFO: pod-b6bfdd59-6d87-4899-91e9-ceddda7efb04 node1 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-08-28 03:18:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-08-28 03:18:59 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-08-28 03:18:59 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-08-28 03:18:59 +0000 UTC }] Aug 28 03:23:59.813: INFO: pod-b7e65917-0844-4e05-a20d-dc417d72619b node1 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-08-28 03:18:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-08-28 03:18:59 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-08-28 03:18:59 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-08-28 03:18:59 +0000 UTC }] Aug 28 03:23:59.813: INFO: pod-ba671871-0ee2-47f2-a7b3-65a00e132522 node1 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-08-28 03:18:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-08-28 03:18:59 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-08-28 03:18:59 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {PodScheduled True 0001-01-01 
00:00:00 +0000 UTC 2021-08-28 03:18:59 +0000 UTC }] Aug 28 03:23:59.813: INFO: pod-c52ec868-e9e2-4829-9426-3970f11ddac2 node1 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-08-28 03:18:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-08-28 03:18:59 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-08-28 03:18:59 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-08-28 03:18:59 +0000 UTC }] Aug 28 03:23:59.813: INFO: pod-c69a9389-323c-42cb-b323-b34e1025ff51 node1 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-08-28 03:18:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-08-28 03:18:59 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-08-28 03:18:59 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-08-28 03:18:59 +0000 UTC }] Aug 28 03:23:59.813: INFO: pod-cf12187e-5863-4f10-87e5-f48b90a7f2b0 node1 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-08-28 03:18:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-08-28 03:18:59 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-08-28 03:18:59 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-08-28 03:18:59 +0000 UTC }] Aug 28 03:23:59.813: INFO: pod-d476e108-c35b-43ef-8fa7-2513c6fc7006 node1 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-08-28 03:18:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-08-28 03:18:59 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-08-28 03:18:59 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-08-28 03:18:59 +0000 UTC }] Aug 28 03:23:59.813: INFO: pod-df39cbf5-9b17-4568-9cbe-843e91275e44 node1 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-08-28 03:18:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-08-28 03:18:59 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-08-28 03:18:59 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-08-28 03:18:59 +0000 UTC }] Aug 28 03:23:59.813: INFO: pod-ebf66acd-613a-4ed7-8603-9dc6a5dbd69c node1 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-08-28 03:18:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-08-28 03:18:59 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-08-28 03:18:59 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-08-28 03:18:59 +0000 UTC }] Aug 28 03:23:59.814: INFO: pod-ec474e3c-90de-4ce7-9ad5-aa7c30e86f9f node1 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-08-28 03:18:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-08-28 03:18:59 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {ContainersReady False 
0001-01-01 00:00:00 +0000 UTC 2021-08-28 03:18:59 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-08-28 03:18:59 +0000 UTC }] Aug 28 03:23:59.814: INFO: pod-ff33ff1c-3196-454f-a95c-7de0a1c57900 node1 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-08-28 03:18:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-08-28 03:18:59 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-08-28 03:18:59 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-08-28 03:18:59 +0000 UTC }] Aug 28 03:23:59.814: INFO: pod-ffb71798-1721-46a4-a60d-5a3f16b3e467 node1 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-08-28 03:18:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-08-28 03:18:59 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-08-28 03:18:59 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-08-28 03:18:59 +0000 UTC }] Aug 28 03:23:59.814: INFO: Aug 28 03:23:59.818: INFO: Logging node info for node master1 Aug 28 03:23:59.820: INFO: Node Info: &Node{ObjectMeta:{master1 /api/v1/nodes/master1 3af53387-5aee-42c1-b0e7-644cf9161d48 153670 0 2021-08-27 20:46:13 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master1 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"92:f8:b6:72:e4:be"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.202 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2021-08-27 20:46:13 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {kubeadm Update v1 2021-08-27 20:46:14 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/master":{}}}}} {kube-controller-manager Update v1 2021-08-27 20:46:39 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}} {flanneld Update v1 2021-08-27 20:48:53 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234767872 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324603904 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-08-27 20:50:56 +0000 UTC,LastTransitionTime:2021-08-27 20:50:56 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this 
node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-08-28 03:23:50 +0000 UTC,LastTransitionTime:2021-08-27 20:46:13 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-08-28 03:23:50 +0000 UTC,LastTransitionTime:2021-08-27 20:46:13 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-08-28 03:23:50 +0000 UTC,LastTransitionTime:2021-08-27 20:46:13 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-08-28 03:23:50 +0000 UTC,LastTransitionTime:2021-08-27 20:50:30 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.202,},NodeAddress{Type:Hostname,Address:master1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:5d584023135a46ecb77596bf48ed7f2f,SystemUUID:00ACFB60-0631-E711-906E-0017A4403562,BootID:587cafa0-6de3-49f8-906e-06315a8ff104,KernelVersion:3.10.0-1160.36.2.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:723496952,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f kubernetesui/dashboard-amd64:v2.1.0],SizeBytes:225733746,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0 k8s.gcr.io/e2e-test-images/agnhost:2.20],SizeBytes:113869866,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b 
quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:73ca82b4ce829766d4f1f10947c3a338888f876fbed0540dc849c89ff256e90c k8s.gcr.io/coredns:1.7.0],SizeBytes:45227747,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[registry@sha256:1cd9409a311350c3072fe510b52046f104416376c126a479cef9a4dfe692cf57 registry:2.7.0],SizeBytes:24191168,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[localhost:30500/tas-controller@sha256:ae72171f047a37ee5423e0692df7429830919af16e9d668ab0c80b723863d102 tas-controller:latest localhost:30500/tas-controller:0.1],SizeBytes:22922439,},ContainerImage{Names:[nginx@sha256:a97eb9ecc708c8aa715ccfb5e9338f5456e4b65575daf304f108301f3b497314 nginx:1.19.2-alpine],SizeBytes:22052669,},ContainerImage{Names:[localhost:30500/tas-extender@sha256:cd98d1edca8e5e2e3ea42cbc463812483e5d069d10f0974ca9d484b5a7bd68db tas-extender:latest localhost:30500/tas-extender:0.1],SizeBytes:21320903,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[@ :],SizeBytes:5577654,},ContainerImage{Names:[alpine@sha256:c0e9560cda118f9ec63ddefb4a173a2b2a0347082d7dff7dc14272e7841a5b5a alpine:3.12.1],SizeBytes:5573013,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Aug 28 03:23:59.821: INFO: Logging kubelet events for node master1 Aug 28 03:23:59.823: INFO: Logging pods the kubelet thinks is on node master1 Aug 28 03:23:59.837: INFO: kube-scheduler-master1 started at 2021-08-27 20:47:27 +0000 UTC (0+1 container statuses recorded) Aug 28 03:23:59.837: INFO: Container kube-scheduler ready: true, restart count 0 Aug 28 03:23:59.837: INFO: kube-flannel-pp7vp started at 2021-08-27 20:48:48 +0000 UTC (1+1 container statuses recorded) Aug 28 03:23:59.838: INFO: Init container install-cni ready: true, restart count 0 Aug 28 03:23:59.838: INFO: Container kube-flannel ready: true, restart count 3 Aug 28 03:23:59.838: INFO: kube-multus-ds-amd64-sfr9k started at 2021-08-27 20:48:56 +0000 UTC (0+1 container statuses recorded) Aug 28 03:23:59.838: INFO: Container kube-multus ready: true, restart count 1 Aug 28 03:23:59.838: INFO: docker-registry-docker-registry-56cbc7bc58-cthtt started at 2021-08-27 20:51:47 +0000 UTC (0+2 container statuses recorded) Aug 28 03:23:59.838: INFO: Container docker-registry ready: true, restart count 0 Aug 28 03:23:59.838: INFO: Container nginx ready: true, restart count 0 Aug 28 
03:23:59.838: INFO: kube-apiserver-master1 started at 2021-08-27 20:54:17 +0000 UTC (0+1 container statuses recorded) Aug 28 03:23:59.838: INFO: Container kube-apiserver ready: true, restart count 0 Aug 28 03:23:59.838: INFO: kube-controller-manager-master1 started at 2021-08-27 20:54:27 +0000 UTC (0+1 container statuses recorded) Aug 28 03:23:59.838: INFO: Container kube-controller-manager ready: true, restart count 3 Aug 28 03:23:59.838: INFO: kube-proxy-rb5p6 started at 2021-08-27 20:48:12 +0000 UTC (0+1 container statuses recorded) Aug 28 03:23:59.838: INFO: Container kube-proxy ready: true, restart count 1 Aug 28 03:23:59.838: INFO: coredns-7677f9bb54-dwtp5 started at 2021-08-27 20:49:16 +0000 UTC (0+1 container statuses recorded) Aug 28 03:23:59.838: INFO: Container coredns ready: true, restart count 1 Aug 28 03:23:59.838: INFO: node-exporter-z2ngr started at 2021-08-27 20:59:13 +0000 UTC (0+2 container statuses recorded) Aug 28 03:23:59.838: INFO: Container kube-rbac-proxy ready: true, restart count 0 Aug 28 03:23:59.838: INFO: Container node-exporter ready: true, restart count 0 W0828 03:23:59.851139 23 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Aug 28 03:23:59.877: INFO: Latency metrics for node master1 Aug 28 03:23:59.877: INFO: Logging node info for node master2 Aug 28 03:23:59.880: INFO: Node Info: &Node{ObjectMeta:{master2 /api/v1/nodes/master2 666e473f-d9e6-4c06-8b56-06474d788f70 153708 0 2021-08-27 20:46:51 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master2 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"42:86:ff:30:bd:4d"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.203 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/master.version:v0.7.0 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-08-27 20:46:52 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/master":{}}}}} {kubelet Update v1 2021-08-27 20:46:52 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {flanneld Update v1 2021-08-27 20:48:53 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-08-27 20:48:57 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}} {nfd-master Update v1 2021-08-27 20:55:15 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/master.version":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234767872 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324603904 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-08-27 20:51:53 +0000 UTC,LastTransitionTime:2021-08-27 20:51:53 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-08-28 03:23:59 +0000 
UTC,LastTransitionTime:2021-08-27 20:46:51 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-08-28 03:23:59 +0000 UTC,LastTransitionTime:2021-08-27 20:46:51 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-08-28 03:23:59 +0000 UTC,LastTransitionTime:2021-08-27 20:46:51 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-08-28 03:23:59 +0000 UTC,LastTransitionTime:2021-08-27 20:48:57 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.203,},NodeAddress{Type:Hostname,Address:master2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:2835065974a64998811b9acd85de209b,SystemUUID:00A0DE53-E51D-E711-906E-0017A4403562,BootID:309b9155-1e2a-4ebd-900f-bba5abfc3a5d,KernelVersion:3.10.0-1160.36.2.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:723496952,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f kubernetesui/dashboard-amd64:v2.1.0],SizeBytes:225733746,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0 k8s.gcr.io/e2e-test-images/agnhost:2.20],SizeBytes:113869866,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[gcr.io/k8s-staging-nfd/node-feature-discovery@sha256:5d116c2c340be665a2c8adc9aca7f91396bd5cbde4add4fdc8dab95d8db43425 gcr.io/k8s-staging-nfd/node-feature-discovery:v0.7.0],SizeBytes:108309584,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 
quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:73ca82b4ce829766d4f1f10947c3a338888f876fbed0540dc849c89ff256e90c k8s.gcr.io/coredns:1.7.0],SizeBytes:45227747,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:9d07c391aeb1a9d02eb4343c113ed01825227c70c32b3cae861711f90191b0fd quay.io/coreos/kube-rbac-proxy:v0.4.1],SizeBytes:41317870,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[quay.io/coreos/prometheus-operator@sha256:a54e806fb27d2fb0251da4f3b2a3bb5320759af63a54a755788304775f2384a7 quay.io/coreos/prometheus-operator:v0.40.0],SizeBytes:38238457,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Aug 28 03:23:59.881: INFO: Logging kubelet events for node master2 Aug 28 03:23:59.882: INFO: Logging pods the kubelet thinks is on node master2 Aug 28 03:23:59.897: INFO: kube-apiserver-master2 started at 2021-08-27 20:54:17 +0000 UTC (0+1 container statuses recorded) Aug 28 03:23:59.897: INFO: Container kube-apiserver ready: true, restart count 0 Aug 28 03:23:59.897: INFO: kube-scheduler-master2 started at 2021-08-27 20:47:27 +0000 UTC (0+1 container statuses recorded) Aug 28 03:23:59.897: INFO: Container kube-scheduler ready: true, restart count 2 Aug 28 03:23:59.897: INFO: kube-proxy-b4mn9 started at 2021-08-27 20:48:12 +0000 UTC (0+1 container statuses recorded) Aug 28 03:23:59.897: INFO: Container kube-proxy ready: true, restart count 1 Aug 28 03:23:59.897: INFO: node-feature-discovery-controller-5bf5c49849-zr9zd started at 2021-08-27 20:55:09 +0000 UTC (0+1 container statuses recorded) Aug 28 03:23:59.897: INFO: Container nfd-controller ready: true, restart count 0 Aug 28 03:23:59.897: INFO: kube-controller-manager-master2 started at 2021-08-27 20:51:01 +0000 UTC (0+1 container statuses recorded) Aug 28 03:23:59.897: INFO: Container kube-controller-manager ready: true, restart count 2 Aug 28 03:23:59.897: INFO: kube-flannel-4znnq started at 2021-08-27 20:48:48 +0000 UTC (1+1 container statuses recorded) Aug 28 03:23:59.897: INFO: Init container install-cni ready: true, restart count 2 Aug 28 03:23:59.897: INFO: Container kube-flannel ready: true, restart count 2 Aug 28 03:23:59.897: INFO: kube-multus-ds-amd64-4mgbk started at 2021-08-27 20:48:56 +0000 UTC (0+1 container statuses recorded) 
Aug 28 03:23:59.897: INFO: Container kube-multus ready: true, restart count 1 Aug 28 03:23:59.897: INFO: prometheus-operator-5bb8cb9d8f-whr5p started at 2021-08-27 20:59:06 +0000 UTC (0+2 container statuses recorded) Aug 28 03:23:59.897: INFO: Container kube-rbac-proxy ready: true, restart count 0 Aug 28 03:23:59.897: INFO: Container prometheus-operator ready: true, restart count 0 Aug 28 03:23:59.897: INFO: node-exporter-96jk5 started at 2021-08-27 20:59:13 +0000 UTC (0+2 container statuses recorded) Aug 28 03:23:59.897: INFO: Container kube-rbac-proxy ready: true, restart count 0 Aug 28 03:23:59.897: INFO: Container node-exporter ready: true, restart count 0 W0828 03:23:59.909140 23 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Aug 28 03:23:59.933: INFO: Latency metrics for node master2 Aug 28 03:23:59.933: INFO: Logging node info for node master3 Aug 28 03:23:59.934: INFO: Node Info: &Node{ObjectMeta:{master3 /api/v1/nodes/master3 620b6dba-f2c5-46e9-b2ff-d2f4197167d0 153698 0 2021-08-27 20:47:02 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master3 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"ce:de:ea:c3:40:4f"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.204 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2021-08-27 20:47:02 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {kubeadm Update v1 2021-08-27 20:47:03 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/master":{}}}}} {flanneld Update v1 2021-08-27 20:48:53 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-08-27 20:48:57 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}},"f:taints":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234763776 0} {} 196518324Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324599808 0} {} 195629492Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-08-27 20:51:44 +0000 UTC,LastTransitionTime:2021-08-27 20:51:44 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-08-28 03:23:56 +0000 UTC,LastTransitionTime:2021-08-27 20:47:02 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-08-28 03:23:56 +0000 UTC,LastTransitionTime:2021-08-27 20:47:02 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-08-28 03:23:56 +0000 UTC,LastTransitionTime:2021-08-27 20:47:02 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-08-28 03:23:56 +0000 UTC,LastTransitionTime:2021-08-27 20:48:57 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.204,},NodeAddress{Type:Hostname,Address:master3,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:37decbffe0e84048b5801289ad3be5bf,SystemUUID:008B1444-141E-E711-906E-0017A4403562,BootID:c96fe8b9-1ce0-44cd-935a-b58987e26570,KernelVersion:3.10.0-1160.36.2.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:723496891,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 
nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f kubernetesui/dashboard-amd64:v2.1.0],SizeBytes:225733746,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0 k8s.gcr.io/e2e-test-images/agnhost:2.20],SizeBytes:113869866,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:73ca82b4ce829766d4f1f10947c3a338888f876fbed0540dc849c89ff256e90c k8s.gcr.io/coredns:1.7.0],SizeBytes:45227747,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Aug 28 03:23:59.935: INFO: Logging kubelet events for node master3 Aug 28 03:23:59.937: INFO: Logging pods the kubelet thinks is on node master3 Aug 28 03:23:59.951: INFO: kube-apiserver-master3 started at 2021-08-27 20:54:17 +0000 UTC (0+1 container statuses recorded) Aug 28 03:23:59.951: INFO: Container kube-apiserver ready: true, restart count 0 Aug 28 03:23:59.951: INFO: kube-scheduler-master3 started at 2021-08-27 20:47:27 +0000 UTC (0+1 container statuses recorded) Aug 28 03:23:59.951: INFO: Container kube-scheduler ready: true, restart count 2 Aug 28 03:23:59.951: INFO: kube-multus-ds-amd64-wwcgv started at 2021-08-27 
20:48:56 +0000 UTC (0+1 container statuses recorded) Aug 28 03:23:59.951: INFO: Container kube-multus ready: true, restart count 1 Aug 28 03:23:59.951: INFO: node-exporter-d4m7q started at 2021-08-27 20:59:13 +0000 UTC (0+2 container statuses recorded) Aug 28 03:23:59.951: INFO: Container kube-rbac-proxy ready: true, restart count 0 Aug 28 03:23:59.951: INFO: Container node-exporter ready: true, restart count 0 Aug 28 03:23:59.951: INFO: kube-controller-manager-master3 started at 2021-08-27 20:47:27 +0000 UTC (0+1 container statuses recorded) Aug 28 03:23:59.951: INFO: Container kube-controller-manager ready: true, restart count 2 Aug 28 03:23:59.951: INFO: kube-proxy-8sxhm started at 2021-08-27 20:48:12 +0000 UTC (0+1 container statuses recorded) Aug 28 03:23:59.951: INFO: Container kube-proxy ready: true, restart count 1 Aug 28 03:23:59.951: INFO: kube-flannel-fkz5d started at 2021-08-27 20:48:48 +0000 UTC (1+1 container statuses recorded) Aug 28 03:23:59.951: INFO: Init container install-cni ready: true, restart count 0 Aug 28 03:23:59.951: INFO: Container kube-flannel ready: true, restart count 1 Aug 28 03:23:59.951: INFO: dns-autoscaler-5b7b5c9b6f-54xch started at 2021-08-27 20:49:19 +0000 UTC (0+1 container statuses recorded) Aug 28 03:23:59.951: INFO: Container autoscaler ready: true, restart count 1 Aug 28 03:23:59.951: INFO: coredns-7677f9bb54-rxplt started at 2021-08-27 20:49:21 +0000 UTC (0+1 container statuses recorded) Aug 28 03:23:59.951: INFO: Container coredns ready: true, restart count 1 W0828 03:23:59.964697 23 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Aug 28 03:23:59.997: INFO: Latency metrics for node master3 Aug 28 03:23:59.997: INFO: Logging node info for node node1 Aug 28 03:24:00.000: INFO: Node Info: &Node{ObjectMeta:{node1 /api/v1/nodes/node1 e7a2481e-32db-4c83-bd9f-4a0687258e7a 153687 0 2021-08-27 20:48:09 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.36.2.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 
feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:amd64 kubernetes.io/hostname:node1 kubernetes.io/os:linux] map[csi.volume.kubernetes.io/nodeid:{"csi-mock-csi-mock-volumes-1382":"csi-mock-csi-mock-volumes-1382","csi-mock-csi-mock-volumes-1844":"csi-mock-csi-mock-volumes-1844","csi-mock-csi-mock-volumes-2691":"csi-mock-csi-mock-volumes-2691","csi-mock-csi-mock-volumes-2731":"csi-mock-csi-mock-volumes-2731","csi-mock-csi-mock-volumes-4084":"csi-mock-csi-mock-volumes-4084","csi-mock-csi-mock-volumes-6046":"csi-mock-csi-mock-volumes-6046","csi-mock-csi-mock-volumes-6248":"csi-mock-csi-mock-volumes-6248","csi-mock-csi-mock-volumes-6493":"csi-mock-csi-mock-volumes-6493","csi-mock-csi-mock-volumes-7027":"csi-mock-csi-mock-volumes-7027","csi-mock-csi-mock-volumes-7859":"csi-mock-csi-mock-volumes-7859","csi-mock-csi-mock-volumes-7866":"csi-mock-csi-mock-volumes-7866","csi-mock-csi-mock-volumes-9157":"csi-mock-csi-mock-volumes-9157","csi-mock-csi-mock-volumes-9165":"csi-mock-csi-mock-volumes-9165","csi-mock-csi-mock-volumes-9410":"csi-mock-csi-mock-volumes-9410"} flannel.alpha.coreos.com/backend-data:{"VtepMAC":"56:c7:37:40:51:ca"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.207 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-hardware_multithreading,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.7.0 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-08-27 20:48:10 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2021-08-27 20:48:53 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2021-08-27 20:55:15 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2021-08-27 20:57:40 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {e2e.test Update v1 2021-08-28 02:45:58 +0000 UTC FieldsV1 {"f:status":{"f:capacity":{"f:example.com/fakePTSRes":{}}}}} {kube-controller-manager Update v1 2021-08-28 03:10:43 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.3.0/24\"":{}}}}} {kubelet Update v1 2021-08-28 03:10:53 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:csi.volume.kubernetes.io/nodeid":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:io.kubernetes.storage.mock/node":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cmk.intel.com/exclusive-cores":{},"f:cpu":{},"f:ephemeral-storage":{},"f:example.com/fakePTSRes":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:intel.com/intel_sriov_netdevice":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:intel.com/intel_sriov_netdevice":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.3.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},example.com/fakePTSRes: {{10 0} {} 10 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269628928 0} {} 196552372Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},example.com/fakePTSRes: {{10 0} {} 10 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884628480 0} {} 174692020Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-08-27 20:51:49 +0000 UTC,LastTransitionTime:2021-08-27 20:51:49 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-08-28 03:23:53 +0000 UTC,LastTransitionTime:2021-08-27 20:48:09 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-08-28 03:23:53 +0000 UTC,LastTransitionTime:2021-08-27 20:48:09 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk 
pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-08-28 03:23:53 +0000 UTC,LastTransitionTime:2021-08-27 20:48:09 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-08-28 03:23:53 +0000 UTC,LastTransitionTime:2021-08-27 20:48:54 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.207,},NodeAddress{Type:Hostname,Address:node1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:c1e38e80ea114a5f96601202301ce842,SystemUUID:00CDA902-D022-E711-906E-0017A4403562,BootID:e769e86d-15c0-442c-a93b-bcc6c33ff1cd,KernelVersion:3.10.0-1160.36.2.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[localhost:30500/barometer-collectd@sha256:331c6faa8b0d5ec72cf105e87d35df0a2f2baeec3d6217a51faa73f9460f937f localhost:30500/barometer-collectd:stable],SizeBytes:1238704157,},ContainerImage{Names:[opnfv/barometer-collectd@sha256:a7cea43d9d2f67c38fbf0407786edbe660ee9072945f7bb272b55fd255e8eaca opnfv/barometer-collectd:stable],SizeBytes:1075746799,},ContainerImage{Names:[@ :],SizeBytes:1003787960,},ContainerImage{Names:[localhost:30500/cmk@sha256:fd1487b0c07556a087eff669e70c501a704720dcd53ff75183593de6720585f2 cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:723496952,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 aquasec/kube-hunter:0.3.1],SizeBytes:347611549,},ContainerImage{Names:[golang@sha256:db2475a1dbb2149508e5db31d7d77a75e6600d54be645f37681f03f2762169ba golang:alpine3.12],SizeBytes:301186719,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f kubernetesui/dashboard-amd64:v2.1.0],SizeBytes:225733746,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:195659796,},ContainerImage{Names:[grafana/grafana@sha256:89304bc2335f4976618548d7b93d165ed67369d3a051d2f627fc4e0aa3d0aff1 grafana/grafana:7.1.0],SizeBytes:179601493,},ContainerImage{Names:[quay.io/prometheus/prometheus@sha256:d4ba4dd1a9ebb90916d0bfed3c204adcb118ed24546bf8dd2e6b30fc0fd2009e quay.io/prometheus/prometheus:v2.20.0],SizeBytes:144886595,},ContainerImage{Names:[nginx@sha256:df13abe416e37eb3db4722840dd479b00ba193ac6606e7902331dcea50f4f1f2 nginx:1.19],SizeBytes:133122553,},ContainerImage{Names:[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 
k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0 k8s.gcr.io/e2e-test-images/agnhost:2.20],SizeBytes:113869866,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[gcr.io/k8s-staging-nfd/node-feature-discovery@sha256:5d116c2c340be665a2c8adc9aca7f91396bd5cbde4add4fdc8dab95d8db43425 gcr.io/k8s-staging-nfd/node-feature-discovery:v0.7.0],SizeBytes:108309584,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[directxman12/k8s-prometheus-adapter-amd64@sha256:b63dc612e3cb73f79d2401a4516f794f9f0a83002600ca72e675e41baecff437 directxman12/k8s-prometheus-adapter-amd64:v0.6.0],SizeBytes:53267842,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:78e3393f5fd5ff6c1e5dada2478cfa456fb7164929e573cf9a87bf6532730679 k8s.gcr.io/sig-storage/csi-provisioner:v1.6.0],SizeBytes:48281550,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:2ffa647e8107cfd39e5f464e738dce014c9f5e51b108da36c3ab621048d0bbab k8s.gcr.io/sig-storage/csi-attacher:v2.2.0],SizeBytes:46131354,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:6c6a0332693a7c456378f6abd2bb40611826c1e1a733cadbdae2daab3125b71c k8s.gcr.io/sig-storage/csi-resizer:v0.5.0],SizeBytes:46041582,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:d7300ccf7ff3e9cea2111d275143b8050618bbc1d1ffe41f46286b1696261243 nfvpe/sriov-device-plugin:latest localhost:30500/sriov-device-plugin:v3.3.1],SizeBytes:44393508,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonroot@sha256:4bd7ae247de5c988700233c5a4b55e804ffe90f8c66ae64853f1dae37b847213 gcr.io/kubernetes-e2e-test-images/nonroot:1.0],SizeBytes:42321438,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/regression-issue-74839-amd64@sha256:3b36bd80b97c532a774e7f6246797b8575d97037982f353476c703ba6686c75c gcr.io/kubernetes-e2e-test-images/regression-issue-74839-amd64:1.0],SizeBytes:19227369,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:0b4273abac4c241fa3d70aaf52b0d79a133d2737081f4a5c5dea4949f6c45dc3 
k8s.gcr.io/sig-storage/mock-driver:v3.1.0],SizeBytes:18687618,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:9622c6a6dac7499a055a382930f4de82905a3c5735c0753f7094115c9c871309 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v1.3.0],SizeBytes:16322467,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[quay.io/coreos/prometheus-config-reloader@sha256:c679a143b24b7731ad1577a9865aa3805426cbf1b25e30807b951dff68466ffd quay.io/coreos/prometheus-config-reloader:v0.40.0],SizeBytes:10131705,},ContainerImage{Names:[jimmidyson/configmap-reload@sha256:d107c7a235c266273b1c3502a391fec374430e5625539403d0de797fa9c556a2 jimmidyson/configmap-reload:v0.3.0],SizeBytes:9700438,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:6757579,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[alpine@sha256:de25c7fc6c4f3a27c7f0c2dff454e4671823a34d88abd533f210848d527e0fbb alpine:3.12],SizeBytes:5581415,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:4753501,},ContainerImage{Names:[gcr.io/authenticated-image-pulling/alpine@sha256:7ff177862cb50c602bfe81f805969412e619c054a2bbead977d0c276988aa4a0 gcr.io/authenticated-image-pulling/alpine:3.7],SizeBytes:4206620,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Aug 28 03:24:00.001: INFO: Logging kubelet events for node node1 Aug 28 03:24:00.003: INFO: Logging pods the kubelet thinks is on node node1 Aug 28 03:24:00.500: INFO: pod-b13afcd6-8433-47ff-9e72-b57988ec6003 started at 2021-08-28 03:18:59 +0000 UTC (0+1 container statuses recorded) Aug 28 03:24:00.500: INFO: Container write-pod ready: true, restart count 0 Aug 28 03:24:00.500: INFO: pod-218eadb1-6012-4f8c-9774-8589a6a8dc9e started at 2021-08-28 03:18:59 +0000 UTC (0+1 container statuses recorded) Aug 28 03:24:00.500: INFO: Container write-pod ready: true, restart count 0 Aug 28 03:24:00.500: INFO: pod-2ae1600f-b24a-43d9-b495-0983881d2cfb started at 2021-08-28 03:18:59 +0000 UTC (0+1 container statuses recorded) Aug 28 03:24:00.500: INFO: Container write-pod ready: false, restart count 0 Aug 28 03:24:00.500: INFO: 
pod-40b6d641-9a1e-4d20-a39e-22884ceeff53 started at 2021-08-28 03:18:59 +0000 UTC (0+1 container statuses recorded) Aug 28 03:24:00.500: INFO: Container write-pod ready: false, restart count 0 Aug 28 03:24:00.500: INFO: prometheus-k8s-0 started at 2021-08-27 20:59:29 +0000 UTC (0+5 container statuses recorded) Aug 28 03:24:00.500: INFO: Container custom-metrics-apiserver ready: true, restart count 0 Aug 28 03:24:00.500: INFO: Container grafana ready: true, restart count 0 Aug 28 03:24:00.500: INFO: Container prometheus ready: true, restart count 1 Aug 28 03:24:00.500: INFO: Container prometheus-config-reloader ready: true, restart count 0 Aug 28 03:24:00.500: INFO: Container rules-configmap-reloader ready: true, restart count 0 Aug 28 03:24:00.500: INFO: pod-288eaa76-ebb1-46fc-8e8f-988380fe6721 started at 2021-08-28 03:18:59 +0000 UTC (0+1 container statuses recorded) Aug 28 03:24:00.500: INFO: Container write-pod ready: true, restart count 0 Aug 28 03:24:00.500: INFO: pod-51996145-ecbd-4979-986e-a2566b79da14 started at 2021-08-28 03:18:59 +0000 UTC (0+1 container statuses recorded) Aug 28 03:24:00.500: INFO: Container write-pod ready: false, restart count 0 Aug 28 03:24:00.500: INFO: pod-12081747-e86d-4f67-92b7-87d3bbd5ee2b started at 2021-08-28 03:18:59 +0000 UTC (0+1 container statuses recorded) Aug 28 03:24:00.500: INFO: Container write-pod ready: false, restart count 0 Aug 28 03:24:00.500: INFO: pod-20a68870-7734-44e5-8250-ed0ff242396e started at 2021-08-28 03:18:59 +0000 UTC (0+1 container statuses recorded) Aug 28 03:24:00.500: INFO: Container write-pod ready: false, restart count 0 Aug 28 03:24:00.500: INFO: pod-43aad80b-6645-4b9f-8fc8-57460ad3a112 started at 2021-08-28 03:18:59 +0000 UTC (0+1 container statuses recorded) Aug 28 03:24:00.500: INFO: Container write-pod ready: false, restart count 0 Aug 28 03:24:00.500: INFO: pod-2be1822f-e531-4d53-9b6d-02d7fb262b65 started at 2021-08-28 03:18:59 +0000 UTC (0+1 container statuses recorded) Aug 28 03:24:00.500: INFO: Container write-pod ready: true, restart count 0 Aug 28 03:24:00.500: INFO: pod-ba671871-0ee2-47f2-a7b3-65a00e132522 started at 2021-08-28 03:18:59 +0000 UTC (0+1 container statuses recorded) Aug 28 03:24:00.500: INFO: Container write-pod ready: false, restart count 0 Aug 28 03:24:00.500: INFO: pod-b01a9e1f-17de-4787-8291-e347f7207c04 started at 2021-08-28 03:18:59 +0000 UTC (0+1 container statuses recorded) Aug 28 03:24:00.500: INFO: Container write-pod ready: false, restart count 0 Aug 28 03:24:00.500: INFO: kube-multus-ds-amd64-nn7bl started at 2021-08-27 20:48:56 +0000 UTC (0+1 container statuses recorded) Aug 28 03:24:00.500: INFO: Container kube-multus ready: true, restart count 1 Aug 28 03:24:00.500: INFO: kubernetes-metrics-scraper-678c97765c-gtp5x started at 2021-08-27 20:49:21 +0000 UTC (0+1 container statuses recorded) Aug 28 03:24:00.500: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 Aug 28 03:24:00.500: INFO: cmk-init-discover-node1-spg26 started at 2021-08-27 20:57:37 +0000 UTC (0+3 container statuses recorded) Aug 28 03:24:00.500: INFO: Container discover ready: false, restart count 0 Aug 28 03:24:00.500: INFO: Container init ready: false, restart count 0 Aug 28 03:24:00.500: INFO: Container install ready: false, restart count 0 Aug 28 03:24:00.500: INFO: pod-1f68adb8-907d-462d-83c0-81a62875a18d started at 2021-08-28 03:18:59 +0000 UTC (0+1 container statuses recorded) Aug 28 03:24:00.500: INFO: Container write-pod ready: false, restart count 0 Aug 28 03:24:00.500: INFO: 
pod-b7e65917-0844-4e05-a20d-dc417d72619b started at 2021-08-28 03:18:59 +0000 UTC (0+1 container statuses recorded) Aug 28 03:24:00.500: INFO: Container write-pod ready: false, restart count 0 Aug 28 03:24:00.500: INFO: nginx-proxy-node1 started at 2021-08-27 20:54:17 +0000 UTC (0+1 container statuses recorded) Aug 28 03:24:00.500: INFO: Container nginx-proxy ready: true, restart count 2 Aug 28 03:24:00.500: INFO: pod-ebf66acd-613a-4ed7-8603-9dc6a5dbd69c started at 2021-08-28 03:18:59 +0000 UTC (0+1 container statuses recorded) Aug 28 03:24:00.500: INFO: Container write-pod ready: false, restart count 0 Aug 28 03:24:00.500: INFO: pod-5e9afa04-0b49-4697-afe0-338f12fb3988 started at 2021-08-28 03:18:59 +0000 UTC (0+1 container statuses recorded) Aug 28 03:24:00.501: INFO: Container write-pod ready: false, restart count 0 Aug 28 03:24:00.501: INFO: pod-df39cbf5-9b17-4568-9cbe-843e91275e44 started at 2021-08-28 03:18:59 +0000 UTC (0+1 container statuses recorded) Aug 28 03:24:00.501: INFO: Container write-pod ready: false, restart count 0 Aug 28 03:24:00.501: INFO: pod-995e1cf4-a34e-4b81-855d-47943a293de3 started at 2021-08-28 03:18:59 +0000 UTC (0+1 container statuses recorded) Aug 28 03:24:00.501: INFO: Container write-pod ready: false, restart count 0 Aug 28 03:24:00.501: INFO: pod-c69a9389-323c-42cb-b323-b34e1025ff51 started at 2021-08-28 03:18:59 +0000 UTC (0+1 container statuses recorded) Aug 28 03:24:00.501: INFO: Container write-pod ready: false, restart count 0 Aug 28 03:24:00.501: INFO: pod-55ea1243-4891-4255-b8a2-0b46337d8af8 started at 2021-08-28 03:18:59 +0000 UTC (0+1 container statuses recorded) Aug 28 03:24:00.501: INFO: Container write-pod ready: false, restart count 0 Aug 28 03:24:00.501: INFO: pod-ffb71798-1721-46a4-a60d-5a3f16b3e467 started at 2021-08-28 03:18:59 +0000 UTC (0+1 container statuses recorded) Aug 28 03:24:00.501: INFO: Container write-pod ready: false, restart count 0 Aug 28 03:24:00.501: INFO: pod-b4ee6f2f-11af-4876-8546-d54074074034 started at 2021-08-28 03:18:59 +0000 UTC (0+1 container statuses recorded) Aug 28 03:24:00.501: INFO: Container write-pod ready: false, restart count 0 Aug 28 03:24:00.501: INFO: pod-0060bc15-11ee-4849-99cd-c6890c1dd7ca started at 2021-08-28 03:18:59 +0000 UTC (0+1 container statuses recorded) Aug 28 03:24:00.501: INFO: Container write-pod ready: false, restart count 0 Aug 28 03:24:00.501: INFO: pod-5e6bc330-443b-4cc9-9ab2-87a195e7710c started at 2021-08-28 03:18:59 +0000 UTC (0+1 container statuses recorded) Aug 28 03:24:00.501: INFO: Container write-pod ready: false, restart count 0 Aug 28 03:24:00.501: INFO: pod-5b56657d-f3b1-4ad6-8213-9d96ccd953f1 started at 2021-08-28 03:18:59 +0000 UTC (0+1 container statuses recorded) Aug 28 03:24:00.501: INFO: Container write-pod ready: false, restart count 0 Aug 28 03:24:00.501: INFO: pod-924019d9-0795-4ada-97f0-a0e57615146f started at 2021-08-28 03:18:59 +0000 UTC (0+1 container statuses recorded) Aug 28 03:24:00.501: INFO: Container write-pod ready: false, restart count 0 Aug 28 03:24:00.501: INFO: kube-proxy-pb5bl started at 2021-08-27 20:48:12 +0000 UTC (0+1 container statuses recorded) Aug 28 03:24:00.501: INFO: Container kube-proxy ready: true, restart count 2 Aug 28 03:24:00.501: INFO: collectd-ccvwg started at 2021-08-27 21:04:15 +0000 UTC (0+3 container statuses recorded) Aug 28 03:24:00.501: INFO: Container collectd ready: true, restart count 0 Aug 28 03:24:00.501: INFO: Container collectd-exporter ready: true, restart count 0 Aug 28 03:24:00.501: INFO: Container rbac-proxy 
ready: true, restart count 0 Aug 28 03:24:00.501: INFO: pod-a5a3b3da-4552-4ff6-bc9f-e75390846609 started at 2021-08-28 03:18:59 +0000 UTC (0+1 container statuses recorded) Aug 28 03:24:00.501: INFO: Container write-pod ready: false, restart count 0 Aug 28 03:24:00.501: INFO: pod-b6bfdd59-6d87-4899-91e9-ceddda7efb04 started at 2021-08-28 03:18:59 +0000 UTC (0+1 container statuses recorded) Aug 28 03:24:00.501: INFO: Container write-pod ready: false, restart count 0 Aug 28 03:24:00.501: INFO: pod-040d1c3d-c9a6-43d6-b3a5-c038d898c4f6 started at 2021-08-28 03:18:59 +0000 UTC (0+1 container statuses recorded) Aug 28 03:24:00.501: INFO: Container write-pod ready: false, restart count 0 Aug 28 03:24:00.501: INFO: pod-191a05a0-fc3e-4acd-8404-02e7f1ec1073 started at 2021-08-28 03:18:59 +0000 UTC (0+1 container statuses recorded) Aug 28 03:24:00.501: INFO: Container write-pod ready: false, restart count 0 Aug 28 03:24:00.501: INFO: kube-flannel-ssxn7 started at 2021-08-27 20:48:48 +0000 UTC (1+1 container statuses recorded) Aug 28 03:24:00.501: INFO: Init container install-cni ready: true, restart count 0 Aug 28 03:24:00.501: INFO: Container kube-flannel ready: true, restart count 1 Aug 28 03:24:00.501: INFO: pod-7b0ea0bb-9d30-431d-8f7e-775e42fc55f6 started at 2021-08-28 03:18:59 +0000 UTC (0+1 container statuses recorded) Aug 28 03:24:00.501: INFO: Container write-pod ready: false, restart count 0 Aug 28 03:24:00.501: INFO: pod-937811e1-044d-4f1c-8190-5334572e7730 started at 2021-08-28 03:18:59 +0000 UTC (0+1 container statuses recorded) Aug 28 03:24:00.501: INFO: Container write-pod ready: false, restart count 0 Aug 28 03:24:00.501: INFO: cmk-jw4m6 started at 2021-08-27 20:58:19 +0000 UTC (0+2 container statuses recorded) Aug 28 03:24:00.501: INFO: Container nodereport ready: true, restart count 0 Aug 28 03:24:00.501: INFO: Container reconcile ready: true, restart count 0 Aug 28 03:24:00.501: INFO: pod-ff33ff1c-3196-454f-a95c-7de0a1c57900 started at 2021-08-28 03:18:59 +0000 UTC (0+1 container statuses recorded) Aug 28 03:24:00.501: INFO: Container write-pod ready: false, restart count 0 Aug 28 03:24:00.501: INFO: pod-4c261d29-ebf6-4faf-91c3-02b125f83e60 started at 2021-08-28 03:18:59 +0000 UTC (0+1 container statuses recorded) Aug 28 03:24:00.501: INFO: Container write-pod ready: false, restart count 0 Aug 28 03:24:00.501: INFO: pod-53b2c341-b919-49f9-95a2-a09a7a40e6d3 started at 2021-08-28 03:18:59 +0000 UTC (0+1 container statuses recorded) Aug 28 03:24:00.501: INFO: Container write-pod ready: false, restart count 0 Aug 28 03:24:00.501: INFO: pod-896b36c5-e368-4ca9-a7e8-d8270fe997ee started at 2021-08-28 03:18:59 +0000 UTC (0+1 container statuses recorded) Aug 28 03:24:00.501: INFO: Container write-pod ready: false, restart count 0 Aug 28 03:24:00.501: INFO: pod-0d413a9e-b1b5-42e9-908b-bbaab02a64f5 started at 2021-08-28 03:18:59 +0000 UTC (0+1 container statuses recorded) Aug 28 03:24:00.501: INFO: Container write-pod ready: false, restart count 0 Aug 28 03:24:00.501: INFO: pod-59151975-c844-4b23-bbdf-d83ce085296c started at 2021-08-28 03:18:59 +0000 UTC (0+1 container statuses recorded) Aug 28 03:24:00.501: INFO: Container write-pod ready: false, restart count 0 Aug 28 03:24:00.501: INFO: pod-c52ec868-e9e2-4829-9426-3970f11ddac2 started at 2021-08-28 03:18:59 +0000 UTC (0+1 container statuses recorded) Aug 28 03:24:00.501: INFO: Container write-pod ready: false, restart count 0 Aug 28 03:24:00.501: INFO: pod-5ea2304e-d792-4a89-aea4-870a0fa30bc5 started at 2021-08-28 03:18:59 +0000 UTC (0+1 
container statuses recorded) Aug 28 03:24:00.501: INFO: Container write-pod ready: false, restart count 0 Aug 28 03:24:00.501: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-9lndx started at 2021-08-27 20:55:51 +0000 UTC (0+1 container statuses recorded) Aug 28 03:24:00.501: INFO: Container kube-sriovdp ready: true, restart count 0 Aug 28 03:24:00.501: INFO: pod-d476e108-c35b-43ef-8fa7-2513c6fc7006 started at 2021-08-28 03:18:59 +0000 UTC (0+1 container statuses recorded) Aug 28 03:24:00.501: INFO: Container write-pod ready: false, restart count 0 Aug 28 03:24:00.501: INFO: pod-4dfe96ed-dd96-400f-89ce-3db1557c2dae started at 2021-08-28 03:18:59 +0000 UTC (0+1 container statuses recorded) Aug 28 03:24:00.501: INFO: Container write-pod ready: false, restart count 0 Aug 28 03:24:00.501: INFO: pod-66755853-0db4-499d-803e-6c1f031c3599 started at 2021-08-28 03:18:59 +0000 UTC (0+1 container statuses recorded) Aug 28 03:24:00.501: INFO: Container write-pod ready: false, restart count 0 Aug 28 03:24:00.501: INFO: node-exporter-4cvlq started at 2021-08-27 20:59:13 +0000 UTC (0+2 container statuses recorded) Aug 28 03:24:00.501: INFO: Container kube-rbac-proxy ready: true, restart count 0 Aug 28 03:24:00.501: INFO: Container node-exporter ready: true, restart count 0 Aug 28 03:24:00.501: INFO: pod-a047018f-fb61-4df6-b9e7-7bfa79737f38 started at 2021-08-28 03:18:59 +0000 UTC (0+1 container statuses recorded) Aug 28 03:24:00.501: INFO: Container write-pod ready: false, restart count 0 Aug 28 03:24:00.501: INFO: pod-6773c694-d791-4f26-b07e-696d2e05173a started at 2021-08-28 03:18:59 +0000 UTC (0+1 container statuses recorded) Aug 28 03:24:00.501: INFO: Container write-pod ready: false, restart count 0 Aug 28 03:24:00.501: INFO: pod-ec474e3c-90de-4ce7-9ad5-aa7c30e86f9f started at 2021-08-28 03:18:59 +0000 UTC (0+1 container statuses recorded) Aug 28 03:24:00.501: INFO: Container write-pod ready: false, restart count 0 Aug 28 03:24:00.501: INFO: pod-b6a94036-c85c-4d64-89e4-f7311c941a43 started at 2021-08-28 03:18:59 +0000 UTC (0+1 container statuses recorded) Aug 28 03:24:00.501: INFO: Container write-pod ready: false, restart count 0 Aug 28 03:24:00.501: INFO: pod-5fac7398-60e9-45fa-b0df-b15ab9e417cd started at 2021-08-28 03:18:59 +0000 UTC (0+1 container statuses recorded) Aug 28 03:24:00.501: INFO: Container write-pod ready: false, restart count 0 Aug 28 03:24:00.501: INFO: kubernetes-dashboard-86c6f9df5b-c56fg started at 2021-08-27 20:49:21 +0000 UTC (0+1 container statuses recorded) Aug 28 03:24:00.501: INFO: Container kubernetes-dashboard ready: true, restart count 1 Aug 28 03:24:00.501: INFO: node-feature-discovery-worker-bd9kg started at 2021-08-27 20:55:06 +0000 UTC (0+1 container statuses recorded) Aug 28 03:24:00.501: INFO: Container nfd-worker ready: true, restart count 0 Aug 28 03:24:00.501: INFO: pod-ae2f271e-271c-45d6-86a2-7ce0689b4804 started at 2021-08-28 03:18:59 +0000 UTC (0+1 container statuses recorded) Aug 28 03:24:00.501: INFO: Container write-pod ready: false, restart count 0 Aug 28 03:24:00.501: INFO: pod-cf12187e-5863-4f10-87e5-f48b90a7f2b0 started at 2021-08-28 03:18:59 +0000 UTC (0+1 container statuses recorded) Aug 28 03:24:00.501: INFO: Container write-pod ready: false, restart count 0 W0828 03:24:00.514681 23 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. 
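The per-node pod inventory above ("Logging pods the kubelet thinks is on node node1") can be reproduced outside the framework by listing pods whose spec.nodeName matches the node. A minimal client-go sketch follows; it is an illustration, not the framework's own code, and the kubeconfig path is simply the one this run logs (/root/.kube/config):

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Build a client from the same kubeconfig the suite uses.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)

        // Pods are indexed by spec.nodeName, so a field selector returns
        // everything scheduled to the node, across all namespaces.
        pods, err := cs.CoreV1().Pods(metav1.NamespaceAll).List(context.TODO(),
            metav1.ListOptions{FieldSelector: "spec.nodeName=node1"})
        if err != nil {
            panic(err)
        }
        for _, p := range pods.Items {
            fmt.Printf("%s/%s phase=%s\n", p.Namespace, p.Name, p.Status.Phase)
        }
    }

The same query explains why test pods (write-pod) and system daemons (kube-proxy, kube-flannel, node-exporter) appear interleaved in the listing: the selector is by node, not by namespace.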
Aug 28 03:24:01.118: INFO: Latency metrics for node node1 Aug 28 03:24:01.118: INFO: Logging node info for node node2 Aug 28 03:24:01.121: INFO: Node Info: &Node{ObjectMeta:{node2 /api/v1/nodes/node2 eb7da7c0-513f-4072-a078-ad3d24f88114 153718 0 2021-08-27 20:48:09 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.36.2.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node2 kubernetes.io/os:linux] map[csi.volume.kubernetes.io/nodeid:{"csi-mock-csi-mock-volumes-1442":"csi-mock-csi-mock-volumes-1442","csi-mock-csi-mock-volumes-1936":"csi-mock-csi-mock-volumes-1936","csi-mock-csi-mock-volumes-2652":"csi-mock-csi-mock-volumes-2652","csi-mock-csi-mock-volumes-2831":"csi-mock-csi-mock-volumes-2831","csi-mock-csi-mock-volumes-2967":"csi-mock-csi-mock-volumes-2967","csi-mock-csi-mock-volumes-3285":"csi-mock-csi-mock-volumes-3285","csi-mock-csi-mock-volumes-5585":"csi-mock-csi-mock-volumes-5585","csi-mock-csi-mock-volumes-7451":"csi-mock-csi-mock-volumes-7451","csi-mock-csi-mock-volumes-8521":"csi-mock-csi-mock-volumes-8521"} flannel.alpha.coreos.com/backend-data:{"VtepMAC":"9a:88:9c:d1:39:68"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.208 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: 
nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-hardware_multithreading,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.7.0 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-08-27 20:48:10 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2021-08-27 20:48:53 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2021-08-27 20:55:15 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage
-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2021-08-27 20:58:03 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {e2e.test Update v1 2021-08-28 02:45:47 +0000 UTC FieldsV1 {"f:status":{"f:capacity":{"f:example.com/fakePTSRes":{},"f:example.com/fakecpu":{}}}}} {kube-controller-manager Update v1 2021-08-28 03:01:49 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.4.0/24\"":{}}}}} {kubelet Update v1 2021-08-28 03:02:14 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:csi.volume.kubernetes.io/nodeid":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cmk.intel.com/exclusive-cores":{},"f:cpu":{},"f:ephemeral-storage":{},"f:example.com/fakePTSRes":{},"f:example.com/fakecpu":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:intel.com/intel_sriov_netdevice":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:intel.com/intel_sriov_netdevice":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.4.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},example.com/fakePTSRes: {{10 0} {} 10 DecimalSI},example.com/fakecpu: {{1 3} {} 1k DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269628928 0} {} 196552372Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},example.com/fakePTSRes: {{10 0} {} 10 DecimalSI},example.com/fakecpu: {{1 3} {} 1k 
DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884628480 0} {} 174692020Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-08-27 20:50:36 +0000 UTC,LastTransitionTime:2021-08-27 20:50:36 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-08-28 03:24:00 +0000 UTC,LastTransitionTime:2021-08-27 20:48:09 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-08-28 03:24:00 +0000 UTC,LastTransitionTime:2021-08-27 20:48:09 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-08-28 03:24:00 +0000 UTC,LastTransitionTime:2021-08-27 20:48:09 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-08-28 03:24:00 +0000 UTC,LastTransitionTime:2021-08-27 20:48:54 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.208,},NodeAddress{Type:Hostname,Address:node2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:0ccfc2a4a9b7400c9ca53b5de0ca4970,SystemUUID:80B3CD56-852F-E711-906E-0017A4403562,BootID:26b02690-0814-4c92-9f6d-d315df796ce6,KernelVersion:3.10.0-1160.36.2.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[localhost:30500/barometer-collectd@sha256:331c6faa8b0d5ec72cf105e87d35df0a2f2baeec3d6217a51faa73f9460f937f localhost:30500/barometer-collectd:stable],SizeBytes:1238704157,},ContainerImage{Names:[cmk:v1.5.1],SizeBytes:723496952,},ContainerImage{Names:[localhost:30500/cmk@sha256:fd1487b0c07556a087eff669e70c501a704720dcd53ff75183593de6720585f2 localhost:30500/cmk:v1.5.1],SizeBytes:723496952,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[k8s.gcr.io/sig-storage/nfs-provisioner@sha256:c1bedac8758029948afe060bf8f6ee63ea489b5e08d29745f44fab68ee0d46ca k8s.gcr.io/sig-storage/nfs-provisioner:v2.2.2],SizeBytes:391772778,},ContainerImage{Names:[gluster/glusterdynamic-provisioner@sha256:90067cb05a7d217651e84576935934fd9dcff8e6bcdcbaa416bbf36fcd09dbd1 gluster/glusterdynamic-provisioner:v1.0],SizeBytes:373281573,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2 k8s.gcr.io/etcd:3.4.13-0],SizeBytes:253392289,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:195659796,},ContainerImage{Names:[nginx@sha256:df13abe416e37eb3db4722840dd479b00ba193ac6606e7902331dcea50f4f1f2 
nginx:1.19],SizeBytes:133122553,},ContainerImage{Names:[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0 k8s.gcr.io/e2e-test-images/agnhost:2.20],SizeBytes:113869866,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[gcr.io/k8s-staging-nfd/node-feature-discovery@sha256:5d116c2c340be665a2c8adc9aca7f91396bd5cbde4add4fdc8dab95d8db43425 gcr.io/k8s-staging-nfd/node-feature-discovery:v0.7.0],SizeBytes:108309584,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:ff02aacd9766d597883fabafc7ad604c719a57611db1bcc1564c69a45b000a55 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.17],SizeBytes:60684726,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:78e3393f5fd5ff6c1e5dada2478cfa456fb7164929e573cf9a87bf6532730679 k8s.gcr.io/sig-storage/csi-provisioner:v1.6.0],SizeBytes:48281550,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:2ffa647e8107cfd39e5f464e738dce014c9f5e51b108da36c3ab621048d0bbab k8s.gcr.io/sig-storage/csi-attacher:v2.2.0],SizeBytes:46131354,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:6c6a0332693a7c456378f6abd2bb40611826c1e1a733cadbdae2daab3125b71c k8s.gcr.io/sig-storage/csi-resizer:v0.5.0],SizeBytes:46041582,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:d7300ccf7ff3e9cea2111d275143b8050618bbc1d1ffe41f46286b1696261243 localhost:30500/sriov-device-plugin:v3.3.1],SizeBytes:44393508,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonroot@sha256:4bd7ae247de5c988700233c5a4b55e804ffe90f8c66ae64853f1dae37b847213 gcr.io/kubernetes-e2e-test-images/nonroot:1.0],SizeBytes:42321438,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[localhost:30500/tas-controller@sha256:ae72171f047a37ee5423e0692df7429830919af16e9d668ab0c80b723863d102 localhost:30500/tas-controller:0.1],SizeBytes:22922439,},ContainerImage{Names:[localhost:30500/tas-extender@sha256:cd98d1edca8e5e2e3ea42cbc463812483e5d069d10f0974ca9d484b5a7bd68db 
localhost:30500/tas-extender:0.1],SizeBytes:21320903,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:0b4273abac4c241fa3d70aaf52b0d79a133d2737081f4a5c5dea4949f6c45dc3 k8s.gcr.io/sig-storage/mock-driver:v3.1.0],SizeBytes:18687618,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:9622c6a6dac7499a055a382930f4de82905a3c5735c0753f7094115c9c871309 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v1.3.0],SizeBytes:16322467,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:6757579,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:4753501,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Aug 28 03:24:01.122: INFO: Logging kubelet events for node node2 Aug 28 03:24:01.124: INFO: Logging pods the kubelet thinks is on node node2 Aug 28 03:24:01.140: INFO: nginx-proxy-node2 started at 2021-08-27 20:54:17 +0000 UTC (0+1 container statuses recorded) Aug 28 03:24:01.140: INFO: Container nginx-proxy ready: true, restart count 1 Aug 28 03:24:01.140: INFO: kube-proxy-r4q4t started at 2021-08-27 20:48:12 +0000 UTC (0+1 container statuses recorded) Aug 28 03:24:01.140: INFO: Container kube-proxy ready: true, restart count 2 Aug 28 03:24:01.140: INFO: kube-multus-ds-amd64-tfffk started at 2021-08-27 20:48:56 +0000 UTC (0+1 container statuses recorded) Aug 28 03:24:01.140: INFO: Container kube-multus ready: true, restart count 1 Aug 28 03:24:01.140: INFO: node-exporter-p6h5h started at 2021-08-27 20:59:13 +0000 UTC (0+2 container statuses recorded) Aug 28 03:24:01.140: INFO: Container kube-rbac-proxy ready: true, restart count 0 Aug 28 03:24:01.140: INFO: Container node-exporter ready: true, restart count 0 Aug 28 03:24:01.140: INFO: cmk-fzjgr started at 2021-08-27 20:58:20 +0000 UTC (0+2 container statuses recorded) Aug 28 03:24:01.140: INFO: Container nodereport ready: true, restart count 0 Aug 28 03:24:01.140: INFO: 
Container reconcile ready: true, restart count 0 Aug 28 03:24:01.140: INFO: collectd-64dp2 started at 2021-08-27 21:04:15 +0000 UTC (0+3 container statuses recorded) Aug 28 03:24:01.140: INFO: Container collectd ready: true, restart count 0 Aug 28 03:24:01.140: INFO: Container collectd-exporter ready: true, restart count 0 Aug 28 03:24:01.140: INFO: Container rbac-proxy ready: true, restart count 0 Aug 28 03:24:01.140: INFO: cmk-webhook-6c9d5f8578-ndbx2 started at 2021-08-27 20:58:20 +0000 UTC (0+1 container statuses recorded) Aug 28 03:24:01.140: INFO: Container cmk-webhook ready: true, restart count 0 Aug 28 03:24:01.140: INFO: tas-telemetry-aware-scheduling-575ccbc9d4-v99df started at 2021-08-27 21:02:08 +0000 UTC (0+2 container statuses recorded) Aug 28 03:24:01.140: INFO: Container tas-controller ready: true, restart count 0 Aug 28 03:24:01.140: INFO: Container tas-extender ready: true, restart count 0 Aug 28 03:24:01.140: INFO: kube-flannel-t9qv4 started at 2021-08-27 20:48:48 +0000 UTC (1+1 container statuses recorded) Aug 28 03:24:01.140: INFO: Init container install-cni ready: true, restart count 0 Aug 28 03:24:01.140: INFO: Container kube-flannel ready: true, restart count 2 Aug 28 03:24:01.140: INFO: cmk-init-discover-node2-l9qjd started at 2021-08-27 20:57:57 +0000 UTC (0+3 container statuses recorded) Aug 28 03:24:01.140: INFO: Container discover ready: false, restart count 0 Aug 28 03:24:01.140: INFO: Container init ready: false, restart count 0 Aug 28 03:24:01.140: INFO: Container install ready: false, restart count 0 Aug 28 03:24:01.140: INFO: node-feature-discovery-worker-54lfh started at 2021-08-27 20:55:06 +0000 UTC (0+1 container statuses recorded) Aug 28 03:24:01.140: INFO: Container nfd-worker ready: true, restart count 0 Aug 28 03:24:01.140: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-4f962 started at 2021-08-27 20:55:51 +0000 UTC (0+1 container statuses recorded) Aug 28 03:24:01.140: INFO: Container kube-sriovdp ready: true, restart count 0 W0828 03:24:01.160567 23 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Aug 28 03:24:01.202: INFO: Latency metrics for node node2 Aug 28 03:24:01.202: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-6444" for this suite. 
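
The failure block that follows ends in the generic message "timed out waiting for the condition". That string is wait.ErrWaitTimeout from k8s.io/apimachinery/pkg/util/wait, returned whenever a polled condition never reports true within its budget (5m0s here), so it identifies the wait helper rather than the root cause. Below is a minimal sketch of the polling shape, assuming client-go v0.19-era signatures; the helper name waitForPodsRunning and the 2s interval are illustrative assumptions, not the verbatim code in persistent_volumes-local.go:

    package main

    import (
        "context"
        "time"

        v1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    // waitForPodsRunning polls every 2s until all pods in ns are Running,
    // giving up after the 5m0s budget seen in the failure below. On expiry
    // it returns wait.ErrWaitTimeout, whose message is exactly
    // "timed out waiting for the condition".
    func waitForPodsRunning(c kubernetes.Interface, ns string) error {
        return wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
            pods, err := c.CoreV1().Pods(ns).List(context.TODO(), metav1.ListOptions{})
            if err != nil {
                return false, err // a hard API error aborts the wait early
            }
            for _, p := range pods.Items {
                if p.Status.Phase != v1.PodRunning {
                    return false, nil // not there yet; poll again
                }
            }
            return true, nil
        })
    }

Because the timeout error is generic, the actual cause has to be recovered from the namespace events, which the suite dumps further down.
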
• Failure [301.983 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Pods sharing a single local PV [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:614 all pods should be running [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:642 Aug 28 03:23:59.771: Some pods are not running within 5m0s Unexpected error: <*errors.errorString | 0xc0002bc200>: { s: "timed out waiting for the condition", } timed out waiting for the condition occurred /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:683 ------------------------------ {"msg":"FAILED [sig-storage] PersistentVolumes-local Pods sharing a single local PV [Serial] all pods should be running","total":20,"completed":0,"skipped":3250,"failed":1,"failures":["[sig-storage] PersistentVolumes-local Pods sharing a single local PV [Serial] all pods should be running"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] PersistentVolumes-local Stress with local volumes [Serial] should be able to process many pods and reuse local volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:517 [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 28 03:24:01.213: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:155 [BeforeEach] Stress with local volumes [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:441 STEP: Setting up 10 local volumes on node "node1" STEP: Creating tmpfs mount point on node "node1" at path "/tmp/local-volume-test-04fb6ac5-6a42-4ec8-bf20-561cb2e43d05" Aug 28 03:24:37.260: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-04fb6ac5-6a42-4ec8-bf20-561cb2e43d05" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-04fb6ac5-6a42-4ec8-bf20-561cb2e43d05" "/tmp/local-volume-test-04fb6ac5-6a42-4ec8-bf20-561cb2e43d05"] Namespace:persistent-local-volumes-test-9313 PodName:hostexec-node1-bvnrs ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true} Aug 28 03:24:37.260: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "node1" at path "/tmp/local-volume-test-b0becc07-1f86-440d-a01d-fe465804edf9" Aug 28 03:24:37.571: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-b0becc07-1f86-440d-a01d-fe465804edf9" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-b0becc07-1f86-440d-a01d-fe465804edf9" "/tmp/local-volume-test-b0becc07-1f86-440d-a01d-fe465804edf9"] Namespace:persistent-local-volumes-test-9313 
PodName:hostexec-node1-bvnrs ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true} Aug 28 03:24:37.571: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "node1" at path "/tmp/local-volume-test-c726c754-ec20-4cd1-8c33-60a21c58d774" Aug 28 03:24:37.689: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-c726c754-ec20-4cd1-8c33-60a21c58d774" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-c726c754-ec20-4cd1-8c33-60a21c58d774" "/tmp/local-volume-test-c726c754-ec20-4cd1-8c33-60a21c58d774"] Namespace:persistent-local-volumes-test-9313 PodName:hostexec-node1-bvnrs ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true} Aug 28 03:24:37.689: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "node1" at path "/tmp/local-volume-test-6df3f0ef-6515-4fb4-bdaf-e905f58e9c71" Aug 28 03:24:37.794: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-6df3f0ef-6515-4fb4-bdaf-e905f58e9c71" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-6df3f0ef-6515-4fb4-bdaf-e905f58e9c71" "/tmp/local-volume-test-6df3f0ef-6515-4fb4-bdaf-e905f58e9c71"] Namespace:persistent-local-volumes-test-9313 PodName:hostexec-node1-bvnrs ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true} Aug 28 03:24:37.794: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "node1" at path "/tmp/local-volume-test-f38d3cdb-1b21-43b5-83e3-b4b469502ddc" Aug 28 03:24:37.918: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-f38d3cdb-1b21-43b5-83e3-b4b469502ddc" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-f38d3cdb-1b21-43b5-83e3-b4b469502ddc" "/tmp/local-volume-test-f38d3cdb-1b21-43b5-83e3-b4b469502ddc"] Namespace:persistent-local-volumes-test-9313 PodName:hostexec-node1-bvnrs ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true} Aug 28 03:24:37.918: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "node1" at path "/tmp/local-volume-test-52f6223b-63c4-49fd-884b-a1f01ad95203" Aug 28 03:24:38.027: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-52f6223b-63c4-49fd-884b-a1f01ad95203" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-52f6223b-63c4-49fd-884b-a1f01ad95203" "/tmp/local-volume-test-52f6223b-63c4-49fd-884b-a1f01ad95203"] Namespace:persistent-local-volumes-test-9313 PodName:hostexec-node1-bvnrs ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true} Aug 28 03:24:38.027: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "node1" at path "/tmp/local-volume-test-d4318b55-a421-49fb-9264-0cd9d5e99cf9" Aug 28 03:24:38.140: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-d4318b55-a421-49fb-9264-0cd9d5e99cf9" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-d4318b55-a421-49fb-9264-0cd9d5e99cf9" "/tmp/local-volume-test-d4318b55-a421-49fb-9264-0cd9d5e99cf9"] Namespace:persistent-local-volumes-test-9313 PodName:hostexec-node1-bvnrs ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true 
PreserveWhitespace:true} Aug 28 03:24:38.140: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "node1" at path "/tmp/local-volume-test-54f9f91b-6316-4a76-a78c-9e10d3d01f7c" Aug 28 03:24:38.252: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-54f9f91b-6316-4a76-a78c-9e10d3d01f7c" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-54f9f91b-6316-4a76-a78c-9e10d3d01f7c" "/tmp/local-volume-test-54f9f91b-6316-4a76-a78c-9e10d3d01f7c"] Namespace:persistent-local-volumes-test-9313 PodName:hostexec-node1-bvnrs ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true} Aug 28 03:24:38.252: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "node1" at path "/tmp/local-volume-test-36d69087-e816-42bd-8eff-0d65b59f1d69" Aug 28 03:24:38.357: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-36d69087-e816-42bd-8eff-0d65b59f1d69" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-36d69087-e816-42bd-8eff-0d65b59f1d69" "/tmp/local-volume-test-36d69087-e816-42bd-8eff-0d65b59f1d69"] Namespace:persistent-local-volumes-test-9313 PodName:hostexec-node1-bvnrs ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true} Aug 28 03:24:38.357: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "node1" at path "/tmp/local-volume-test-ef55b9b9-f494-41c4-bc84-6aa289961706" Aug 28 03:24:38.464: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-ef55b9b9-f494-41c4-bc84-6aa289961706" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-ef55b9b9-f494-41c4-bc84-6aa289961706" "/tmp/local-volume-test-ef55b9b9-f494-41c4-bc84-6aa289961706"] Namespace:persistent-local-volumes-test-9313 PodName:hostexec-node1-bvnrs ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true} Aug 28 03:24:38.464: INFO: >>> kubeConfig: /root/.kube/config STEP: Setting up 10 local volumes on node "node2" STEP: Creating tmpfs mount point on node "node2" at path "/tmp/local-volume-test-1261457f-cde2-4a1c-ba70-d841ff7147a6" Aug 28 03:24:42.612: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-1261457f-cde2-4a1c-ba70-d841ff7147a6" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-1261457f-cde2-4a1c-ba70-d841ff7147a6" "/tmp/local-volume-test-1261457f-cde2-4a1c-ba70-d841ff7147a6"] Namespace:persistent-local-volumes-test-9313 PodName:hostexec-node2-mxlwb ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true} Aug 28 03:24:42.612: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "node2" at path "/tmp/local-volume-test-43dc9d1b-2905-48f4-934f-1f076df67cac" Aug 28 03:24:42.733: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-43dc9d1b-2905-48f4-934f-1f076df67cac" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-43dc9d1b-2905-48f4-934f-1f076df67cac" "/tmp/local-volume-test-43dc9d1b-2905-48f4-934f-1f076df67cac"] Namespace:persistent-local-volumes-test-9313 PodName:hostexec-node2-mxlwb ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true} Aug 28 03:24:42.733: INFO: >>> 
kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "node2" at path "/tmp/local-volume-test-8d45e0a7-e75b-4b36-9d23-88895d74a603" Aug 28 03:24:42.854: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-8d45e0a7-e75b-4b36-9d23-88895d74a603" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-8d45e0a7-e75b-4b36-9d23-88895d74a603" "/tmp/local-volume-test-8d45e0a7-e75b-4b36-9d23-88895d74a603"] Namespace:persistent-local-volumes-test-9313 PodName:hostexec-node2-mxlwb ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true} Aug 28 03:24:42.854: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "node2" at path "/tmp/local-volume-test-d9c14c17-9827-4c7a-9c66-b0abb71fcea5" Aug 28 03:24:42.982: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-d9c14c17-9827-4c7a-9c66-b0abb71fcea5" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-d9c14c17-9827-4c7a-9c66-b0abb71fcea5" "/tmp/local-volume-test-d9c14c17-9827-4c7a-9c66-b0abb71fcea5"] Namespace:persistent-local-volumes-test-9313 PodName:hostexec-node2-mxlwb ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true} Aug 28 03:24:42.982: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "node2" at path "/tmp/local-volume-test-0d2d051d-e88a-4e79-95a3-0f701919939f" Aug 28 03:24:43.097: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-0d2d051d-e88a-4e79-95a3-0f701919939f" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-0d2d051d-e88a-4e79-95a3-0f701919939f" "/tmp/local-volume-test-0d2d051d-e88a-4e79-95a3-0f701919939f"] Namespace:persistent-local-volumes-test-9313 PodName:hostexec-node2-mxlwb ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true} Aug 28 03:24:43.097: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "node2" at path "/tmp/local-volume-test-5601b49c-7378-42b9-926f-6654d177491d" Aug 28 03:24:43.214: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-5601b49c-7378-42b9-926f-6654d177491d" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-5601b49c-7378-42b9-926f-6654d177491d" "/tmp/local-volume-test-5601b49c-7378-42b9-926f-6654d177491d"] Namespace:persistent-local-volumes-test-9313 PodName:hostexec-node2-mxlwb ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true} Aug 28 03:24:43.214: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "node2" at path "/tmp/local-volume-test-814a4645-55a0-44a0-a260-611fb16b364b" Aug 28 03:24:43.355: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-814a4645-55a0-44a0-a260-611fb16b364b" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-814a4645-55a0-44a0-a260-611fb16b364b" "/tmp/local-volume-test-814a4645-55a0-44a0-a260-611fb16b364b"] Namespace:persistent-local-volumes-test-9313 PodName:hostexec-node2-mxlwb ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true} Aug 28 03:24:43.355: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "node2" at path 
"/tmp/local-volume-test-f54bb368-72ff-49b2-ac8b-6973173a771a" Aug 28 03:24:43.473: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-f54bb368-72ff-49b2-ac8b-6973173a771a" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-f54bb368-72ff-49b2-ac8b-6973173a771a" "/tmp/local-volume-test-f54bb368-72ff-49b2-ac8b-6973173a771a"] Namespace:persistent-local-volumes-test-9313 PodName:hostexec-node2-mxlwb ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true} Aug 28 03:24:43.473: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "node2" at path "/tmp/local-volume-test-80ca0d4a-7a17-4798-a75d-bf72963835b4" Aug 28 03:24:43.587: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-80ca0d4a-7a17-4798-a75d-bf72963835b4" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-80ca0d4a-7a17-4798-a75d-bf72963835b4" "/tmp/local-volume-test-80ca0d4a-7a17-4798-a75d-bf72963835b4"] Namespace:persistent-local-volumes-test-9313 PodName:hostexec-node2-mxlwb ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true} Aug 28 03:24:43.587: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "node2" at path "/tmp/local-volume-test-2220106b-0637-4186-b13a-4aac8c6da0ac" Aug 28 03:24:43.720: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-2220106b-0637-4186-b13a-4aac8c6da0ac" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-2220106b-0637-4186-b13a-4aac8c6da0ac" "/tmp/local-volume-test-2220106b-0637-4186-b13a-4aac8c6da0ac"] Namespace:persistent-local-volumes-test-9313 PodName:hostexec-node2-mxlwb ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true} Aug 28 03:24:43.720: INFO: >>> kubeConfig: /root/.kube/config STEP: Create 20 PVs STEP: Start a goroutine to recycle unbound PVs [It] should be able to process many pods and reuse local volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:517 STEP: Creating 7 pods periodically STEP: Waiting for all pods to complete successfully STEP: Delete "local-pvgmj49" and create a new PV for same local volume storage Aug 28 03:29:44.029: FAIL: some pods failed to complete within 5m0s Unexpected error: <*errors.errorString | 0xc0002bc200>: { s: "timed out waiting for the condition", } timed out waiting for the condition occurred Full Stack Trace k8s.io/kubernetes/test/e2e/storage.glob..func20.6.3() /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:610 +0x42a k8s.io/kubernetes/test/e2e.RunE2ETests(0xc002f9fb00) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x345 k8s.io/kubernetes/test/e2e.TestE2E(0xc002f9fb00) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:145 +0x2b testing.tRunner(0xc002f9fb00, 0x4dec428) /usr/local/go/src/testing/testing.go:1123 +0xef created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1168 +0x2b3 Aug 28 03:29:44.030: INFO: Deleting pod pod-0b612b33-8ffb-4d64-b727-05dcb4040261 Aug 28 03:29:44.036: INFO: Deleting PersistentVolumeClaim "pvc-hprtx" Aug 28 03:29:44.040: INFO: Deleting PersistentVolumeClaim "pvc-5lg7z" Aug 28 03:29:44.043: INFO: Deleting PersistentVolumeClaim 
"pvc-slrfw" Aug 28 03:29:44.048: INFO: Deleting pod pod-f7a4f75e-7b5c-4a31-8a70-3c2f1794df39 Aug 28 03:29:44.053: INFO: Deleting PersistentVolumeClaim "pvc-s827t" Aug 28 03:29:44.058: INFO: Deleting PersistentVolumeClaim "pvc-dvm7x" Aug 28 03:29:44.062: INFO: Deleting PersistentVolumeClaim "pvc-qs5x7" Aug 28 03:29:44.065: INFO: Deleting pod pod-19a82adf-b74d-4070-adc6-0bff5e2608a5 Aug 28 03:29:44.069: INFO: Deleting PersistentVolumeClaim "pvc-klw87" Aug 28 03:29:44.073: INFO: Deleting PersistentVolumeClaim "pvc-gm2pl" Aug 28 03:29:44.076: INFO: Deleting PersistentVolumeClaim "pvc-5dk9s" Aug 28 03:29:44.080: INFO: Deleting pod pod-bc4d94a3-d01a-495b-a191-0bbbf6278745 Aug 28 03:29:44.084: INFO: Deleting PersistentVolumeClaim "pvc-wd9rh" Aug 28 03:29:44.087: INFO: Deleting PersistentVolumeClaim "pvc-xz8bj" Aug 28 03:29:44.090: INFO: Deleting PersistentVolumeClaim "pvc-wftxx" Aug 28 03:29:44.094: INFO: Deleting pod pod-b139fde4-e05d-4abf-9b7e-548002ce4169 Aug 28 03:29:44.098: INFO: Deleting PersistentVolumeClaim "pvc-fps6q" Aug 28 03:29:44.102: INFO: Deleting PersistentVolumeClaim "pvc-nkgq8" Aug 28 03:29:44.106: INFO: Deleting PersistentVolumeClaim "pvc-fgdv7" Aug 28 03:29:44.109: INFO: Deleting pod pod-9184f8b9-11fe-4b3a-9aec-9e784f6e1376 Aug 28 03:29:44.114: INFO: Deleting PersistentVolumeClaim "pvc-hsqrz" Aug 28 03:29:44.117: INFO: Deleting PersistentVolumeClaim "pvc-tgnrn" Aug 28 03:29:44.120: INFO: Deleting PersistentVolumeClaim "pvc-v5xsr" Aug 28 03:29:44.124: INFO: Deleting pod pod-ca64715a-5bc1-485c-bd3f-4a129e59e031 Aug 28 03:29:44.128: INFO: Deleting PersistentVolumeClaim "pvc-l96m6" Aug 28 03:29:44.132: INFO: Deleting PersistentVolumeClaim "pvc-8ds87" Aug 28 03:29:44.136: INFO: Deleting PersistentVolumeClaim "pvc-4l9g9" [AfterEach] Stress with local volumes [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:505 STEP: Stop and wait for recycle goroutine to finish STEP: Clean all PVs STEP: Cleaning up 10 local volumes on node "node1" STEP: Cleaning up PVC and PV Aug 28 03:29:44.141: INFO: pvc is nil Aug 28 03:29:44.141: INFO: Deleting PersistentVolume "local-pvlnfxl" STEP: Cleaning up PVC and PV Aug 28 03:29:44.145: INFO: pvc is nil Aug 28 03:29:44.145: INFO: Deleting PersistentVolume "local-pvmkqlm" STEP: Cleaning up PVC and PV Aug 28 03:29:44.149: INFO: pvc is nil Aug 28 03:29:44.149: INFO: Deleting PersistentVolume "local-pv6c66j" STEP: Cleaning up PVC and PV Aug 28 03:29:44.153: INFO: pvc is nil Aug 28 03:29:44.153: INFO: Deleting PersistentVolume "local-pvpfh7c" STEP: Cleaning up PVC and PV Aug 28 03:29:44.157: INFO: pvc is nil Aug 28 03:29:44.157: INFO: Deleting PersistentVolume "local-pvcdfqw" STEP: Cleaning up PVC and PV Aug 28 03:29:44.161: INFO: pvc is nil Aug 28 03:29:44.161: INFO: Deleting PersistentVolume "local-pvcmrqn" STEP: Cleaning up PVC and PV Aug 28 03:29:44.165: INFO: pvc is nil Aug 28 03:29:44.165: INFO: Deleting PersistentVolume "local-pvdsdxm" STEP: Cleaning up PVC and PV Aug 28 03:29:44.170: INFO: pvc is nil Aug 28 03:29:44.170: INFO: Deleting PersistentVolume "local-pv6b8r7" STEP: Cleaning up PVC and PV Aug 28 03:29:44.174: INFO: pvc is nil Aug 28 03:29:44.174: INFO: Deleting PersistentVolume "local-pvxs782" STEP: Cleaning up PVC and PV Aug 28 03:29:44.177: INFO: pvc is nil Aug 28 03:29:44.177: INFO: Deleting PersistentVolume "local-pv78bhp" STEP: Unmount tmpfs mount point on node "node1" at path "/tmp/local-volume-test-04fb6ac5-6a42-4ec8-bf20-561cb2e43d05" Aug 28 03:29:44.182: 
INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-04fb6ac5-6a42-4ec8-bf20-561cb2e43d05"] Namespace:persistent-local-volumes-test-9313 PodName:hostexec-node1-bvnrs ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true} Aug 28 03:29:44.182: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Aug 28 03:29:44.919: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-04fb6ac5-6a42-4ec8-bf20-561cb2e43d05] Namespace:persistent-local-volumes-test-9313 PodName:hostexec-node1-bvnrs ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true} Aug 28 03:29:44.919: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "node1" at path "/tmp/local-volume-test-b0becc07-1f86-440d-a01d-fe465804edf9" Aug 28 03:29:45.356: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-b0becc07-1f86-440d-a01d-fe465804edf9"] Namespace:persistent-local-volumes-test-9313 PodName:hostexec-node1-bvnrs ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true} Aug 28 03:29:45.356: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Aug 28 03:29:45.686: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-b0becc07-1f86-440d-a01d-fe465804edf9] Namespace:persistent-local-volumes-test-9313 PodName:hostexec-node1-bvnrs ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true} Aug 28 03:29:45.686: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "node1" at path "/tmp/local-volume-test-c726c754-ec20-4cd1-8c33-60a21c58d774" Aug 28 03:29:45.807: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-c726c754-ec20-4cd1-8c33-60a21c58d774"] Namespace:persistent-local-volumes-test-9313 PodName:hostexec-node1-bvnrs ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true} Aug 28 03:29:45.807: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Aug 28 03:29:46.048: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-c726c754-ec20-4cd1-8c33-60a21c58d774] Namespace:persistent-local-volumes-test-9313 PodName:hostexec-node1-bvnrs ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true} Aug 28 03:29:46.049: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "node1" at path "/tmp/local-volume-test-6df3f0ef-6515-4fb4-bdaf-e905f58e9c71" Aug 28 03:29:46.192: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-6df3f0ef-6515-4fb4-bdaf-e905f58e9c71"] Namespace:persistent-local-volumes-test-9313 PodName:hostexec-node1-bvnrs ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true} Aug 28 03:29:46.192: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Aug 28 03:29:46.320: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-6df3f0ef-6515-4fb4-bdaf-e905f58e9c71] Namespace:persistent-local-volumes-test-9313 
PodName:hostexec-node1-bvnrs ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true} Aug 28 03:29:46.320: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "node1" at path "/tmp/local-volume-test-f38d3cdb-1b21-43b5-83e3-b4b469502ddc" Aug 28 03:29:46.449: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-f38d3cdb-1b21-43b5-83e3-b4b469502ddc"] Namespace:persistent-local-volumes-test-9313 PodName:hostexec-node1-bvnrs ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true} Aug 28 03:29:46.449: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Aug 28 03:29:46.567: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-f38d3cdb-1b21-43b5-83e3-b4b469502ddc] Namespace:persistent-local-volumes-test-9313 PodName:hostexec-node1-bvnrs ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true} Aug 28 03:29:46.567: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "node1" at path "/tmp/local-volume-test-52f6223b-63c4-49fd-884b-a1f01ad95203" Aug 28 03:29:46.729: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-52f6223b-63c4-49fd-884b-a1f01ad95203"] Namespace:persistent-local-volumes-test-9313 PodName:hostexec-node1-bvnrs ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true} Aug 28 03:29:46.729: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Aug 28 03:29:46.898: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-52f6223b-63c4-49fd-884b-a1f01ad95203] Namespace:persistent-local-volumes-test-9313 PodName:hostexec-node1-bvnrs ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true} Aug 28 03:29:46.898: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "node1" at path "/tmp/local-volume-test-d4318b55-a421-49fb-9264-0cd9d5e99cf9" Aug 28 03:29:47.016: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-d4318b55-a421-49fb-9264-0cd9d5e99cf9"] Namespace:persistent-local-volumes-test-9313 PodName:hostexec-node1-bvnrs ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true} Aug 28 03:29:47.016: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Aug 28 03:29:47.142: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-d4318b55-a421-49fb-9264-0cd9d5e99cf9] Namespace:persistent-local-volumes-test-9313 PodName:hostexec-node1-bvnrs ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true} Aug 28 03:29:47.142: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "node1" at path "/tmp/local-volume-test-54f9f91b-6316-4a76-a78c-9e10d3d01f7c" Aug 28 03:29:47.255: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-54f9f91b-6316-4a76-a78c-9e10d3d01f7c"] Namespace:persistent-local-volumes-test-9313 PodName:hostexec-node1-bvnrs ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true} 
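
Each ExecWithOptions entry above and below runs inside a privileged hostexec pod and uses nsenter to join the host's mount namespace, which is how the suite can mount and remove tmpfs directories under /tmp on the node itself. A hedged sketch of how those command vectors are composed follows; hostCmd and the placeholder path are assumptions for illustration, not the e2e framework's actual helpers, and the harness issues umount and rm -r as two separate execs rather than one shell line:

    package main

    import "fmt"

    // hostCmd wraps a shell snippet so it executes in the host's mount
    // namespace from inside a privileged hostexec pod (assumed helper).
    func hostCmd(sh string) []string {
        return []string{"nsenter", "--mount=/rootfs/proc/1/ns/mnt", "--", "sh", "-c", sh}
    }

    func main() {
        dir := "/tmp/local-volume-test-<uuid>" // real runs substitute a fresh UUID
        // Setup: mkdir plus a 10 MiB tmpfs mount, matching the log's command line.
        setup := hostCmd(fmt.Sprintf(`mkdir -p %q && mount -t tmpfs -o size=10m tmpfs-%q %q`, dir, dir, dir))
        // Teardown: unmount, then delete the directory (combined here for brevity).
        teardown := hostCmd(fmt.Sprintf(`umount %q && rm -r %q`, dir, dir))
        fmt.Println(setup)
        fmt.Println(teardown)
    }
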
Aug 28 03:29:47.255: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Aug 28 03:29:47.625: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-54f9f91b-6316-4a76-a78c-9e10d3d01f7c] Namespace:persistent-local-volumes-test-9313 PodName:hostexec-node1-bvnrs ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true} Aug 28 03:29:47.625: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "node1" at path "/tmp/local-volume-test-36d69087-e816-42bd-8eff-0d65b59f1d69" Aug 28 03:29:47.764: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-36d69087-e816-42bd-8eff-0d65b59f1d69"] Namespace:persistent-local-volumes-test-9313 PodName:hostexec-node1-bvnrs ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true} Aug 28 03:29:47.764: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Aug 28 03:29:47.888: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-36d69087-e816-42bd-8eff-0d65b59f1d69] Namespace:persistent-local-volumes-test-9313 PodName:hostexec-node1-bvnrs ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true} Aug 28 03:29:47.888: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "node1" at path "/tmp/local-volume-test-ef55b9b9-f494-41c4-bc84-6aa289961706" Aug 28 03:29:48.008: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-ef55b9b9-f494-41c4-bc84-6aa289961706"] Namespace:persistent-local-volumes-test-9313 PodName:hostexec-node1-bvnrs ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true} Aug 28 03:29:48.008: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Aug 28 03:29:48.288: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-ef55b9b9-f494-41c4-bc84-6aa289961706] Namespace:persistent-local-volumes-test-9313 PodName:hostexec-node1-bvnrs ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true} Aug 28 03:29:48.288: INFO: >>> kubeConfig: /root/.kube/config STEP: Cleaning up 10 local volumes on node "node2" STEP: Cleaning up PVC and PV Aug 28 03:29:48.438: INFO: pvc is nil Aug 28 03:29:48.438: INFO: Deleting PersistentVolume "local-pvlx9s8" STEP: Cleaning up PVC and PV Aug 28 03:29:48.444: INFO: pvc is nil Aug 28 03:29:48.444: INFO: Deleting PersistentVolume "local-pvz8jrh" STEP: Cleaning up PVC and PV Aug 28 03:29:48.448: INFO: pvc is nil Aug 28 03:29:48.448: INFO: Deleting PersistentVolume "local-pvqhvnp" STEP: Cleaning up PVC and PV Aug 28 03:29:48.452: INFO: pvc is nil Aug 28 03:29:48.452: INFO: Deleting PersistentVolume "local-pvcqrw6" STEP: Cleaning up PVC and PV Aug 28 03:29:48.455: INFO: pvc is nil Aug 28 03:29:48.455: INFO: Deleting PersistentVolume "local-pvbzrnj" STEP: Cleaning up PVC and PV Aug 28 03:29:48.460: INFO: pvc is nil Aug 28 03:29:48.460: INFO: Deleting PersistentVolume "local-pvgjtmr" STEP: Cleaning up PVC and PV Aug 28 03:29:48.464: INFO: pvc is nil Aug 28 03:29:48.464: INFO: Deleting PersistentVolume "local-pv5jglq" STEP: Cleaning up PVC and PV Aug 28 03:29:48.467: INFO: pvc is nil Aug 28 03:29:48.467: INFO: Deleting PersistentVolume 
"local-pvx7x4l" STEP: Cleaning up PVC and PV Aug 28 03:29:48.471: INFO: pvc is nil Aug 28 03:29:48.471: INFO: Deleting PersistentVolume "local-pvk8hss" STEP: Cleaning up PVC and PV Aug 28 03:29:48.474: INFO: pvc is nil Aug 28 03:29:48.474: INFO: Deleting PersistentVolume "local-pvpwcsg" STEP: Unmount tmpfs mount point on node "node2" at path "/tmp/local-volume-test-1261457f-cde2-4a1c-ba70-d841ff7147a6" Aug 28 03:29:48.477: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-1261457f-cde2-4a1c-ba70-d841ff7147a6"] Namespace:persistent-local-volumes-test-9313 PodName:hostexec-node2-mxlwb ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true} Aug 28 03:29:48.477: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Aug 28 03:29:48.613: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-1261457f-cde2-4a1c-ba70-d841ff7147a6] Namespace:persistent-local-volumes-test-9313 PodName:hostexec-node2-mxlwb ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true} Aug 28 03:29:48.613: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "node2" at path "/tmp/local-volume-test-43dc9d1b-2905-48f4-934f-1f076df67cac" Aug 28 03:29:48.719: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-43dc9d1b-2905-48f4-934f-1f076df67cac"] Namespace:persistent-local-volumes-test-9313 PodName:hostexec-node2-mxlwb ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true} Aug 28 03:29:48.719: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Aug 28 03:29:48.840: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-43dc9d1b-2905-48f4-934f-1f076df67cac] Namespace:persistent-local-volumes-test-9313 PodName:hostexec-node2-mxlwb ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true} Aug 28 03:29:48.840: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "node2" at path "/tmp/local-volume-test-8d45e0a7-e75b-4b36-9d23-88895d74a603" Aug 28 03:29:48.945: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-8d45e0a7-e75b-4b36-9d23-88895d74a603"] Namespace:persistent-local-volumes-test-9313 PodName:hostexec-node2-mxlwb ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true} Aug 28 03:29:48.945: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Aug 28 03:29:49.057: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-8d45e0a7-e75b-4b36-9d23-88895d74a603] Namespace:persistent-local-volumes-test-9313 PodName:hostexec-node2-mxlwb ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true} Aug 28 03:29:49.058: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "node2" at path "/tmp/local-volume-test-d9c14c17-9827-4c7a-9c66-b0abb71fcea5" Aug 28 03:29:49.164: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-d9c14c17-9827-4c7a-9c66-b0abb71fcea5"] Namespace:persistent-local-volumes-test-9313 PodName:hostexec-node2-mxlwb 
ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true} Aug 28 03:29:49.164: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Aug 28 03:29:49.291: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-d9c14c17-9827-4c7a-9c66-b0abb71fcea5] Namespace:persistent-local-volumes-test-9313 PodName:hostexec-node2-mxlwb ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true} Aug 28 03:29:49.291: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "node2" at path "/tmp/local-volume-test-0d2d051d-e88a-4e79-95a3-0f701919939f" Aug 28 03:29:49.402: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-0d2d051d-e88a-4e79-95a3-0f701919939f"] Namespace:persistent-local-volumes-test-9313 PodName:hostexec-node2-mxlwb ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true} Aug 28 03:29:49.402: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Aug 28 03:29:49.517: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-0d2d051d-e88a-4e79-95a3-0f701919939f] Namespace:persistent-local-volumes-test-9313 PodName:hostexec-node2-mxlwb ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true} Aug 28 03:29:49.517: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "node2" at path "/tmp/local-volume-test-5601b49c-7378-42b9-926f-6654d177491d" Aug 28 03:29:49.626: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-5601b49c-7378-42b9-926f-6654d177491d"] Namespace:persistent-local-volumes-test-9313 PodName:hostexec-node2-mxlwb ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true} Aug 28 03:29:49.626: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Aug 28 03:29:49.739: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-5601b49c-7378-42b9-926f-6654d177491d] Namespace:persistent-local-volumes-test-9313 PodName:hostexec-node2-mxlwb ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true} Aug 28 03:29:49.739: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "node2" at path "/tmp/local-volume-test-814a4645-55a0-44a0-a260-611fb16b364b" Aug 28 03:29:49.841: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-814a4645-55a0-44a0-a260-611fb16b364b"] Namespace:persistent-local-volumes-test-9313 PodName:hostexec-node2-mxlwb ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true} Aug 28 03:29:49.841: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Aug 28 03:29:49.962: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-814a4645-55a0-44a0-a260-611fb16b364b] Namespace:persistent-local-volumes-test-9313 PodName:hostexec-node2-mxlwb ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true} Aug 28 03:29:49.962: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "node2" at path 
"/tmp/local-volume-test-f54bb368-72ff-49b2-ac8b-6973173a771a" Aug 28 03:29:50.081: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-f54bb368-72ff-49b2-ac8b-6973173a771a"] Namespace:persistent-local-volumes-test-9313 PodName:hostexec-node2-mxlwb ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true} Aug 28 03:29:50.081: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Aug 28 03:29:50.195: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-f54bb368-72ff-49b2-ac8b-6973173a771a] Namespace:persistent-local-volumes-test-9313 PodName:hostexec-node2-mxlwb ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true} Aug 28 03:29:50.195: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "node2" at path "/tmp/local-volume-test-80ca0d4a-7a17-4798-a75d-bf72963835b4" Aug 28 03:29:50.312: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-80ca0d4a-7a17-4798-a75d-bf72963835b4"] Namespace:persistent-local-volumes-test-9313 PodName:hostexec-node2-mxlwb ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true} Aug 28 03:29:50.312: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Aug 28 03:29:50.425: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-80ca0d4a-7a17-4798-a75d-bf72963835b4] Namespace:persistent-local-volumes-test-9313 PodName:hostexec-node2-mxlwb ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true} Aug 28 03:29:50.425: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "node2" at path "/tmp/local-volume-test-2220106b-0637-4186-b13a-4aac8c6da0ac" Aug 28 03:29:50.538: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-2220106b-0637-4186-b13a-4aac8c6da0ac"] Namespace:persistent-local-volumes-test-9313 PodName:hostexec-node2-mxlwb ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true} Aug 28 03:29:50.538: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory Aug 28 03:29:50.660: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-2220106b-0637-4186-b13a-4aac8c6da0ac] Namespace:persistent-local-volumes-test-9313 PodName:hostexec-node2-mxlwb ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true} Aug 28 03:29:50.660: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 STEP: Collecting events from namespace "persistent-local-volumes-test-9313". STEP: Found 76 events. 
Aug 28 03:29:50.776: INFO: At 2021-08-28 03:24:01 +0000 UTC - event for hostexec-node1-bvnrs: {default-scheduler } Scheduled: Successfully assigned persistent-local-volumes-test-9313/hostexec-node1-bvnrs to node1 Aug 28 03:29:50.776: INFO: At 2021-08-28 03:24:30 +0000 UTC - event for hostexec-node1-bvnrs: {kubelet node1} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.20" Aug 28 03:29:50.776: INFO: At 2021-08-28 03:24:31 +0000 UTC - event for hostexec-node1-bvnrs: {kubelet node1} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.20" in 565.50694ms Aug 28 03:29:50.776: INFO: At 2021-08-28 03:24:31 +0000 UTC - event for hostexec-node1-bvnrs: {kubelet node1} Started: Started container agnhost-container Aug 28 03:29:50.776: INFO: At 2021-08-28 03:24:31 +0000 UTC - event for hostexec-node1-bvnrs: {kubelet node1} Created: Created container agnhost-container Aug 28 03:29:50.776: INFO: At 2021-08-28 03:24:38 +0000 UTC - event for hostexec-node2-mxlwb: {default-scheduler } Scheduled: Successfully assigned persistent-local-volumes-test-9313/hostexec-node2-mxlwb to node2 Aug 28 03:29:50.776: INFO: At 2021-08-28 03:24:39 +0000 UTC - event for hostexec-node2-mxlwb: {kubelet node2} Started: Started container agnhost-container Aug 28 03:29:50.776: INFO: At 2021-08-28 03:24:39 +0000 UTC - event for hostexec-node2-mxlwb: {kubelet node2} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.20" Aug 28 03:29:50.776: INFO: At 2021-08-28 03:24:39 +0000 UTC - event for hostexec-node2-mxlwb: {kubelet node2} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.20" in 463.21804ms Aug 28 03:29:50.776: INFO: At 2021-08-28 03:24:39 +0000 UTC - event for hostexec-node2-mxlwb: {kubelet node2} Created: Created container agnhost-container Aug 28 03:29:50.776: INFO: At 2021-08-28 03:24:43 +0000 UTC - event for pvc-5dk9s: {persistentvolume-controller } WaitForFirstConsumer: waiting for first consumer to be created before binding Aug 28 03:29:50.776: INFO: At 2021-08-28 03:24:43 +0000 UTC - event for pvc-gm2pl: {persistentvolume-controller } WaitForFirstConsumer: waiting for first consumer to be created before binding Aug 28 03:29:50.776: INFO: At 2021-08-28 03:24:43 +0000 UTC - event for pvc-klw87: {persistentvolume-controller } WaitForFirstConsumer: waiting for first consumer to be created before binding Aug 28 03:29:50.776: INFO: At 2021-08-28 03:24:43 +0000 UTC - event for pvc-wd9rh: {persistentvolume-controller } WaitForFirstConsumer: waiting for first consumer to be created before binding Aug 28 03:29:50.776: INFO: At 2021-08-28 03:24:43 +0000 UTC - event for pvc-wftxx: {persistentvolume-controller } WaitForPodScheduled: waiting for pod pod-bc4d94a3-d01a-495b-a191-0bbbf6278745 to be scheduled Aug 28 03:29:50.776: INFO: At 2021-08-28 03:24:43 +0000 UTC - event for pvc-xz8bj: {persistentvolume-controller } WaitForFirstConsumer: waiting for first consumer to be created before binding Aug 28 03:29:50.776: INFO: At 2021-08-28 03:24:44 +0000 UTC - event for pod-19a82adf-b74d-4070-adc6-0bff5e2608a5: {default-scheduler } Scheduled: Successfully assigned persistent-local-volumes-test-9313/pod-19a82adf-b74d-4070-adc6-0bff5e2608a5 to node2 Aug 28 03:29:50.776: INFO: At 2021-08-28 03:24:44 +0000 UTC - event for pod-b139fde4-e05d-4abf-9b7e-548002ce4169: {default-scheduler } Scheduled: Successfully assigned persistent-local-volumes-test-9313/pod-b139fde4-e05d-4abf-9b7e-548002ce4169 to node2 Aug 28 03:29:50.776: INFO: At 2021-08-28 03:24:44 +0000 UTC - event for 
pod-f7a4f75e-7b5c-4a31-8a70-3c2f1794df39: {default-scheduler } FailedScheduling: 0/5 nodes are available: 2 node(s) didn't find available persistent volumes to bind, 3 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate. Aug 28 03:29:50.776: INFO: At 2021-08-28 03:24:45 +0000 UTC - event for pod-9184f8b9-11fe-4b3a-9aec-9e784f6e1376: {default-scheduler } Scheduled: Successfully assigned persistent-local-volumes-test-9313/pod-9184f8b9-11fe-4b3a-9aec-9e784f6e1376 to node1 Aug 28 03:29:50.776: INFO: At 2021-08-28 03:24:45 +0000 UTC - event for pod-bc4d94a3-d01a-495b-a191-0bbbf6278745: {default-scheduler } Scheduled: Successfully assigned persistent-local-volumes-test-9313/pod-bc4d94a3-d01a-495b-a191-0bbbf6278745 to node2 Aug 28 03:29:50.776: INFO: At 2021-08-28 03:24:45 +0000 UTC - event for pod-ca64715a-5bc1-485c-bd3f-4a129e59e031: {default-scheduler } Scheduled: Successfully assigned persistent-local-volumes-test-9313/pod-ca64715a-5bc1-485c-bd3f-4a129e59e031 to node1 Aug 28 03:29:50.776: INFO: At 2021-08-28 03:24:46 +0000 UTC - event for pvc-dvm7x: {persistentvolume-controller } WaitForPodScheduled: waiting for pod pod-f7a4f75e-7b5c-4a31-8a70-3c2f1794df39 to be scheduled Aug 28 03:29:50.776: INFO: At 2021-08-28 03:24:46 +0000 UTC - event for pvc-qs5x7: {persistentvolume-controller } WaitForPodScheduled: waiting for pod pod-f7a4f75e-7b5c-4a31-8a70-3c2f1794df39 to be scheduled Aug 28 03:29:50.776: INFO: At 2021-08-28 03:24:46 +0000 UTC - event for pvc-s827t: {persistentvolume-controller } WaitForPodScheduled: waiting for pod pod-f7a4f75e-7b5c-4a31-8a70-3c2f1794df39 to be scheduled Aug 28 03:29:50.776: INFO: At 2021-08-28 03:24:47 +0000 UTC - event for pod-0b612b33-8ffb-4d64-b727-05dcb4040261: {default-scheduler } Scheduled: Successfully assigned persistent-local-volumes-test-9313/pod-0b612b33-8ffb-4d64-b727-05dcb4040261 to node1 Aug 28 03:29:50.776: INFO: At 2021-08-28 03:24:47 +0000 UTC - event for pod-19a82adf-b74d-4070-adc6-0bff5e2608a5: {kubelet node2} Pulling: Pulling image "docker.io/library/busybox:1.29" Aug 28 03:29:50.776: INFO: At 2021-08-28 03:24:47 +0000 UTC - event for pod-19a82adf-b74d-4070-adc6-0bff5e2608a5: {multus } AddedInterface: Add eth0 [10.244.4.240/24] Aug 28 03:29:50.776: INFO: At 2021-08-28 03:24:48 +0000 UTC - event for pod-19a82adf-b74d-4070-adc6-0bff5e2608a5: {kubelet node2} Failed: Error: ErrImagePull Aug 28 03:29:50.776: INFO: At 2021-08-28 03:24:48 +0000 UTC - event for pod-19a82adf-b74d-4070-adc6-0bff5e2608a5: {kubelet node2} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = toomanyrequests: You have reached your pull rate limit. 
You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
Aug 28 03:29:50.776: INFO: At 2021-08-28 03:24:48 +0000 UTC - event for pod-b139fde4-e05d-4abf-9b7e-548002ce4169: {multus } AddedInterface: Add eth0 [10.244.4.241/24]
Aug 28 03:29:50.776: INFO: At 2021-08-28 03:24:48 +0000 UTC - event for pod-b139fde4-e05d-4abf-9b7e-548002ce4169: {kubelet node2} Pulling: Pulling image "docker.io/library/busybox:1.29"
Aug 28 03:29:50.776: INFO: At 2021-08-28 03:24:49 +0000 UTC - event for pod-19a82adf-b74d-4070-adc6-0bff5e2608a5: {kubelet node2} BackOff: Back-off pulling image "docker.io/library/busybox:1.29"
Aug 28 03:29:50.776: INFO: At 2021-08-28 03:24:49 +0000 UTC - event for pod-19a82adf-b74d-4070-adc6-0bff5e2608a5: {kubelet node2} Failed: Error: ImagePullBackOff
Aug 28 03:29:50.776: INFO: At 2021-08-28 03:24:49 +0000 UTC - event for pod-b139fde4-e05d-4abf-9b7e-548002ce4169: {kubelet node2} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
Aug 28 03:29:50.777: INFO: At 2021-08-28 03:24:49 +0000 UTC - event for pod-b139fde4-e05d-4abf-9b7e-548002ce4169: {kubelet node2} Failed: Error: ErrImagePull
Aug 28 03:29:50.777: INFO: At 2021-08-28 03:24:49 +0000 UTC - event for pod-ca64715a-5bc1-485c-bd3f-4a129e59e031: {multus } AddedInterface: Add eth0 [10.244.3.50/24]
Aug 28 03:29:50.777: INFO: At 2021-08-28 03:24:50 +0000 UTC - event for pod-b139fde4-e05d-4abf-9b7e-548002ce4169: {kubelet node2} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Aug 28 03:29:50.777: INFO: At 2021-08-28 03:24:50 +0000 UTC - event for pod-bc4d94a3-d01a-495b-a191-0bbbf6278745: {kubelet node2} Pulling: Pulling image "docker.io/library/busybox:1.29"
Aug 28 03:29:50.777: INFO: At 2021-08-28 03:24:50 +0000 UTC - event for pod-bc4d94a3-d01a-495b-a191-0bbbf6278745: {multus } AddedInterface: Add eth0 [10.244.4.242/24]
Aug 28 03:29:50.777: INFO: At 2021-08-28 03:24:50 +0000 UTC - event for pod-ca64715a-5bc1-485c-bd3f-4a129e59e031: {kubelet node1} Failed: Error: ErrImagePull
Aug 28 03:29:50.777: INFO: At 2021-08-28 03:24:50 +0000 UTC - event for pod-ca64715a-5bc1-485c-bd3f-4a129e59e031: {kubelet node1} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
Aug 28 03:29:50.777: INFO: At 2021-08-28 03:24:50 +0000 UTC - event for pod-ca64715a-5bc1-485c-bd3f-4a129e59e031: {kubelet node1} Pulling: Pulling image "docker.io/library/busybox:1.29"
Aug 28 03:29:50.777: INFO: At 2021-08-28 03:24:51 +0000 UTC - event for pod-bc4d94a3-d01a-495b-a191-0bbbf6278745: {kubelet node2} Failed: Error: ErrImagePull
Aug 28 03:29:50.777: INFO: At 2021-08-28 03:24:51 +0000 UTC - event for pod-bc4d94a3-d01a-495b-a191-0bbbf6278745: {kubelet node2} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
Aug 28 03:29:50.777: INFO: At 2021-08-28 03:24:51 +0000 UTC - event for pod-bc4d94a3-d01a-495b-a191-0bbbf6278745: {kubelet node2} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Aug 28 03:29:50.777: INFO: At 2021-08-28 03:24:51 +0000 UTC - event for pod-ca64715a-5bc1-485c-bd3f-4a129e59e031: {kubelet node1} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Aug 28 03:29:50.777: INFO: At 2021-08-28 03:24:52 +0000 UTC - event for pod-9184f8b9-11fe-4b3a-9aec-9e784f6e1376: {kubelet node1} Pulling: Pulling image "docker.io/library/busybox:1.29"
Aug 28 03:29:50.777: INFO: At 2021-08-28 03:24:52 +0000 UTC - event for pod-9184f8b9-11fe-4b3a-9aec-9e784f6e1376: {multus } AddedInterface: Add eth0 [10.244.3.51/24]
Aug 28 03:29:50.777: INFO: At 2021-08-28 03:24:52 +0000 UTC - event for pod-b139fde4-e05d-4abf-9b7e-548002ce4169: {kubelet node2} Failed: Error: ImagePullBackOff
Aug 28 03:29:50.777: INFO: At 2021-08-28 03:24:52 +0000 UTC - event for pod-b139fde4-e05d-4abf-9b7e-548002ce4169: {kubelet node2} BackOff: Back-off pulling image "docker.io/library/busybox:1.29"
Aug 28 03:29:50.777: INFO: At 2021-08-28 03:24:52 +0000 UTC - event for pod-b139fde4-e05d-4abf-9b7e-548002ce4169: {multus } AddedInterface: Add eth0 [10.244.4.243/24]
Aug 28 03:29:50.777: INFO: At 2021-08-28 03:24:53 +0000 UTC - event for pod-9184f8b9-11fe-4b3a-9aec-9e784f6e1376: {kubelet node1} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
Aug 28 03:29:50.777: INFO: At 2021-08-28 03:24:53 +0000 UTC - event for pod-9184f8b9-11fe-4b3a-9aec-9e784f6e1376: {kubelet node1} Failed: Error: ErrImagePull
Aug 28 03:29:50.777: INFO: At 2021-08-28 03:24:54 +0000 UTC - event for pod-9184f8b9-11fe-4b3a-9aec-9e784f6e1376: {kubelet node1} Failed: Error: ImagePullBackOff
Aug 28 03:29:50.777: INFO: At 2021-08-28 03:24:54 +0000 UTC - event for pod-9184f8b9-11fe-4b3a-9aec-9e784f6e1376: {kubelet node1} BackOff: Back-off pulling image "docker.io/library/busybox:1.29"
Aug 28 03:29:50.777: INFO: At 2021-08-28 03:24:54 +0000 UTC - event for pod-bc4d94a3-d01a-495b-a191-0bbbf6278745: {kubelet node2} Failed: Error: ImagePullBackOff
Aug 28 03:29:50.777: INFO: At 2021-08-28 03:24:54 +0000 UTC - event for pod-bc4d94a3-d01a-495b-a191-0bbbf6278745: {kubelet node2} BackOff: Back-off pulling image "docker.io/library/busybox:1.29"
Aug 28 03:29:50.777: INFO: At 2021-08-28 03:24:54 +0000 UTC - event for pod-bc4d94a3-d01a-495b-a191-0bbbf6278745: {multus } AddedInterface: Add eth0 [10.244.4.244/24]
Aug 28 03:29:50.777: INFO: At 2021-08-28 03:24:54 +0000 UTC - event for pod-ca64715a-5bc1-485c-bd3f-4a129e59e031: {kubelet node1} BackOff: Back-off pulling image "docker.io/library/busybox:1.29"
Aug 28 03:29:50.777: INFO: At 2021-08-28 03:24:54 +0000 UTC - event for pod-ca64715a-5bc1-485c-bd3f-4a129e59e031: {kubelet node1} Failed: Error: ImagePullBackOff
Aug 28 03:29:50.777: INFO: At 2021-08-28 03:24:54 +0000 UTC - event for pod-ca64715a-5bc1-485c-bd3f-4a129e59e031: {multus } AddedInterface: Add eth0 [10.244.3.52/24]
Aug 28 03:29:50.777: INFO: At 2021-08-28 03:24:55 +0000 UTC - event for pod-0b612b33-8ffb-4d64-b727-05dcb4040261: {multus } AddedInterface: Add eth0 [10.244.3.53/24]
Aug 28 03:29:50.777: INFO: At 2021-08-28 03:24:55 +0000 UTC - event for pod-0b612b33-8ffb-4d64-b727-05dcb4040261: {kubelet node1} Pulling: Pulling image "docker.io/library/busybox:1.29"
Aug 28 03:29:50.777: INFO: At 2021-08-28 03:24:56 +0000 UTC - event for pod-bc4d94a3-d01a-495b-a191-0bbbf6278745: {multus } AddedInterface: Add eth0 [10.244.4.245/24]
Aug 28 03:29:50.777: INFO: At 2021-08-28 03:24:57 +0000 UTC - event for pod-0b612b33-8ffb-4d64-b727-05dcb4040261: {kubelet node1} BackOff: Back-off pulling image "docker.io/library/busybox:1.29"
Aug 28 03:29:50.777: INFO: At 2021-08-28 03:24:57 +0000 UTC - event for pod-0b612b33-8ffb-4d64-b727-05dcb4040261: {kubelet node1} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
Aug 28 03:29:50.777: INFO: At 2021-08-28 03:24:57 +0000 UTC - event for pod-0b612b33-8ffb-4d64-b727-05dcb4040261: {kubelet node1} Failed: Error: ImagePullBackOff
Aug 28 03:29:50.777: INFO: At 2021-08-28 03:24:57 +0000 UTC - event for pod-0b612b33-8ffb-4d64-b727-05dcb4040261: {kubelet node1} Failed: Error: ErrImagePull
Aug 28 03:29:50.777: INFO: At 2021-08-28 03:24:57 +0000 UTC - event for pod-ca64715a-5bc1-485c-bd3f-4a129e59e031: {multus } AddedInterface: Add eth0 [10.244.3.54/24]
Aug 28 03:29:50.777: INFO: At 2021-08-28 03:24:59 +0000 UTC - event for pod-ca64715a-5bc1-485c-bd3f-4a129e59e031: {multus } AddedInterface: Add eth0 [10.244.3.55/24]
Aug 28 03:29:50.777: INFO: At 2021-08-28 03:25:03 +0000 UTC - event for pod-19a82adf-b74d-4070-adc6-0bff5e2608a5: {kubelet node2} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit.
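The repeated Failed/BackOff events above all share one root cause: anonymous pulls of docker.io/library/busybox:1.29 are being rejected by Docker Hub's pull rate limit (toomanyrequests), so the kubelet cycles through ErrImagePull and ImagePullBackOff and the write-pods never leave Pending. The usual fixes are to authenticate the pulls or to serve the image from the cluster's own registry (a localhost:30500 registry already appears in the node image lists below). A minimal client-go sketch for the first option, assuming Docker Hub credentials are available; ensureDockerHubSecret and the secret name are illustrative, not part of the e2e framework:

// Create a kubernetes.io/dockerconfigjson secret so kubelet pulls from
// Docker Hub are authenticated instead of anonymous and rate-limited.
// Sketch only; helper name and secret name are illustrative.
package e2etriage

import (
	"context"
	"encoding/base64"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func ensureDockerHubSecret(ctx context.Context, c kubernetes.Interface, ns, user, pass string) (*corev1.Secret, error) {
	// The docker config format expects base64("user:pass") in the auth field.
	auth := base64.StdEncoding.EncodeToString([]byte(user + ":" + pass))
	cfg := fmt.Sprintf(`{"auths":{"https://index.docker.io/v1/":{"auth":%q}}}`, auth)
	return c.CoreV1().Secrets(ns).Create(ctx, &corev1.Secret{
		ObjectMeta: metav1.ObjectMeta{Name: "dockerhub-pull", Namespace: ns},
		Type:       corev1.SecretTypeDockerConfigJson, // "kubernetes.io/dockerconfigjson"
		Data: map[string][]byte{
			corev1.DockerConfigJsonKey: []byte(cfg), // ".dockerconfigjson"
		},
	}, metav1.CreateOptions{})
}

Pods would then opt in via spec.imagePullSecrets with the secret name; alternatively, pre-pulling busybox:1.29 onto node1/node2 or retagging it into localhost:30500 avoids Docker Hub entirely. The daemon's truncated message continues in the log below.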
You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
Aug 28 03:29:50.777: INFO: At 2021-08-28 03:29:44 +0000 UTC - event for pod-f7a4f75e-7b5c-4a31-8a70-3c2f1794df39: {default-scheduler } FailedScheduling: skip schedule deleting pod: persistent-local-volumes-test-9313/pod-f7a4f75e-7b5c-4a31-8a70-3c2f1794df39
Aug 28 03:29:50.777: INFO: At 2021-08-28 03:29:44 +0000 UTC - event for pvc-dvm7x: {persistentvolume-controller } WaitForFirstConsumer: waiting for first consumer to be created before binding
Aug 28 03:29:50.777: INFO: At 2021-08-28 03:29:44 +0000 UTC - event for pvc-qs5x7: {persistentvolume-controller } WaitForFirstConsumer: waiting for first consumer to be created before binding
Aug 28 03:29:50.777: INFO: At 2021-08-28 03:29:44 +0000 UTC - event for pvc-s827t: {persistentvolume-controller } WaitForFirstConsumer: waiting for first consumer to be created before binding
Aug 28 03:29:50.780: INFO: POD NODE PHASE GRACE CONDITIONS
Aug 28 03:29:50.780: INFO: hostexec-node1-bvnrs node1 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-08-28 03:24:01 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-08-28 03:24:31 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-08-28 03:24:31 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-08-28 03:24:01 +0000 UTC }]
Aug 28 03:29:50.780: INFO: hostexec-node2-mxlwb node2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-08-28 03:24:38 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-08-28 03:24:40 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-08-28 03:24:40 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-08-28 03:24:38 +0000 UTC }]
Aug 28 03:29:50.780: INFO: pod-9184f8b9-11fe-4b3a-9aec-9e784f6e1376 node1 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-08-28 03:24:45 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-08-28 03:24:45 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-08-28 03:24:45 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-08-28 03:24:45 +0000 UTC }]
Aug 28 03:29:50.780: INFO: pod-ca64715a-5bc1-485c-bd3f-4a129e59e031 node1 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-08-28 03:24:45 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-08-28 03:24:45 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-08-28 03:24:45 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-08-28 03:24:45 +0000 UTC }]
Aug 28 03:29:50.780: INFO:
Aug 28 03:29:50.784: INFO: Logging node info for node master1
Aug 28 03:29:50.787: INFO: Node Info: &Node{ObjectMeta:{master1 /api/v1/nodes/master1 3af53387-5aee-42c1-b0e7-644cf9161d48 155963 0 2021-08-27 20:46:13 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master1 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"92:f8:b6:72:e4:be"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.202 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock
node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2021-08-27 20:46:13 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {kubeadm Update v1 2021-08-27 20:46:14 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/master":{}}}}} {kube-controller-manager Update v1 2021-08-27 20:46:39 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}} {flanneld Update v1 2021-08-27 20:48:53 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234767872 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324603904 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-08-27 20:50:56 +0000 
UTC,LastTransitionTime:2021-08-27 20:50:56 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-08-28 03:29:41 +0000 UTC,LastTransitionTime:2021-08-27 20:46:13 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-08-28 03:29:41 +0000 UTC,LastTransitionTime:2021-08-27 20:46:13 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-08-28 03:29:41 +0000 UTC,LastTransitionTime:2021-08-27 20:46:13 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-08-28 03:29:41 +0000 UTC,LastTransitionTime:2021-08-27 20:50:30 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.202,},NodeAddress{Type:Hostname,Address:master1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:5d584023135a46ecb77596bf48ed7f2f,SystemUUID:00ACFB60-0631-E711-906E-0017A4403562,BootID:587cafa0-6de3-49f8-906e-06315a8ff104,KernelVersion:3.10.0-1160.36.2.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:723496952,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f kubernetesui/dashboard-amd64:v2.1.0],SizeBytes:225733746,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0 k8s.gcr.io/e2e-test-images/agnhost:2.20],SizeBytes:113869866,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 
quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:73ca82b4ce829766d4f1f10947c3a338888f876fbed0540dc849c89ff256e90c k8s.gcr.io/coredns:1.7.0],SizeBytes:45227747,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[registry@sha256:1cd9409a311350c3072fe510b52046f104416376c126a479cef9a4dfe692cf57 registry:2.7.0],SizeBytes:24191168,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[localhost:30500/tas-controller@sha256:ae72171f047a37ee5423e0692df7429830919af16e9d668ab0c80b723863d102 tas-controller:latest localhost:30500/tas-controller:0.1],SizeBytes:22922439,},ContainerImage{Names:[nginx@sha256:a97eb9ecc708c8aa715ccfb5e9338f5456e4b65575daf304f108301f3b497314 nginx:1.19.2-alpine],SizeBytes:22052669,},ContainerImage{Names:[localhost:30500/tas-extender@sha256:cd98d1edca8e5e2e3ea42cbc463812483e5d069d10f0974ca9d484b5a7bd68db tas-extender:latest localhost:30500/tas-extender:0.1],SizeBytes:21320903,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[@ :],SizeBytes:5577654,},ContainerImage{Names:[alpine@sha256:c0e9560cda118f9ec63ddefb4a173a2b2a0347082d7dff7dc14272e7841a5b5a alpine:3.12.1],SizeBytes:5573013,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Aug 28 03:29:50.787: INFO: Logging kubelet events for node master1 Aug 28 03:29:50.790: INFO: Logging pods the kubelet thinks is on node master1 Aug 28 03:29:50.804: INFO: coredns-7677f9bb54-dwtp5 started at 2021-08-27 20:49:16 +0000 UTC (0+1 container statuses recorded) Aug 28 03:29:50.804: INFO: Container coredns ready: true, restart count 1 Aug 28 03:29:50.804: INFO: node-exporter-z2ngr started at 2021-08-27 20:59:13 +0000 UTC (0+2 container statuses recorded) Aug 28 03:29:50.804: INFO: Container kube-rbac-proxy ready: true, restart count 0 Aug 28 03:29:50.804: INFO: Container node-exporter ready: true, restart count 0 Aug 28 03:29:50.804: INFO: kube-apiserver-master1 started at 2021-08-27 20:54:17 +0000 UTC (0+1 container statuses recorded) Aug 28 03:29:50.804: INFO: Container kube-apiserver ready: true, restart count 0 Aug 28 03:29:50.804: INFO: kube-controller-manager-master1 started at 2021-08-27 20:54:27 +0000 UTC (0+1 container statuses recorded) Aug 28 
03:29:50.804: INFO: Container kube-controller-manager ready: true, restart count 3 Aug 28 03:29:50.804: INFO: kube-proxy-rb5p6 started at 2021-08-27 20:48:12 +0000 UTC (0+1 container statuses recorded) Aug 28 03:29:50.804: INFO: Container kube-proxy ready: true, restart count 1 Aug 28 03:29:50.804: INFO: docker-registry-docker-registry-56cbc7bc58-cthtt started at 2021-08-27 20:51:47 +0000 UTC (0+2 container statuses recorded) Aug 28 03:29:50.804: INFO: Container docker-registry ready: true, restart count 0 Aug 28 03:29:50.804: INFO: Container nginx ready: true, restart count 0 Aug 28 03:29:50.804: INFO: kube-scheduler-master1 started at 2021-08-27 20:47:27 +0000 UTC (0+1 container statuses recorded) Aug 28 03:29:50.804: INFO: Container kube-scheduler ready: true, restart count 0 Aug 28 03:29:50.804: INFO: kube-flannel-pp7vp started at 2021-08-27 20:48:48 +0000 UTC (1+1 container statuses recorded) Aug 28 03:29:50.804: INFO: Init container install-cni ready: true, restart count 0 Aug 28 03:29:50.804: INFO: Container kube-flannel ready: true, restart count 3 Aug 28 03:29:50.804: INFO: kube-multus-ds-amd64-sfr9k started at 2021-08-27 20:48:56 +0000 UTC (0+1 container statuses recorded) Aug 28 03:29:50.804: INFO: Container kube-multus ready: true, restart count 1 W0828 03:29:50.816410 23 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Aug 28 03:29:50.842: INFO: Latency metrics for node master1 Aug 28 03:29:50.842: INFO: Logging node info for node master2 Aug 28 03:29:50.844: INFO: Node Info: &Node{ObjectMeta:{master2 /api/v1/nodes/master2 666e473f-d9e6-4c06-8b56-06474d788f70 156095 0 2021-08-27 20:46:51 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master2 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"42:86:ff:30:bd:4d"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.203 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/master.version:v0.7.0 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-08-27 20:46:52 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/master":{}}}}} {kubelet Update v1 2021-08-27 20:46:52 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {flanneld Update v1 2021-08-27 20:48:53 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-08-27 20:48:57 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}} {nfd-master Update v1 2021-08-27 20:55:15 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/master.version":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234767872 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324603904 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-08-27 20:51:53 +0000 UTC,LastTransitionTime:2021-08-27 20:51:53 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-08-28 03:29:50 +0000 
UTC,LastTransitionTime:2021-08-27 20:46:51 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-08-28 03:29:50 +0000 UTC,LastTransitionTime:2021-08-27 20:46:51 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-08-28 03:29:50 +0000 UTC,LastTransitionTime:2021-08-27 20:46:51 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-08-28 03:29:50 +0000 UTC,LastTransitionTime:2021-08-27 20:48:57 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.203,},NodeAddress{Type:Hostname,Address:master2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:2835065974a64998811b9acd85de209b,SystemUUID:00A0DE53-E51D-E711-906E-0017A4403562,BootID:309b9155-1e2a-4ebd-900f-bba5abfc3a5d,KernelVersion:3.10.0-1160.36.2.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:723496952,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f kubernetesui/dashboard-amd64:v2.1.0],SizeBytes:225733746,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0 k8s.gcr.io/e2e-test-images/agnhost:2.20],SizeBytes:113869866,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[gcr.io/k8s-staging-nfd/node-feature-discovery@sha256:5d116c2c340be665a2c8adc9aca7f91396bd5cbde4add4fdc8dab95d8db43425 gcr.io/k8s-staging-nfd/node-feature-discovery:v0.7.0],SizeBytes:108309584,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 
quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:73ca82b4ce829766d4f1f10947c3a338888f876fbed0540dc849c89ff256e90c k8s.gcr.io/coredns:1.7.0],SizeBytes:45227747,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:9d07c391aeb1a9d02eb4343c113ed01825227c70c32b3cae861711f90191b0fd quay.io/coreos/kube-rbac-proxy:v0.4.1],SizeBytes:41317870,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[quay.io/coreos/prometheus-operator@sha256:a54e806fb27d2fb0251da4f3b2a3bb5320759af63a54a755788304775f2384a7 quay.io/coreos/prometheus-operator:v0.40.0],SizeBytes:38238457,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Aug 28 03:29:50.845: INFO: Logging kubelet events for node master2 Aug 28 03:29:50.847: INFO: Logging pods the kubelet thinks is on node master2 Aug 28 03:29:50.861: INFO: prometheus-operator-5bb8cb9d8f-whr5p started at 2021-08-27 20:59:06 +0000 UTC (0+2 container statuses recorded) Aug 28 03:29:50.861: INFO: Container kube-rbac-proxy ready: true, restart count 0 Aug 28 03:29:50.861: INFO: Container prometheus-operator ready: true, restart count 0 Aug 28 03:29:50.861: INFO: node-exporter-96jk5 started at 2021-08-27 20:59:13 +0000 UTC (0+2 container statuses recorded) Aug 28 03:29:50.861: INFO: Container kube-rbac-proxy ready: true, restart count 0 Aug 28 03:29:50.861: INFO: Container node-exporter ready: true, restart count 0 Aug 28 03:29:50.861: INFO: kube-controller-manager-master2 started at 2021-08-27 20:51:01 +0000 UTC (0+1 container statuses recorded) Aug 28 03:29:50.861: INFO: Container kube-controller-manager ready: true, restart count 2 Aug 28 03:29:50.861: INFO: kube-flannel-4znnq started at 2021-08-27 20:48:48 +0000 UTC (1+1 container statuses recorded) Aug 28 03:29:50.861: INFO: Init container install-cni ready: true, restart count 2 Aug 28 03:29:50.861: INFO: Container kube-flannel ready: true, restart count 2 Aug 28 03:29:50.861: INFO: kube-multus-ds-amd64-4mgbk started at 2021-08-27 20:48:56 +0000 UTC (0+1 container statuses recorded) Aug 28 03:29:50.861: INFO: Container kube-multus ready: true, restart count 1 Aug 28 03:29:50.861: INFO: node-feature-discovery-controller-5bf5c49849-zr9zd started at 2021-08-27 20:55:09 +0000 UTC (0+1 container statuses recorded) Aug 28 03:29:50.861: 
INFO: Container nfd-controller ready: true, restart count 0 Aug 28 03:29:50.861: INFO: kube-apiserver-master2 started at 2021-08-27 20:54:17 +0000 UTC (0+1 container statuses recorded) Aug 28 03:29:50.861: INFO: Container kube-apiserver ready: true, restart count 0 Aug 28 03:29:50.861: INFO: kube-scheduler-master2 started at 2021-08-27 20:47:27 +0000 UTC (0+1 container statuses recorded) Aug 28 03:29:50.861: INFO: Container kube-scheduler ready: true, restart count 2 Aug 28 03:29:50.861: INFO: kube-proxy-b4mn9 started at 2021-08-27 20:48:12 +0000 UTC (0+1 container statuses recorded) Aug 28 03:29:50.861: INFO: Container kube-proxy ready: true, restart count 1 W0828 03:29:50.874497 23 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Aug 28 03:29:50.905: INFO: Latency metrics for node master2 Aug 28 03:29:50.905: INFO: Logging node info for node master3 Aug 28 03:29:50.908: INFO: Node Info: &Node{ObjectMeta:{master3 /api/v1/nodes/master3 620b6dba-f2c5-46e9-b2ff-d2f4197167d0 156069 0 2021-08-27 20:47:02 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master3 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"ce:de:ea:c3:40:4f"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.204 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2021-08-27 20:47:02 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {kubeadm Update v1 2021-08-27 20:47:03 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/master":{}}}}} {flanneld Update v1 2021-08-27 20:48:53 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-08-27 20:48:57 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}},"f:taints":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234763776 0} {} 196518324Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324599808 0} {} 195629492Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-08-27 20:51:44 +0000 UTC,LastTransitionTime:2021-08-27 20:51:44 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-08-28 03:29:48 +0000 UTC,LastTransitionTime:2021-08-27 20:47:02 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-08-28 03:29:48 +0000 UTC,LastTransitionTime:2021-08-27 20:47:02 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-08-28 03:29:48 +0000 UTC,LastTransitionTime:2021-08-27 20:47:02 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-08-28 03:29:48 +0000 UTC,LastTransitionTime:2021-08-27 20:48:57 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.204,},NodeAddress{Type:Hostname,Address:master3,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:37decbffe0e84048b5801289ad3be5bf,SystemUUID:008B1444-141E-E711-906E-0017A4403562,BootID:c96fe8b9-1ce0-44cd-935a-b58987e26570,KernelVersion:3.10.0-1160.36.2.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:723496891,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 
nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f kubernetesui/dashboard-amd64:v2.1.0],SizeBytes:225733746,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0 k8s.gcr.io/e2e-test-images/agnhost:2.20],SizeBytes:113869866,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:73ca82b4ce829766d4f1f10947c3a338888f876fbed0540dc849c89ff256e90c k8s.gcr.io/coredns:1.7.0],SizeBytes:45227747,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Aug 28 03:29:50.909: INFO: Logging kubelet events for node master3 Aug 28 03:29:50.911: INFO: Logging pods the kubelet thinks is on node master3 Aug 28 03:29:50.927: INFO: kube-controller-manager-master3 started at 2021-08-27 20:47:27 +0000 UTC (0+1 container statuses recorded) Aug 28 03:29:50.927: INFO: Container kube-controller-manager ready: true, restart count 2 Aug 28 03:29:50.927: INFO: kube-proxy-8sxhm started at 2021-08-27 20:48:12 +0000 UTC (0+1 container statuses recorded) Aug 28 03:29:50.927: INFO: Container kube-proxy ready: true, restart count 1 Aug 28 03:29:50.927: INFO: kube-flannel-fkz5d started at 2021-08-27 
20:48:48 +0000 UTC (1+1 container statuses recorded) Aug 28 03:29:50.927: INFO: Init container install-cni ready: true, restart count 0 Aug 28 03:29:50.927: INFO: Container kube-flannel ready: true, restart count 1 Aug 28 03:29:50.927: INFO: dns-autoscaler-5b7b5c9b6f-54xch started at 2021-08-27 20:49:19 +0000 UTC (0+1 container statuses recorded) Aug 28 03:29:50.927: INFO: Container autoscaler ready: true, restart count 1 Aug 28 03:29:50.927: INFO: coredns-7677f9bb54-rxplt started at 2021-08-27 20:49:21 +0000 UTC (0+1 container statuses recorded) Aug 28 03:29:50.927: INFO: Container coredns ready: true, restart count 1 Aug 28 03:29:50.927: INFO: kube-apiserver-master3 started at 2021-08-27 20:54:17 +0000 UTC (0+1 container statuses recorded) Aug 28 03:29:50.927: INFO: Container kube-apiserver ready: true, restart count 0 Aug 28 03:29:50.927: INFO: kube-scheduler-master3 started at 2021-08-27 20:47:27 +0000 UTC (0+1 container statuses recorded) Aug 28 03:29:50.927: INFO: Container kube-scheduler ready: true, restart count 2 Aug 28 03:29:50.927: INFO: kube-multus-ds-amd64-wwcgv started at 2021-08-27 20:48:56 +0000 UTC (0+1 container statuses recorded) Aug 28 03:29:50.927: INFO: Container kube-multus ready: true, restart count 1 Aug 28 03:29:50.927: INFO: node-exporter-d4m7q started at 2021-08-27 20:59:13 +0000 UTC (0+2 container statuses recorded) Aug 28 03:29:50.927: INFO: Container kube-rbac-proxy ready: true, restart count 0 Aug 28 03:29:50.927: INFO: Container node-exporter ready: true, restart count 0 W0828 03:29:50.941537 23 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Aug 28 03:29:50.965: INFO: Latency metrics for node master3 Aug 28 03:29:50.965: INFO: Logging node info for node node1 Aug 28 03:29:50.968: INFO: Node Info: &Node{ObjectMeta:{node1 /api/v1/nodes/node1 e7a2481e-32db-4c83-bd9f-4a0687258e7a 155962 0 2021-08-27 20:48:09 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.36.2.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 
feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:amd64 kubernetes.io/hostname:node1 kubernetes.io/os:linux] map[csi.volume.kubernetes.io/nodeid:{"csi-mock-csi-mock-volumes-1382":"csi-mock-csi-mock-volumes-1382","csi-mock-csi-mock-volumes-1844":"csi-mock-csi-mock-volumes-1844","csi-mock-csi-mock-volumes-2691":"csi-mock-csi-mock-volumes-2691","csi-mock-csi-mock-volumes-2731":"csi-mock-csi-mock-volumes-2731","csi-mock-csi-mock-volumes-4084":"csi-mock-csi-mock-volumes-4084","csi-mock-csi-mock-volumes-6046":"csi-mock-csi-mock-volumes-6046","csi-mock-csi-mock-volumes-6248":"csi-mock-csi-mock-volumes-6248","csi-mock-csi-mock-volumes-6493":"csi-mock-csi-mock-volumes-6493","csi-mock-csi-mock-volumes-7027":"csi-mock-csi-mock-volumes-7027","csi-mock-csi-mock-volumes-7859":"csi-mock-csi-mock-volumes-7859","csi-mock-csi-mock-volumes-7866":"csi-mock-csi-mock-volumes-7866","csi-mock-csi-mock-volumes-9157":"csi-mock-csi-mock-volumes-9157","csi-mock-csi-mock-volumes-9165":"csi-mock-csi-mock-volumes-9165","csi-mock-csi-mock-volumes-9410":"csi-mock-csi-mock-volumes-9410"} flannel.alpha.coreos.com/backend-data:{"VtepMAC":"56:c7:37:40:51:ca"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.207 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-hardware_multithreading,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.7.0 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-08-27 20:48:10 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2021-08-27 20:48:53 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2021-08-27 20:55:15 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2021-08-27 20:57:40 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {e2e.test Update v1 2021-08-28 02:45:58 +0000 UTC FieldsV1 {"f:status":{"f:capacity":{"f:example.com/fakePTSRes":{}}}}} {kube-controller-manager Update v1 2021-08-28 03:10:43 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.3.0/24\"":{}}}}} {kubelet Update v1 2021-08-28 03:10:53 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:csi.volume.kubernetes.io/nodeid":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:io.kubernetes.storage.mock/node":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cmk.intel.com/exclusive-cores":{},"f:cpu":{},"f:ephemeral-storage":{},"f:example.com/fakePTSRes":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:intel.com/intel_sriov_netdevice":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:intel.com/intel_sriov_netdevice":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.3.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},example.com/fakePTSRes: {{10 0} {} 10 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269628928 0} {} 196552372Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},example.com/fakePTSRes: {{10 0} {} 10 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884628480 0} {} 174692020Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-08-27 20:51:49 +0000 UTC,LastTransitionTime:2021-08-27 20:51:49 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-08-28 03:29:41 +0000 UTC,LastTransitionTime:2021-08-27 20:48:09 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-08-28 03:29:41 +0000 UTC,LastTransitionTime:2021-08-27 20:48:09 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk 
pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-08-28 03:29:41 +0000 UTC,LastTransitionTime:2021-08-27 20:48:09 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-08-28 03:29:41 +0000 UTC,LastTransitionTime:2021-08-27 20:48:54 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.207,},NodeAddress{Type:Hostname,Address:node1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:c1e38e80ea114a5f96601202301ce842,SystemUUID:00CDA902-D022-E711-906E-0017A4403562,BootID:e769e86d-15c0-442c-a93b-bcc6c33ff1cd,KernelVersion:3.10.0-1160.36.2.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[localhost:30500/barometer-collectd@sha256:331c6faa8b0d5ec72cf105e87d35df0a2f2baeec3d6217a51faa73f9460f937f localhost:30500/barometer-collectd:stable],SizeBytes:1238704157,},ContainerImage{Names:[opnfv/barometer-collectd@sha256:a7cea43d9d2f67c38fbf0407786edbe660ee9072945f7bb272b55fd255e8eaca opnfv/barometer-collectd:stable],SizeBytes:1075746799,},ContainerImage{Names:[@ :],SizeBytes:1003787960,},ContainerImage{Names:[localhost:30500/cmk@sha256:fd1487b0c07556a087eff669e70c501a704720dcd53ff75183593de6720585f2 cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:723496952,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 aquasec/kube-hunter:0.3.1],SizeBytes:347611549,},ContainerImage{Names:[golang@sha256:db2475a1dbb2149508e5db31d7d77a75e6600d54be645f37681f03f2762169ba golang:alpine3.12],SizeBytes:301186719,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f kubernetesui/dashboard-amd64:v2.1.0],SizeBytes:225733746,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:195659796,},ContainerImage{Names:[grafana/grafana@sha256:89304bc2335f4976618548d7b93d165ed67369d3a051d2f627fc4e0aa3d0aff1 grafana/grafana:7.1.0],SizeBytes:179601493,},ContainerImage{Names:[quay.io/prometheus/prometheus@sha256:d4ba4dd1a9ebb90916d0bfed3c204adcb118ed24546bf8dd2e6b30fc0fd2009e quay.io/prometheus/prometheus:v2.20.0],SizeBytes:144886595,},ContainerImage{Names:[nginx@sha256:df13abe416e37eb3db4722840dd479b00ba193ac6606e7902331dcea50f4f1f2 nginx:1.19],SizeBytes:133122553,},ContainerImage{Names:[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 
k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0 k8s.gcr.io/e2e-test-images/agnhost:2.20],SizeBytes:113869866,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[gcr.io/k8s-staging-nfd/node-feature-discovery@sha256:5d116c2c340be665a2c8adc9aca7f91396bd5cbde4add4fdc8dab95d8db43425 gcr.io/k8s-staging-nfd/node-feature-discovery:v0.7.0],SizeBytes:108309584,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[directxman12/k8s-prometheus-adapter-amd64@sha256:b63dc612e3cb73f79d2401a4516f794f9f0a83002600ca72e675e41baecff437 directxman12/k8s-prometheus-adapter-amd64:v0.6.0],SizeBytes:53267842,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:78e3393f5fd5ff6c1e5dada2478cfa456fb7164929e573cf9a87bf6532730679 k8s.gcr.io/sig-storage/csi-provisioner:v1.6.0],SizeBytes:48281550,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:2ffa647e8107cfd39e5f464e738dce014c9f5e51b108da36c3ab621048d0bbab k8s.gcr.io/sig-storage/csi-attacher:v2.2.0],SizeBytes:46131354,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:6c6a0332693a7c456378f6abd2bb40611826c1e1a733cadbdae2daab3125b71c k8s.gcr.io/sig-storage/csi-resizer:v0.5.0],SizeBytes:46041582,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:d7300ccf7ff3e9cea2111d275143b8050618bbc1d1ffe41f46286b1696261243 nfvpe/sriov-device-plugin:latest localhost:30500/sriov-device-plugin:v3.3.1],SizeBytes:44393508,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonroot@sha256:4bd7ae247de5c988700233c5a4b55e804ffe90f8c66ae64853f1dae37b847213 gcr.io/kubernetes-e2e-test-images/nonroot:1.0],SizeBytes:42321438,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/regression-issue-74839-amd64@sha256:3b36bd80b97c532a774e7f6246797b8575d97037982f353476c703ba6686c75c gcr.io/kubernetes-e2e-test-images/regression-issue-74839-amd64:1.0],SizeBytes:19227369,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:0b4273abac4c241fa3d70aaf52b0d79a133d2737081f4a5c5dea4949f6c45dc3 
k8s.gcr.io/sig-storage/mock-driver:v3.1.0],SizeBytes:18687618,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:9622c6a6dac7499a055a382930f4de82905a3c5735c0753f7094115c9c871309 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v1.3.0],SizeBytes:16322467,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[quay.io/coreos/prometheus-config-reloader@sha256:c679a143b24b7731ad1577a9865aa3805426cbf1b25e30807b951dff68466ffd quay.io/coreos/prometheus-config-reloader:v0.40.0],SizeBytes:10131705,},ContainerImage{Names:[jimmidyson/configmap-reload@sha256:d107c7a235c266273b1c3502a391fec374430e5625539403d0de797fa9c556a2 jimmidyson/configmap-reload:v0.3.0],SizeBytes:9700438,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:6757579,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[alpine@sha256:de25c7fc6c4f3a27c7f0c2dff454e4671823a34d88abd533f210848d527e0fbb alpine:3.12],SizeBytes:5581415,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:4753501,},ContainerImage{Names:[gcr.io/authenticated-image-pulling/alpine@sha256:7ff177862cb50c602bfe81f805969412e619c054a2bbead977d0c276988aa4a0 gcr.io/authenticated-image-pulling/alpine:3.7],SizeBytes:4206620,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Aug 28 03:29:50.969: INFO: Logging kubelet events for node node1 Aug 28 03:29:50.971: INFO: Logging pods the kubelet thinks is on node node1 Aug 28 03:29:50.991: INFO: kubernetes-dashboard-86c6f9df5b-c56fg started at 2021-08-27 20:49:21 +0000 UTC (0+1 container statuses recorded) Aug 28 03:29:50.991: INFO: Container kubernetes-dashboard ready: true, restart count 1 Aug 28 03:29:50.991: INFO: node-feature-discovery-worker-bd9kg started at 2021-08-27 20:55:06 +0000 UTC (0+1 container statuses recorded) Aug 28 03:29:50.991: INFO: Container nfd-worker ready: true, restart count 0 Aug 28 03:29:50.991: INFO: prometheus-k8s-0 started at 2021-08-27 20:59:29 +0000 UTC (0+5 container statuses recorded) Aug 28 03:29:50.991: INFO: Container custom-metrics-apiserver ready: true, restart count 0 Aug 28 03:29:50.991: INFO: Container grafana ready: true, restart count 
0 Aug 28 03:29:50.991: INFO: Container prometheus ready: true, restart count 1 Aug 28 03:29:50.991: INFO: Container prometheus-config-reloader ready: true, restart count 0 Aug 28 03:29:50.991: INFO: Container rules-configmap-reloader ready: true, restart count 0 Aug 28 03:29:50.991: INFO: kube-multus-ds-amd64-nn7bl started at 2021-08-27 20:48:56 +0000 UTC (0+1 container statuses recorded) Aug 28 03:29:50.991: INFO: Container kube-multus ready: true, restart count 1 Aug 28 03:29:50.991: INFO: kubernetes-metrics-scraper-678c97765c-gtp5x started at 2021-08-27 20:49:21 +0000 UTC (0+1 container statuses recorded) Aug 28 03:29:50.991: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 Aug 28 03:29:50.991: INFO: cmk-init-discover-node1-spg26 started at 2021-08-27 20:57:37 +0000 UTC (0+3 container statuses recorded) Aug 28 03:29:50.991: INFO: Container discover ready: false, restart count 0 Aug 28 03:29:50.991: INFO: Container init ready: false, restart count 0 Aug 28 03:29:50.991: INFO: Container install ready: false, restart count 0 Aug 28 03:29:50.991: INFO: hostexec-node1-bvnrs started at 2021-08-28 03:24:01 +0000 UTC (0+1 container statuses recorded) Aug 28 03:29:50.991: INFO: Container agnhost-container ready: true, restart count 0 Aug 28 03:29:50.991: INFO: nginx-proxy-node1 started at 2021-08-27 20:54:17 +0000 UTC (0+1 container statuses recorded) Aug 28 03:29:50.991: INFO: Container nginx-proxy ready: true, restart count 2 Aug 28 03:29:50.991: INFO: kube-proxy-pb5bl started at 2021-08-27 20:48:12 +0000 UTC (0+1 container statuses recorded) Aug 28 03:29:50.991: INFO: Container kube-proxy ready: true, restart count 2 Aug 28 03:29:50.991: INFO: collectd-ccvwg started at 2021-08-27 21:04:15 +0000 UTC (0+3 container statuses recorded) Aug 28 03:29:50.991: INFO: Container collectd ready: true, restart count 0 Aug 28 03:29:50.991: INFO: Container collectd-exporter ready: true, restart count 0 Aug 28 03:29:50.991: INFO: Container rbac-proxy ready: true, restart count 0 Aug 28 03:29:50.991: INFO: pod-ca64715a-5bc1-485c-bd3f-4a129e59e031 started at 2021-08-28 03:24:45 +0000 UTC (0+1 container statuses recorded) Aug 28 03:29:50.991: INFO: Container write-pod ready: false, restart count 0 Aug 28 03:29:50.991: INFO: pod-9184f8b9-11fe-4b3a-9aec-9e784f6e1376 started at 2021-08-28 03:24:45 +0000 UTC (0+1 container statuses recorded) Aug 28 03:29:50.991: INFO: Container write-pod ready: false, restart count 0 Aug 28 03:29:50.991: INFO: kube-flannel-ssxn7 started at 2021-08-27 20:48:48 +0000 UTC (1+1 container statuses recorded) Aug 28 03:29:50.991: INFO: Init container install-cni ready: true, restart count 0 Aug 28 03:29:50.991: INFO: Container kube-flannel ready: true, restart count 1 Aug 28 03:29:50.991: INFO: cmk-jw4m6 started at 2021-08-27 20:58:19 +0000 UTC (0+2 container statuses recorded) Aug 28 03:29:50.991: INFO: Container nodereport ready: true, restart count 0 Aug 28 03:29:50.991: INFO: Container reconcile ready: true, restart count 0 Aug 28 03:29:50.991: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-9lndx started at 2021-08-27 20:55:51 +0000 UTC (0+1 container statuses recorded) Aug 28 03:29:50.991: INFO: Container kube-sriovdp ready: true, restart count 0 Aug 28 03:29:50.991: INFO: node-exporter-4cvlq started at 2021-08-27 20:59:13 +0000 UTC (0+2 container statuses recorded) Aug 28 03:29:50.991: INFO: Container kube-rbac-proxy ready: true, restart count 0 Aug 28 03:29:50.991: INFO: Container node-exporter ready: true, restart count 0 W0828 03:29:51.003844 23 
metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Aug 28 03:29:51.059: INFO: Latency metrics for node node1 Aug 28 03:29:51.060: INFO: Logging node info for node node2 Aug 28 03:29:51.063: INFO: Node Info: &Node{ObjectMeta:{node2 /api/v1/nodes/node2 eb7da7c0-513f-4072-a078-ad3d24f88114 155964 0 2021-08-27 20:48:09 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.36.2.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node2 kubernetes.io/os:linux] map[csi.volume.kubernetes.io/nodeid:{"csi-mock-csi-mock-volumes-1442":"csi-mock-csi-mock-volumes-1442","csi-mock-csi-mock-volumes-1936":"csi-mock-csi-mock-volumes-1936","csi-mock-csi-mock-volumes-2652":"csi-mock-csi-mock-volumes-2652","csi-mock-csi-mock-volumes-2831":"csi-mock-csi-mock-volumes-2831","csi-mock-csi-mock-volumes-2967":"csi-mock-csi-mock-volumes-2967","csi-mock-csi-mock-volumes-3285":"csi-mock-csi-mock-volumes-3285","csi-mock-csi-mock-volumes-5585":"csi-mock-csi-mock-volumes-5585","csi-mock-csi-mock-volumes-7451":"csi-mock-csi-mock-volumes-7451","csi-mock-csi-mock-volumes-8521":"csi-mock-csi-mock-volumes-8521"} flannel.alpha.coreos.com/backend-data:{"VtepMAC":"9a:88:9c:d1:39:68"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.208 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: 
nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-hardware_multithreading,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.7.0 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-08-27 20:48:10 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2021-08-27 20:48:53 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2021-08-27 20:55:15 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage
-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2021-08-27 20:58:03 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {e2e.test Update v1 2021-08-28 02:45:47 +0000 UTC FieldsV1 {"f:status":{"f:capacity":{"f:example.com/fakePTSRes":{},"f:example.com/fakecpu":{}}}}} {kube-controller-manager Update v1 2021-08-28 03:01:49 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.4.0/24\"":{}}}}} {kubelet Update v1 2021-08-28 03:02:14 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:csi.volume.kubernetes.io/nodeid":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cmk.intel.com/exclusive-cores":{},"f:cpu":{},"f:ephemeral-storage":{},"f:example.com/fakePTSRes":{},"f:example.com/fakecpu":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:intel.com/intel_sriov_netdevice":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:intel.com/intel_sriov_netdevice":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.4.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},example.com/fakePTSRes: {{10 0} {} 10 DecimalSI},example.com/fakecpu: {{1 3} {} 1k DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269628928 0} {} 196552372Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},example.com/fakePTSRes: {{10 0} {} 10 DecimalSI},example.com/fakecpu: {{1 3} {} 1k 
DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884628480 0} {} 174692020Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-08-27 20:50:36 +0000 UTC,LastTransitionTime:2021-08-27 20:50:36 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-08-28 03:29:42 +0000 UTC,LastTransitionTime:2021-08-27 20:48:09 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-08-28 03:29:42 +0000 UTC,LastTransitionTime:2021-08-27 20:48:09 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-08-28 03:29:42 +0000 UTC,LastTransitionTime:2021-08-27 20:48:09 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-08-28 03:29:42 +0000 UTC,LastTransitionTime:2021-08-27 20:48:54 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.208,},NodeAddress{Type:Hostname,Address:node2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:0ccfc2a4a9b7400c9ca53b5de0ca4970,SystemUUID:80B3CD56-852F-E711-906E-0017A4403562,BootID:26b02690-0814-4c92-9f6d-d315df796ce6,KernelVersion:3.10.0-1160.36.2.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[localhost:30500/barometer-collectd@sha256:331c6faa8b0d5ec72cf105e87d35df0a2f2baeec3d6217a51faa73f9460f937f localhost:30500/barometer-collectd:stable],SizeBytes:1238704157,},ContainerImage{Names:[cmk:v1.5.1],SizeBytes:723496952,},ContainerImage{Names:[localhost:30500/cmk@sha256:fd1487b0c07556a087eff669e70c501a704720dcd53ff75183593de6720585f2 localhost:30500/cmk:v1.5.1],SizeBytes:723496952,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[k8s.gcr.io/sig-storage/nfs-provisioner@sha256:c1bedac8758029948afe060bf8f6ee63ea489b5e08d29745f44fab68ee0d46ca k8s.gcr.io/sig-storage/nfs-provisioner:v2.2.2],SizeBytes:391772778,},ContainerImage{Names:[gluster/glusterdynamic-provisioner@sha256:90067cb05a7d217651e84576935934fd9dcff8e6bcdcbaa416bbf36fcd09dbd1 gluster/glusterdynamic-provisioner:v1.0],SizeBytes:373281573,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2 k8s.gcr.io/etcd:3.4.13-0],SizeBytes:253392289,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:195659796,},ContainerImage{Names:[nginx@sha256:df13abe416e37eb3db4722840dd479b00ba193ac6606e7902331dcea50f4f1f2 
nginx:1.19],SizeBytes:133122553,},ContainerImage{Names:[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0 k8s.gcr.io/e2e-test-images/agnhost:2.20],SizeBytes:113869866,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[gcr.io/k8s-staging-nfd/node-feature-discovery@sha256:5d116c2c340be665a2c8adc9aca7f91396bd5cbde4add4fdc8dab95d8db43425 gcr.io/k8s-staging-nfd/node-feature-discovery:v0.7.0],SizeBytes:108309584,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:ff02aacd9766d597883fabafc7ad604c719a57611db1bcc1564c69a45b000a55 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.17],SizeBytes:60684726,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:78e3393f5fd5ff6c1e5dada2478cfa456fb7164929e573cf9a87bf6532730679 k8s.gcr.io/sig-storage/csi-provisioner:v1.6.0],SizeBytes:48281550,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:2ffa647e8107cfd39e5f464e738dce014c9f5e51b108da36c3ab621048d0bbab k8s.gcr.io/sig-storage/csi-attacher:v2.2.0],SizeBytes:46131354,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:6c6a0332693a7c456378f6abd2bb40611826c1e1a733cadbdae2daab3125b71c k8s.gcr.io/sig-storage/csi-resizer:v0.5.0],SizeBytes:46041582,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:d7300ccf7ff3e9cea2111d275143b8050618bbc1d1ffe41f46286b1696261243 localhost:30500/sriov-device-plugin:v3.3.1],SizeBytes:44393508,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonroot@sha256:4bd7ae247de5c988700233c5a4b55e804ffe90f8c66ae64853f1dae37b847213 gcr.io/kubernetes-e2e-test-images/nonroot:1.0],SizeBytes:42321438,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[localhost:30500/tas-controller@sha256:ae72171f047a37ee5423e0692df7429830919af16e9d668ab0c80b723863d102 localhost:30500/tas-controller:0.1],SizeBytes:22922439,},ContainerImage{Names:[localhost:30500/tas-extender@sha256:cd98d1edca8e5e2e3ea42cbc463812483e5d069d10f0974ca9d484b5a7bd68db 
localhost:30500/tas-extender:0.1],SizeBytes:21320903,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:0b4273abac4c241fa3d70aaf52b0d79a133d2737081f4a5c5dea4949f6c45dc3 k8s.gcr.io/sig-storage/mock-driver:v3.1.0],SizeBytes:18687618,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:9622c6a6dac7499a055a382930f4de82905a3c5735c0753f7094115c9c871309 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v1.3.0],SizeBytes:16322467,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:6757579,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:4753501,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Aug 28 03:29:51.063: INFO: Logging kubelet events for node node2 Aug 28 03:29:51.066: INFO: Logging pods the kubelet thinks is on node node2 Aug 28 03:29:51.082: INFO: nginx-proxy-node2 started at 2021-08-27 20:54:17 +0000 UTC (0+1 container statuses recorded) Aug 28 03:29:51.082: INFO: Container nginx-proxy ready: true, restart count 1 Aug 28 03:29:51.082: INFO: kube-proxy-r4q4t started at 2021-08-27 20:48:12 +0000 UTC (0+1 container statuses recorded) Aug 28 03:29:51.082: INFO: Container kube-proxy ready: true, restart count 2 Aug 28 03:29:51.082: INFO: kube-multus-ds-amd64-tfffk started at 2021-08-27 20:48:56 +0000 UTC (0+1 container statuses recorded) Aug 28 03:29:51.082: INFO: Container kube-multus ready: true, restart count 1 Aug 28 03:29:51.082: INFO: node-exporter-p6h5h started at 2021-08-27 20:59:13 +0000 UTC (0+2 container statuses recorded) Aug 28 03:29:51.082: INFO: Container kube-rbac-proxy ready: true, restart count 0 Aug 28 03:29:51.082: INFO: Container node-exporter ready: true, restart count 0 Aug 28 03:29:51.082: INFO: cmk-fzjgr started at 2021-08-27 20:58:20 +0000 UTC (0+2 container statuses recorded) Aug 28 03:29:51.082: INFO: Container nodereport ready: true, restart count 0 Aug 28 03:29:51.082: INFO: 
Container reconcile ready: true, restart count 0 Aug 28 03:29:51.082: INFO: collectd-64dp2 started at 2021-08-27 21:04:15 +0000 UTC (0+3 container statuses recorded) Aug 28 03:29:51.082: INFO: Container collectd ready: true, restart count 0 Aug 28 03:29:51.082: INFO: Container collectd-exporter ready: true, restart count 0 Aug 28 03:29:51.082: INFO: Container rbac-proxy ready: true, restart count 0 Aug 28 03:29:51.082: INFO: hostexec-node2-mxlwb started at 2021-08-28 03:24:38 +0000 UTC (0+1 container statuses recorded) Aug 28 03:29:51.082: INFO: Container agnhost-container ready: true, restart count 0 Aug 28 03:29:51.082: INFO: cmk-webhook-6c9d5f8578-ndbx2 started at 2021-08-27 20:58:20 +0000 UTC (0+1 container statuses recorded) Aug 28 03:29:51.082: INFO: Container cmk-webhook ready: true, restart count 0 Aug 28 03:29:51.082: INFO: tas-telemetry-aware-scheduling-575ccbc9d4-v99df started at 2021-08-27 21:02:08 +0000 UTC (0+2 container statuses recorded) Aug 28 03:29:51.082: INFO: Container tas-controller ready: true, restart count 0 Aug 28 03:29:51.082: INFO: Container tas-extender ready: true, restart count 0 Aug 28 03:29:51.082: INFO: kube-flannel-t9qv4 started at 2021-08-27 20:48:48 +0000 UTC (1+1 container statuses recorded) Aug 28 03:29:51.082: INFO: Init container install-cni ready: true, restart count 0 Aug 28 03:29:51.082: INFO: Container kube-flannel ready: true, restart count 2 Aug 28 03:29:51.082: INFO: cmk-init-discover-node2-l9qjd started at 2021-08-27 20:57:57 +0000 UTC (0+3 container statuses recorded) Aug 28 03:29:51.082: INFO: Container discover ready: false, restart count 0 Aug 28 03:29:51.082: INFO: Container init ready: false, restart count 0 Aug 28 03:29:51.082: INFO: Container install ready: false, restart count 0 Aug 28 03:29:51.083: INFO: node-feature-discovery-worker-54lfh started at 2021-08-27 20:55:06 +0000 UTC (0+1 container statuses recorded) Aug 28 03:29:51.083: INFO: Container nfd-worker ready: true, restart count 0 Aug 28 03:29:51.083: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-4f962 started at 2021-08-27 20:55:51 +0000 UTC (0+1 container statuses recorded) Aug 28 03:29:51.083: INFO: Container kube-sriovdp ready: true, restart count 0 W0828 03:29:51.096795 23 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Aug 28 03:29:51.147: INFO: Latency metrics for node node2 Aug 28 03:29:51.147: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-9313" for this suite. 
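Note on the failure verdict below: "some pods failed to complete within 5m0s" means the spec's write pods never finished inside the five-minute budget (the "Container write-pod ready: false" entries in the node1 dump above, for pods started at 03:24:45, line up with that window), and the accompanying "timed out waiting for the condition" is the generic timeout error produced by the polling helpers in k8s.io/apimachinery/pkg/util/wait rather than a storage-specific message. A minimal, hand-written Go sketch of how that exact string surfaces (the pod-completion check is a placeholder, not the suite's real logic):

    // Sketch only: reproducing the generic wait-package timeout error.
    // Requires k8s.io/apimachinery in go.mod.
    package main

    import (
        "fmt"
        "time"

        "k8s.io/apimachinery/pkg/util/wait"
    )

    func main() {
        // Poll every 2s for up to 5m -- the "5m0s" budget in the log above.
        err := wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
            allPodsCompleted := false // placeholder: "have all stress pods finished?"
            return allPodsCompleted, nil
        })
        // When the budget expires, PollImmediate returns wait.ErrWaitTimeout,
        // whose Error() is exactly "timed out waiting for the condition".
        fmt.Println(err)
    }

The verdict therefore points at where the wait was declared (persistent_volumes-local.go:610) but not at why the pods stalled; that has to be read out of the kubelet dumps above.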
• Failure [349.941 seconds]
[sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Stress with local volumes [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:427
    should be able to process many pods and reuse local volumes [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:517

    Aug 28 03:29:44.029: some pods failed to complete within 5m0s
    Unexpected error:
        <*errors.errorString | 0xc0002bc200>: {
            s: "timed out waiting for the condition",
        }
        timed out waiting for the condition
    occurred

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:610
------------------------------
{"msg":"FAILED [sig-storage] PersistentVolumes-local Stress with local volumes [Serial] should be able to process many pods and reuse local volumes","total":20,"completed":0,"skipped":3339,"failed":2,"failures":["[sig-storage] PersistentVolumes-local Pods sharing a single local PV [Serial] all pods should be running","[sig-storage] PersistentVolumes-local Stress with local volumes [Serial] should be able to process many pods and reuse local volumes"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] [Serial] Volume metrics should create prometheus metrics for volume provisioning errors [Slow] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:146 [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 28 03:29:51.156: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pv STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:56 Aug 28 03:29:51.177: INFO: Only supported for providers [gce gke aws] (not skeleton) [AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 28 03:29:51.179: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pv-4748" for this suite. 
[AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:81 S [SKIPPING] in Spec Setup (BeforeEach) [0.033 seconds] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should create prometheus metrics for volume provisioning errors [Slow] [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:146 Only supported for providers [gce gke aws] (not skeleton) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:60 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] PersistentVolumes-local [Volume type: gce-localssd-scsi-fs] [Serial] One pod requesting one prebound PVC should be able to mount volume and write from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:234 [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 28 03:29:51.200: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:155 [BeforeEach] [Volume type: gce-localssd-scsi-fs] [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:191 Aug 28 03:29:55.249: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l] Namespace:persistent-local-volumes-test-5718 PodName:hostexec-node1-chj8s ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true} Aug 28 03:29:55.249: INFO: >>> kubeConfig: /root/.kube/config Aug 28 03:29:55.370: INFO: exec node1: command: ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l Aug 28 03:29:55.370: INFO: exec node1: stdout: "0\n" Aug 28 03:29:55.370: INFO: exec node1: stderr: "ls: cannot access /mnt/disks/by-uuid/google-local-ssds-scsi-fs/: No such file or directory\n" Aug 28 03:29:55.370: INFO: exec node1: exit code: 0 Aug 28 03:29:55.370: INFO: Requires at least 1 scsi fs localSSD [AfterEach] [Volume type: gce-localssd-scsi-fs] [Serial] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:200 STEP: Cleaning up PVC and PV [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 28 03:29:55.372: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-5718" for this suite. S [SKIPPING] in Spec Setup (BeforeEach) [4.180 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: gce-localssd-scsi-fs] [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:188 One pod requesting one prebound PVC [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:205 should be able to mount volume and write from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:234 Requires at least 1 scsi fs localSSD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:1241 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Pod Disks [Serial] attach on previously attached volumes should work /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:457 [BeforeEach] [sig-storage] Pod Disks /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 28 03:29:55.381: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-disks STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Pod Disks /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:75 [It] [Serial] attach on previously attached volumes should work /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:457 Aug 28 03:29:55.407: INFO: Only supported for providers [gce gke aws] (not skeleton) [AfterEach] [sig-storage] Pod Disks /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 28 03:29:55.408: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-disks-2882" for this suite. 
S [SKIPPING] [0.044 seconds] [sig-storage] Pod Disks /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Serial] attach on previously attached volumes should work [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:457 Only supported for providers [gce gke aws] (not skeleton) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:458 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] [Serial] Volume metrics PVController should create bound pv/pvc count metrics for pvc controller after creating both pv and pvc /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:499 [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 28 03:29:55.428: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pv STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:56 Aug 28 03:29:55.448: INFO: Only supported for providers [gce gke aws] (not skeleton) [AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 28 03:29:55.450: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pv-3265" for this suite. 
[AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:81 S [SKIPPING] in Spec Setup (BeforeEach) [0.031 seconds] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 PVController [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:382 should create bound pv/pvc count metrics for pvc controller after creating both pv and pvc /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:499 Only supported for providers [gce gke aws] (not skeleton) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:60 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] [Serial] Volume metrics should create metrics for total time taken in volume operations in P/V Controller /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:260 [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 28 03:29:55.461: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pv STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:56 Aug 28 03:29:55.480: INFO: Only supported for providers [gce gke aws] (not skeleton) [AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 28 03:29:55.481: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pv-7906" for this suite. 
[AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:81 S [SKIPPING] in Spec Setup (BeforeEach) [0.027 seconds] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should create metrics for total time taken in volume operations in P/V Controller [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:260 Only supported for providers [gce gke aws] (not skeleton) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:60 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] PersistentVolumes-local [Volume type: gce-localssd-scsi-fs] [Serial] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:251 [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 28 03:29:55.494: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:155 [BeforeEach] [Volume type: gce-localssd-scsi-fs] [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:191 Aug 28 03:29:59.537: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l] Namespace:persistent-local-volumes-test-3064 PodName:hostexec-node1-xjdf8 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true} Aug 28 03:29:59.537: INFO: >>> kubeConfig: /root/.kube/config Aug 28 03:29:59.656: INFO: exec node1: command: ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l Aug 28 03:29:59.656: INFO: exec node1: stdout: "0\n" Aug 28 03:29:59.656: INFO: exec node1: stderr: "ls: cannot access /mnt/disks/by-uuid/google-local-ssds-scsi-fs/: No such file or directory\n" Aug 28 03:29:59.656: INFO: exec node1: exit code: 0 Aug 28 03:29:59.656: INFO: Requires at least 1 scsi fs localSSD [AfterEach] [Volume type: gce-localssd-scsi-fs] [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:200 STEP: Cleaning up PVC and PV [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 28 03:29:59.658: 
INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-3064" for this suite. S [SKIPPING] in Spec Setup (BeforeEach) [4.177 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: gce-localssd-scsi-fs] [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:188 Two pods mounting a local volume one after the other [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:250 should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:251 Requires at least 1 scsi fs localSSD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:1241 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] PersistentVolumes-local [Volume type: gce-localssd-scsi-fs] [Serial] One pod requesting one prebound PVC should be able to mount volume and read from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:228 [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 28 03:29:59.676: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:155 [BeforeEach] [Volume type: gce-localssd-scsi-fs] [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:191 Aug 28 03:30:01.724: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l] Namespace:persistent-local-volumes-test-7307 PodName:hostexec-node1-7wsrz ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true} Aug 28 03:30:01.724: INFO: >>> kubeConfig: /root/.kube/config Aug 28 03:30:01.845: INFO: exec node1: command: ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l Aug 28 03:30:01.845: INFO: exec node1: stdout: "0\n" Aug 28 03:30:01.845: INFO: exec node1: stderr: "ls: cannot access /mnt/disks/by-uuid/google-local-ssds-scsi-fs/: No such file or directory\n" Aug 28 03:30:01.845: INFO: exec node1: exit code: 0 Aug 28 03:30:01.845: INFO: Requires at least 1 scsi fs localSSD [AfterEach] [Volume type: gce-localssd-scsi-fs] [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:200 STEP: Cleaning up PVC and PV [AfterEach] [sig-storage] 
PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 28 03:30:01.847: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-7307" for this suite. S [SKIPPING] in Spec Setup (BeforeEach) [2.180 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: gce-localssd-scsi-fs] [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:188 One pod requesting one prebound PVC [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:205 should be able to mount volume and read from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:228 Requires at least 1 scsi fs localSSD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:1241
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
Aug 28 03:30:01.860: INFO: Running AfterSuite actions on all nodes
Aug 28 03:30:01.860: INFO: Running AfterSuite actions on node 1
Aug 28 03:30:01.860: INFO: Skipping dumping logs from cluster
JUnit report was created: /home/opnfv/functest/results/sig_storage_serial/junit_01.xml
{"msg":"Test Suite completed","total":20,"completed":0,"skipped":5482,"failed":2,"failures":["[sig-storage] PersistentVolumes-local Pods sharing a single local PV [Serial] all pods should be running","[sig-storage] PersistentVolumes-local Stress with local volumes [Serial] should be able to process many pods and reuse local volumes"]}

Summarizing 2 Failures:

[Fail] [sig-storage] PersistentVolumes-local Pods sharing a single local PV [Serial] [It] all pods should be running
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:683

[Fail] [sig-storage] PersistentVolumes-local Stress with local volumes [Serial] [It] should be able to process many pods and reuse local volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:610

Ran 2 of 5484 Specs in 675.754 seconds
FAIL! -- 0 Passed | 2 Failed | 0 Pending | 5482 Skipped
--- FAIL: TestE2E (675.89s)
FAIL

Ginkgo ran 1 suite in 11m17.069639371s
Test Suite Failed
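Closing note on the tally: of the suite's 20 targeted specs, none passed. The only two that got past their BeforeEach were the local-volume specs summarized above, and both failed on the same wait timeout. Everything else skipped for one of two environmental reasons recorded in the log: the provider gate ("Only supported for providers [gce gke aws] (not skeleton)"), because this run used the skeleton provider against a bare-metal cluster, or the local-SSD probe ("Requires at least 1 scsi fs localSSD"), whose nsenter ls of /mnt/disks/by-uuid/google-local-ssds-scsi-fs found nothing. Below is a minimal, hand-written sketch of the provider gate; the real framework helper (named along the lines of SkipUnlessProviderIs) issues a Ginkgo skip, whereas this version only reports:

    // Sketch only: the provider gate behind most of the skips above.
    package main

    import "fmt"

    // testProvider stands in for the value of the e2e.test --provider flag.
    var testProvider = "skeleton"

    // skipUnlessProviderIs reports whether the spec should be skipped
    // because the current provider is not in the supported list.
    func skipUnlessProviderIs(supported ...string) bool {
        for _, p := range supported {
            if p == testProvider {
                return false // supported provider; run the spec
            }
        }
        // Same message repeated throughout the log.
        fmt.Printf("Only supported for providers %v (not %s)\n", supported, testProvider)
        return true
    }

    func main() {
        if skipUnlessProviderIs("gce", "gke", "aws") {
            fmt.Println("S [SKIPPING] in Spec Setup (BeforeEach)")
        }
    }

To chase just the two failures on a rerun, a Ginkgo focus regex over the failing spec names (a standard flag of compiled Ginkgo test binaries, given here as a suggestion rather than the invocation this run actually used) would be -ginkgo.focus='PersistentVolumes-local.*(Pods sharing a single local PV|Stress with local volumes)'.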