I0522 01:35:13.242314 21 test_context.go:429] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready
I0522 01:35:13.242427 21 e2e.go:129] Starting e2e run "74e241f6-eac0-4ae4-82e2-cb9b32927126" on Ginkgo node 1
{"msg":"Test Suite starting","total":17,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1621647312 - Will randomize all specs
Will run 17 of 5484 specs

May 22 01:35:13.315: INFO: >>> kubeConfig: /root/.kube/config
May 22 01:35:13.319: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
May 22 01:35:13.347: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
May 22 01:35:13.406: INFO: The status of Pod cmk-init-discover-node1-48g7j is Succeeded, skipping waiting
May 22 01:35:13.406: INFO: 40 / 41 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
May 22 01:35:13.406: INFO: expected 8 pod replicas in namespace 'kube-system', 8 are Running and Ready.
May 22 01:35:13.406: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
May 22 01:35:13.423: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'cmk' (0 seconds elapsed)
May 22 01:35:13.423: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-flannel' (0 seconds elapsed)
May 22 01:35:13.423: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-arm' (0 seconds elapsed)
May 22 01:35:13.423: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-arm64' (0 seconds elapsed)
May 22 01:35:13.423: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-ppc64le' (0 seconds elapsed)
May 22 01:35:13.423: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-s390x' (0 seconds elapsed)
May 22 01:35:13.423: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-multus-ds-amd64' (0 seconds elapsed)
May 22 01:35:13.423: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
May 22 01:35:13.423: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'node-feature-discovery-worker' (0 seconds elapsed)
May 22 01:35:13.423: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'sriov-net-dp-kube-sriov-device-plugin-amd64' (0 seconds elapsed)
May 22 01:35:13.423: INFO: e2e test version: v1.19.10
May 22 01:35:13.425: INFO: kube-apiserver version: v1.19.8
May 22 01:35:13.425: INFO: >>> kubeConfig: /root/.kube/config
May 22 01:35:13.434: INFO: Cluster IP family: ipv4
------------------------------
[sig-storage] [Serial] Volume metrics should create metrics for total number of volumes in A/D Controller
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:321
[BeforeEach] [sig-storage] [Serial] Volume metrics
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 22 01:35:13.443: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pv
May 22 01:35:13.465: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
May 22 01:35:13.468: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] [Serial] Volume metrics
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:56
May 22 01:35:13.470: INFO: Only supported for providers [gce gke aws] (not skeleton)
[AfterEach] [sig-storage] [Serial] Volume metrics
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 22 01:35:13.472: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pv-3433" for this suite.
[AfterEach] [sig-storage] [Serial] Volume metrics
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:81

S [SKIPPING] in Spec Setup (BeforeEach) [0.037 seconds]
[sig-storage] [Serial] Volume metrics
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  should create metrics for total number of volumes in A/D Controller [BeforeEach]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:321

  Only supported for providers [gce gke aws] (not skeleton)

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:60
------------------------------
[sig-storage] Pod Disks [Serial] attach on previously attached volumes should work
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:457
[BeforeEach] [sig-storage] Pod Disks
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 22 01:35:13.484: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-disks
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Pod Disks
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:75
[It] [Serial] attach on previously attached volumes should work
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:457
May 22 01:35:13.515: INFO: Only supported for providers [gce gke aws] (not skeleton)
[AfterEach] [sig-storage] Pod Disks
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 22 01:35:13.516: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-disks-2727" for this suite.

S [SKIPPING] [0.041 seconds]
[sig-storage] Pod Disks
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Serial] attach on previously attached volumes should work [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:457

  Only supported for providers [gce gke aws] (not skeleton)

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:458
------------------------------
[sig-storage] PersistentVolumes-local Pods sharing a single local PV [Serial] all pods should be running
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:642
[BeforeEach] [sig-storage] PersistentVolumes-local
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 22 01:35:13.526: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename persistent-local-volumes-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] PersistentVolumes-local
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:155
[BeforeEach] Pods sharing a single local PV [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:619
[It] all pods should be running
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:642
STEP: Create a PVC
STEP: Create 50 pods to use this PVC
STEP: Wait for all pods are running
May 22 01:40:14.065: FAIL: Some pods are not running within 5m0s
Unexpected error:
    <*errors.errorString | 0xc0003001f0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/storage.glob..func20.7.3()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:683 +0x748
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc001a08180)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x345
k8s.io/kubernetes/test/e2e.TestE2E(0xc001a08180)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:145 +0x2b
testing.tRunner(0xc001a08180, 0x4de37a0)
	/usr/local/go/src/testing/testing.go:1123 +0xef
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1168 +0x2b3
[AfterEach] Pods sharing a single local PV [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:633
STEP: Clean PV local-pvzzt5x
[AfterEach] [sig-storage] PersistentVolumes-local
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
STEP: Collecting events from namespace "persistent-local-volumes-test-8512".
STEP: Found 384 events.
May 22 01:40:14.089: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for pod-0806eda0-8dd0-499b-8419-f4c96fb3bf90: { } Scheduled: Successfully assigned persistent-local-volumes-test-8512/pod-0806eda0-8dd0-499b-8419-f4c96fb3bf90 to node1
May 22 01:40:14.089: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for pod-0ef0bcb1-3e96-4db7-875a-08191af24833: { } Scheduled: Successfully assigned persistent-local-volumes-test-8512/pod-0ef0bcb1-3e96-4db7-875a-08191af24833 to node1
May 22 01:40:14.089: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for pod-16028087-0ae3-489e-8b7b-2f309c12136f: { } Scheduled: Successfully assigned persistent-local-volumes-test-8512/pod-16028087-0ae3-489e-8b7b-2f309c12136f to node1
May 22 01:40:14.089: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for pod-166d9b83-9cda-4fee-b89b-128f7e31b41e: { } Scheduled: Successfully assigned persistent-local-volumes-test-8512/pod-166d9b83-9cda-4fee-b89b-128f7e31b41e to node1
May 22 01:40:14.089: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for pod-17c41de8-712a-4b64-8e12-b9167f1ccc82: { } Scheduled: Successfully assigned persistent-local-volumes-test-8512/pod-17c41de8-712a-4b64-8e12-b9167f1ccc82 to node1
May 22 01:40:14.089: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for pod-23fd4946-8b00-4e14-87d6-628987f1196e: { } Scheduled: Successfully assigned persistent-local-volumes-test-8512/pod-23fd4946-8b00-4e14-87d6-628987f1196e to node1
May 22 01:40:14.089: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for pod-25416362-b799-49a7-9006-efdf18ad9c5d: { } Scheduled: Successfully assigned persistent-local-volumes-test-8512/pod-25416362-b799-49a7-9006-efdf18ad9c5d to node1
May 22 01:40:14.089: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for pod-2ce73150-a2e6-468a-87df-ddb28cb19b3b: { } Scheduled: Successfully assigned persistent-local-volumes-test-8512/pod-2ce73150-a2e6-468a-87df-ddb28cb19b3b to node1
May 22 01:40:14.089: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for pod-33f6af65-447d-47e5-856a-ee0520e50654: { } Scheduled: Successfully assigned persistent-local-volumes-test-8512/pod-33f6af65-447d-47e5-856a-ee0520e50654 to node1
May 22 01:40:14.089: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for pod-3aa7b4b3-1a97-4094-ab85-a439721516b1: { } Scheduled: Successfully assigned persistent-local-volumes-test-8512/pod-3aa7b4b3-1a97-4094-ab85-a439721516b1 to node1
May 22 01:40:14.089: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for pod-4c5b332a-c9d0-4347-8498-3906e84828c4: { } Scheduled: Successfully assigned persistent-local-volumes-test-8512/pod-4c5b332a-c9d0-4347-8498-3906e84828c4 to node1
May 22 01:40:14.089: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for pod-51a38ecb-aa5d-4cd5-9e9a-5e01ba11b26a: { } Scheduled: Successfully assigned persistent-local-volumes-test-8512/pod-51a38ecb-aa5d-4cd5-9e9a-5e01ba11b26a to node1
May 22 01:40:14.089: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for pod-5247e6a6-6702-4904-9bd2-93e1126ee447: { } Scheduled: Successfully assigned persistent-local-volumes-test-8512/pod-5247e6a6-6702-4904-9bd2-93e1126ee447 to node1
May 22 01:40:14.089: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for pod-52aa3294-e25c-4d72-bd4f-7a84f4491e1e: { } Scheduled: Successfully assigned persistent-local-volumes-test-8512/pod-52aa3294-e25c-4d72-bd4f-7a84f4491e1e to node1
May 22 01:40:14.089: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for pod-550c8d3c-c425-44a5-86c7-6aa695148c5b: { } Scheduled: Successfully assigned persistent-local-volumes-test-8512/pod-550c8d3c-c425-44a5-86c7-6aa695148c5b to node1
May 22 01:40:14.089: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for pod-57430379-2a25-4b50-80f8-1f9d8fb22de4: { } Scheduled: Successfully assigned persistent-local-volumes-test-8512/pod-57430379-2a25-4b50-80f8-1f9d8fb22de4 to node1
May 22 01:40:14.090: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for pod-6663facf-422c-4de6-a474-fc640916e836: { } Scheduled: Successfully assigned persistent-local-volumes-test-8512/pod-6663facf-422c-4de6-a474-fc640916e836 to node1
May 22 01:40:14.090: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for pod-6a3981b8-47b7-4324-aa24-42b9517fa1b8: { } Scheduled: Successfully assigned persistent-local-volumes-test-8512/pod-6a3981b8-47b7-4324-aa24-42b9517fa1b8 to node1
May 22 01:40:14.090: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for pod-6a5ca55b-7fea-4e70-81f4-251c29d7a077: { } Scheduled: Successfully assigned persistent-local-volumes-test-8512/pod-6a5ca55b-7fea-4e70-81f4-251c29d7a077 to node1
May 22 01:40:14.090: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for pod-6acee2ee-d14c-4b78-9ff8-3e1f242a266a: { } Scheduled: Successfully assigned persistent-local-volumes-test-8512/pod-6acee2ee-d14c-4b78-9ff8-3e1f242a266a to node1
May 22 01:40:14.090: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for pod-79204cb4-4be8-4740-8cba-d13da7fa71de: { } Scheduled: Successfully assigned persistent-local-volumes-test-8512/pod-79204cb4-4be8-4740-8cba-d13da7fa71de to node1
May 22 01:40:14.090: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for pod-81772626-74ba-492c-8d3e-0972578acee5: { } Scheduled: Successfully assigned persistent-local-volumes-test-8512/pod-81772626-74ba-492c-8d3e-0972578acee5 to node1
May 22 01:40:14.090: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for pod-83cf990b-2e75-49d0-978d-3c83c55b4b18: { } Scheduled: Successfully assigned persistent-local-volumes-test-8512/pod-83cf990b-2e75-49d0-978d-3c83c55b4b18 to node1
May 22 01:40:14.090: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for pod-958e2d96-1d04-4d0b-87e0-7be3d23c258c: { } Scheduled: Successfully assigned persistent-local-volumes-test-8512/pod-958e2d96-1d04-4d0b-87e0-7be3d23c258c to node1
May 22 01:40:14.090: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for pod-96515024-2799-4240-ad76-2e3f70eb11c4: { } Scheduled: Successfully assigned persistent-local-volumes-test-8512/pod-96515024-2799-4240-ad76-2e3f70eb11c4 to node1
May 22 01:40:14.090: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for pod-9a8fdccc-3091-428e-8cd6-ce188c633067: { } Scheduled: Successfully assigned persistent-local-volumes-test-8512/pod-9a8fdccc-3091-428e-8cd6-ce188c633067 to node1
May 22 01:40:14.090: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for pod-9c79a773-79d1-4e12-8a5f-7c40fd09ac29: { } Scheduled: Successfully assigned persistent-local-volumes-test-8512/pod-9c79a773-79d1-4e12-8a5f-7c40fd09ac29 to node1
May 22 01:40:14.090: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for pod-9d9e2186-4fd6-4a51-9dd4-f11b858ec786: { } Scheduled: Successfully assigned persistent-local-volumes-test-8512/pod-9d9e2186-4fd6-4a51-9dd4-f11b858ec786 to node1
May 22 01:40:14.090: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for pod-a1f65dd2-4488-4e0f-b5b0-7a2a720b0782: { } Scheduled: Successfully assigned persistent-local-volumes-test-8512/pod-a1f65dd2-4488-4e0f-b5b0-7a2a720b0782 to node1
May 22 01:40:14.090: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for pod-a7e733ab-c9e1-4a96-8140-66bfbe914aa7: { } Scheduled: Successfully assigned persistent-local-volumes-test-8512/pod-a7e733ab-c9e1-4a96-8140-66bfbe914aa7 to node1
May 22 01:40:14.090: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for pod-ac068ac2-7008-4da8-84c9-09b597b100b5: { } Scheduled: Successfully assigned persistent-local-volumes-test-8512/pod-ac068ac2-7008-4da8-84c9-09b597b100b5 to node1
May 22 01:40:14.090: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for pod-b5622c92-09b0-411f-8700-8aea35ec6f61: { } Scheduled: Successfully assigned persistent-local-volumes-test-8512/pod-b5622c92-09b0-411f-8700-8aea35ec6f61 to node1
May 22 01:40:14.090: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for pod-bdc092d6-9ec5-4586-82d9-f8b25e423447: { } Scheduled: Successfully assigned persistent-local-volumes-test-8512/pod-bdc092d6-9ec5-4586-82d9-f8b25e423447 to node1
May 22 01:40:14.090: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for pod-be10ab73-38c1-4d88-82d7-9594ce87d7f2: { } Scheduled: Successfully assigned persistent-local-volumes-test-8512/pod-be10ab73-38c1-4d88-82d7-9594ce87d7f2 to node1
May 22 01:40:14.090: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for pod-c4e86621-d05a-4ab6-b894-950537713573: { } Scheduled: Successfully assigned persistent-local-volumes-test-8512/pod-c4e86621-d05a-4ab6-b894-950537713573 to node1
May 22 01:40:14.090: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for pod-c506f916-7ca2-4154-9d6c-30ceb910a947: { } Scheduled: Successfully assigned persistent-local-volumes-test-8512/pod-c506f916-7ca2-4154-9d6c-30ceb910a947 to node1
May 22 01:40:14.090: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for pod-c59e1fe4-fe07-4049-8c93-09d9a5d6fcd0: { } Scheduled: Successfully assigned persistent-local-volumes-test-8512/pod-c59e1fe4-fe07-4049-8c93-09d9a5d6fcd0 to node1
May 22 01:40:14.090: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for pod-cb0494ec-13e3-4046-a345-1d02ce011718: { } Scheduled: Successfully assigned persistent-local-volumes-test-8512/pod-cb0494ec-13e3-4046-a345-1d02ce011718 to node1
May 22 01:40:14.090: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for pod-d593da45-ae55-40b4-b38a-616eaafcc189: { } Scheduled: Successfully assigned persistent-local-volumes-test-8512/pod-d593da45-ae55-40b4-b38a-616eaafcc189 to node1
May 22 01:40:14.090: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for pod-d8aa6580-4df3-4e8e-87e2-19954daee3ce: { } Scheduled: Successfully assigned persistent-local-volumes-test-8512/pod-d8aa6580-4df3-4e8e-87e2-19954daee3ce to node1
May 22 01:40:14.090: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for pod-de79b91d-d259-449a-9636-9b73aa3abbf4: { } Scheduled: Successfully assigned persistent-local-volumes-test-8512/pod-de79b91d-d259-449a-9636-9b73aa3abbf4 to node1
May 22 01:40:14.090: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for pod-e325b6cd-538a-49cc-b2d2-f5610d29e416: { } Scheduled: Successfully assigned persistent-local-volumes-test-8512/pod-e325b6cd-538a-49cc-b2d2-f5610d29e416 to node1
May 22 01:40:14.090: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for pod-e905733c-287b-4a80-b63d-06c4b6d5effd: { } Scheduled: Successfully assigned persistent-local-volumes-test-8512/pod-e905733c-287b-4a80-b63d-06c4b6d5effd to node1
May 22 01:40:14.090: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for pod-eb2dd31d-3c35-44f2-a121-e69e63ba64ad: { } Scheduled: Successfully assigned persistent-local-volumes-test-8512/pod-eb2dd31d-3c35-44f2-a121-e69e63ba64ad to node1
May 22 01:40:14.090: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for pod-ee148ef8-0618-42d8-bdb3-74178162c2d0: { } Scheduled: Successfully assigned persistent-local-volumes-test-8512/pod-ee148ef8-0618-42d8-bdb3-74178162c2d0 to node1
May 22 01:40:14.090: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for pod-f75bea85-5332-46b1-8674-f383dac77d5c: { } Scheduled: Successfully assigned persistent-local-volumes-test-8512/pod-f75bea85-5332-46b1-8674-f383dac77d5c to node1
May 22 01:40:14.090: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for pod-fb82cc33-a266-43a8-93a5-f5203a922ef1: { } Scheduled: Successfully assigned persistent-local-volumes-test-8512/pod-fb82cc33-a266-43a8-93a5-f5203a922ef1 to node1
May 22 01:40:14.090: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for pod-fbe47191-a015-44ef-a732-9f8f673c81a1: { } Scheduled: Successfully assigned persistent-local-volumes-test-8512/pod-fbe47191-a015-44ef-a732-9f8f673c81a1 to node1
May 22 01:40:14.090: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for pod-fc92ed50-9307-4502-99c5-8d7b7aee5c25: { } Scheduled: Successfully assigned persistent-local-volumes-test-8512/pod-fc92ed50-9307-4502-99c5-8d7b7aee5c25 to node1
May 22 01:40:14.090: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for pod-ff2283db-c4be-44af-ba93-13621c43139d: { } Scheduled: Successfully assigned persistent-local-volumes-test-8512/pod-ff2283db-c4be-44af-ba93-13621c43139d to node1
May 22 01:40:14.090: INFO: At 2021-05-22 01:35:16 +0000 UTC - event for pod-33f6af65-447d-47e5-856a-ee0520e50654: {multus } AddedInterface: Add eth0 [10.244.3.169/24]
May 22 01:40:14.090: INFO: At 2021-05-22 01:35:16 +0000 UTC - event for pod-33f6af65-447d-47e5-856a-ee0520e50654: {kubelet node1} Pulling: Pulling image "docker.io/library/busybox:1.29"
May 22 01:40:14.090: INFO: At 2021-05-22 01:35:17 +0000 UTC - event for pod-a7e733ab-c9e1-4a96-8140-66bfbe914aa7: {multus } AddedInterface: Add eth0 [10.244.3.170/24]
May 22 01:40:14.090: INFO: At 2021-05-22 01:35:17 +0000 UTC - event for pod-a7e733ab-c9e1-4a96-8140-66bfbe914aa7: {kubelet node1} Pulling: Pulling image "docker.io/library/busybox:1.29"
May 22 01:40:14.090: INFO: At 2021-05-22 01:35:18 +0000 UTC - event for pod-33f6af65-447d-47e5-856a-ee0520e50654: {kubelet node1} Created: Created container write-pod
May 22 01:40:14.090: INFO: At 2021-05-22 01:35:18 +0000 UTC - event for pod-33f6af65-447d-47e5-856a-ee0520e50654: {kubelet node1} Started: Started container write-pod
May 22 01:40:14.090: INFO: At 2021-05-22 01:35:18 +0000 UTC - event for pod-33f6af65-447d-47e5-856a-ee0520e50654: {kubelet node1} Pulled: Successfully pulled image "docker.io/library/busybox:1.29" in 1.43534353s
May 22 01:40:14.090: INFO: At 2021-05-22 01:35:18 +0000 UTC - event for pod-9c79a773-79d1-4e12-8a5f-7c40fd09ac29: {kubelet node1} Pulling: Pulling image "docker.io/library/busybox:1.29"
May 22 01:40:14.090: INFO: At 2021-05-22 01:35:18 +0000 UTC - event for pod-9c79a773-79d1-4e12-8a5f-7c40fd09ac29: {multus } AddedInterface: Add eth0 [10.244.3.171/24]
May 22 01:40:14.090: INFO: At 2021-05-22 01:35:19 +0000 UTC - event for pod-a1f65dd2-4488-4e0f-b5b0-7a2a720b0782: {kubelet node1} Pulling: Pulling image "docker.io/library/busybox:1.29"
May 22 01:40:14.090: INFO: At 2021-05-22 01:35:19 +0000 UTC - event for pod-a1f65dd2-4488-4e0f-b5b0-7a2a720b0782: {multus } AddedInterface: Add eth0 [10.244.3.172/24]
May 22 01:40:14.090: INFO: At 2021-05-22 01:35:19 +0000 UTC - event for pod-a7e733ab-c9e1-4a96-8140-66bfbe914aa7: {kubelet node1} Failed: Error: ErrImagePull
May 22 01:40:14.090: INFO: At 2021-05-22 01:35:19 +0000 UTC - event for pod-a7e733ab-c9e1-4a96-8140-66bfbe914aa7: {kubelet node1} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
May 22 01:40:14.090: INFO: At 2021-05-22 01:35:20 +0000 UTC - event for pod-9c79a773-79d1-4e12-8a5f-7c40fd09ac29: {kubelet node1} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
May 22 01:40:14.090: INFO: At 2021-05-22 01:35:20 +0000 UTC - event for pod-9c79a773-79d1-4e12-8a5f-7c40fd09ac29: {kubelet node1} Failed: Error: ErrImagePull
May 22 01:40:14.090: INFO: At 2021-05-22 01:35:20 +0000 UTC - event for pod-a7e733ab-c9e1-4a96-8140-66bfbe914aa7: {kubelet node1} BackOff: Back-off pulling image "docker.io/library/busybox:1.29"
May 22 01:40:14.090: INFO: At 2021-05-22 01:35:20 +0000 UTC - event for pod-a7e733ab-c9e1-4a96-8140-66bfbe914aa7: {kubelet node1} Failed: Error: ImagePullBackOff
May 22 01:40:14.090: INFO: At 2021-05-22 01:35:20 +0000 UTC - event for pod-bdc092d6-9ec5-4586-82d9-f8b25e423447: {kubelet node1} Pulling: Pulling image "docker.io/library/busybox:1.29"
May 22 01:40:14.090: INFO: At 2021-05-22 01:35:20 +0000 UTC - event for pod-bdc092d6-9ec5-4586-82d9-f8b25e423447: {multus } AddedInterface: Add eth0 [10.244.3.173/24]
May 22 01:40:14.090: INFO: At 2021-05-22 01:35:21 +0000 UTC - event for pod-81772626-74ba-492c-8d3e-0972578acee5: {kubelet node1} Pulling: Pulling image "docker.io/library/busybox:1.29"
May 22 01:40:14.090: INFO: At 2021-05-22 01:35:21 +0000 UTC - event for pod-81772626-74ba-492c-8d3e-0972578acee5: {multus } AddedInterface: Add eth0 [10.244.3.174/24]
May 22 01:40:14.090: INFO: At 2021-05-22 01:35:21 +0000 UTC - event for pod-a1f65dd2-4488-4e0f-b5b0-7a2a720b0782: {kubelet node1} Failed: Error: ErrImagePull
May 22 01:40:14.090: INFO: At 2021-05-22 01:35:21 +0000 UTC - event for pod-a1f65dd2-4488-4e0f-b5b0-7a2a720b0782: {kubelet node1} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
May 22 01:40:14.090: INFO: At 2021-05-22 01:35:22 +0000 UTC - event for pod-52aa3294-e25c-4d72-bd4f-7a84f4491e1e: {multus } AddedInterface: Add eth0 [10.244.3.175/24]
May 22 01:40:14.090: INFO: At 2021-05-22 01:35:22 +0000 UTC - event for pod-52aa3294-e25c-4d72-bd4f-7a84f4491e1e: {kubelet node1} Pulling: Pulling image "docker.io/library/busybox:1.29"
May 22 01:40:14.090: INFO: At 2021-05-22 01:35:22 +0000 UTC - event for pod-9c79a773-79d1-4e12-8a5f-7c40fd09ac29: {kubelet node1} Failed: Error: ImagePullBackOff
May 22 01:40:14.090: INFO: At 2021-05-22 01:35:22 +0000 UTC - event for pod-9c79a773-79d1-4e12-8a5f-7c40fd09ac29: {kubelet node1} BackOff: Back-off pulling image "docker.io/library/busybox:1.29"
May 22 01:40:14.090: INFO: At 2021-05-22 01:35:22 +0000 UTC - event for pod-a1f65dd2-4488-4e0f-b5b0-7a2a720b0782: {kubelet node1} Failed: Error: ImagePullBackOff
May 22 01:40:14.090: INFO: At 2021-05-22 01:35:22 +0000 UTC - event for pod-a1f65dd2-4488-4e0f-b5b0-7a2a720b0782: {kubelet node1} BackOff: Back-off pulling image "docker.io/library/busybox:1.29"
May 22 01:40:14.090: INFO: At 2021-05-22 01:35:22 +0000 UTC - event for pod-bdc092d6-9ec5-4586-82d9-f8b25e423447: {kubelet node1} Failed: Error: ErrImagePull
May 22 01:40:14.090: INFO: At 2021-05-22 01:35:22 +0000 UTC - event for pod-bdc092d6-9ec5-4586-82d9-f8b25e423447: {kubelet node1} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
May 22 01:40:14.090: INFO: At 2021-05-22 01:35:23 +0000 UTC - event for pod-2ce73150-a2e6-468a-87df-ddb28cb19b3b: {kubelet node1} Pulling: Pulling image "docker.io/library/busybox:1.29"
May 22 01:40:14.090: INFO: At 2021-05-22 01:35:23 +0000 UTC - event for pod-2ce73150-a2e6-468a-87df-ddb28cb19b3b: {multus } AddedInterface: Add eth0 [10.244.3.176/24]
May 22 01:40:14.090: INFO: At 2021-05-22 01:35:23 +0000 UTC - event for pod-81772626-74ba-492c-8d3e-0972578acee5: {kubelet node1} Failed: Error: ErrImagePull
May 22 01:40:14.090: INFO: At 2021-05-22 01:35:23 +0000 UTC - event for pod-81772626-74ba-492c-8d3e-0972578acee5: {kubelet node1} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
May 22 01:40:14.090: INFO: At 2021-05-22 01:35:24 +0000 UTC - event for pod-23fd4946-8b00-4e14-87d6-628987f1196e: {kubelet node1} Pulling: Pulling image "docker.io/library/busybox:1.29"
May 22 01:40:14.090: INFO: At 2021-05-22 01:35:24 +0000 UTC - event for pod-23fd4946-8b00-4e14-87d6-628987f1196e: {multus } AddedInterface: Add eth0 [10.244.3.177/24]
May 22 01:40:14.090: INFO: At 2021-05-22 01:35:24 +0000 UTC - event for pod-52aa3294-e25c-4d72-bd4f-7a84f4491e1e: {kubelet node1} Failed: Error: ErrImagePull
May 22 01:40:14.090: INFO: At 2021-05-22 01:35:24 +0000 UTC - event for pod-52aa3294-e25c-4d72-bd4f-7a84f4491e1e: {kubelet node1} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
May 22 01:40:14.090: INFO: At 2021-05-22 01:35:25 +0000 UTC - event for pod-2ce73150-a2e6-468a-87df-ddb28cb19b3b: {kubelet node1} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
May 22 01:40:14.090: INFO: At 2021-05-22 01:35:25 +0000 UTC - event for pod-2ce73150-a2e6-468a-87df-ddb28cb19b3b: {kubelet node1} Failed: Error: ErrImagePull
May 22 01:40:14.090: INFO: At 2021-05-22 01:35:25 +0000 UTC - event for pod-bdc092d6-9ec5-4586-82d9-f8b25e423447: {kubelet node1} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
May 22 01:40:14.090: INFO: At 2021-05-22 01:35:25 +0000 UTC - event for pod-e325b6cd-538a-49cc-b2d2-f5610d29e416: {multus } AddedInterface: Add eth0 [10.244.3.178/24]
May 22 01:40:14.090: INFO: At 2021-05-22 01:35:25 +0000 UTC - event for pod-e325b6cd-538a-49cc-b2d2-f5610d29e416: {kubelet node1} Pulling: Pulling image "docker.io/library/busybox:1.29"
May 22 01:40:14.090: INFO: At 2021-05-22 01:35:26 +0000 UTC - event for pod-23fd4946-8b00-4e14-87d6-628987f1196e: {kubelet node1} Failed: Error: ErrImagePull
May 22 01:40:14.090: INFO: At 2021-05-22 01:35:26 +0000 UTC - event for pod-23fd4946-8b00-4e14-87d6-628987f1196e: {kubelet node1} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
May 22 01:40:14.090: INFO: At 2021-05-22 01:35:26 +0000 UTC - event for pod-6663facf-422c-4de6-a474-fc640916e836: {kubelet node1} Pulling: Pulling image "docker.io/library/busybox:1.29"
May 22 01:40:14.090: INFO: At 2021-05-22 01:35:26 +0000 UTC - event for pod-6663facf-422c-4de6-a474-fc640916e836: {multus } AddedInterface: Add eth0 [10.244.3.179/24]
May 22 01:40:14.090: INFO: At 2021-05-22 01:35:27 +0000 UTC - event for pod-2ce73150-a2e6-468a-87df-ddb28cb19b3b: {kubelet node1} BackOff: Back-off pulling image "docker.io/library/busybox:1.29"
May 22 01:40:14.090: INFO: At 2021-05-22 01:35:27 +0000 UTC - event for pod-2ce73150-a2e6-468a-87df-ddb28cb19b3b: {kubelet node1} Failed: Error: ImagePullBackOff
May 22 01:40:14.090: INFO: At 2021-05-22 01:35:27 +0000 UTC - event for pod-52aa3294-e25c-4d72-bd4f-7a84f4491e1e: {kubelet node1} Failed: Error: ImagePullBackOff
May 22 01:40:14.091: INFO: At 2021-05-22 01:35:27 +0000 UTC - event for pod-52aa3294-e25c-4d72-bd4f-7a84f4491e1e: {kubelet node1} BackOff: Back-off pulling image "docker.io/library/busybox:1.29"
May 22 01:40:14.091: INFO: At 2021-05-22 01:35:27 +0000 UTC - event for pod-81772626-74ba-492c-8d3e-0972578acee5: {kubelet node1} Failed: Error: ImagePullBackOff
May 22 01:40:14.091: INFO: At 2021-05-22 01:35:27 +0000 UTC - event for pod-81772626-74ba-492c-8d3e-0972578acee5: {kubelet node1} BackOff: Back-off pulling image "docker.io/library/busybox:1.29"
May 22 01:40:14.091: INFO: At 2021-05-22 01:35:27 +0000 UTC - event for pod-96515024-2799-4240-ad76-2e3f70eb11c4: {multus } AddedInterface: Add eth0 [10.244.3.180/24]
May 22 01:40:14.091: INFO: At 2021-05-22 01:35:27 +0000 UTC - event for pod-96515024-2799-4240-ad76-2e3f70eb11c4: {kubelet node1} Pulling: Pulling image "docker.io/library/busybox:1.29"
May 22 01:40:14.091: INFO: At 2021-05-22 01:35:28 +0000 UTC - event for pod-6663facf-422c-4de6-a474-fc640916e836: {kubelet node1} Failed: Error: ErrImagePull
May 22 01:40:14.091: INFO: At 2021-05-22 01:35:28 +0000 UTC - event for pod-6663facf-422c-4de6-a474-fc640916e836: {kubelet node1} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
May 22 01:40:14.091: INFO: At 2021-05-22 01:35:28 +0000 UTC - event for pod-e325b6cd-538a-49cc-b2d2-f5610d29e416: {kubelet node1} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
May 22 01:40:14.091: INFO: At 2021-05-22 01:35:28 +0000 UTC - event for pod-e325b6cd-538a-49cc-b2d2-f5610d29e416: {kubelet node1} Failed: Error: ErrImagePull
May 22 01:40:14.091: INFO: At 2021-05-22 01:35:28 +0000 UTC - event for pod-ff2283db-c4be-44af-ba93-13621c43139d: {kubelet node1} Pulling: Pulling image "docker.io/library/busybox:1.29"
May 22 01:40:14.091: INFO: At 2021-05-22 01:35:28 +0000 UTC - event for pod-ff2283db-c4be-44af-ba93-13621c43139d: {multus } AddedInterface: Add eth0 [10.244.3.181/24]
May 22 01:40:14.091: INFO: At 2021-05-22 01:35:29 +0000 UTC - event for pod-0ef0bcb1-3e96-4db7-875a-08191af24833: {multus } AddedInterface: Add eth0 [10.244.3.182/24]
May 22 01:40:14.091: INFO: At 2021-05-22 01:35:29 +0000 UTC - event for pod-23fd4946-8b00-4e14-87d6-628987f1196e: {kubelet node1} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
May 22 01:40:14.091: INFO: At 2021-05-22 01:35:29 +0000 UTC - event for pod-96515024-2799-4240-ad76-2e3f70eb11c4: {kubelet node1} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
May 22 01:40:14.091: INFO: At 2021-05-22 01:35:29 +0000 UTC - event for pod-96515024-2799-4240-ad76-2e3f70eb11c4: {kubelet node1} Failed: Error: ErrImagePull
May 22 01:40:14.091: INFO: At 2021-05-22 01:35:29 +0000 UTC - event for pod-e325b6cd-538a-49cc-b2d2-f5610d29e416: {kubelet node1} Failed: Error: ImagePullBackOff
May 22 01:40:14.091: INFO: At 2021-05-22 01:35:29 +0000 UTC - event for pod-e325b6cd-538a-49cc-b2d2-f5610d29e416: {kubelet node1} BackOff: Back-off pulling image "docker.io/library/busybox:1.29"
May 22 01:40:14.091: INFO: At 2021-05-22 01:35:30 +0000 UTC - event for pod-0ef0bcb1-3e96-4db7-875a-08191af24833: {kubelet node1} Pulling: Pulling image "docker.io/library/busybox:1.29"
May 22 01:40:14.091: INFO: At 2021-05-22 01:35:30 +0000 UTC - event for pod-bdc092d6-9ec5-4586-82d9-f8b25e423447: {kubelet node1} Failed: Error: ImagePullBackOff
May 22 01:40:14.091: INFO: At 2021-05-22 01:35:30 +0000 UTC - event for pod-bdc092d6-9ec5-4586-82d9-f8b25e423447: {kubelet node1} BackOff: Back-off pulling image "docker.io/library/busybox:1.29"
May 22 01:40:14.091: INFO: At 2021-05-22 01:35:30 +0000 UTC - event for pod-bdc092d6-9ec5-4586-82d9-f8b25e423447: {multus } AddedInterface: Add eth0 [10.244.3.183/24]
May 22 01:40:14.091: INFO: At 2021-05-22 01:35:31 +0000 UTC - event for pod-6663facf-422c-4de6-a474-fc640916e836: {kubelet node1} Failed: Error: ImagePullBackOff
May 22 01:40:14.091: INFO: At 2021-05-22 01:35:31 +0000 UTC - event for pod-6663facf-422c-4de6-a474-fc640916e836: {kubelet node1} BackOff: Back-off pulling image "docker.io/library/busybox:1.29"
May 22 01:40:14.091: INFO: At 2021-05-22 01:35:31 +0000 UTC - event for pod-96515024-2799-4240-ad76-2e3f70eb11c4: {kubelet node1} BackOff: Back-off pulling image "docker.io/library/busybox:1.29"
May 22 01:40:14.091: INFO: At 2021-05-22 01:35:31 +0000 UTC - event for pod-96515024-2799-4240-ad76-2e3f70eb11c4: {kubelet node1} Failed: Error: ImagePullBackOff
May 22 01:40:14.091: INFO: At 2021-05-22 01:35:31 +0000 UTC - event for pod-c4e86621-d05a-4ab6-b894-950537713573: {multus } AddedInterface: Add eth0 [10.244.3.184/24]
May 22 01:40:14.091: INFO: At 2021-05-22 01:35:31 +0000 UTC - event for pod-c4e86621-d05a-4ab6-b894-950537713573: {kubelet node1} Pulling: Pulling image "docker.io/library/busybox:1.29"
May 22 01:40:14.091: INFO: At 2021-05-22 01:35:31 +0000 UTC - event for pod-ff2283db-c4be-44af-ba93-13621c43139d: {kubelet node1} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
May 22 01:40:14.091: INFO: At 2021-05-22 01:35:31 +0000 UTC - event for pod-ff2283db-c4be-44af-ba93-13621c43139d: {kubelet node1} Failed: Error: ErrImagePull
May 22 01:40:14.091: INFO: At 2021-05-22 01:35:32 +0000 UTC - event for pod-0ef0bcb1-3e96-4db7-875a-08191af24833: {kubelet node1} Failed: Error: ErrImagePull
May 22 01:40:14.091: INFO: At 2021-05-22 01:35:32 +0000 UTC - event for pod-0ef0bcb1-3e96-4db7-875a-08191af24833: {kubelet node1} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
May 22 01:40:14.091: INFO: At 2021-05-22 01:35:32 +0000 UTC - event for pod-b5622c92-09b0-411f-8700-8aea35ec6f61: {kubelet node1} Pulling: Pulling image "docker.io/library/busybox:1.29"
May 22 01:40:14.091: INFO: At 2021-05-22 01:35:32 +0000 UTC - event for pod-b5622c92-09b0-411f-8700-8aea35ec6f61: {multus } AddedInterface: Add eth0 [10.244.3.185/24]
May 22 01:40:14.091: INFO: At 2021-05-22 01:35:33 +0000 UTC - event for pod-23fd4946-8b00-4e14-87d6-628987f1196e: {kubelet node1} BackOff: Back-off pulling image "docker.io/library/busybox:1.29"
May 22 01:40:14.091: INFO: At 2021-05-22 01:35:33 +0000 UTC - event for pod-23fd4946-8b00-4e14-87d6-628987f1196e: {kubelet node1} Failed: Error: ImagePullBackOff
May 22 01:40:14.091: INFO: At 2021-05-22 01:35:33 +0000 UTC - event for pod-23fd4946-8b00-4e14-87d6-628987f1196e: {multus } AddedInterface: Add eth0 [10.244.3.186/24]
May 22 01:40:14.091: INFO: At 2021-05-22 01:35:33 +0000 UTC - event for pod-c4e86621-d05a-4ab6-b894-950537713573: {kubelet node1} Failed: Error: ErrImagePull
May 22 01:40:14.091: INFO: At 2021-05-22 01:35:33 +0000 UTC - event for pod-c4e86621-d05a-4ab6-b894-950537713573: {kubelet node1} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
May 22 01:40:14.091: INFO: At 2021-05-22 01:35:34 +0000 UTC - event for pod-79204cb4-4be8-4740-8cba-d13da7fa71de: {multus } AddedInterface: Add eth0 [10.244.3.187/24]
May 22 01:40:14.091: INFO: At 2021-05-22 01:35:34 +0000 UTC - event for pod-79204cb4-4be8-4740-8cba-d13da7fa71de: {kubelet node1} Pulling: Pulling image "docker.io/library/busybox:1.29"
May 22 01:40:14.091: INFO: At 2021-05-22 01:35:34 +0000 UTC - event for pod-b5622c92-09b0-411f-8700-8aea35ec6f61: {kubelet node1} Failed: Error: ErrImagePull
May 22 01:40:14.091: INFO: At 2021-05-22 01:35:34 +0000 UTC - event for pod-b5622c92-09b0-411f-8700-8aea35ec6f61: {kubelet node1} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
May 22 01:40:14.091: INFO: At 2021-05-22 01:35:34 +0000 UTC - event for pod-ff2283db-c4be-44af-ba93-13621c43139d: {kubelet node1} BackOff: Back-off pulling image "docker.io/library/busybox:1.29"
May 22 01:40:14.091: INFO: At 2021-05-22 01:35:34 +0000 UTC - event for pod-ff2283db-c4be-44af-ba93-13621c43139d: {kubelet node1} Failed: Error: ImagePullBackOff
May 22 01:40:14.091: INFO: At 2021-05-22 01:35:35 +0000 UTC - event for pod-6a3981b8-47b7-4324-aa24-42b9517fa1b8: {multus } AddedInterface: Add eth0 [10.244.3.188/24]
May 22 01:40:14.091: INFO: At 2021-05-22 01:35:35 +0000 UTC - event for pod-6a3981b8-47b7-4324-aa24-42b9517fa1b8: {kubelet node1} Pulling: Pulling image "docker.io/library/busybox:1.29"
May 22 01:40:14.091: INFO: At 2021-05-22 01:35:35 +0000 UTC - event for pod-ac068ac2-7008-4da8-84c9-09b597b100b5: {multus } AddedInterface: Add eth0 [10.244.3.189/24]
May 22 01:40:14.091: INFO: At 2021-05-22 01:35:35 +0000 UTC - event for pod-ac068ac2-7008-4da8-84c9-09b597b100b5: {kubelet node1} Pulling: Pulling image "docker.io/library/busybox:1.29"
May 22 01:40:14.091: INFO: At 2021-05-22 01:35:36 +0000 UTC - event for pod-0ef0bcb1-3e96-4db7-875a-08191af24833: {kubelet node1} BackOff: Back-off pulling image "docker.io/library/busybox:1.29"
May 22 01:40:14.091: INFO: At 2021-05-22 01:35:36 +0000 UTC - event for pod-0ef0bcb1-3e96-4db7-875a-08191af24833: {kubelet node1} Failed: Error: ImagePullBackOff
May 22 01:40:14.091: INFO: At 2021-05-22 01:35:36 +0000 UTC - event for pod-25416362-b799-49a7-9006-efdf18ad9c5d: {kubelet node1} Pulling: Pulling image "docker.io/library/busybox:1.29"
May 22 01:40:14.091: INFO: At 2021-05-22 01:35:36 +0000 UTC - event for pod-25416362-b799-49a7-9006-efdf18ad9c5d: {multus } AddedInterface: Add eth0 [10.244.3.190/24]
May 22 01:40:14.091: INFO: At 2021-05-22 01:35:36 +0000 UTC - event for pod-79204cb4-4be8-4740-8cba-d13da7fa71de: {kubelet node1} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
May 22 01:40:14.091: INFO: At 2021-05-22 01:35:36 +0000 UTC - event for pod-79204cb4-4be8-4740-8cba-d13da7fa71de: {kubelet node1} Failed: Error: ErrImagePull
May 22 01:40:14.091: INFO: At 2021-05-22 01:35:36 +0000 UTC - event for pod-b5622c92-09b0-411f-8700-8aea35ec6f61: {kubelet node1} BackOff: Back-off pulling image "docker.io/library/busybox:1.29"
May 22 01:40:14.091: INFO: At 2021-05-22 01:35:36 +0000 UTC - event for pod-b5622c92-09b0-411f-8700-8aea35ec6f61: {kubelet node1} Failed: Error: ImagePullBackOff
May 22 01:40:14.091: INFO: At 2021-05-22 01:35:36 +0000 UTC - event for pod-c4e86621-d05a-4ab6-b894-950537713573: {kubelet node1} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
May 22 01:40:14.091: INFO: At 2021-05-22 01:35:37 +0000 UTC - event for pod-958e2d96-1d04-4d0b-87e0-7be3d23c258c: {kubelet node1} Pulling: Pulling image "docker.io/library/busybox:1.29" May 22 01:40:14.091: INFO: At 2021-05-22 01:35:37 +0000 UTC - event for pod-958e2d96-1d04-4d0b-87e0-7be3d23c258c: {multus } AddedInterface: Add eth0 [10.244.3.191/24] May 22 01:40:14.091: INFO: At 2021-05-22 01:35:38 +0000 UTC - event for pod-ee148ef8-0618-42d8-bdb3-74178162c2d0: {multus } AddedInterface: Add eth0 [10.244.3.192/24] May 22 01:40:14.091: INFO: At 2021-05-22 01:35:38 +0000 UTC - event for pod-ee148ef8-0618-42d8-bdb3-74178162c2d0: {kubelet node1} Pulling: Pulling image "docker.io/library/busybox:1.29" May 22 01:40:14.091: INFO: At 2021-05-22 01:35:39 +0000 UTC - event for pod-a7e733ab-c9e1-4a96-8140-66bfbe914aa7: {kubelet node1} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit May 22 01:40:14.091: INFO: At 2021-05-22 01:35:39 +0000 UTC - event for pod-d593da45-ae55-40b4-b38a-616eaafcc189: {multus } AddedInterface: Add eth0 [10.244.3.193/24] May 22 01:40:14.091: INFO: At 2021-05-22 01:35:39 +0000 UTC - event for pod-d593da45-ae55-40b4-b38a-616eaafcc189: {kubelet node1} Pulling: Pulling image "docker.io/library/busybox:1.29" May 22 01:40:14.091: INFO: At 2021-05-22 01:35:40 +0000 UTC - event for pod-51a38ecb-aa5d-4cd5-9e9a-5e01ba11b26a: {kubelet node1} Pulling: Pulling image "docker.io/library/busybox:1.29" May 22 01:40:14.091: INFO: At 2021-05-22 01:35:40 +0000 UTC - event for pod-51a38ecb-aa5d-4cd5-9e9a-5e01ba11b26a: {multus } AddedInterface: Add eth0 [10.244.3.194/24] May 22 01:40:14.091: INFO: At 2021-05-22 01:35:40 +0000 UTC - event for pod-6a3981b8-47b7-4324-aa24-42b9517fa1b8: {kubelet node1} Failed: Error: ErrImagePull May 22 01:40:14.091: INFO: At 2021-05-22 01:35:40 +0000 UTC - event for pod-6a3981b8-47b7-4324-aa24-42b9517fa1b8: {kubelet node1} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit May 22 01:40:14.091: INFO: At 2021-05-22 01:35:40 +0000 UTC - event for pod-79204cb4-4be8-4740-8cba-d13da7fa71de: {kubelet node1} Failed: Error: ImagePullBackOff May 22 01:40:14.091: INFO: At 2021-05-22 01:35:40 +0000 UTC - event for pod-79204cb4-4be8-4740-8cba-d13da7fa71de: {kubelet node1} BackOff: Back-off pulling image "docker.io/library/busybox:1.29" May 22 01:40:14.091: INFO: At 2021-05-22 01:35:41 +0000 UTC - event for pod-6acee2ee-d14c-4b78-9ff8-3e1f242a266a: {multus } AddedInterface: Add eth0 [10.244.3.195/24] May 22 01:40:14.091: INFO: At 2021-05-22 01:35:41 +0000 UTC - event for pod-6acee2ee-d14c-4b78-9ff8-3e1f242a266a: {kubelet node1} Pulling: Pulling image "docker.io/library/busybox:1.29" May 22 01:40:14.091: INFO: At 2021-05-22 01:35:42 +0000 UTC - event for pod-ac068ac2-7008-4da8-84c9-09b597b100b5: {kubelet node1} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. 
You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit May 22 01:40:14.091: INFO: At 2021-05-22 01:35:42 +0000 UTC - event for pod-ac068ac2-7008-4da8-84c9-09b597b100b5: {kubelet node1} Failed: Error: ErrImagePull May 22 01:40:14.091: INFO: At 2021-05-22 01:35:46 +0000 UTC - event for pod-25416362-b799-49a7-9006-efdf18ad9c5d: {kubelet node1} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit May 22 01:40:14.091: INFO: At 2021-05-22 01:35:46 +0000 UTC - event for pod-25416362-b799-49a7-9006-efdf18ad9c5d: {kubelet node1} Failed: Error: ErrImagePull May 22 01:40:14.091: INFO: At 2021-05-22 01:35:48 +0000 UTC - event for pod-6a3981b8-47b7-4324-aa24-42b9517fa1b8: {kubelet node1} Failed: Error: ImagePullBackOff May 22 01:40:14.091: INFO: At 2021-05-22 01:35:48 +0000 UTC - event for pod-6a3981b8-47b7-4324-aa24-42b9517fa1b8: {kubelet node1} BackOff: Back-off pulling image "docker.io/library/busybox:1.29" May 22 01:40:14.091: INFO: At 2021-05-22 01:35:49 +0000 UTC - event for pod-958e2d96-1d04-4d0b-87e0-7be3d23c258c: {kubelet node1} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit May 22 01:40:14.091: INFO: At 2021-05-22 01:35:49 +0000 UTC - event for pod-958e2d96-1d04-4d0b-87e0-7be3d23c258c: {kubelet node1} Failed: Error: ErrImagePull May 22 01:40:14.091: INFO: At 2021-05-22 01:35:49 +0000 UTC - event for pod-c4e86621-d05a-4ab6-b894-950537713573: {multus } AddedInterface: Add eth0 [10.244.3.196/24] May 22 01:40:14.091: INFO: At 2021-05-22 01:35:51 +0000 UTC - event for pod-166d9b83-9cda-4fee-b89b-128f7e31b41e: {multus } AddedInterface: Add eth0 [10.244.3.197/24] May 22 01:40:14.091: INFO: At 2021-05-22 01:35:51 +0000 UTC - event for pod-ee148ef8-0618-42d8-bdb3-74178162c2d0: {kubelet node1} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. 
You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit May 22 01:40:14.091: INFO: At 2021-05-22 01:35:51 +0000 UTC - event for pod-ee148ef8-0618-42d8-bdb3-74178162c2d0: {kubelet node1} Failed: Error: ErrImagePull May 22 01:40:14.091: INFO: At 2021-05-22 01:35:52 +0000 UTC - event for pod-16028087-0ae3-489e-8b7b-2f309c12136f: {multus } AddedInterface: Add eth0 [10.244.3.200/24] May 22 01:40:14.091: INFO: At 2021-05-22 01:35:52 +0000 UTC - event for pod-16028087-0ae3-489e-8b7b-2f309c12136f: {kubelet node1} Pulling: Pulling image "docker.io/library/busybox:1.29" May 22 01:40:14.091: INFO: At 2021-05-22 01:35:52 +0000 UTC - event for pod-166d9b83-9cda-4fee-b89b-128f7e31b41e: {kubelet node1} Pulling: Pulling image "docker.io/library/busybox:1.29" May 22 01:40:14.091: INFO: At 2021-05-22 01:35:52 +0000 UTC - event for pod-9d9e2186-4fd6-4a51-9dd4-f11b858ec786: {kubelet node1} Pulling: Pulling image "docker.io/library/busybox:1.29" May 22 01:40:14.091: INFO: At 2021-05-22 01:35:52 +0000 UTC - event for pod-9d9e2186-4fd6-4a51-9dd4-f11b858ec786: {multus } AddedInterface: Add eth0 [10.244.3.198/24] May 22 01:40:14.092: INFO: At 2021-05-22 01:35:52 +0000 UTC - event for pod-d8aa6580-4df3-4e8e-87e2-19954daee3ce: {kubelet node1} Pulling: Pulling image "docker.io/library/busybox:1.29" May 22 01:40:14.092: INFO: At 2021-05-22 01:35:52 +0000 UTC - event for pod-d8aa6580-4df3-4e8e-87e2-19954daee3ce: {multus } AddedInterface: Add eth0 [10.244.3.199/24] May 22 01:40:14.092: INFO: At 2021-05-22 01:35:52 +0000 UTC - event for pod-e905733c-287b-4a80-b63d-06c4b6d5effd: {kubelet node1} Pulling: Pulling image "docker.io/library/busybox:1.29" May 22 01:40:14.092: INFO: At 2021-05-22 01:35:52 +0000 UTC - event for pod-e905733c-287b-4a80-b63d-06c4b6d5effd: {multus } AddedInterface: Add eth0 [10.244.3.201/24] May 22 01:40:14.092: INFO: At 2021-05-22 01:35:53 +0000 UTC - event for pod-25416362-b799-49a7-9006-efdf18ad9c5d: {kubelet node1} BackOff: Back-off pulling image "docker.io/library/busybox:1.29" May 22 01:40:14.092: INFO: At 2021-05-22 01:35:53 +0000 UTC - event for pod-25416362-b799-49a7-9006-efdf18ad9c5d: {kubelet node1} Failed: Error: ImagePullBackOff May 22 01:40:14.092: INFO: At 2021-05-22 01:35:53 +0000 UTC - event for pod-d593da45-ae55-40b4-b38a-616eaafcc189: {kubelet node1} Failed: Error: ErrImagePull May 22 01:40:14.092: INFO: At 2021-05-22 01:35:53 +0000 UTC - event for pod-d593da45-ae55-40b4-b38a-616eaafcc189: {kubelet node1} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. 
You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
May 22 01:40:14.092: INFO: At 2021-05-22 01:35:54 +0000 UTC - event for pod-550c8d3c-c425-44a5-86c7-6aa695148c5b: {multus } AddedInterface: Add eth0 [10.244.3.202/24]
May 22 01:40:14.092: INFO: At 2021-05-22 01:35:54 +0000 UTC - event for pod-cb0494ec-13e3-4046-a345-1d02ce011718: {multus } AddedInterface: Add eth0 [10.244.3.203/24]
May 22 01:40:14.092: INFO: At 2021-05-22 01:35:55 +0000 UTC - event for pod-550c8d3c-c425-44a5-86c7-6aa695148c5b: {kubelet node1} Pulling: Pulling image "docker.io/library/busybox:1.29"
May 22 01:40:14.092: INFO: At 2021-05-22 01:35:55 +0000 UTC - event for pod-9a8fdccc-3091-428e-8cd6-ce188c633067: {kubelet node1} Pulling: Pulling image "docker.io/library/busybox:1.29"
May 22 01:40:14.092: INFO: At 2021-05-22 01:35:55 +0000 UTC - event for pod-9a8fdccc-3091-428e-8cd6-ce188c633067: {multus } AddedInterface: Add eth0 [10.244.3.204/24]
May 22 01:40:14.092: INFO: At 2021-05-22 01:35:55 +0000 UTC - event for pod-cb0494ec-13e3-4046-a345-1d02ce011718: {kubelet node1} Pulling: Pulling image "docker.io/library/busybox:1.29"
May 22 01:40:14.092: INFO: At 2021-05-22 01:35:56 +0000 UTC - event for pod-51a38ecb-aa5d-4cd5-9e9a-5e01ba11b26a: {kubelet node1} Failed: Error: ErrImagePull
May 22 01:40:14.092: INFO: At 2021-05-22 01:35:56 +0000 UTC - event for pod-51a38ecb-aa5d-4cd5-9e9a-5e01ba11b26a: {kubelet node1} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
May 22 01:40:14.092: INFO: At 2021-05-22 01:35:56 +0000 UTC - event for pod-958e2d96-1d04-4d0b-87e0-7be3d23c258c: {kubelet node1} Failed: Error: ImagePullBackOff
May 22 01:40:14.092: INFO: At 2021-05-22 01:35:56 +0000 UTC - event for pod-958e2d96-1d04-4d0b-87e0-7be3d23c258c: {kubelet node1} BackOff: Back-off pulling image "docker.io/library/busybox:1.29"
May 22 01:40:14.092: INFO: At 2021-05-22 01:35:56 +0000 UTC - event for pod-d593da45-ae55-40b4-b38a-616eaafcc189: {kubelet node1} Failed: Error: ImagePullBackOff
May 22 01:40:14.092: INFO: At 2021-05-22 01:35:56 +0000 UTC - event for pod-d593da45-ae55-40b4-b38a-616eaafcc189: {kubelet node1} BackOff: Back-off pulling image "docker.io/library/busybox:1.29"
May 22 01:40:14.092: INFO: At 2021-05-22 01:35:56 +0000 UTC - event for pod-de79b91d-d259-449a-9636-9b73aa3abbf4: {multus } AddedInterface: Add eth0 [10.244.3.205/24]
May 22 01:40:14.092: INFO: At 2021-05-22 01:35:56 +0000 UTC - event for pod-de79b91d-d259-449a-9636-9b73aa3abbf4: {kubelet node1} Pulling: Pulling image "docker.io/library/busybox:1.29"
May 22 01:40:14.092: INFO: At 2021-05-22 01:35:56 +0000 UTC - event for pod-ee148ef8-0618-42d8-bdb3-74178162c2d0: {kubelet node1} Failed: Error: ImagePullBackOff
May 22 01:40:14.092: INFO: At 2021-05-22 01:35:56 +0000 UTC - event for pod-ee148ef8-0618-42d8-bdb3-74178162c2d0: {kubelet node1} BackOff: Back-off pulling image "docker.io/library/busybox:1.29"
May 22 01:40:14.092: INFO: At 2021-05-22 01:35:57 +0000 UTC - event for pod-83cf990b-2e75-49d0-978d-3c83c55b4b18: {kubelet node1} Pulling: Pulling image "docker.io/library/busybox:1.29"
May 22 01:40:14.092: INFO: At 2021-05-22 01:35:57 +0000 UTC - event for pod-83cf990b-2e75-49d0-978d-3c83c55b4b18: {multus } AddedInterface: Add eth0 [10.244.3.206/24]
May 22 01:40:14.092: INFO: At 2021-05-22 01:35:58 +0000 UTC - event for pod-6a5ca55b-7fea-4e70-81f4-251c29d7a077: {multus } AddedInterface: Add eth0 [10.244.3.207/24]
May 22 01:40:14.092: INFO: At 2021-05-22 01:35:59 +0000 UTC - event for pod-6a5ca55b-7fea-4e70-81f4-251c29d7a077: {kubelet node1} Pulling: Pulling image "docker.io/library/busybox:1.29"
May 22 01:40:14.092: INFO: At 2021-05-22 01:35:59 +0000 UTC - event for pod-6acee2ee-d14c-4b78-9ff8-3e1f242a266a: {kubelet node1} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
May 22 01:40:14.092: INFO: At 2021-05-22 01:35:59 +0000 UTC - event for pod-6acee2ee-d14c-4b78-9ff8-3e1f242a266a: {kubelet node1} Failed: Error: ErrImagePull
May 22 01:40:14.092: INFO: At 2021-05-22 01:35:59 +0000 UTC - event for pod-fb82cc33-a266-43a8-93a5-f5203a922ef1: {multus } AddedInterface: Add eth0 [10.244.3.208/24]
May 22 01:40:14.092: INFO: At 2021-05-22 01:35:59 +0000 UTC - event for pod-fb82cc33-a266-43a8-93a5-f5203a922ef1: {kubelet node1} Pulling: Pulling image "docker.io/library/busybox:1.29"
May 22 01:40:14.092: INFO: At 2021-05-22 01:36:00 +0000 UTC - event for pod-fc92ed50-9307-4502-99c5-8d7b7aee5c25: {multus } AddedInterface: Add eth0 [10.244.3.209/24]
May 22 01:40:14.092: INFO: At 2021-05-22 01:36:00 +0000 UTC - event for pod-fc92ed50-9307-4502-99c5-8d7b7aee5c25: {kubelet node1} Pulling: Pulling image "docker.io/library/busybox:1.29"
May 22 01:40:14.092: INFO: At 2021-05-22 01:36:01 +0000 UTC - event for pod-51a38ecb-aa5d-4cd5-9e9a-5e01ba11b26a: {kubelet node1} BackOff: Back-off pulling image "docker.io/library/busybox:1.29"
May 22 01:40:14.092: INFO: At 2021-05-22 01:36:01 +0000 UTC - event for pod-51a38ecb-aa5d-4cd5-9e9a-5e01ba11b26a: {kubelet node1} Failed: Error: ImagePullBackOff
May 22 01:40:14.092: INFO: At 2021-05-22 01:36:03 +0000 UTC - event for pod-3aa7b4b3-1a97-4094-ab85-a439721516b1: {kubelet node1} Pulling: Pulling image "docker.io/library/busybox:1.29"
May 22 01:40:14.092: INFO: At 2021-05-22 01:36:03 +0000 UTC - event for pod-3aa7b4b3-1a97-4094-ab85-a439721516b1: {multus } AddedInterface: Add eth0 [10.244.3.213/24]
May 22 01:40:14.092: INFO: At 2021-05-22 01:36:03 +0000 UTC - event for pod-5247e6a6-6702-4904-9bd2-93e1126ee447: {multus } AddedInterface: Add eth0 [10.244.3.212/24]
May 22 01:40:14.092: INFO: At 2021-05-22 01:36:03 +0000 UTC - event for pod-5247e6a6-6702-4904-9bd2-93e1126ee447: {kubelet node1} Pulling: Pulling image "docker.io/library/busybox:1.29"
May 22 01:40:14.092: INFO: At 2021-05-22 01:36:03 +0000 UTC - event for pod-c506f916-7ca2-4154-9d6c-30ceb910a947: {multus } AddedInterface: Add eth0 [10.244.3.210/24]
May 22 01:40:14.092: INFO: At 2021-05-22 01:36:03 +0000 UTC - event for pod-c506f916-7ca2-4154-9d6c-30ceb910a947: {kubelet node1} Pulling: Pulling image "docker.io/library/busybox:1.29"
May 22 01:40:14.092: INFO: At 2021-05-22 01:36:03 +0000 UTC - event for pod-eb2dd31d-3c35-44f2-a121-e69e63ba64ad: {multus } AddedInterface: Add eth0 [10.244.3.211/24]
May 22 01:40:14.092: INFO: At 2021-05-22 01:36:03 +0000 UTC - event for pod-eb2dd31d-3c35-44f2-a121-e69e63ba64ad: {kubelet node1} Pulling: Pulling image "docker.io/library/busybox:1.29"
May 22 01:40:14.092: INFO: At 2021-05-22 01:36:04 +0000 UTC - event for pod-6acee2ee-d14c-4b78-9ff8-3e1f242a266a: {kubelet node1} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
May 22 01:40:14.092: INFO: At 2021-05-22 01:36:07 +0000 UTC - event for pod-0806eda0-8dd0-499b-8419-f4c96fb3bf90: {multus } AddedInterface: Add eth0 [10.244.3.216/24]
May 22 01:40:14.092: INFO: At 2021-05-22 01:36:07 +0000 UTC - event for pod-0806eda0-8dd0-499b-8419-f4c96fb3bf90: {kubelet node1} Pulling: Pulling image "docker.io/library/busybox:1.29"
May 22 01:40:14.092: INFO: At 2021-05-22 01:36:07 +0000 UTC - event for pod-57430379-2a25-4b50-80f8-1f9d8fb22de4: {multus } AddedInterface: Add eth0 [10.244.3.215/24]
May 22 01:40:14.092: INFO: At 2021-05-22 01:36:07 +0000 UTC - event for pod-57430379-2a25-4b50-80f8-1f9d8fb22de4: {kubelet node1} Pulling: Pulling image "docker.io/library/busybox:1.29"
May 22 01:40:14.092: INFO: At 2021-05-22 01:36:07 +0000 UTC - event for pod-be10ab73-38c1-4d88-82d7-9594ce87d7f2: {multus } AddedInterface: Add eth0 [10.244.3.214/24]
May 22 01:40:14.092: INFO: At 2021-05-22 01:36:07 +0000 UTC - event for pod-be10ab73-38c1-4d88-82d7-9594ce87d7f2: {kubelet node1} Pulling: Pulling image "docker.io/library/busybox:1.29"
May 22 01:40:14.092: INFO: At 2021-05-22 01:36:07 +0000 UTC - event for pod-c4e86621-d05a-4ab6-b894-950537713573: {kubelet node1} Failed: Error: ImagePullBackOff
May 22 01:40:14.092: INFO: At 2021-05-22 01:36:07 +0000 UTC - event for pod-c4e86621-d05a-4ab6-b894-950537713573: {kubelet node1} BackOff: Back-off pulling image "docker.io/library/busybox:1.29"
May 22 01:40:14.092: INFO: At 2021-05-22 01:36:08 +0000 UTC - event for pod-166d9b83-9cda-4fee-b89b-128f7e31b41e: {kubelet node1} Failed: Error: ErrImagePull
May 22 01:40:14.092: INFO: At 2021-05-22 01:36:08 +0000 UTC - event for pod-166d9b83-9cda-4fee-b89b-128f7e31b41e: {kubelet node1} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
May 22 01:40:14.092: INFO: At 2021-05-22 01:36:09 +0000 UTC - event for pod-17c41de8-712a-4b64-8e12-b9167f1ccc82: {multus } AddedInterface: Add eth0 [10.244.3.217/24]
May 22 01:40:14.092: INFO: At 2021-05-22 01:36:09 +0000 UTC - event for pod-17c41de8-712a-4b64-8e12-b9167f1ccc82: {kubelet node1} Pulling: Pulling image "docker.io/library/busybox:1.29"
May 22 01:40:14.092: INFO: At 2021-05-22 01:36:09 +0000 UTC - event for pod-4c5b332a-c9d0-4347-8498-3906e84828c4: {kubelet node1} Pulling: Pulling image "docker.io/library/busybox:1.29"
May 22 01:40:14.092: INFO: At 2021-05-22 01:36:09 +0000 UTC - event for pod-4c5b332a-c9d0-4347-8498-3906e84828c4: {multus } AddedInterface: Add eth0 [10.244.3.218/24]
May 22 01:40:14.092: INFO: At 2021-05-22 01:36:09 +0000 UTC - event for pod-d8aa6580-4df3-4e8e-87e2-19954daee3ce: {kubelet node1} Failed: Error: ErrImagePull
May 22 01:40:14.092: INFO: At 2021-05-22 01:36:09 +0000 UTC - event for pod-d8aa6580-4df3-4e8e-87e2-19954daee3ce: {kubelet node1} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
May 22 01:40:14.092: INFO: At 2021-05-22 01:36:11 +0000 UTC - event for pod-16028087-0ae3-489e-8b7b-2f309c12136f: {kubelet node1} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
May 22 01:40:14.092: INFO: At 2021-05-22 01:36:11 +0000 UTC - event for pod-16028087-0ae3-489e-8b7b-2f309c12136f: {kubelet node1} Failed: Error: ErrImagePull
May 22 01:40:14.092: INFO: At 2021-05-22 01:36:11 +0000 UTC - event for pod-166d9b83-9cda-4fee-b89b-128f7e31b41e: {kubelet node1} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
May 22 01:40:14.092: INFO: At 2021-05-22 01:36:11 +0000 UTC - event for pod-6acee2ee-d14c-4b78-9ff8-3e1f242a266a: {multus } AddedInterface: Add eth0 [10.244.3.222/24]
May 22 01:40:14.092: INFO: At 2021-05-22 01:36:11 +0000 UTC - event for pod-c59e1fe4-fe07-4049-8c93-09d9a5d6fcd0: {kubelet node1} Pulling: Pulling image "docker.io/library/busybox:1.29"
May 22 01:40:14.092: INFO: At 2021-05-22 01:36:11 +0000 UTC - event for pod-c59e1fe4-fe07-4049-8c93-09d9a5d6fcd0: {multus } AddedInterface: Add eth0 [10.244.3.219/24]
May 22 01:40:14.092: INFO: At 2021-05-22 01:36:11 +0000 UTC - event for pod-d8aa6580-4df3-4e8e-87e2-19954daee3ce: {kubelet node1} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
May 22 01:40:14.092: INFO: At 2021-05-22 01:36:11 +0000 UTC - event for pod-f75bea85-5332-46b1-8674-f383dac77d5c: {kubelet node1} Pulling: Pulling image "docker.io/library/busybox:1.29"
May 22 01:40:14.092: INFO: At 2021-05-22 01:36:11 +0000 UTC - event for pod-f75bea85-5332-46b1-8674-f383dac77d5c: {multus } AddedInterface: Add eth0 [10.244.3.220/24]
May 22 01:40:14.092: INFO: At 2021-05-22 01:36:11 +0000 UTC - event for pod-fbe47191-a015-44ef-a732-9f8f673c81a1: {multus } AddedInterface: Add eth0 [10.244.3.221/24]
May 22 01:40:14.092: INFO: At 2021-05-22 01:36:11 +0000 UTC - event for pod-fbe47191-a015-44ef-a732-9f8f673c81a1: {kubelet node1} Pulling: Pulling image "docker.io/library/busybox:1.29"
May 22 01:40:14.092: INFO: At 2021-05-22 01:36:12 +0000 UTC - event for pod-16028087-0ae3-489e-8b7b-2f309c12136f: {kubelet node1} Failed: Error: ImagePullBackOff
May 22 01:40:14.092: INFO: At 2021-05-22 01:36:12 +0000 UTC - event for pod-16028087-0ae3-489e-8b7b-2f309c12136f: {kubelet node1} BackOff: Back-off pulling image "docker.io/library/busybox:1.29"
May 22 01:40:14.092: INFO: At 2021-05-22 01:36:12 +0000 UTC - event for pod-9d9e2186-4fd6-4a51-9dd4-f11b858ec786: {kubelet node1} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
May 22 01:40:14.092: INFO: At 2021-05-22 01:36:12 +0000 UTC - event for pod-9d9e2186-4fd6-4a51-9dd4-f11b858ec786: {kubelet node1} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
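The toomanyrequests failures above are Docker Hub's anonymous pull quota being exhausted, and the message itself points at authentication as the way out. As an illustrative sketch only (not part of this test run), the Go snippet below shows one way to create a kubernetes.io/dockerconfigjson pull secret with client-go; the secret name "regcred", the "default" namespace, and the user/password credentials are all placeholders, and the kubeconfig path is the one this run logs.

// Hedged sketch: create an image pull secret so kubelet pulls from docker.io
// as an authenticated user instead of against the anonymous rate limit.
// All names and credentials below are placeholders, not values from this run.
package main

import (
	"context"
	"encoding/base64"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// Build the .dockerconfigjson payload for docker.io; "user:password" is a placeholder.
	auth := base64.StdEncoding.EncodeToString([]byte("user:password"))
	dockerCfg := fmt.Sprintf(`{"auths":{"https://index.docker.io/v1/":{"auth":%q}}}`, auth)

	secret := &corev1.Secret{
		ObjectMeta: metav1.ObjectMeta{Name: "regcred"},
		Type:       corev1.SecretTypeDockerConfigJson,
		StringData: map[string]string{corev1.DockerConfigJsonKey: dockerCfg},
	}
	if _, err := client.CoreV1().Secrets("default").Create(context.TODO(), secret, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
	// Pod specs would then reference it via spec.imagePullSecrets: [{name: "regcred"}].
}

A pre-pulled or mirrored registry would avoid the quota entirely; which remedy fits depends on how the cluster under test is provisioned.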
May 22 01:40:14.092: INFO: At 2021-05-22 01:36:12 +0000 UTC - event for pod-9d9e2186-4fd6-4a51-9dd4-f11b858ec786: {kubelet node1} Failed: Error: ErrImagePull
May 22 01:40:14.092: INFO: At 2021-05-22 01:36:14 +0000 UTC - event for pod-e905733c-287b-4a80-b63d-06c4b6d5effd: {kubelet node1} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
May 22 01:40:14.092: INFO: At 2021-05-22 01:36:14 +0000 UTC - event for pod-e905733c-287b-4a80-b63d-06c4b6d5effd: {kubelet node1} Failed: Error: ErrImagePull
May 22 01:40:14.092: INFO: At 2021-05-22 01:36:15 +0000 UTC - event for pod-166d9b83-9cda-4fee-b89b-128f7e31b41e: {kubelet node1} Failed: Error: ImagePullBackOff
May 22 01:40:14.092: INFO: At 2021-05-22 01:36:15 +0000 UTC - event for pod-166d9b83-9cda-4fee-b89b-128f7e31b41e: {kubelet node1} BackOff: Back-off pulling image "docker.io/library/busybox:1.29"
May 22 01:40:14.092: INFO: At 2021-05-22 01:36:15 +0000 UTC - event for pod-166d9b83-9cda-4fee-b89b-128f7e31b41e: {multus } AddedInterface: Add eth0 [10.244.3.223/24]
May 22 01:40:14.092: INFO: At 2021-05-22 01:36:16 +0000 UTC - event for pod-550c8d3c-c425-44a5-86c7-6aa695148c5b: {kubelet node1} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
May 22 01:40:14.092: INFO: At 2021-05-22 01:36:16 +0000 UTC - event for pod-550c8d3c-c425-44a5-86c7-6aa695148c5b: {kubelet node1} Failed: Error: ErrImagePull
May 22 01:40:14.092: INFO: At 2021-05-22 01:36:16 +0000 UTC - event for pod-d8aa6580-4df3-4e8e-87e2-19954daee3ce: {kubelet node1} BackOff: Back-off pulling image "docker.io/library/busybox:1.29"
May 22 01:40:14.093: INFO: At 2021-05-22 01:36:16 +0000 UTC - event for pod-d8aa6580-4df3-4e8e-87e2-19954daee3ce: {kubelet node1} Failed: Error: ImagePullBackOff
May 22 01:40:14.093: INFO: At 2021-05-22 01:36:16 +0000 UTC - event for pod-d8aa6580-4df3-4e8e-87e2-19954daee3ce: {multus } AddedInterface: Add eth0 [10.244.3.224/24]
May 22 01:40:14.093: INFO: At 2021-05-22 01:36:17 +0000 UTC - event for pod-9d9e2186-4fd6-4a51-9dd4-f11b858ec786: {kubelet node1} BackOff: Back-off pulling image "docker.io/library/busybox:1.29"
May 22 01:40:14.093: INFO: At 2021-05-22 01:36:17 +0000 UTC - event for pod-9d9e2186-4fd6-4a51-9dd4-f11b858ec786: {kubelet node1} Failed: Error: ImagePullBackOff
May 22 01:40:14.093: INFO: At 2021-05-22 01:36:17 +0000 UTC - event for pod-9d9e2186-4fd6-4a51-9dd4-f11b858ec786: {multus } AddedInterface: Add eth0 [10.244.3.225/24]
May 22 01:40:14.093: INFO: At 2021-05-22 01:36:17 +0000 UTC - event for pod-ac068ac2-7008-4da8-84c9-09b597b100b5: {kubelet node1} BackOff: Back-off pulling image "docker.io/library/busybox:1.29"
May 22 01:40:14.093: INFO: At 2021-05-22 01:36:17 +0000 UTC - event for pod-ac068ac2-7008-4da8-84c9-09b597b100b5: {kubelet node1} Failed: Error: ImagePullBackOff
May 22 01:40:14.093: INFO: At 2021-05-22 01:36:17 +0000 UTC - event for pod-cb0494ec-13e3-4046-a345-1d02ce011718: {kubelet node1} Failed: Error: ErrImagePull
May 22 01:40:14.093: INFO: At 2021-05-22 01:36:17 +0000 UTC - event for pod-cb0494ec-13e3-4046-a345-1d02ce011718: {kubelet node1} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
May 22 01:40:14.093: INFO: At 2021-05-22 01:36:17 +0000 UTC - event for pod-e905733c-287b-4a80-b63d-06c4b6d5effd: {kubelet node1} BackOff: Back-off pulling image "docker.io/library/busybox:1.29"
May 22 01:40:14.093: INFO: At 2021-05-22 01:36:17 +0000 UTC - event for pod-e905733c-287b-4a80-b63d-06c4b6d5effd: {kubelet node1} Failed: Error: ImagePullBackOff
May 22 01:40:14.093: INFO: At 2021-05-22 01:36:18 +0000 UTC - event for pod-550c8d3c-c425-44a5-86c7-6aa695148c5b: {kubelet node1} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
May 22 01:40:14.093: INFO: At 2021-05-22 01:36:18 +0000 UTC - event for pod-cb0494ec-13e3-4046-a345-1d02ce011718: {kubelet node1} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
May 22 01:40:14.093: INFO: At 2021-05-22 01:36:19 +0000 UTC - event for pod-9a8fdccc-3091-428e-8cd6-ce188c633067: {kubelet node1} BackOff: Back-off pulling image "docker.io/library/busybox:1.29"
May 22 01:40:14.093: INFO: At 2021-05-22 01:36:19 +0000 UTC - event for pod-9a8fdccc-3091-428e-8cd6-ce188c633067: {kubelet node1} Failed: Error: ImagePullBackOff
May 22 01:40:14.093: INFO: At 2021-05-22 01:36:19 +0000 UTC - event for pod-9a8fdccc-3091-428e-8cd6-ce188c633067: {kubelet node1} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
May 22 01:40:14.093: INFO: At 2021-05-22 01:36:19 +0000 UTC - event for pod-9a8fdccc-3091-428e-8cd6-ce188c633067: {kubelet node1} Failed: Error: ErrImagePull
May 22 01:40:14.093: INFO: At 2021-05-22 01:36:21 +0000 UTC - event for pod-550c8d3c-c425-44a5-86c7-6aa695148c5b: {multus } AddedInterface: Add eth0 [10.244.3.226/24]
May 22 01:40:14.093: INFO: At 2021-05-22 01:36:21 +0000 UTC - event for pod-de79b91d-d259-449a-9636-9b73aa3abbf4: {kubelet node1} Failed: Error: ErrImagePull
May 22 01:40:14.093: INFO: At 2021-05-22 01:36:21 +0000 UTC - event for pod-de79b91d-d259-449a-9636-9b73aa3abbf4: {kubelet node1} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
May 22 01:40:14.093: INFO: At 2021-05-22 01:36:22 +0000 UTC - event for pod-550c8d3c-c425-44a5-86c7-6aa695148c5b: {kubelet node1} BackOff: Back-off pulling image "docker.io/library/busybox:1.29"
May 22 01:40:14.093: INFO: At 2021-05-22 01:36:22 +0000 UTC - event for pod-550c8d3c-c425-44a5-86c7-6aa695148c5b: {kubelet node1} Failed: Error: ImagePullBackOff
May 22 01:40:14.093: INFO: At 2021-05-22 01:36:22 +0000 UTC - event for pod-83cf990b-2e75-49d0-978d-3c83c55b4b18: {kubelet node1} Failed: Error: ErrImagePull
May 22 01:40:14.093: INFO: At 2021-05-22 01:36:22 +0000 UTC - event for pod-83cf990b-2e75-49d0-978d-3c83c55b4b18: {kubelet node1} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
May 22 01:40:14.093: INFO: At 2021-05-22 01:36:22 +0000 UTC - event for pod-9d9e2186-4fd6-4a51-9dd4-f11b858ec786: {multus } AddedInterface: Add eth0 [10.244.3.228/24]
May 22 01:40:14.093: INFO: At 2021-05-22 01:36:22 +0000 UTC - event for pod-cb0494ec-13e3-4046-a345-1d02ce011718: {multus } AddedInterface: Add eth0 [10.244.3.227/24]
May 22 01:40:14.093: INFO: At 2021-05-22 01:36:22 +0000 UTC - event for pod-cb0494ec-13e3-4046-a345-1d02ce011718: {kubelet node1} Failed: Error: ImagePullBackOff
May 22 01:40:14.093: INFO: At 2021-05-22 01:36:22 +0000 UTC - event for pod-cb0494ec-13e3-4046-a345-1d02ce011718: {kubelet node1} BackOff: Back-off pulling image "docker.io/library/busybox:1.29"
May 22 01:40:14.093: INFO: At 2021-05-22 01:36:22 +0000 UTC - event for pod-de79b91d-d259-449a-9636-9b73aa3abbf4: {kubelet node1} BackOff: Back-off pulling image "docker.io/library/busybox:1.29"
May 22 01:40:14.093: INFO: At 2021-05-22 01:36:22 +0000 UTC - event for pod-de79b91d-d259-449a-9636-9b73aa3abbf4: {kubelet node1} Failed: Error: ImagePullBackOff
May 22 01:40:14.093: INFO: At 2021-05-22 01:36:23 +0000 UTC - event for pod-6a5ca55b-7fea-4e70-81f4-251c29d7a077: {kubelet node1} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
May 22 01:40:14.093: INFO: At 2021-05-22 01:36:23 +0000 UTC - event for pod-6a5ca55b-7fea-4e70-81f4-251c29d7a077: {kubelet node1} Failed: Error: ErrImagePull
May 22 01:40:14.093: INFO: At 2021-05-22 01:36:23 +0000 UTC - event for pod-83cf990b-2e75-49d0-978d-3c83c55b4b18: {kubelet node1} Failed: Error: ImagePullBackOff
May 22 01:40:14.093: INFO: At 2021-05-22 01:36:23 +0000 UTC - event for pod-83cf990b-2e75-49d0-978d-3c83c55b4b18: {kubelet node1} BackOff: Back-off pulling image "docker.io/library/busybox:1.29"
May 22 01:40:14.093: INFO: At 2021-05-22 01:36:24 +0000 UTC - event for pod-6a5ca55b-7fea-4e70-81f4-251c29d7a077: {kubelet node1} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
May 22 01:40:14.093: INFO: At 2021-05-22 01:36:24 +0000 UTC - event for pod-fb82cc33-a266-43a8-93a5-f5203a922ef1: {kubelet node1} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
May 22 01:40:14.093: INFO: At 2021-05-22 01:36:24 +0000 UTC - event for pod-fb82cc33-a266-43a8-93a5-f5203a922ef1: {kubelet node1} Failed: Error: ErrImagePull
May 22 01:40:14.093: INFO: At 2021-05-22 01:36:25 +0000 UTC - event for pod-fb82cc33-a266-43a8-93a5-f5203a922ef1: {kubelet node1} BackOff: Back-off pulling image "docker.io/library/busybox:1.29"
May 22 01:40:14.093: INFO: At 2021-05-22 01:36:25 +0000 UTC - event for pod-fb82cc33-a266-43a8-93a5-f5203a922ef1: {kubelet node1} Failed: Error: ImagePullBackOff
May 22 01:40:14.093: INFO: At 2021-05-22 01:36:25 +0000 UTC - event for pod-fc92ed50-9307-4502-99c5-8d7b7aee5c25: {kubelet node1} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
May 22 01:40:14.093: INFO: At 2021-05-22 01:36:25 +0000 UTC - event for pod-fc92ed50-9307-4502-99c5-8d7b7aee5c25: {kubelet node1} Failed: Error: ErrImagePull
May 22 01:40:14.093: INFO: At 2021-05-22 01:36:26 +0000 UTC - event for pod-6a5ca55b-7fea-4e70-81f4-251c29d7a077: {kubelet node1} Failed: Error: ImagePullBackOff
May 22 01:40:14.093: INFO: At 2021-05-22 01:36:26 +0000 UTC - event for pod-6a5ca55b-7fea-4e70-81f4-251c29d7a077: {kubelet node1} BackOff: Back-off pulling image "docker.io/library/busybox:1.29"
May 22 01:40:14.093: INFO: At 2021-05-22 01:36:26 +0000 UTC - event for pod-6a5ca55b-7fea-4e70-81f4-251c29d7a077: {multus } AddedInterface: Add eth0 [10.244.3.229/24]
May 22 01:40:14.093: INFO: At 2021-05-22 01:36:26 +0000 UTC - event for pod-fc92ed50-9307-4502-99c5-8d7b7aee5c25: {kubelet node1} BackOff: Back-off pulling image "docker.io/library/busybox:1.29"
May 22 01:40:14.093: INFO: At 2021-05-22 01:36:26 +0000 UTC - event for pod-fc92ed50-9307-4502-99c5-8d7b7aee5c25: {kubelet node1} Failed: Error: ImagePullBackOff
May 22 01:40:14.093: INFO: At 2021-05-22 01:36:28 +0000 UTC - event for pod-5247e6a6-6702-4904-9bd2-93e1126ee447: {kubelet node1} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
May 22 01:40:14.093: INFO: At 2021-05-22 01:36:28 +0000 UTC - event for pod-5247e6a6-6702-4904-9bd2-93e1126ee447: {kubelet node1} Failed: Error: ErrImagePull
May 22 01:40:14.093: INFO: At 2021-05-22 01:36:29 +0000 UTC - event for pod-5247e6a6-6702-4904-9bd2-93e1126ee447: {kubelet node1} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
May 22 01:40:14.093: INFO: At 2021-05-22 01:36:29 +0000 UTC - event for pod-eb2dd31d-3c35-44f2-a121-e69e63ba64ad: {kubelet node1} Failed: Error: ErrImagePull
May 22 01:40:14.093: INFO: At 2021-05-22 01:36:29 +0000 UTC - event for pod-eb2dd31d-3c35-44f2-a121-e69e63ba64ad: {kubelet node1} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
May 22 01:40:14.093: INFO: At 2021-05-22 01:36:30 +0000 UTC - event for pod-5247e6a6-6702-4904-9bd2-93e1126ee447: {kubelet node1} BackOff: Back-off pulling image "docker.io/library/busybox:1.29"
May 22 01:40:14.093: INFO: At 2021-05-22 01:36:30 +0000 UTC - event for pod-5247e6a6-6702-4904-9bd2-93e1126ee447: {kubelet node1} Failed: Error: ImagePullBackOff
May 22 01:40:14.093: INFO: At 2021-05-22 01:36:30 +0000 UTC - event for pod-5247e6a6-6702-4904-9bd2-93e1126ee447: {multus } AddedInterface: Add eth0 [10.244.3.230/24]
May 22 01:40:14.093: INFO: At 2021-05-22 01:36:30 +0000 UTC - event for pod-eb2dd31d-3c35-44f2-a121-e69e63ba64ad: {kubelet node1} Failed: Error: ImagePullBackOff
May 22 01:40:14.093: INFO: At 2021-05-22 01:36:30 +0000 UTC - event for pod-eb2dd31d-3c35-44f2-a121-e69e63ba64ad: {kubelet node1} BackOff: Back-off pulling image "docker.io/library/busybox:1.29"
May 22 01:40:14.093: INFO: At 2021-05-22 01:36:31 +0000 UTC - event for pod-c506f916-7ca2-4154-9d6c-30ceb910a947: {kubelet node1} Failed: Error: ErrImagePull
May 22 01:40:14.093: INFO: At 2021-05-22 01:36:31 +0000 UTC - event for pod-c506f916-7ca2-4154-9d6c-30ceb910a947: {kubelet node1} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
May 22 01:40:14.093: INFO: At 2021-05-22 01:36:31 +0000 UTC - event for pod-c506f916-7ca2-4154-9d6c-30ceb910a947: {kubelet node1} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
May 22 01:40:14.093: INFO: At 2021-05-22 01:36:32 +0000 UTC - event for pod-3aa7b4b3-1a97-4094-ab85-a439721516b1: {kubelet node1} BackOff: Back-off pulling image "docker.io/library/busybox:1.29"
May 22 01:40:14.093: INFO: At 2021-05-22 01:36:32 +0000 UTC - event for pod-3aa7b4b3-1a97-4094-ab85-a439721516b1: {kubelet node1} Failed: Error: ImagePullBackOff
May 22 01:40:14.093: INFO: At 2021-05-22 01:36:32 +0000 UTC - event for pod-3aa7b4b3-1a97-4094-ab85-a439721516b1: {kubelet node1} Failed: Error: ErrImagePull
May 22 01:40:14.093: INFO: At 2021-05-22 01:36:32 +0000 UTC - event for pod-3aa7b4b3-1a97-4094-ab85-a439721516b1: {kubelet node1} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
May 22 01:40:14.093: INFO: At 2021-05-22 01:36:33 +0000 UTC - event for pod-c506f916-7ca2-4154-9d6c-30ceb910a947: {kubelet node1} BackOff: Back-off pulling image "docker.io/library/busybox:1.29"
May 22 01:40:14.093: INFO: At 2021-05-22 01:36:33 +0000 UTC - event for pod-c506f916-7ca2-4154-9d6c-30ceb910a947: {kubelet node1} Failed: Error: ImagePullBackOff
May 22 01:40:14.093: INFO: At 2021-05-22 01:36:33 +0000 UTC - event for pod-c506f916-7ca2-4154-9d6c-30ceb910a947: {multus } AddedInterface: Add eth0 [10.244.3.231/24]
May 22 01:40:14.093: INFO: At 2021-05-22 01:36:35 +0000 UTC - event for pod-57430379-2a25-4b50-80f8-1f9d8fb22de4: {kubelet node1} Failed: Error: ErrImagePull
May 22 01:40:14.093: INFO: At 2021-05-22 01:36:35 +0000 UTC - event for pod-57430379-2a25-4b50-80f8-1f9d8fb22de4: {kubelet node1} Failed: Error: ImagePullBackOff
May 22 01:40:14.093: INFO: At 2021-05-22 01:36:35 +0000 UTC - event for pod-57430379-2a25-4b50-80f8-1f9d8fb22de4: {kubelet node1} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
May 22 01:40:14.093: INFO: At 2021-05-22 01:36:35 +0000 UTC - event for pod-57430379-2a25-4b50-80f8-1f9d8fb22de4: {kubelet node1} BackOff: Back-off pulling image "docker.io/library/busybox:1.29"
May 22 01:40:14.093: INFO: At 2021-05-22 01:36:35 +0000 UTC - event for pod-be10ab73-38c1-4d88-82d7-9594ce87d7f2: {kubelet node1} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
May 22 01:40:14.093: INFO: At 2021-05-22 01:36:35 +0000 UTC - event for pod-be10ab73-38c1-4d88-82d7-9594ce87d7f2: {kubelet node1} Failed: Error: ErrImagePull
May 22 01:40:14.093: INFO: At 2021-05-22 01:36:36 +0000 UTC - event for pod-0806eda0-8dd0-499b-8419-f4c96fb3bf90: {kubelet node1} Failed: Error: ErrImagePull
May 22 01:40:14.093: INFO: At 2021-05-22 01:36:36 +0000 UTC - event for pod-0806eda0-8dd0-499b-8419-f4c96fb3bf90: {kubelet node1} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
May 22 01:40:14.093: INFO: At 2021-05-22 01:36:36 +0000 UTC - event for pod-be10ab73-38c1-4d88-82d7-9594ce87d7f2: {kubelet node1} Failed: Error: ImagePullBackOff
May 22 01:40:14.093: INFO: At 2021-05-22 01:36:36 +0000 UTC - event for pod-be10ab73-38c1-4d88-82d7-9594ce87d7f2: {kubelet node1} BackOff: Back-off pulling image "docker.io/library/busybox:1.29"
May 22 01:40:14.093: INFO: At 2021-05-22 01:36:37 +0000 UTC - event for pod-0806eda0-8dd0-499b-8419-f4c96fb3bf90: {kubelet node1} BackOff: Back-off pulling image "docker.io/library/busybox:1.29"
May 22 01:40:14.093: INFO: At 2021-05-22 01:36:37 +0000 UTC - event for pod-0806eda0-8dd0-499b-8419-f4c96fb3bf90: {kubelet node1} Failed: Error: ImagePullBackOff
May 22 01:40:14.093: INFO: At 2021-05-22 01:36:41 +0000 UTC - event for pod-4c5b332a-c9d0-4347-8498-3906e84828c4: {kubelet node1} Failed: Error: ErrImagePull
May 22 01:40:14.093: INFO: At 2021-05-22 01:36:41 +0000 UTC - event for pod-4c5b332a-c9d0-4347-8498-3906e84828c4: {kubelet node1} BackOff: Back-off pulling image "docker.io/library/busybox:1.29"
May 22 01:40:14.093: INFO: At 2021-05-22 01:36:41 +0000 UTC - event for pod-4c5b332a-c9d0-4347-8498-3906e84828c4: {kubelet node1} Failed: Error: ImagePullBackOff
May 22 01:40:14.093: INFO: At 2021-05-22 01:36:41 +0000 UTC - event for pod-4c5b332a-c9d0-4347-8498-3906e84828c4: {kubelet node1} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
May 22 01:40:14.093: INFO: At 2021-05-22 01:36:42 +0000 UTC - event for pod-17c41de8-712a-4b64-8e12-b9167f1ccc82: {kubelet node1} Failed: Error: ErrImagePull
May 22 01:40:14.093: INFO: At 2021-05-22 01:36:42 +0000 UTC - event for pod-17c41de8-712a-4b64-8e12-b9167f1ccc82: {kubelet node1} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
May 22 01:40:14.093: INFO: At 2021-05-22 01:36:42 +0000 UTC - event for pod-17c41de8-712a-4b64-8e12-b9167f1ccc82: {kubelet node1} BackOff: Back-off pulling image "docker.io/library/busybox:1.29"
May 22 01:40:14.093: INFO: At 2021-05-22 01:36:42 +0000 UTC - event for pod-17c41de8-712a-4b64-8e12-b9167f1ccc82: {kubelet node1} Failed: Error: ImagePullBackOff
May 22 01:40:14.093: INFO: At 2021-05-22 01:36:44 +0000 UTC - event for pod-c59e1fe4-fe07-4049-8c93-09d9a5d6fcd0: {kubelet node1} Failed: Error: ErrImagePull
May 22 01:40:14.093: INFO: At 2021-05-22 01:36:44 +0000 UTC - event for pod-c59e1fe4-fe07-4049-8c93-09d9a5d6fcd0: {kubelet node1} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
May 22 01:40:14.093: INFO: At 2021-05-22 01:36:45 +0000 UTC - event for pod-c59e1fe4-fe07-4049-8c93-09d9a5d6fcd0: {kubelet node1} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
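Event dumps like the one above can also be pulled outside the e2e framework. The following is a minimal client-go sketch, assuming the kubeconfig path this run logs and a placeholder namespace name, that lists a namespace's events and prints the Failed/BackOff ones; it is an illustration, not code from this suite.

// Hedged sketch: list events in a namespace and surface image-pull failures,
// roughly the signal the framework dumps above. "pv-test" is a placeholder.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// Fetch all events in the test namespace (placeholder name).
	events, err := client.CoreV1().Events("pv-test").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, ev := range events.Items {
		// "Failed" covers ErrImagePull; "BackOff" covers ImagePullBackOff retries.
		if ev.Reason == "Failed" || ev.Reason == "BackOff" {
			fmt.Printf("%v %s/%s: %s\n", ev.LastTimestamp, ev.InvolvedObject.Kind, ev.InvolvedObject.Name, ev.Message)
		}
	}
}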
May 22 01:40:14.093: INFO: At 2021-05-22 01:36:45 +0000 UTC - event for pod-fbe47191-a015-44ef-a732-9f8f673c81a1: {kubelet node1} Failed: Error: ErrImagePull
May 22 01:40:14.093: INFO: At 2021-05-22 01:36:45 +0000 UTC - event for pod-fbe47191-a015-44ef-a732-9f8f673c81a1: {kubelet node1} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
May 22 01:40:14.093: INFO: At 2021-05-22 01:36:45 +0000 UTC - event for pod-fbe47191-a015-44ef-a732-9f8f673c81a1: {kubelet node1} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
May 22 01:40:14.093: INFO: At 2021-05-22 01:36:46 +0000 UTC - event for pod-f75bea85-5332-46b1-8674-f383dac77d5c: {kubelet node1} Failed: Error: ErrImagePull
May 22 01:40:14.093: INFO: At 2021-05-22 01:36:46 +0000 UTC - event for pod-f75bea85-5332-46b1-8674-f383dac77d5c: {kubelet node1} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
May 22 01:40:14.093: INFO: At 2021-05-22 01:36:47 +0000 UTC - event for pod-f75bea85-5332-46b1-8674-f383dac77d5c: {kubelet node1} BackOff: Back-off pulling image "docker.io/library/busybox:1.29"
May 22 01:40:14.093: INFO: At 2021-05-22 01:36:47 +0000 UTC - event for pod-f75bea85-5332-46b1-8674-f383dac77d5c: {kubelet node1} Failed: Error: ImagePullBackOff
May 22 01:40:14.093: INFO: At 2021-05-22 01:36:53 +0000 UTC - event for pod-fbe47191-a015-44ef-a732-9f8f673c81a1: {kubelet node1} Failed: Error: ImagePullBackOff
May 22 01:40:14.093: INFO: At 2021-05-22 01:36:53 +0000 UTC - event for pod-fbe47191-a015-44ef-a732-9f8f673c81a1: {multus } AddedInterface: Add eth0 [10.244.3.232/24]
May 22 01:40:14.093: INFO: At 2021-05-22 01:36:53 +0000 UTC - event for pod-fbe47191-a015-44ef-a732-9f8f673c81a1: {kubelet node1} BackOff: Back-off pulling image "docker.io/library/busybox:1.29"
May 22 01:40:14.093: INFO: At 2021-05-22 01:36:54 +0000 UTC - event for pod-c59e1fe4-fe07-4049-8c93-09d9a5d6fcd0: {multus } AddedInterface: Add eth0 [10.244.3.233/24]
May 22 01:40:14.093: INFO: At 2021-05-22 01:36:57 +0000 UTC - event for pod-6acee2ee-d14c-4b78-9ff8-3e1f242a266a: {multus } AddedInterface: Add eth0 [10.244.3.234/24]
May 22 01:40:14.093: INFO: At 2021-05-22 01:36:57 +0000 UTC - event for pod-6acee2ee-d14c-4b78-9ff8-3e1f242a266a: {kubelet node1} BackOff: Back-off pulling image "docker.io/library/busybox:1.29"
May 22 01:40:14.093: INFO: At 2021-05-22 01:36:57 +0000 UTC - event for pod-6acee2ee-d14c-4b78-9ff8-3e1f242a266a: {kubelet node1} Failed: Error: ImagePullBackOff
May 22 01:40:14.094: INFO: At 2021-05-22 01:36:58 +0000 UTC - event for pod-9d9e2186-4fd6-4a51-9dd4-f11b858ec786: {multus } AddedInterface: Add eth0 [10.244.3.235/24]
May 22 01:40:14.094: INFO: At 2021-05-22 01:36:59 +0000 UTC - event for pod-fbe47191-a015-44ef-a732-9f8f673c81a1: {multus } AddedInterface: Add eth0 [10.244.3.236/24]
May 22 01:40:14.094: INFO: At 2021-05-22 01:37:26 +0000 UTC - event for pod-c59e1fe4-fe07-4049-8c93-09d9a5d6fcd0: {kubelet node1} Failed: Error: ImagePullBackOff
May 22 01:40:14.094: INFO: At 2021-05-22 01:37:26 +0000 UTC - event for pod-c59e1fe4-fe07-4049-8c93-09d9a5d6fcd0: {kubelet node1} BackOff: Back-off pulling image "docker.io/library/busybox:1.29"
May 22 01:40:14.094: INFO: At 2021-05-22 01:37:31 +0000 UTC - event for pod-f75bea85-5332-46b1-8674-f383dac77d5c: {kubelet node1} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
May 22 01:40:14.094: INFO: At 2021-05-22 01:38:34 +0000 UTC - event for pod-fb82cc33-a266-43a8-93a5-f5203a922ef1: {kubelet node1} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
May 22 01:40:14.102: INFO: POD NODE PHASE GRACE CONDITIONS
May 22 01:40:14.102: INFO: pod-0806eda0-8dd0-499b-8419-f4c96fb3bf90 node1 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-22 01:35:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-22 01:35:13 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-22 01:35:13 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-22 01:35:13 +0000 UTC }]
May 22 01:40:14.102: INFO: pod-0ef0bcb1-3e96-4db7-875a-08191af24833 node1 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-22 01:35:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-22 01:35:13 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-22 01:35:13 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-22 01:35:13 +0000 UTC }]
May 22 01:40:14.102: INFO: pod-16028087-0ae3-489e-8b7b-2f309c12136f node1 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-22 01:35:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-22 01:35:13 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-22 01:35:13 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-22 01:35:13 +0000 UTC }]
May 22 01:40:14.102: INFO: pod-166d9b83-9cda-4fee-b89b-128f7e31b41e node1 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-22 01:35:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-22 01:35:13 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-22 01:35:13 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-22 01:35:13 +0000 UTC }]
May 22 01:40:14.103: INFO: pod-17c41de8-712a-4b64-8e12-b9167f1ccc82 node1 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-22 01:35:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-22 01:35:13 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-22 01:35:13 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-22 01:35:13 +0000 UTC }]
May 22 01:40:14.103: INFO: pod-23fd4946-8b00-4e14-87d6-628987f1196e node1 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-22 01:35:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-22 01:35:13 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-22 01:35:13 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-22 01:35:13 +0000 UTC }]
May 22 01:40:14.103: INFO: pod-25416362-b799-49a7-9006-efdf18ad9c5d node1 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-22 01:35:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-22 01:35:13 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-22 01:35:13 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-22 01:35:13 +0000 UTC }]
May 22 01:40:14.103: INFO: pod-2ce73150-a2e6-468a-87df-ddb28cb19b3b node1 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-22 01:35:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-22 01:35:13 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-22 01:35:13 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-22 01:35:13 +0000 UTC }]
May 22 01:40:14.103: INFO: pod-33f6af65-447d-47e5-856a-ee0520e50654 node1 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-22 01:35:13 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-05-22 01:35:20 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-05-22 01:35:20 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-22 01:35:13 +0000 UTC }]
May 22 01:40:14.103: INFO: pod-3aa7b4b3-1a97-4094-ab85-a439721516b1 node1 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-22 01:35:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-22 01:35:13 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-22 01:35:13 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-22 01:35:13 +0000 UTC }]
May 22 01:40:14.103: INFO: pod-4c5b332a-c9d0-4347-8498-3906e84828c4 node1 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-22 01:35:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-22 01:35:13 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-22 01:35:13 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-22 01:35:13 +0000 UTC }]
May 22 01:40:14.103: INFO: pod-51a38ecb-aa5d-4cd5-9e9a-5e01ba11b26a node1 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-22 01:35:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-22 01:35:13 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-22 01:35:13 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-22 01:35:13 +0000 UTC }]
May 22 01:40:14.103: INFO: pod-5247e6a6-6702-4904-9bd2-93e1126ee447 node1 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-22 01:35:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-22 01:35:13 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-22 01:35:13 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-22 01:35:13 +0000 UTC }]
May 22 01:40:14.103: INFO: pod-52aa3294-e25c-4d72-bd4f-7a84f4491e1e node1 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-22 01:35:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-22 01:35:13 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-22 01:35:13 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-22 01:35:13 +0000 UTC }]
May 22 01:40:14.103: INFO: pod-550c8d3c-c425-44a5-86c7-6aa695148c5b node1 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-22 01:35:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-22 01:35:13 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-22 01:35:13 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-22 01:35:13 +0000 UTC }]
May 22 01:40:14.103: INFO: pod-57430379-2a25-4b50-80f8-1f9d8fb22de4 node1 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-22 01:35:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-22 01:35:13 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-22 01:35:13 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-22 01:35:13 +0000 UTC }]
May 22 01:40:14.103: INFO: pod-6663facf-422c-4de6-a474-fc640916e836 node1 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-22 01:35:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-22 01:35:13 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-22 01:35:13 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-22 01:35:13 +0000 UTC }]
May 22 01:40:14.103: INFO: pod-6a3981b8-47b7-4324-aa24-42b9517fa1b8 node1 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-22 01:35:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-22 01:35:13 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-22 01:35:13 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-22 01:35:13 +0000 UTC }]
May 22 01:40:14.103: INFO: pod-6a5ca55b-7fea-4e70-81f4-251c29d7a077 node1 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-22 01:35:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-22 01:35:13 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-22 01:35:13 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-22 01:35:13 +0000 UTC }]
May 22 01:40:14.103: INFO: pod-6acee2ee-d14c-4b78-9ff8-3e1f242a266a node1 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-22 01:35:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-22 01:35:13 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-22 01:35:13 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-22 01:35:13 +0000 UTC }]
May 22 01:40:14.103: INFO: pod-79204cb4-4be8-4740-8cba-d13da7fa71de node1 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-22 01:35:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-22 01:35:13 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-22 01:35:13 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-22 01:35:13 +0000 UTC }]
May 22 01:40:14.103: INFO: pod-81772626-74ba-492c-8d3e-0972578acee5 node1 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-22 01:35:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-22 01:35:13 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-22 01:35:13 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-22 01:35:13 +0000 UTC }]
May 22 01:40:14.103: INFO: pod-83cf990b-2e75-49d0-978d-3c83c55b4b18 node1 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-22 01:35:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-22 01:35:14 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-22 01:35:14 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-22 01:35:14 +0000 UTC }]
May 22 01:40:14.103: INFO: pod-958e2d96-1d04-4d0b-87e0-7be3d23c258c node1 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-22 01:35:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-22 01:35:13 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-22 01:35:13 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-22 01:35:13 +0000 UTC }]
May 22 01:40:14.103: INFO: pod-96515024-2799-4240-ad76-2e3f70eb11c4 node1 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-22 01:35:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-22 01:35:13 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-22 01:35:13 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-22 01:35:13 +0000 UTC }]
May 22 01:40:14.103: INFO: pod-9a8fdccc-3091-428e-8cd6-ce188c633067 node1 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-22 01:35:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-22 01:35:13 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-22 01:35:13 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-22 01:35:13 +0000 UTC }]
May 22 01:40:14.103: INFO: pod-9c79a773-79d1-4e12-8a5f-7c40fd09ac29 node1 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-22 01:35:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-22 01:35:13 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-22 01:35:13 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-22 01:35:13 +0000 UTC }]
May 22 01:40:14.104: INFO: pod-9d9e2186-4fd6-4a51-9dd4-f11b858ec786 node1 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-22 01:35:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-22 01:35:13 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-22 01:35:13 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-22 01:35:13 +0000 UTC }]
May 22 01:40:14.104: INFO: pod-a1f65dd2-4488-4e0f-b5b0-7a2a720b0782 node1 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-22 01:35:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-22 01:35:13 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-22 01:35:13 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-22 01:35:13 +0000 UTC }]
May 22 01:40:14.104: INFO: pod-a7e733ab-c9e1-4a96-8140-66bfbe914aa7 node1 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-22 01:35:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-22 01:35:13 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-22 01:35:13 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-22 01:35:13 +0000 UTC }]
May 22 01:40:14.104: INFO: pod-ac068ac2-7008-4da8-84c9-09b597b100b5 node1 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-22 01:35:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-22 01:35:13 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-22 01:35:13 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-22 01:35:13 +0000 UTC }]
May 22 01:40:14.104: INFO: pod-b5622c92-09b0-411f-8700-8aea35ec6f61 node1 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-22 01:35:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-22 01:35:13 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-22 01:35:13 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-22 01:35:13 +0000 UTC }]
May 22 01:40:14.104: INFO: pod-bdc092d6-9ec5-4586-82d9-f8b25e423447 node1 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-22 01:35:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-22 01:35:13 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-22 01:35:13 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-22 01:35:13 +0000 UTC }]
May 22 01:40:14.104: INFO: pod-be10ab73-38c1-4d88-82d7-9594ce87d7f2 node1 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-22 01:35:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-22 01:35:13 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-22 01:35:13 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-22 01:35:13 +0000 UTC }]
May 22 01:40:14.104: INFO: pod-c4e86621-d05a-4ab6-b894-950537713573 node1 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-22 01:35:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-22 01:35:13 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-22 01:35:13 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-22 01:35:13 +0000 UTC }]
May 22 01:40:14.104: INFO: pod-c506f916-7ca2-4154-9d6c-30ceb910a947 node1 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-22 01:35:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-22 01:35:13 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-22 01:35:13 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-22 01:35:13 +0000 UTC }]
May 22 01:40:14.104: INFO: pod-c59e1fe4-fe07-4049-8c93-09d9a5d6fcd0 node1 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-22 01:35:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-22 01:35:13 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-22 01:35:13 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-22 01:35:13 +0000 UTC }]
May 22 01:40:14.104: INFO: pod-cb0494ec-13e3-4046-a345-1d02ce011718 node1 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-22 01:35:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-22 01:35:14 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-22 01:35:14 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-22 01:35:14 +0000 UTC }]
May 22 01:40:14.104: INFO: pod-d593da45-ae55-40b4-b38a-616eaafcc189 node1 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-22 01:35:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-22 01:35:13 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-22 01:35:13 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-22 01:35:13 +0000 UTC }]
May 22 01:40:14.104: INFO: pod-d8aa6580-4df3-4e8e-87e2-19954daee3ce node1 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-22 01:35:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-22 01:35:13 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-22 01:35:13 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-22 01:35:13 +0000 UTC }]
May 22 01:40:14.104: INFO: pod-de79b91d-d259-449a-9636-9b73aa3abbf4 node1 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-22 01:35:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-22 01:35:13 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-22 01:35:13 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-22 01:35:13 +0000 UTC }]
May 22 01:40:14.104: INFO: pod-e325b6cd-538a-49cc-b2d2-f5610d29e416 node1 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-22 01:35:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-22 01:35:13 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-22 01:35:13 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-22 01:35:13 +0000 UTC }]
May 22 01:40:14.104: INFO: pod-e905733c-287b-4a80-b63d-06c4b6d5effd node1 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-22 01:35:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-22 01:35:13 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-22 01:35:13 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-22 01:35:13 +0000 UTC }]
May 22 01:40:14.104: INFO: pod-eb2dd31d-3c35-44f2-a121-e69e63ba64ad node1 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-22 01:35:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-22 01:35:13 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-22 01:35:13 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-22 01:35:13 +0000 UTC }]
May 22 01:40:14.104: INFO: pod-ee148ef8-0618-42d8-bdb3-74178162c2d0 node1 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-22 01:35:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-22 01:35:13 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-22 01:35:13 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-22 01:35:13 +0000 UTC }]
May 22 01:40:14.104: INFO: pod-f75bea85-5332-46b1-8674-f383dac77d5c node1 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-22 01:35:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-22 01:35:13 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-22 01:35:13 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-22 01:35:13 +0000 UTC }]
May 22 01:40:14.104: INFO: pod-fb82cc33-a266-43a8-93a5-f5203a922ef1 node1 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-22 01:35:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-22 01:35:14 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-22 01:35:14 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-22 01:35:13 +0000 UTC }]
May 22 01:40:14.104: INFO: pod-fbe47191-a015-44ef-a732-9f8f673c81a1 node1 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-22 01:35:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-22 01:35:14 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-22 01:35:14 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-22 01:35:14 +0000 UTC }]
May 22 01:40:14.104: INFO: pod-fc92ed50-9307-4502-99c5-8d7b7aee5c25 node1 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-22 01:35:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-22 01:35:13 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-22 01:35:13 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-22 01:35:13 +0000 UTC }]
May 22 01:40:14.104: INFO: pod-ff2283db-c4be-44af-ba93-13621c43139d node1 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-22 01:35:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-22 01:35:14 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-22 01:35:14 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-22 01:35:14 +0000 UTC }]
May 22 01:40:14.104: INFO:
May 22 01:40:14.109: INFO: Logging node info for node master1
May 22 01:40:14.111: INFO: Node Info: &Node{ObjectMeta:{master1 /api/v1/nodes/master1 18cf10c1-c08b-4d36-bbe6-5aa86f02296e 165005 0 2021-05-21 19:55:07 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master1 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"d2:e0:bb:d8:54:80"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.202 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/master.version:v0.7.0 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2021-05-21 19:55:07 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {kubeadm Update v1 2021-05-21 19:55:08 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/master":{}}}}} {flanneld Update v1 2021-05-21 19:57:39 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-05-21 19:57:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}} {nfd-master Update v1 2021-05-21 20:04:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/master.version":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234763776 0} {} 196518324Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324599808 0} {} 195629492Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-21 19:59:55 +0000 UTC,LastTransitionTime:2021-05-21 19:59:55 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-22 01:40:14 +0000 UTC,LastTransitionTime:2021-05-21 19:55:07 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-22 01:40:14 +0000 UTC,LastTransitionTime:2021-05-21 19:55:07 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-22 01:40:14 +0000 UTC,LastTransitionTime:2021-05-21 19:55:07 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-22 01:40:14 +0000 UTC,LastTransitionTime:2021-05-21 19:59:16 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.202,},NodeAddress{Type:Hostname,Address:master1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:0b916e2f445c4d05b6a9058a788d9410,SystemUUID:00ACFB60-0631-E711-906E-0017A4403562,BootID:ab86499b-409f-44c9-86ff-e9bc113e4112,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:726676532,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f kubernetesui/dashboard-amd64:v2.1.0],SizeBytes:225733746,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0 k8s.gcr.io/e2e-test-images/agnhost:2.20],SizeBytes:113869866,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[gcr.io/k8s-staging-nfd/node-feature-discovery@sha256:5d116c2c340be665a2c8adc9aca7f91396bd5cbde4add4fdc8dab95d8db43425 gcr.io/k8s-staging-nfd/node-feature-discovery:v0.7.0],SizeBytes:108309584,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0
quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:73ca82b4ce829766d4f1f10947c3a338888f876fbed0540dc849c89ff256e90c k8s.gcr.io/coredns:1.7.0],SizeBytes:45227747,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[registry@sha256:1cd9409a311350c3072fe510b52046f104416376c126a479cef9a4dfe692cf57 registry:2.7.0],SizeBytes:24191168,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[localhost:30500/tas-controller@sha256:f013255695f4515c5b21b11281c7e0fb491082d15ec5a96adb8217e015a9c422 tas-controller:latest localhost:30500/tas-controller:0.1],SizeBytes:22922439,},ContainerImage{Names:[nginx@sha256:a97eb9ecc708c8aa715ccfb5e9338f5456e4b65575daf304f108301f3b497314 nginx:1.19.2-alpine],SizeBytes:22052669,},ContainerImage{Names:[localhost:30500/tas-extender@sha256:50746f542c19fda01d88ae124ce58c5a326dad7cd24d3c2d19fdf959cc7f0c49 tas-extender:latest localhost:30500/tas-extender:0.1],SizeBytes:21320903,},ContainerImage{Names:[@ :],SizeBytes:5577654,},ContainerImage{Names:[alpine@sha256:c0e9560cda118f9ec63ddefb4a173a2b2a0347082d7dff7dc14272e7841a5b5a alpine:3.12.1],SizeBytes:5573013,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
May 22 01:40:14.112: INFO: Logging kubelet events for node master1
May 22 01:40:14.116: INFO: Logging pods the kubelet thinks is on node master1
May 22 01:40:14.132: INFO: kube-proxy-zv2rb started at 2021-05-21 19:57:00 +0000 UTC (0+1 container statuses recorded)
May 22 01:40:14.132: INFO: Container kube-proxy ready: true, restart count 2
May 22 01:40:14.132: INFO: kube-flannel-5lkd2 started at 2021-05-21 19:57:34 +0000 UTC (1+1 container statuses recorded)
May 22 01:40:14.132: INFO: Init container install-cni ready: true, restart count 0
May 22 01:40:14.132: INFO: Container kube-flannel ready: true, restart count 2
May 22 01:40:14.132: INFO: node-exporter-m7jht started at 2021-05-21 20:07:54 +0000 UTC (0+2 container statuses recorded)
May 22 01:40:14.132: INFO: Container kube-rbac-proxy ready: true, restart count 0
May 22 01:40:14.132: INFO: Container node-exporter ready: true, restart count 0
May 22 01:40:14.132: INFO: coredns-7677f9bb54-wl7h4 started at 2021-05-22 00:44:35 +0000 UTC (0+1 container statuses recorded)
May 22 01:40:14.132: INFO: Container coredns ready: true, restart count 0
May 22 01:40:14.132: INFO: kube-scheduler-master1 started at 2021-05-21 19:59:16 +0000 UTC (0+1 container statuses recorded)
May 22 01:40:14.132: INFO: Container kube-scheduler ready: true, restart count 0
May 22 01:40:14.132: INFO: kube-apiserver-master1 started at 2021-05-21 20:03:10 +0000 UTC (0+1 container statuses recorded)
May 22 01:40:14.132: INFO: Container kube-apiserver ready: true, restart count 0
May 22 01:40:14.132: INFO: kube-controller-manager-master1 started at 2021-05-21 19:59:16 +0000 UTC (0+1 container statuses recorded)
May 22 01:40:14.132: INFO: Container kube-controller-manager ready: true, restart count 3
May 22 01:40:14.132: INFO: kube-multus-ds-amd64-z8khx started at 2021-05-21 19:57:42 +0000 UTC (0+1 container statuses recorded)
May 22 01:40:14.132: INFO: Container kube-multus ready: true, restart count 1
May 22 01:40:14.132: INFO: docker-registry-docker-registry-56cbc7bc58-mft7s started at 2021-05-21 20:00:26 +0000 UTC (0+2 container statuses recorded)
May 22 01:40:14.132: INFO: Container docker-registry ready: true, restart count 0
May 22 01:40:14.132: INFO: Container nginx ready: true, restart count 0
May 22 01:40:14.132: INFO: node-feature-discovery-controller-5bf5c49849-ktcdq started at 2021-05-21 20:03:57 +0000 UTC (0+1 container statuses recorded)
May 22 01:40:14.132: INFO: Container nfd-controller ready: true, restart count 0
W0522 01:40:14.142843      21 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
May 22 01:40:14.170: INFO: Latency metrics for node master1
May 22 01:40:14.170: INFO: Logging node info for node master2
May 22 01:40:14.173: INFO: Node Info: &Node{ObjectMeta:{master2 /api/v1/nodes/master2 d439093b-ce44-47e7-8b41-5739b7c49ca5 164972 0 2021-05-21 19:55:46 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master2 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"62:5a:fb:0b:f8:b0"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.203 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2021-05-21 19:55:46 +0000 UTC FieldsV1
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {kubeadm Update v1 2021-05-21 19:55:47 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/master":{}}}}} {flanneld Update v1 2021-05-21 19:57:39 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-05-21 19:57:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234763776 0} {} 196518324Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324599808 0} {} 195629492Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-21 20:00:49 +0000 UTC,LastTransitionTime:2021-05-21 20:00:49 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this 
node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-22 01:40:06 +0000 UTC,LastTransitionTime:2021-05-21 19:55:46 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-22 01:40:06 +0000 UTC,LastTransitionTime:2021-05-21 19:55:46 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-22 01:40:06 +0000 UTC,LastTransitionTime:2021-05-21 19:55:46 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-22 01:40:06 +0000 UTC,LastTransitionTime:2021-05-21 19:57:41 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.203,},NodeAddress{Type:Hostname,Address:master2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:495d1cf47bdb4c7e982c636c95de5648,SystemUUID:00A0DE53-E51D-E711-906E-0017A4403562,BootID:cca785c0-d61e-403e-91fd-d0d4f3fa573c,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:726676532,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f kubernetesui/dashboard-amd64:v2.1.0],SizeBytes:225733746,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0 k8s.gcr.io/e2e-test-images/agnhost:2.20],SizeBytes:113869866,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b 
k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:73ca82b4ce829766d4f1f10947c3a338888f876fbed0540dc849c89ff256e90c k8s.gcr.io/coredns:1.7.0],SizeBytes:45227747,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
May 22 01:40:14.173: INFO: Logging kubelet events for node master2
May 22 01:40:14.178: INFO: Logging pods the kubelet thinks is on node master2
May 22 01:40:14.192: INFO: kube-apiserver-master2 started at 2021-05-21 20:03:00 +0000 UTC (0+1 container statuses recorded)
May 22 01:40:14.192: INFO: Container kube-apiserver ready: true, restart count 0
May 22 01:40:14.192: INFO: kube-multus-ds-amd64-lwzkr started at 2021-05-21 19:57:42 +0000 UTC (0+1 container statuses recorded)
May 22 01:40:14.192: INFO: Container kube-multus ready: true, restart count 1
May 22 01:40:14.192: INFO: dns-autoscaler-5b7b5c9b6f-dvlvw started at 2021-05-21 19:58:05 +0000 UTC (0+1 container statuses recorded)
May 22 01:40:14.192: INFO: Container autoscaler ready: true, restart count 2
May 22 01:40:14.192: INFO: node-exporter-q52l5 started at 2021-05-21 20:07:54 +0000 UTC (0+2 container statuses recorded)
May 22 01:40:14.192: INFO: Container kube-rbac-proxy ready: true, restart count 0
May 22 01:40:14.192: INFO: Container node-exporter ready: true, restart count 0
May 22 01:40:14.192: INFO: kube-controller-manager-master2 started at 2021-05-21 19:59:58 +0000 UTC (0+1 container statuses recorded)
May 22 01:40:14.192: INFO: Container kube-controller-manager ready: true, restart count 2
May 22 01:40:14.192: INFO: kube-scheduler-master2 started at 2021-05-21 20:00:08 +0000 UTC (0+1 container statuses recorded)
May 22 01:40:14.192: INFO: Container kube-scheduler ready: true, restart count 2
May 22 01:40:14.192: INFO: kube-proxy-shdjd started at 2021-05-21 19:57:00 +0000 UTC (0+1 container statuses recorded)
May 22 01:40:14.192: INFO: Container kube-proxy ready: true, restart count 1
May 22 01:40:14.192: INFO: kube-flannel-tnf5x started at 2021-05-21 19:57:34 +0000 UTC (1+1 container statuses recorded)
May 22 01:40:14.192: INFO: Init container install-cni ready: true, restart count 0
May 22 01:40:14.192: INFO: Container kube-flannel ready: true, restart count 1
W0522 01:40:14.204982      21 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
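The repeated "W0522 ... metrics_grabber.go:105" warning above is benign for this run: the suite's metrics grabber was constructed without an external client interface, so only ClusterAutoscaler metrics collection is skipped. The NodeCondition block inside each &Node{...} dump (NetworkUnavailable, MemoryPressure, DiskPressure, PIDPressure, Ready) can be reproduced outside the suite with a few lines of client-go. The following is a minimal standalone sketch, not the framework's own dump code, assuming the same kubeconfig this run points at (/root/.kube/config):

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from the same kubeconfig the e2e run uses.
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	nodes, err := client.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	// Print the same tuple the Node Info dumps above carry for each
	// condition: Type, Status, Reason, and the last transition time.
	for _, node := range nodes.Items {
		fmt.Printf("node %s:\n", node.Name)
		for _, cond := range node.Status.Conditions {
			fmt.Printf("  %-20s %-6s %-28s since %s\n",
				cond.Type, cond.Status, cond.Reason, cond.LastTransitionTime)
		}
	}
}
```

On a healthy node this prints Status False for the network and pressure conditions and True for Ready, which is exactly the pattern every node in this run reports.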
May 22 01:40:14.235: INFO: Latency metrics for node master2 May 22 01:40:14.235: INFO: Logging node info for node master3 May 22 01:40:14.238: INFO: Node Info: &Node{ObjectMeta:{master3 /api/v1/nodes/master3 23066049-f3f1-42a7-9cea-19e20ed4ccec 165002 0 2021-05-21 19:55:56 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master3 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"ae:a6:e4:e7:ba:02"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.204 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2021-05-21 19:55:56 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {kubeadm Update v1 2021-05-21 19:55:57 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/master":{}}}}} {flanneld Update v1 2021-05-21 19:57:39 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-05-21 19:57:41 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}},"f:taints":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234767872 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324603904 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-21 20:00:41 +0000 UTC,LastTransitionTime:2021-05-21 20:00:41 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-22 01:40:13 +0000 UTC,LastTransitionTime:2021-05-21 19:55:56 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-22 01:40:13 +0000 UTC,LastTransitionTime:2021-05-21 19:55:56 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-22 01:40:13 +0000 UTC,LastTransitionTime:2021-05-21 19:55:56 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-22 01:40:13 +0000 UTC,LastTransitionTime:2021-05-21 19:57:41 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.204,},NodeAddress{Type:Hostname,Address:master3,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:660844f5cb0944868621042562b928e3,SystemUUID:008B1444-141E-E711-906E-0017A4403562,BootID:02066ff3-4833-4130-ba94-138c581e4cc0,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:726676532,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f kubernetesui/dashboard-amd64:v2.1.0],SizeBytes:225733746,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e 
k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0 k8s.gcr.io/e2e-test-images/agnhost:2.20],SizeBytes:113869866,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:73ca82b4ce829766d4f1f10947c3a338888f876fbed0540dc849c89ff256e90c k8s.gcr.io/coredns:1.7.0],SizeBytes:45227747,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
May 22 01:40:14.238: INFO: Logging kubelet events for node master3
May 22 01:40:14.243: INFO: Logging pods the kubelet thinks is on node master3
May 22 01:40:14.255: INFO: kube-controller-manager-master3 started at 2021-05-21 20:03:00 +0000 UTC (0+1 container statuses recorded)
May 22 01:40:14.255: INFO: Container kube-controller-manager ready: true, restart count 1
May 22 01:40:14.255: INFO: kube-scheduler-master3 started at 2021-05-21 20:03:10 +0000 UTC (0+1 container statuses recorded)
May 22 01:40:14.255: INFO: Container kube-scheduler ready: true, restart count 1
May 22 01:40:14.255: INFO: kube-apiserver-master3 started at 2021-05-21 20:03:20 +0000 UTC (0+1 container statuses recorded)
May 22 01:40:14.256: INFO: Container kube-apiserver ready: true, restart count 0
May 22 01:40:14.256: INFO: kube-flannel-8sd6n started at 2021-05-21 19:57:34 +0000 UTC (1+1 container statuses recorded)
May 22 01:40:14.256: INFO: Init container install-cni ready: true, restart count 0
May 22 01:40:14.256: INFO: Container kube-flannel ready: true, restart count 1
May 22 01:40:14.256: INFO: kube-multus-ds-amd64-zgdbl started at 2021-05-21 19:57:42 +0000 UTC (0+1 container statuses recorded)
May 22 01:40:14.256: INFO: Container kube-multus ready: true, restart count 1
May 22 01:40:14.256: INFO: coredns-7677f9bb54-7jdbv started at 2021-05-22 00:44:35 +0000 UTC (0+1 container statuses recorded)
May 22 01:40:14.256: INFO: Container coredns ready: true, restart count 0
May 22 01:40:14.256: INFO: kube-proxy-hwwxt started at 2021-05-21 19:57:00 +0000 UTC (0+1 container statuses recorded)
May 22 01:40:14.256: INFO: Container kube-proxy ready: true, restart count 2
May 22 01:40:14.256: INFO: node-exporter-s74rx started at 2021-05-21 20:07:54 +0000 UTC (0+2 container statuses recorded)
May 22 01:40:14.256: INFO: Container kube-rbac-proxy ready: true, restart count 0
May 22 01:40:14.256: INFO: Container node-exporter ready: true, restart count 0
W0522 01:40:14.273763      21 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
May 22 01:40:14.305: INFO: Latency metrics for node master3
May 22 01:40:14.305: INFO: Logging node info for node node1
May 22 01:40:14.308: INFO: Node Info: &Node{ObjectMeta:{node1 /api/v1/nodes/node1 62cfecb3-af08-42a4-ab85-25a317061b61 164968 0 2021-05-21 19:56:58 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.25.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:amd64 kubernetes.io/hostname:node1 kubernetes.io/os:linux]
map[csi.volume.kubernetes.io/nodeid:{"csi-mock-csi-mock-volumes-1072":"csi-mock-csi-mock-volumes-1072","csi-mock-csi-mock-volumes-1813":"csi-mock-csi-mock-volumes-1813","csi-mock-csi-mock-volumes-3021":"csi-mock-csi-mock-volumes-3021","csi-mock-csi-mock-volumes-3986":"csi-mock-csi-mock-volumes-3986","csi-mock-csi-mock-volumes-4120":"csi-mock-csi-mock-volumes-4120","csi-mock-csi-mock-volumes-5362":"csi-mock-csi-mock-volumes-5362","csi-mock-csi-mock-volumes-7364":"csi-mock-csi-mock-volumes-7364","csi-mock-csi-mock-volumes-8090":"csi-mock-csi-mock-volumes-8090","csi-mock-csi-mock-volumes-9489":"csi-mock-csi-mock-volumes-9489"} flannel.alpha.coreos.com/backend-data:{"VtepMAC":"22:2e:2b:06:83:4a"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.207 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-hardware_multithreading,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.7.0 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-05-21 19:56:58 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2021-05-21 19:57:39 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2021-05-21 20:04:07 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2021-05-21 20:06:20 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {e2e.test Update v1 2021-05-22 01:15:35 +0000 UTC FieldsV1 {"f:status":{"f:capacity":{"f:example.com/fakePTSRes":{}}}}} {kube-controller-manager Update v1 2021-05-22 01:27:14 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.3.0/24\"":{}}}}} {kubelet Update v1 2021-05-22 01:27:38 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:csi.volume.kubernetes.io/nodeid":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:io.kubernetes.storage.mock/node":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cmk.intel.com/exclusive-cores":{},"f:cpu":{},"f:ephemeral-storage":{},"f:example.com/fakePTSRes":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:intel.com/intel_sriov_netdevice":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:intel.com/intel_sriov_netdevice":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.3.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},example.com/fakePTSRes: {{10 0} {} 10 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269628928 0} {} 196552372Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},example.com/fakePTSRes: {{10 0} {} 10 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884628480 0} {} 174692020Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-21 20:00:36 +0000 UTC,LastTransitionTime:2021-05-21 20:00:36 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-22 01:40:05 +0000 UTC,LastTransitionTime:2021-05-21 19:56:58 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-22 01:40:05 +0000 UTC,LastTransitionTime:2021-05-21 19:56:58 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk 
pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-22 01:40:05 +0000 UTC,LastTransitionTime:2021-05-21 19:56:58 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-22 01:40:05 +0000 UTC,LastTransitionTime:2021-05-21 19:57:40 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.207,},NodeAddress{Type:Hostname,Address:node1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:94b7bb79c41d4a0492f63fb5fb3c5cc0,SystemUUID:00CDA902-D022-E711-906E-0017A4403562,BootID:1e94d819-8d68-4948-b171-5fd2d8fc7ff5,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[localhost:30500/barometer-collectd@sha256:2b114e08442070e7232fcffc4cb89529bd5c9effe733ed690277a33772bf2d00 localhost:30500/barometer-collectd:stable],SizeBytes:1464382814,},ContainerImage{Names:[@ :],SizeBytes:1002487865,},ContainerImage{Names:[opnfv/barometer-collectd@sha256:ed5c574f653e2a39e784ff322033a2319aafde7366c803a88f20f7a2a8bc1efb opnfv/barometer-collectd:stable],SizeBytes:825413035,},ContainerImage{Names:[localhost:30500/cmk@sha256:1d865c665dfeeec5a879dca7b9945cc49f55f10921b4e729f0da0cdec7dedbf7 cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:726676532,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[golang@sha256:1636899c10870ab66c48d960a9df620f4f9e86a0c72fbacf36032d27404e7e6c golang:alpine3.12],SizeBytes:301156062,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2 k8s.gcr.io/etcd:3.4.13-0],SizeBytes:253392289,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f kubernetesui/dashboard-amd64:v2.1.0],SizeBytes:225733746,},ContainerImage{Names:[grafana/grafana@sha256:89304bc2335f4976618548d7b93d165ed67369d3a051d2f627fc4e0aa3d0aff1 grafana/grafana:7.1.0],SizeBytes:179601493,},ContainerImage{Names:[quay.io/prometheus/prometheus@sha256:d4ba4dd1a9ebb90916d0bfed3c204adcb118ed24546bf8dd2e6b30fc0fd2009e quay.io/prometheus/prometheus:v2.20.0],SizeBytes:144886595,},ContainerImage{Names:[nginx@sha256:df13abe416e37eb3db4722840dd479b00ba193ac6606e7902331dcea50f4f1f2 nginx:1.19],SizeBytes:133122553,},ContainerImage{Names:[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0 
k8s.gcr.io/e2e-test-images/agnhost:2.20],SizeBytes:113869866,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[gcr.io/k8s-staging-nfd/node-feature-discovery@sha256:5d116c2c340be665a2c8adc9aca7f91396bd5cbde4add4fdc8dab95d8db43425 gcr.io/k8s-staging-nfd/node-feature-discovery:v0.7.0],SizeBytes:108309584,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:ff02aacd9766d597883fabafc7ad604c719a57611db1bcc1564c69a45b000a55 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.17],SizeBytes:60684726,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[directxman12/k8s-prometheus-adapter-amd64@sha256:b63dc612e3cb73f79d2401a4516f794f9f0a83002600ca72e675e41baecff437 directxman12/k8s-prometheus-adapter-amd64:v0.6.0],SizeBytes:53267842,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:78e3393f5fd5ff6c1e5dada2478cfa456fb7164929e573cf9a87bf6532730679 k8s.gcr.io/sig-storage/csi-provisioner:v1.6.0],SizeBytes:48281550,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:2ffa647e8107cfd39e5f464e738dce014c9f5e51b108da36c3ab621048d0bbab k8s.gcr.io/sig-storage/csi-attacher:v2.2.0],SizeBytes:46131354,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:6c6a0332693a7c456378f6abd2bb40611826c1e1a733cadbdae2daab3125b71c k8s.gcr.io/sig-storage/csi-resizer:v0.5.0],SizeBytes:46041582,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:a4dc6d912ce1a8dd4c3a51b1cfb52454080ed36db95bf824895d5ecb7175199f nfvpe/sriov-device-plugin:latest localhost:30500/sriov-device-plugin:v3.3.1],SizeBytes:44392673,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:9d07c391aeb1a9d02eb4343c113ed01825227c70c32b3cae861711f90191b0fd quay.io/coreos/kube-rbac-proxy:v0.4.1],SizeBytes:41317870,},ContainerImage{Names:[quay.io/coreos/prometheus-operator@sha256:a54e806fb27d2fb0251da4f3b2a3bb5320759af63a54a755788304775f2384a7 quay.io/coreos/prometheus-operator:v0.40.0],SizeBytes:38238457,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[localhost:30500/tas-controller@sha256:f013255695f4515c5b21b11281c7e0fb491082d15ec5a96adb8217e015a9c422 localhost:30500/tas-controller:0.1],SizeBytes:22922439,},ContainerImage{Names:[localhost:30500/tas-extender@sha256:50746f542c19fda01d88ae124ce58c5a326dad7cd24d3c2d19fdf959cc7f0c49 
localhost:30500/tas-extender:0.1],SizeBytes:21320903,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/regression-issue-74839-amd64@sha256:3b36bd80b97c532a774e7f6246797b8575d97037982f353476c703ba6686c75c gcr.io/kubernetes-e2e-test-images/regression-issue-74839-amd64:1.0],SizeBytes:19227369,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:0b4273abac4c241fa3d70aaf52b0d79a133d2737081f4a5c5dea4949f6c45dc3 k8s.gcr.io/sig-storage/mock-driver:v3.1.0],SizeBytes:18687618,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:9622c6a6dac7499a055a382930f4de82905a3c5735c0753f7094115c9c871309 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v1.3.0],SizeBytes:16322467,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[quay.io/coreos/prometheus-config-reloader@sha256:c679a143b24b7731ad1577a9865aa3805426cbf1b25e30807b951dff68466ffd quay.io/coreos/prometheus-config-reloader:v0.40.0],SizeBytes:10131705,},ContainerImage{Names:[jimmidyson/configmap-reload@sha256:d107c7a235c266273b1c3502a391fec374430e5625539403d0de797fa9c556a2 jimmidyson/configmap-reload:v0.3.0],SizeBytes:9700438,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[alpine@sha256:36553b10a4947067b9fbb7d532951066293a68eae893beba1d9235f7d11a20ad alpine:3.12],SizeBytes:5581415,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:4753501,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
May 22 01:40:14.309: INFO: Logging kubelet events for node node1
May 22 01:40:14.312: INFO: Logging pods the kubelet thinks is on node node1
May 22 01:40:14.939: INFO: kube-proxy-h5k9s started at 2021-05-21 19:57:00 +0000 UTC (0+1 container statuses recorded)
May 22 01:40:14.939: INFO: Container kube-proxy ready: true, restart count 1
May 22 01:40:14.939: INFO: cmk-init-discover-node1-48g7j started at 2021-05-21 20:06:17 +0000 UTC (0+3 container statuses recorded)
May 22 01:40:14.939: INFO: Container discover ready: false, restart count 0
May 22 01:40:14.939: INFO: Container init ready: false, restart count 0
May 22 01:40:14.939: INFO: Container install ready: false, restart count 0
May 22 01:40:14.939: INFO: pod-e325b6cd-538a-49cc-b2d2-f5610d29e416 started at 2021-05-22 01:35:13 +0000 UTC (0+1 container statuses recorded)
May 22 01:40:14.939: INFO: Container write-pod ready: false, restart count 0
May 22 01:40:14.939: INFO: node-feature-discovery-worker-lh5hz started at 2021-05-21 20:03:47 +0000 UTC (0+1 container statuses recorded)
May 22 01:40:14.939: INFO: Container nfd-worker ready: true, restart count 0
May 22 01:40:14.939: INFO: pod-550c8d3c-c425-44a5-86c7-6aa695148c5b started at 2021-05-22 01:35:13 +0000 UTC (0+1 container statuses recorded)
May 22 01:40:14.939: INFO: Container write-pod ready: false, restart count 0
May 22 01:40:14.939: INFO: pod-9a8fdccc-3091-428e-8cd6-ce188c633067 started at 2021-05-22 01:35:13 +0000 UTC (0+1 container statuses recorded)
May 22 01:40:14.939: INFO: Container write-pod ready: false, restart count 0
May 22 01:40:14.939: INFO: pod-de79b91d-d259-449a-9636-9b73aa3abbf4 started at 2021-05-22 01:35:13 +0000 UTC (0+1 container statuses recorded)
May 22 01:40:14.939: INFO: Container write-pod ready: false, restart count 0
May 22 01:40:14.939: INFO: pod-0806eda0-8dd0-499b-8419-f4c96fb3bf90 started at 2021-05-22 01:35:13 +0000 UTC (0+1 container statuses recorded)
May 22 01:40:14.939: INFO: Container write-pod ready: false, restart count 0
May 22 01:40:14.939: INFO: pod-cb0494ec-13e3-4046-a345-1d02ce011718 started at 2021-05-22 01:35:14 +0000 UTC (0+1 container statuses recorded)
May 22 01:40:14.939: INFO: Container write-pod ready: false, restart count 0
May 22 01:40:14.939: INFO: kube-multus-ds-amd64-wlmhr started at 2021-05-21 19:57:42 +0000 UTC (0+1 container statuses recorded)
May 22 01:40:14.939: INFO: Container kube-multus ready: true, restart count 1
May 22 01:40:14.939: INFO: prometheus-operator-5bb8cb9d8f-mzlrf started at 2021-05-21 20:07:47 +0000 UTC (0+2 container statuses recorded)
May 22 01:40:14.939: INFO: Container kube-rbac-proxy ready: true, restart count 0
May 22 01:40:14.939: INFO: Container prometheus-operator ready: true, restart count 0
May 22 01:40:14.939: INFO: node-exporter-l5k2r started at 2021-05-21 20:07:54 +0000 UTC (0+2 container statuses recorded)
May 22 01:40:14.939: INFO: Container kube-rbac-proxy ready: true, restart count 0
May 22 01:40:14.939: INFO: Container node-exporter ready: true, restart count 0
May 22 01:40:14.939: INFO: tas-telemetry-aware-scheduling-575ccbc9d4-r8d7k started at 2021-05-22 00:30:47 +0000 UTC (0+2 container statuses recorded)
May 22 01:40:14.939: INFO: Container tas-controller ready: true, restart count 0
May 22 01:40:14.939: INFO: Container tas-extender ready: true, restart count 0
May 22 01:40:14.939: INFO: pod-a7e733ab-c9e1-4a96-8140-66bfbe914aa7 started at 2021-05-22 01:35:13 +0000 UTC (0+1 container statuses recorded)
May 22 01:40:14.939: INFO: Container write-pod ready: false, restart count 0
May 22 01:40:14.939: INFO: pod-fb82cc33-a266-43a8-93a5-f5203a922ef1 started at 2021-05-22 01:35:14 +0000 UTC (0+1 container statuses recorded)
May 22 01:40:14.939: INFO: Container write-pod ready: false, restart count 0
May 22 01:40:14.939: INFO: pod-eb2dd31d-3c35-44f2-a121-e69e63ba64ad started at 2021-05-22 01:35:13 +0000 UTC (0+1 container statuses recorded)
May 22 01:40:14.939: INFO: Container write-pod ready: false, restart count 0
May 22 01:40:14.939: INFO: pod-83cf990b-2e75-49d0-978d-3c83c55b4b18 started at 2021-05-22 01:35:14 +0000 UTC (0+1 container statuses recorded)
May 22 01:40:14.939: INFO: Container write-pod ready: false, restart count 0
May 22 01:40:14.939: INFO: prometheus-k8s-0 started at 2021-05-21 20:08:06 +0000 UTC (0+5 container statuses recorded)
May 22 01:40:14.939: INFO: Container custom-metrics-apiserver ready: true, restart count 0
May 22 01:40:14.939: INFO: Container grafana ready: true, restart count 0
May 22 01:40:14.939: INFO: Container prometheus ready: true, restart count 1
May 22 01:40:14.939: INFO: Container prometheus-config-reloader ready: true, restart count 0
May 22 01:40:14.939: INFO: Container rules-configmap-reloader ready: true, restart count 0
May 22 01:40:14.939: INFO: pod-c506f916-7ca2-4154-9d6c-30ceb910a947 started at 2021-05-22 01:35:13 +0000 UTC (0+1 container statuses recorded)
May 22 01:40:14.940: INFO: Container write-pod ready: false, restart count 0
May 22 01:40:14.940: INFO: pod-6a5ca55b-7fea-4e70-81f4-251c29d7a077 started at 2021-05-22 01:35:13 +0000 UTC (0+1 container statuses recorded)
May 22 01:40:14.940: INFO: Container write-pod ready: false, restart count 0
May 22 01:40:14.940: INFO: pod-fc92ed50-9307-4502-99c5-8d7b7aee5c25 started at 2021-05-22 01:35:13 +0000 UTC (0+1 container statuses recorded)
May 22 01:40:14.940: INFO: Container write-pod ready: false, restart count 0
May 22 01:40:14.940: INFO: pod-be10ab73-38c1-4d88-82d7-9594ce87d7f2 started at 2021-05-22 01:35:13 +0000 UTC (0+1 container statuses recorded)
May 22 01:40:14.940: INFO: Container write-pod ready: false, restart count 0
May 22 01:40:14.940: INFO: pod-3aa7b4b3-1a97-4094-ab85-a439721516b1 started at 2021-05-22 01:35:13 +0000 UTC (0+1 container statuses recorded)
May 22 01:40:14.940: INFO: Container write-pod ready: false, restart count 0
May 22 01:40:14.940: INFO: kube-flannel-k6mr4 started at 2021-05-21 19:57:34 +0000 UTC (1+1 container statuses recorded)
May 22 01:40:14.940: INFO: Init container install-cni ready: true, restart count 1
May 22 01:40:14.940: INFO: Container kube-flannel ready: true, restart count 1
May 22 01:40:14.940: INFO: pod-bdc092d6-9ec5-4586-82d9-f8b25e423447 started at 2021-05-22 01:35:13 +0000 UTC (0+1 container statuses recorded)
May 22 01:40:14.940: INFO: Container write-pod ready: false, restart count 0
May 22 01:40:14.940: INFO: pod-81772626-74ba-492c-8d3e-0972578acee5 started at 2021-05-22 01:35:13 +0000 UTC (0+1 container statuses recorded)
May 22 01:40:14.940: INFO: Container write-pod ready: false, restart count 0
May 22 01:40:14.940: INFO: pod-5247e6a6-6702-4904-9bd2-93e1126ee447 started at 2021-05-22 01:35:13 +0000 UTC (0+1 container statuses recorded)
May 22 01:40:14.940: INFO: Container write-pod ready: false, restart count 0
May 22 01:40:14.940: INFO: collectd-mc5kl started at 2021-05-21 20:13:40 +0000 UTC (0+3 container statuses recorded)
May 22 01:40:14.940: INFO: Container collectd ready: true, restart count 0
May 22 01:40:14.940: INFO: Container collectd-exporter ready: true, restart count 0
May 22 01:40:14.940: INFO: Container rbac-proxy ready: true, restart count 0
May 22 01:40:14.940: INFO: pod-57430379-2a25-4b50-80f8-1f9d8fb22de4 started at 2021-05-22 01:35:13 +0000 UTC (0+1 container statuses recorded)
May 22 01:40:14.940: INFO: Container write-pod ready: false, restart count 0
May 22 01:40:14.940: INFO: pod-fbe47191-a015-44ef-a732-9f8f673c81a1 started at 2021-05-22 01:35:14 +0000 UTC (0+1 container statuses recorded)
May 22 01:40:14.940: INFO: Container write-pod ready: false, restart count 0
May 22 01:40:14.940: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-czhnm started at 2021-05-21 20:04:29 +0000 UTC (0+1 container statuses recorded)
May 22 01:40:14.940: INFO: Container kube-sriovdp ready: true, restart count 0
May 22 01:40:14.940: INFO: pod-2ce73150-a2e6-468a-87df-ddb28cb19b3b started at 2021-05-22 01:35:13 +0000 UTC (0+1 container statuses recorded)
May 22 01:40:14.940: INFO: Container write-pod ready: false, restart count 0
May 22 01:40:14.940: INFO: pod-c59e1fe4-fe07-4049-8c93-09d9a5d6fcd0 started at 2021-05-22 01:35:13 +0000 UTC (0+1 container statuses recorded)
May 22 01:40:14.940: INFO: Container write-pod ready: false, restart count 0
May 22 01:40:14.940: INFO: pod-17c41de8-712a-4b64-8e12-b9167f1ccc82 started at 2021-05-22 01:35:13 +0000 UTC (0+1 container statuses recorded)
May 22 01:40:14.940: INFO: Container write-pod ready: false, restart count 0
May 22 01:40:14.940: INFO: pod-4c5b332a-c9d0-4347-8498-3906e84828c4 started at 2021-05-22 01:35:13 +0000 UTC (0+1 container statuses recorded)
May 22 01:40:14.940: INFO: Container write-pod ready: false, restart count 0
May 22 01:40:14.940: INFO: pod-23fd4946-8b00-4e14-87d6-628987f1196e started at 2021-05-22 01:35:13 +0000 UTC (0+1 container statuses recorded)
May 22 01:40:14.940: INFO: Container write-pod ready: false, restart count 0
May 22 01:40:14.940: INFO: pod-f75bea85-5332-46b1-8674-f383dac77d5c started at 2021-05-22 01:35:13 +0000 UTC (0+1 container statuses recorded)
May 22 01:40:14.940: INFO: Container write-pod ready: false, restart count 0
May 22 01:40:14.940: INFO: kubernetes-dashboard-86c6f9df5b-8rsws started at 2021-05-21 19:58:07 +0000 UTC (0+1 container statuses recorded)
May 22 01:40:14.940: INFO: Container kubernetes-dashboard ready: true, restart count 1
May 22 01:40:14.940: INFO: pod-6663facf-422c-4de6-a474-fc640916e836 started at 2021-05-22 01:35:13 +0000 UTC (0+1 container statuses recorded)
May 22 01:40:14.940: INFO: Container write-pod ready: false, restart count 0
May 22 01:40:14.940: INFO: pod-96515024-2799-4240-ad76-2e3f70eb11c4 started at 2021-05-22 01:35:13 +0000 UTC (0+1 container statuses recorded)
May 22 01:40:14.940: INFO: Container write-pod ready: false, restart count 0
May 22 01:40:14.940: INFO: kubernetes-metrics-scraper-678c97765c-nnrtl started at 2021-05-21 19:58:07 +0000 UTC (0+1 container statuses recorded)
May 22 01:40:14.940: INFO: Container kubernetes-metrics-scraper ready: true, restart count 2
May 22 01:40:14.940: INFO: pod-33f6af65-447d-47e5-856a-ee0520e50654 started at 2021-05-22 01:35:13 +0000 UTC (0+1 container statuses recorded)
May 22 01:40:14.940: INFO: Container write-pod ready: true, restart count 0
May 22 01:40:14.940: INFO: pod-ff2283db-c4be-44af-ba93-13621c43139d started at 2021-05-22 01:35:14 +0000 UTC (0+1 container statuses recorded)
May 22 01:40:14.940: INFO: Container write-pod ready: false, restart count 0
May 22 01:40:14.940: INFO: pod-9c79a773-79d1-4e12-8a5f-7c40fd09ac29 started at 2021-05-22 01:35:13 +0000 UTC (0+1 container statuses recorded)
May 22 01:40:14.940: INFO: Container write-pod ready: false, restart count 0
May 22 01:40:14.940: INFO: pod-0ef0bcb1-3e96-4db7-875a-08191af24833 started at 2021-05-22 01:35:13 +0000 UTC (0+1 container statuses recorded)
May 22 01:40:14.940: INFO: Container write-pod ready: false, restart count 0
May 22 01:40:14.940: INFO: pod-c4e86621-d05a-4ab6-b894-950537713573 started at 2021-05-22 01:35:13 +0000 UTC (0+1 container statuses recorded)
May 22 01:40:14.940: INFO: Container write-pod ready: false, restart count 0
May 22 01:40:14.940: INFO: pod-b5622c92-09b0-411f-8700-8aea35ec6f61 started at 2021-05-22 01:35:13 +0000 UTC (0+1 container statuses recorded)
May 22 01:40:14.940: INFO: Container write-pod ready: false, restart count 0
May 22 01:40:14.940: INFO: nginx-proxy-node1 started at 2021-05-21 20:03:00 +0000 UTC (0+1 container statuses recorded)
May 22 01:40:14.940: INFO: Container nginx-proxy ready: true, restart count 1
May 22 01:40:14.940: INFO: cmk-h8jxp started at 2021-05-21 20:07:00 +0000 UTC (0+2 container statuses recorded)
May 22 01:40:14.940: INFO: Container nodereport ready: true, restart count 0
May 22 01:40:14.940: INFO: Container reconcile ready:
true, restart count 0 May 22 01:40:14.940: INFO: pod-a1f65dd2-4488-4e0f-b5b0-7a2a720b0782 started at 2021-05-22 01:35:13 +0000 UTC (0+1 container statuses recorded) May 22 01:40:14.940: INFO: Container write-pod ready: false, restart count 0 May 22 01:40:14.940: INFO: pod-79204cb4-4be8-4740-8cba-d13da7fa71de started at 2021-05-22 01:35:13 +0000 UTC (0+1 container statuses recorded) May 22 01:40:14.940: INFO: Container write-pod ready: false, restart count 0 May 22 01:40:14.940: INFO: pod-6a3981b8-47b7-4324-aa24-42b9517fa1b8 started at 2021-05-22 01:35:13 +0000 UTC (0+1 container statuses recorded) May 22 01:40:14.940: INFO: Container write-pod ready: false, restart count 0 May 22 01:40:14.940: INFO: pod-ac068ac2-7008-4da8-84c9-09b597b100b5 started at 2021-05-22 01:35:13 +0000 UTC (0+1 container statuses recorded) May 22 01:40:14.940: INFO: Container write-pod ready: false, restart count 0 May 22 01:40:14.940: INFO: pod-25416362-b799-49a7-9006-efdf18ad9c5d started at 2021-05-22 01:35:13 +0000 UTC (0+1 container statuses recorded) May 22 01:40:14.940: INFO: Container write-pod ready: false, restart count 0 May 22 01:40:14.940: INFO: pod-958e2d96-1d04-4d0b-87e0-7be3d23c258c started at 2021-05-22 01:35:13 +0000 UTC (0+1 container statuses recorded) May 22 01:40:14.940: INFO: Container write-pod ready: false, restart count 0 May 22 01:40:14.940: INFO: pod-ee148ef8-0618-42d8-bdb3-74178162c2d0 started at 2021-05-22 01:35:13 +0000 UTC (0+1 container statuses recorded) May 22 01:40:14.940: INFO: Container write-pod ready: false, restart count 0 May 22 01:40:14.940: INFO: pod-d593da45-ae55-40b4-b38a-616eaafcc189 started at 2021-05-22 01:35:13 +0000 UTC (0+1 container statuses recorded) May 22 01:40:14.940: INFO: Container write-pod ready: false, restart count 0 May 22 01:40:14.940: INFO: pod-51a38ecb-aa5d-4cd5-9e9a-5e01ba11b26a started at 2021-05-22 01:35:13 +0000 UTC (0+1 container statuses recorded) May 22 01:40:14.940: INFO: Container write-pod ready: false, restart count 0 May 22 01:40:14.940: INFO: pod-6acee2ee-d14c-4b78-9ff8-3e1f242a266a started at 2021-05-22 01:35:13 +0000 UTC (0+1 container statuses recorded) May 22 01:40:14.940: INFO: Container write-pod ready: false, restart count 0 May 22 01:40:14.940: INFO: cmk-webhook-6c9d5f8578-8pz6w started at 2021-05-21 20:07:00 +0000 UTC (0+1 container statuses recorded) May 22 01:40:14.940: INFO: Container cmk-webhook ready: true, restart count 0 May 22 01:40:14.940: INFO: pod-52aa3294-e25c-4d72-bd4f-7a84f4491e1e started at 2021-05-22 01:35:13 +0000 UTC (0+1 container statuses recorded) May 22 01:40:14.940: INFO: Container write-pod ready: false, restart count 0 May 22 01:40:14.940: INFO: pod-d8aa6580-4df3-4e8e-87e2-19954daee3ce started at 2021-05-22 01:35:13 +0000 UTC (0+1 container statuses recorded) May 22 01:40:14.940: INFO: Container write-pod ready: false, restart count 0 May 22 01:40:14.940: INFO: pod-9d9e2186-4fd6-4a51-9dd4-f11b858ec786 started at 2021-05-22 01:35:13 +0000 UTC (0+1 container statuses recorded) May 22 01:40:14.940: INFO: Container write-pod ready: false, restart count 0 May 22 01:40:14.940: INFO: pod-166d9b83-9cda-4fee-b89b-128f7e31b41e started at 2021-05-22 01:35:13 +0000 UTC (0+1 container statuses recorded) May 22 01:40:14.940: INFO: Container write-pod ready: false, restart count 0 May 22 01:40:14.940: INFO: pod-16028087-0ae3-489e-8b7b-2f309c12136f started at 2021-05-22 01:35:13 +0000 UTC (0+1 container statuses recorded) May 22 01:40:14.940: INFO: Container write-pod ready: false, restart count 0 May 22 01:40:14.940: 
INFO: pod-e905733c-287b-4a80-b63d-06c4b6d5effd started at 2021-05-22 01:35:13 +0000 UTC (0+1 container statuses recorded) May 22 01:40:14.940: INFO: Container write-pod ready: false, restart count 0 W0522 01:40:14.949244 21 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. May 22 01:40:16.046: INFO: Latency metrics for node node1 May 22 01:40:16.046: INFO: Logging node info for node node2 May 22 01:40:16.050: INFO: Node Info: &Node{ObjectMeta:{node2 /api/v1/nodes/node2 3f64ce47-e96b-43b8-9c91-df57a4e26826 164987 0 2021-05-21 19:56:58 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.25.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node2 kubernetes.io/os:linux] map[csi.volume.kubernetes.io/nodeid:{"csi-mock-csi-mock-volumes-1700":"csi-mock-csi-mock-volumes-1700","csi-mock-csi-mock-volumes-4959":"csi-mock-csi-mock-volumes-4959","csi-mock-csi-mock-volumes-5873":"csi-mock-csi-mock-volumes-5873","csi-mock-csi-mock-volumes-6723":"csi-mock-csi-mock-volumes-6723","csi-mock-csi-mock-volumes-6884":"csi-mock-csi-mock-volumes-6884","csi-mock-csi-mock-volumes-7303":"csi-mock-csi-mock-volumes-7303","csi-mock-csi-mock-volumes-8793":"csi-mock-csi-mock-volumes-8793","csi-mock-csi-mock-volumes-9199":"csi-mock-csi-mock-volumes-9199"} flannel.alpha.coreos.com/backend-data:{"VtepMAC":"4e:d8:e9:66:bc:b7"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.208 
kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-hardware_multithreading,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.7.0 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-05-21 19:56:58 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2021-05-21 19:57:39 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2021-05-21 20:04:09 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.c
onfigured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2021-05-21 20:06:43 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {e2e.test Update v1 2021-05-22 01:16:03 +0000 UTC FieldsV1 {"f:status":{"f:capacity":{"f:example.com/fakePTSRes":{},"f:example.com/fakecpu":{}}}}} {kube-controller-manager Update v1 2021-05-22 01:26:45 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.4.0/24\"":{}}}}} {kubelet Update v1 2021-05-22 01:26:56 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:csi.volume.kubernetes.io/nodeid":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cmk.intel.com/exclusive-cores":{},"f:cpu":{},"f:ephemeral-storage":{},"f:example.com/fakePTSRes":{},"f:example.com/fakecpu":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:intel.com/intel_sriov_netdevice":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:intel.com/intel_sriov_netdevice":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.4.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},example.com/fakePTSRes: {{10 0} {} 10 DecimalSI},example.com/fakecpu: {{1 3} {} 1k DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269633024 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 
DecimalSI},example.com/fakePTSRes: {{10 0} {} 10 DecimalSI},example.com/fakecpu: {{1 3} {} 1k DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884632576 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-21 20:00:39 +0000 UTC,LastTransitionTime:2021-05-21 20:00:39 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-22 01:40:09 +0000 UTC,LastTransitionTime:2021-05-21 19:56:58 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-22 01:40:09 +0000 UTC,LastTransitionTime:2021-05-21 19:56:58 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-22 01:40:09 +0000 UTC,LastTransitionTime:2021-05-21 19:56:58 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-22 01:40:09 +0000 UTC,LastTransitionTime:2021-05-21 19:57:40 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.208,},NodeAddress{Type:Hostname,Address:node2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:2aa9b8566664435b84c4146a11c874db,SystemUUID:80B3CD56-852F-E711-906E-0017A4403562,BootID:befe5c4e-169e-4c36-9e45-742bb80d4660,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[localhost:30500/barometer-collectd@sha256:2b114e08442070e7232fcffc4cb89529bd5c9effe733ed690277a33772bf2d00 localhost:30500/barometer-collectd:stable],SizeBytes:1464382814,},ContainerImage{Names:[localhost:30500/cmk@sha256:1d865c665dfeeec5a879dca7b9945cc49f55f10921b4e729f0da0cdec7dedbf7 localhost:30500/cmk:v1.5.1],SizeBytes:726676532,},ContainerImage{Names:[cmk:v1.5.1],SizeBytes:726676532,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:195659796,},ContainerImage{Names:[nginx@sha256:df13abe416e37eb3db4722840dd479b00ba193ac6606e7902331dcea50f4f1f2 nginx:1.19],SizeBytes:133122553,},ContainerImage{Names:[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e 
k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0 k8s.gcr.io/e2e-test-images/agnhost:2.20],SizeBytes:113869866,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[gcr.io/k8s-staging-nfd/node-feature-discovery@sha256:5d116c2c340be665a2c8adc9aca7f91396bd5cbde4add4fdc8dab95d8db43425 gcr.io/k8s-staging-nfd/node-feature-discovery:v0.7.0],SizeBytes:108309584,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:78e3393f5fd5ff6c1e5dada2478cfa456fb7164929e573cf9a87bf6532730679 k8s.gcr.io/sig-storage/csi-provisioner:v1.6.0],SizeBytes:48281550,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:2ffa647e8107cfd39e5f464e738dce014c9f5e51b108da36c3ab621048d0bbab k8s.gcr.io/sig-storage/csi-attacher:v2.2.0],SizeBytes:46131354,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:6c6a0332693a7c456378f6abd2bb40611826c1e1a733cadbdae2daab3125b71c k8s.gcr.io/sig-storage/csi-resizer:v0.5.0],SizeBytes:46041582,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:a4dc6d912ce1a8dd4c3a51b1cfb52454080ed36db95bf824895d5ecb7175199f localhost:30500/sriov-device-plugin:v3.3.1],SizeBytes:44392673,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonroot@sha256:4bd7ae247de5c988700233c5a4b55e804ffe90f8c66ae64853f1dae37b847213 gcr.io/kubernetes-e2e-test-images/nonroot:1.0],SizeBytes:42321438,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[localhost:30500/tas-controller@sha256:f013255695f4515c5b21b11281c7e0fb491082d15ec5a96adb8217e015a9c422 localhost:30500/tas-controller:0.1],SizeBytes:22922439,},ContainerImage{Names:[localhost:30500/tas-extender@sha256:50746f542c19fda01d88ae124ce58c5a326dad7cd24d3c2d19fdf959cc7f0c49 localhost:30500/tas-extender:0.1],SizeBytes:21320903,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:0b4273abac4c241fa3d70aaf52b0d79a133d2737081f4a5c5dea4949f6c45dc3 k8s.gcr.io/sig-storage/mock-driver:v3.1.0],SizeBytes:18687618,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:9622c6a6dac7499a055a382930f4de82905a3c5735c0753f7094115c9c871309 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v1.3.0],SizeBytes:16322467,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 
nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:6757579,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:4753501,},ContainerImage{Names:[gcr.io/authenticated-image-pulling/alpine@sha256:7ff177862cb50c602bfe81f805969412e619c054a2bbead977d0c276988aa4a0 gcr.io/authenticated-image-pulling/alpine:3.7],SizeBytes:4206620,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 22 01:40:16.050: INFO: Logging kubelet events for node node2 May 22 01:40:16.055: INFO: Logging pods the kubelet thinks is on node node2 May 22 01:40:16.070: INFO: kube-flannel-5p7gq started at 2021-05-21 19:57:34 +0000 UTC (1+1 container statuses recorded) May 22 01:40:16.070: INFO: Init container install-cni ready: true, restart count 2 May 22 01:40:16.070: INFO: Container kube-flannel ready: true, restart count 2 May 22 01:40:16.070: INFO: cmk-xtrv9 started at 2021-05-22 00:30:51 +0000 UTC (0+2 container statuses recorded) May 22 01:40:16.070: INFO: Container nodereport ready: true, restart count 0 May 22 01:40:16.070: INFO: Container reconcile ready: true, restart count 0 May 22 01:40:16.070: INFO: kube-multus-ds-amd64-6q46t started at 2021-05-21 19:57:42 +0000 UTC (0+1 container statuses recorded) May 22 01:40:16.070: INFO: Container kube-multus ready: true, restart count 1 May 22 01:40:16.070: INFO: collectd-rkmjk started at 2021-05-22 00:31:19 +0000 UTC (0+3 container statuses recorded) May 22 01:40:16.070: INFO: Container collectd ready: true, restart count 0 May 22 01:40:16.070: INFO: Container collectd-exporter ready: true, restart count 0 May 22 01:40:16.070: INFO: Container rbac-proxy ready: true, restart count 0 May 22 01:40:16.070: INFO: nginx-proxy-node2 started at 2021-05-21 20:03:00 +0000 UTC (0+1 container statuses recorded) May 22 01:40:16.070: INFO: Container nginx-proxy ready: true, restart count 2 May 22 01:40:16.070: INFO: node-exporter-jctsz started at 2021-05-22 00:30:49 +0000 UTC (0+2 container statuses recorded) May 22 01:40:16.070: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 22 01:40:16.070: INFO: Container node-exporter ready: true, restart count 0 May 22 01:40:16.070: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-jkg9k started at 2021-05-22 00:30:58 +0000 UTC (0+1 container statuses recorded) May 22 01:40:16.070: INFO: Container kube-sriovdp ready: true, restart count 0 May 22 01:40:16.070: INFO: node-feature-discovery-worker-z827f started at 2021-05-22 00:30:50 +0000 
UTC (0+1 container statuses recorded)
May 22 01:40:16.070: INFO: Container nfd-worker ready: true, restart count 0
May 22 01:40:16.070: INFO: kube-proxy-q57hf started at 2021-05-21 19:57:00 +0000 UTC (0+1 container statuses recorded)
May 22 01:40:16.070: INFO: Container kube-proxy ready: true, restart count 2
W0522 01:40:16.083071 21 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
May 22 01:40:16.133: INFO: Latency metrics for node node2
May 22 01:40:16.133: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "persistent-local-volumes-test-8512" for this suite.

• Failure [302.617 seconds]
[sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Pods sharing a single local PV [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:614
    all pods should be running [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:642

    May 22 01:40:14.065: Some pods are not running within 5m0s
    Unexpected error:
        <*errors.errorString | 0xc0003001f0>: {
            s: "timed out waiting for the condition",
        }
        timed out waiting for the condition
    occurred

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:683
------------------------------
{"msg":"FAILED [sig-storage] PersistentVolumes-local Pods sharing a single local PV [Serial] all pods should be running","total":17,"completed":0,"skipped":1047,"failed":1,"failures":["[sig-storage] PersistentVolumes-local Pods sharing a single local PV [Serial] all pods should be running"]}
------------------------------
[sig-storage] PersistentVolumes-local [Volume type: gce-localssd-scsi-fs] [Serial] Set fsGroup for local volume should set different fsGroup for second pod if first pod is deleted
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:282
[BeforeEach] [sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 22 01:40:16.151: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename persistent-local-volumes-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:155
[BeforeEach] [Volume type: gce-localssd-scsi-fs] [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:191
May 22 
01:40:58.197: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l] Namespace:persistent-local-volumes-test-8978 PodName:hostexec-node1-tjh69 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true} May 22 01:40:58.197: INFO: >>> kubeConfig: /root/.kube/config May 22 01:40:58.624: INFO: exec node1: command: ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l May 22 01:40:58.624: INFO: exec node1: stdout: "0\n" May 22 01:40:58.624: INFO: exec node1: stderr: "ls: cannot access /mnt/disks/by-uuid/google-local-ssds-scsi-fs/: No such file or directory\n" May 22 01:40:58.624: INFO: exec node1: exit code: 0 May 22 01:40:58.624: INFO: Requires at least 1 scsi fs localSSD [AfterEach] [Volume type: gce-localssd-scsi-fs] [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:200 STEP: Cleaning up PVC and PV [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 22 01:40:58.626: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-8978" for this suite. S [SKIPPING] in Spec Setup (BeforeEach) [42.487 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: gce-localssd-scsi-fs] [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:188 Set fsGroup for local volume [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:256 should set different fsGroup for second pod if first pod is deleted /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:282 Requires at least 1 scsi fs localSSD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:1241 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] PersistentVolumes-local [Volume type: gce-localssd-scsi-fs] [Serial] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:245 [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 22 01:40:58.639: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:155 [BeforeEach] [Volume type: gce-localssd-scsi-fs] [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:191 May 22 01:41:12.683: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c ls -1 
/mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l] Namespace:persistent-local-volumes-test-3201 PodName:hostexec-node1-b7kfc ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true} May 22 01:41:12.683: INFO: >>> kubeConfig: /root/.kube/config May 22 01:41:12.818: INFO: exec node1: command: ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l May 22 01:41:12.818: INFO: exec node1: stdout: "0\n" May 22 01:41:12.818: INFO: exec node1: stderr: "ls: cannot access /mnt/disks/by-uuid/google-local-ssds-scsi-fs/: No such file or directory\n" May 22 01:41:12.818: INFO: exec node1: exit code: 0 May 22 01:41:12.818: INFO: Requires at least 1 scsi fs localSSD [AfterEach] [Volume type: gce-localssd-scsi-fs] [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:200 STEP: Cleaning up PVC and PV [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 22 01:41:12.820: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-3201" for this suite. S [SKIPPING] in Spec Setup (BeforeEach) [14.190 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: gce-localssd-scsi-fs] [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:188 Two pods mounting a local volume at the same time [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:244 should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:245 Requires at least 1 scsi fs localSSD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:1241 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] [Serial] Volume metrics PVController should create bound pv/pvc count metrics for pvc controller after creating both pv and pvc /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:499 [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 22 01:41:12.830: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pv STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:56 May 22 01:41:12.851: INFO: Only supported for providers [gce gke aws] (not skeleton) [AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 22 01:41:12.852: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"pv-4343" for this suite. [AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:81 S [SKIPPING] in Spec Setup (BeforeEach) [0.030 seconds] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 PVController [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:382 should create bound pv/pvc count metrics for pvc controller after creating both pv and pvc /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:499 Only supported for providers [gce gke aws] (not skeleton) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:60 ------------------------------ SSSS ------------------------------ [sig-storage] PersistentVolumes-local [Volume type: gce-localssd-scsi-fs] [Serial] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:251 [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 22 01:41:12.860: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:155 [BeforeEach] [Volume type: gce-localssd-scsi-fs] [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:191 May 22 01:41:18.899: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l] Namespace:persistent-local-volumes-test-3751 PodName:hostexec-node1-54bv2 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true} May 22 01:41:18.899: INFO: >>> kubeConfig: /root/.kube/config May 22 01:41:19.279: INFO: exec node1: command: ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l May 22 01:41:19.279: INFO: exec node1: stdout: "0\n" May 22 01:41:19.279: INFO: exec node1: stderr: "ls: cannot access /mnt/disks/by-uuid/google-local-ssds-scsi-fs/: No such file or directory\n" May 22 01:41:19.279: INFO: exec node1: exit code: 0 May 22 01:41:19.279: INFO: Requires at least 1 scsi fs localSSD [AfterEach] [Volume type: gce-localssd-scsi-fs] [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:200 STEP: Cleaning up PVC and PV [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 22 01:41:19.281: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-3751" for this suite. 
S [SKIPPING] in Spec Setup (BeforeEach) [6.434 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: gce-localssd-scsi-fs] [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:188 Two pods mounting a local volume one after the other [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:250 should be able to write from pod1 and read from pod2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:251 Requires at least 1 scsi fs localSSD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:1241 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] [Serial] Volume metrics PVController should create none metrics for pvc controller before creating any PV or PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:477 [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 22 01:41:19.298: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pv STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:56 May 22 01:41:19.318: INFO: Only supported for providers [gce gke aws] (not skeleton) [AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 22 01:41:19.319: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pv-6830" for this suite. 
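
The one outright failure in this section ("Some pods are not running within 5m0s ... timed out waiting for the condition") carries the generic error string a polling wait returns when it gives up before its condition is met. A dependency-free sketch of that pattern, with checkPodsRunning as a stand-in for the real API query against pod phases (the framework's version lives in the apimachinery wait package) and a deliberately short timeout so the example terminates quickly:

package main

import (
	"errors"
	"fmt"
	"time"
)

// The exact message reported in the failure block above.
var errWaitTimeout = errors.New("timed out waiting for the condition")

// pollUntil calls cond every interval until it reports done or the timeout
// elapses, mirroring how the e2e framework waits for pods to reach Running.
func pollUntil(interval, timeout time.Duration, cond func() (bool, error)) error {
	deadline := time.Now().Add(timeout)
	for {
		done, err := cond()
		if err != nil {
			return err
		}
		if done {
			return nil
		}
		if time.Now().After(deadline) {
			return errWaitTimeout
		}
		time.Sleep(interval)
	}
}

func main() {
	// Stand-in condition: in the failed spec this asked whether every
	// write-pod had reached phase Running; here it never succeeds.
	checkPodsRunning := func() (bool, error) { return false, nil }
	if err := pollUntil(10*time.Millisecond, 50*time.Millisecond, checkPodsRunning); err != nil {
		fmt.Println("Some pods are not running within the timeout:", err)
	}
}
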
[AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:81 S [SKIPPING] in Spec Setup (BeforeEach) [0.030 seconds] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 PVController [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:382 should create none metrics for pvc controller before creating any PV or PVC /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:477 Only supported for providers [gce gke aws] (not skeleton) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:60 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] [Serial] Volume metrics should create metrics for total time taken in volume operations in P/V Controller /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:260 [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 22 01:41:19.333: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pv STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:56 May 22 01:41:19.357: INFO: Only supported for providers [gce gke aws] (not skeleton) [AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 22 01:41:19.359: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pv-856" for this suite. 
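
Each "Only supported for providers [gce gke aws] (not skeleton)" line comes from a provider gate evaluated in BeforeEach: this suite was started without a cloud provider (provider "skeleton"), so any spec that needs cloud attach/detach or dynamic provisioning skips itself before doing real work. A sketch of the gate; the helper name and message follow the log, and the framework's actual check is equivalent in spirit rather than copied here:

package main

import "fmt"

// provider would come from the suite's --provider flag; this run used
// "skeleton", i.e. no cloud integration.
var provider = "skeleton"

// skipUnlessProviderIs reports whether the current provider is in the
// supported set, printing the skip message seen throughout the log when
// it is not. Illustrative stand-in for the e2e framework's equivalent.
func skipUnlessProviderIs(supported ...string) bool {
	for _, p := range supported {
		if p == provider {
			return true
		}
	}
	fmt.Printf("Only supported for providers %v (not %s)\n", supported, provider)
	return false
}

func main() {
	if !skipUnlessProviderIs("gce", "gke", "aws") {
		return // Ginkgo marks the spec skipped; AfterEach still tears down the namespace
	}
	fmt.Println("provider supported, spec would run")
}
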
[AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:81 S [SKIPPING] in Spec Setup (BeforeEach) [0.033 seconds] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should create metrics for total time taken in volume operations in P/V Controller [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:260 Only supported for providers [gce gke aws] (not skeleton) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:60 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] [Serial] Volume metrics PVController should create unbound pvc count metrics for pvc controller after creating pvc only /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:490 [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 22 01:41:19.370: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pv STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:56 May 22 01:41:19.388: INFO: Only supported for providers [gce gke aws] (not skeleton) [AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 22 01:41:19.390: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pv-6749" for this suite. 
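
The PVController specs being skipped here assert on bound/unbound PV and PVC counts exported by kube-controller-manager: on a supported provider they would scrape its metrics endpoint and compare counters before and after creating each object. A sketch of that lookup over Prometheus text exposition; the pv_collector_* family names match kube-controller-manager's exporter, but treat the sample payload and labels as illustrative rather than captured from this run:

package main

import (
	"bufio"
	"fmt"
	"strings"
)

// Example payload of the sort the PVController specs inspect; real tests
// obtain it through the framework's MetricsGrabber, not a string constant.
const metricsText = `pv_collector_bound_pvc_count{namespace="pv-6749"} 1
pv_collector_unbound_pvc_count{namespace="pv-6749"} 0
pv_collector_bound_pv_count{storage_class="standard"} 1
`

// metricValue scans Prometheus text exposition for the first sample of a
// metric family and returns its value.
func metricValue(payload, family string) (string, bool) {
	sc := bufio.NewScanner(strings.NewReader(payload))
	for sc.Scan() {
		line := sc.Text()
		if strings.HasPrefix(line, family) {
			fields := strings.Fields(line)
			return fields[len(fields)-1], true
		}
	}
	return "", false
}

func main() {
	if v, ok := metricValue(metricsText, "pv_collector_bound_pvc_count"); ok {
		fmt.Println("bound PVC count:", v)
	}
}
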
[AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:81 S [SKIPPING] in Spec Setup (BeforeEach) [0.030 seconds] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 PVController [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:382 should create unbound pvc count metrics for pvc controller after creating pvc only /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:490 Only supported for providers [gce gke aws] (not skeleton) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:60 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] PersistentVolumes-local [Volume type: gce-localssd-scsi-fs] [Serial] One pod requesting one prebound PVC should be able to mount volume and read from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:228 [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 22 01:41:19.406: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:155 [BeforeEach] [Volume type: gce-localssd-scsi-fs] [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:191 May 22 01:41:23.445: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l] Namespace:persistent-local-volumes-test-2063 PodName:hostexec-node1-xsssz ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true} May 22 01:41:23.445: INFO: >>> kubeConfig: /root/.kube/config May 22 01:41:23.569: INFO: exec node1: command: ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l May 22 01:41:23.569: INFO: exec node1: stdout: "0\n" May 22 01:41:23.569: INFO: exec node1: stderr: "ls: cannot access /mnt/disks/by-uuid/google-local-ssds-scsi-fs/: No such file or directory\n" May 22 01:41:23.569: INFO: exec node1: exit code: 0 May 22 01:41:23.569: INFO: Requires at least 1 scsi fs localSSD [AfterEach] [Volume type: gce-localssd-scsi-fs] [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:200 STEP: Cleaning up PVC and PV [AfterEach] [sig-storage] PersistentVolumes-local 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 22 01:41:23.571: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-2063" for this suite. S [SKIPPING] in Spec Setup (BeforeEach) [4.175 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: gce-localssd-scsi-fs] [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:188 One pod requesting one prebound PVC [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:205 should be able to mount volume and read from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:228 Requires at least 1 scsi fs localSSD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:1241 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-storage] PersistentVolumes-local [Volume type: gce-localssd-scsi-fs] [Serial] One pod requesting one prebound PVC should be able to mount volume and write from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:234 [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 22 01:41:23.581: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:155 [BeforeEach] [Volume type: gce-localssd-scsi-fs] [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:191 May 22 01:41:27.626: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l] Namespace:persistent-local-volumes-test-6673 PodName:hostexec-node1-z8x6n ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true} May 22 01:41:27.626: INFO: >>> kubeConfig: /root/.kube/config May 22 01:41:27.752: INFO: exec node1: command: ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l May 22 01:41:27.752: INFO: exec node1: stdout: "0\n" May 22 01:41:27.752: INFO: exec node1: stderr: "ls: cannot access /mnt/disks/by-uuid/google-local-ssds-scsi-fs/: No such file or directory\n" May 22 01:41:27.752: INFO: exec node1: exit code: 0 May 22 01:41:27.752: INFO: Requires at least 1 scsi fs localSSD [AfterEach] [Volume type: gce-localssd-scsi-fs] [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:200 STEP: Cleaning up PVC and PV [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 22 01:41:27.755: INFO: Waiting up to 3m0s 
for all (but 0) nodes to be ready STEP: Destroying namespace "persistent-local-volumes-test-6673" for this suite. S [SKIPPING] in Spec Setup (BeforeEach) [4.183 seconds] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 [Volume type: gce-localssd-scsi-fs] [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:188 One pod requesting one prebound PVC [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:205 should be able to mount volume and write from pod1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:234 Requires at least 1 scsi fs localSSD /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:1241 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] [Serial] Volume metrics PVController should create unbound pv count metrics for pvc controller after creating pv only /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:481 [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 22 01:41:27.767: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pv STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:56 May 22 01:41:27.786: INFO: Only supported for providers [gce gke aws] (not skeleton) [AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 22 01:41:27.787: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pv-824" for this suite. 
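
The gce-localssd skips above come from a runtime probe, not from configuration: the suite counts entries under /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ on the node, and the count on stdout was "0". Note that the logged exit code is 0 even though ls wrote "No such file or directory" to stderr; the pipe through wc -l reports wc's status, not ls's, so the decision rests entirely on the stdout count. A minimal, dependency-free sketch of that probe in Go, run directly on the node instead of through the framework's hostexec pod and nsenter wrapper (the helper name is made up for illustration):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strconv"
    	"strings"
    )

    // countScsiFsLocalSSDs mirrors the logged probe. The pipeline's exit
    // status is wc's, so a missing directory still yields count 0 with
    // exit code 0, exactly what the log shows.
    func countScsiFsLocalSSDs() (int, error) {
    	out, err := exec.Command("sh", "-c",
    		`ls -1 /mnt/disks/by-uuid/google-local-ssds-scsi-fs/ | wc -l`).Output()
    	if err != nil {
    		return 0, err
    	}
    	return strconv.Atoi(strings.TrimSpace(string(out)))
    }

    func main() {
    	n, err := countScsiFsLocalSSDs()
    	if err != nil || n < 1 {
    		fmt.Println("Requires at least 1 scsi fs localSSD; skipping")
    		return
    	}
    	fmt.Printf("found %d scsi fs local SSD(s)\n", n)
    }

On this cluster the directory only exists on GCE instances provisioned with local SSDs, so every [Volume type: gce-localssd-scsi-fs] spec skips the same way, just as the provider-gated Volume metrics specs skip on any provider outside [gce gke aws].
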
[AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:81 S [SKIPPING] in Spec Setup (BeforeEach) [0.028 seconds] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 PVController [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:382 should create unbound pv count metrics for pvc controller after creating pv only /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:481 Only supported for providers [gce gke aws] (not skeleton) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:60 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] [Serial] Volume metrics should create prometheus metrics for volume provisioning and attach/detach /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:100 [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 22 01:41:27.807: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pv STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:56 May 22 01:41:27.827: INFO: Only supported for providers [gce gke aws] (not skeleton) [AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 22 01:41:27.828: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pv-3231" for this suite. 
[AfterEach] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:81 S [SKIPPING] in Spec Setup (BeforeEach) [0.029 seconds] [sig-storage] [Serial] Volume metrics /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should create prometheus metrics for volume provisioning and attach/detach [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:100 Only supported for providers [gce gke aws] (not skeleton) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:60 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] PersistentVolumes-local Stress with local volumes [Serial] should be able to process many pods and reuse local volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:517 [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 22 01:41:27.847: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:155 [BeforeEach] Stress with local volumes [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:441 STEP: Setting up 10 local volumes on node "node1" STEP: Creating tmpfs mount point on node "node1" at path "/tmp/local-volume-test-f001620c-1906-4175-b782-3808f430cc26" May 22 01:41:31.893: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-f001620c-1906-4175-b782-3808f430cc26" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-f001620c-1906-4175-b782-3808f430cc26" "/tmp/local-volume-test-f001620c-1906-4175-b782-3808f430cc26"] Namespace:persistent-local-volumes-test-602 PodName:hostexec-node1-jhb47 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true} May 22 01:41:31.894: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "node1" at path "/tmp/local-volume-test-6069bc3c-9234-4082-af26-8e8c309eea35" May 22 
01:41:32.009: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-6069bc3c-9234-4082-af26-8e8c309eea35" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-6069bc3c-9234-4082-af26-8e8c309eea35" "/tmp/local-volume-test-6069bc3c-9234-4082-af26-8e8c309eea35"] Namespace:persistent-local-volumes-test-602 PodName:hostexec-node1-jhb47 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true} May 22 01:41:32.009: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "node1" at path "/tmp/local-volume-test-0a23e19e-b899-467d-ab7f-345cd908a81c" May 22 01:41:32.134: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-0a23e19e-b899-467d-ab7f-345cd908a81c" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-0a23e19e-b899-467d-ab7f-345cd908a81c" "/tmp/local-volume-test-0a23e19e-b899-467d-ab7f-345cd908a81c"] Namespace:persistent-local-volumes-test-602 PodName:hostexec-node1-jhb47 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true} May 22 01:41:32.134: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "node1" at path "/tmp/local-volume-test-a8926d56-5c96-4885-9c7e-534e41960bf6" May 22 01:41:32.248: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-a8926d56-5c96-4885-9c7e-534e41960bf6" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-a8926d56-5c96-4885-9c7e-534e41960bf6" "/tmp/local-volume-test-a8926d56-5c96-4885-9c7e-534e41960bf6"] Namespace:persistent-local-volumes-test-602 PodName:hostexec-node1-jhb47 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true} May 22 01:41:32.248: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "node1" at path "/tmp/local-volume-test-a7d6bd09-f83c-40cc-98de-e09ec6f5311c" May 22 01:41:32.363: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-a7d6bd09-f83c-40cc-98de-e09ec6f5311c" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-a7d6bd09-f83c-40cc-98de-e09ec6f5311c" "/tmp/local-volume-test-a7d6bd09-f83c-40cc-98de-e09ec6f5311c"] Namespace:persistent-local-volumes-test-602 PodName:hostexec-node1-jhb47 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true} May 22 01:41:32.363: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "node1" at path "/tmp/local-volume-test-db1da846-fc32-4f30-9ace-a1fca31930ff" May 22 01:41:32.533: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-db1da846-fc32-4f30-9ace-a1fca31930ff" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-db1da846-fc32-4f30-9ace-a1fca31930ff" "/tmp/local-volume-test-db1da846-fc32-4f30-9ace-a1fca31930ff"] Namespace:persistent-local-volumes-test-602 PodName:hostexec-node1-jhb47 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true} May 22 01:41:32.533: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "node1" at path "/tmp/local-volume-test-ed985144-8f5b-4ea2-9fb3-4fa29f37e90f" May 22 01:41:32.662: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p 
"/tmp/local-volume-test-ed985144-8f5b-4ea2-9fb3-4fa29f37e90f" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-ed985144-8f5b-4ea2-9fb3-4fa29f37e90f" "/tmp/local-volume-test-ed985144-8f5b-4ea2-9fb3-4fa29f37e90f"] Namespace:persistent-local-volumes-test-602 PodName:hostexec-node1-jhb47 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true} May 22 01:41:32.662: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "node1" at path "/tmp/local-volume-test-d240aa4a-256f-49ec-bd8d-3b4459f3ced8" May 22 01:41:32.775: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-d240aa4a-256f-49ec-bd8d-3b4459f3ced8" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-d240aa4a-256f-49ec-bd8d-3b4459f3ced8" "/tmp/local-volume-test-d240aa4a-256f-49ec-bd8d-3b4459f3ced8"] Namespace:persistent-local-volumes-test-602 PodName:hostexec-node1-jhb47 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true} May 22 01:41:32.775: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "node1" at path "/tmp/local-volume-test-708f9be1-1744-4448-beb3-a8f217621e44" May 22 01:41:32.884: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-708f9be1-1744-4448-beb3-a8f217621e44" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-708f9be1-1744-4448-beb3-a8f217621e44" "/tmp/local-volume-test-708f9be1-1744-4448-beb3-a8f217621e44"] Namespace:persistent-local-volumes-test-602 PodName:hostexec-node1-jhb47 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true} May 22 01:41:32.884: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "node1" at path "/tmp/local-volume-test-e335eed1-119b-49eb-867c-1c38ad6bc2f1" May 22 01:41:33.012: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-e335eed1-119b-49eb-867c-1c38ad6bc2f1" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-e335eed1-119b-49eb-867c-1c38ad6bc2f1" "/tmp/local-volume-test-e335eed1-119b-49eb-867c-1c38ad6bc2f1"] Namespace:persistent-local-volumes-test-602 PodName:hostexec-node1-jhb47 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true} May 22 01:41:33.012: INFO: >>> kubeConfig: /root/.kube/config STEP: Setting up 10 local volumes on node "node2" STEP: Creating tmpfs mount point on node "node2" at path "/tmp/local-volume-test-c5454b59-415e-4053-baf4-4184b0582a30" May 22 01:41:35.140: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-c5454b59-415e-4053-baf4-4184b0582a30" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-c5454b59-415e-4053-baf4-4184b0582a30" "/tmp/local-volume-test-c5454b59-415e-4053-baf4-4184b0582a30"] Namespace:persistent-local-volumes-test-602 PodName:hostexec-node2-65rwt ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true} May 22 01:41:35.140: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "node2" at path "/tmp/local-volume-test-4c111f06-501c-4542-9e8a-4914525980bb" May 22 01:41:35.258: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-4c111f06-501c-4542-9e8a-4914525980bb" && 
mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-4c111f06-501c-4542-9e8a-4914525980bb" "/tmp/local-volume-test-4c111f06-501c-4542-9e8a-4914525980bb"] Namespace:persistent-local-volumes-test-602 PodName:hostexec-node2-65rwt ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true} May 22 01:41:35.258: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "node2" at path "/tmp/local-volume-test-c2e34d90-efbb-47ef-9a61-780df224a62a" May 22 01:41:35.372: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-c2e34d90-efbb-47ef-9a61-780df224a62a" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-c2e34d90-efbb-47ef-9a61-780df224a62a" "/tmp/local-volume-test-c2e34d90-efbb-47ef-9a61-780df224a62a"] Namespace:persistent-local-volumes-test-602 PodName:hostexec-node2-65rwt ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true} May 22 01:41:35.372: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "node2" at path "/tmp/local-volume-test-f9255f6b-0d18-4158-aa6a-07f4d8349875" May 22 01:41:35.511: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-f9255f6b-0d18-4158-aa6a-07f4d8349875" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-f9255f6b-0d18-4158-aa6a-07f4d8349875" "/tmp/local-volume-test-f9255f6b-0d18-4158-aa6a-07f4d8349875"] Namespace:persistent-local-volumes-test-602 PodName:hostexec-node2-65rwt ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true} May 22 01:41:35.511: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "node2" at path "/tmp/local-volume-test-4bbea01b-78db-4a90-b1d8-ecd9abd0762b" May 22 01:41:35.628: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-4bbea01b-78db-4a90-b1d8-ecd9abd0762b" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-4bbea01b-78db-4a90-b1d8-ecd9abd0762b" "/tmp/local-volume-test-4bbea01b-78db-4a90-b1d8-ecd9abd0762b"] Namespace:persistent-local-volumes-test-602 PodName:hostexec-node2-65rwt ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true} May 22 01:41:35.628: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "node2" at path "/tmp/local-volume-test-49729843-3ad7-4816-9b5e-3b79b8abc41a" May 22 01:41:35.739: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-49729843-3ad7-4816-9b5e-3b79b8abc41a" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-49729843-3ad7-4816-9b5e-3b79b8abc41a" "/tmp/local-volume-test-49729843-3ad7-4816-9b5e-3b79b8abc41a"] Namespace:persistent-local-volumes-test-602 PodName:hostexec-node2-65rwt ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true} May 22 01:41:35.739: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "node2" at path "/tmp/local-volume-test-adf07b00-2efb-4dc6-82a5-8c1e94c44908" May 22 01:41:35.850: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-adf07b00-2efb-4dc6-82a5-8c1e94c44908" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-adf07b00-2efb-4dc6-82a5-8c1e94c44908" 
"/tmp/local-volume-test-adf07b00-2efb-4dc6-82a5-8c1e94c44908"] Namespace:persistent-local-volumes-test-602 PodName:hostexec-node2-65rwt ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true} May 22 01:41:35.850: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "node2" at path "/tmp/local-volume-test-83c2b6ba-1045-4b8e-a574-008ec51646bb" May 22 01:41:35.961: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-83c2b6ba-1045-4b8e-a574-008ec51646bb" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-83c2b6ba-1045-4b8e-a574-008ec51646bb" "/tmp/local-volume-test-83c2b6ba-1045-4b8e-a574-008ec51646bb"] Namespace:persistent-local-volumes-test-602 PodName:hostexec-node2-65rwt ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true} May 22 01:41:35.961: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "node2" at path "/tmp/local-volume-test-08be4617-9ab1-49fd-8643-35560aaa1b67" May 22 01:41:36.089: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-08be4617-9ab1-49fd-8643-35560aaa1b67" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-08be4617-9ab1-49fd-8643-35560aaa1b67" "/tmp/local-volume-test-08be4617-9ab1-49fd-8643-35560aaa1b67"] Namespace:persistent-local-volumes-test-602 PodName:hostexec-node2-65rwt ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true} May 22 01:41:36.090: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating tmpfs mount point on node "node2" at path "/tmp/local-volume-test-e5c92b25-2af5-4ee6-aebd-7e551d6bc5e8" May 22 01:41:36.204: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p "/tmp/local-volume-test-e5c92b25-2af5-4ee6-aebd-7e551d6bc5e8" && mount -t tmpfs -o size=10m tmpfs-"/tmp/local-volume-test-e5c92b25-2af5-4ee6-aebd-7e551d6bc5e8" "/tmp/local-volume-test-e5c92b25-2af5-4ee6-aebd-7e551d6bc5e8"] Namespace:persistent-local-volumes-test-602 PodName:hostexec-node2-65rwt ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true} May 22 01:41:36.204: INFO: >>> kubeConfig: /root/.kube/config STEP: Create 20 PVs STEP: Start a goroutine to recycle unbound PVs [It] should be able to process many pods and reuse local volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:517 STEP: Creating 7 pods periodically STEP: Waiting for all pods to complete successfully May 22 01:46:36.532: FAIL: some pods failed to complete within 5m0s Unexpected error: <*errors.errorString | 0xc0003001f0>: { s: "timed out waiting for the condition", } timed out waiting for the condition occurred Full Stack Trace k8s.io/kubernetes/test/e2e/storage.glob..func20.6.3() /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:610 +0x42a k8s.io/kubernetes/test/e2e.RunE2ETests(0xc001a08180) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x345 k8s.io/kubernetes/test/e2e.TestE2E(0xc001a08180) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:145 +0x2b testing.tRunner(0xc001a08180, 0x4de37a0) /usr/local/go/src/testing/testing.go:1123 +0xef created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1168 +0x2b3 May 22 
01:46:36.533: INFO: Deleting pod pod-d483119e-2137-4ef3-9ec0-04935c4a44ae May 22 01:46:36.539: INFO: Deleting PersistentVolumeClaim "pvc-zk9s8" May 22 01:46:36.544: INFO: Deleting PersistentVolumeClaim "pvc-mtffc" May 22 01:46:36.547: INFO: Deleting PersistentVolumeClaim "pvc-rpkb9" May 22 01:46:36.551: INFO: Deleting pod pod-6216f1db-6681-43ef-9439-1c13c24b2cb4 May 22 01:46:36.556: INFO: Deleting PersistentVolumeClaim "pvc-x7lnp" May 22 01:46:36.560: INFO: Deleting PersistentVolumeClaim "pvc-622d4" May 22 01:46:36.563: INFO: Deleting PersistentVolumeClaim "pvc-wfg7r" May 22 01:46:36.566: INFO: Deleting pod pod-8bc65a05-64cb-4966-b2a7-e4b3db430b88 May 22 01:46:36.572: INFO: Deleting PersistentVolumeClaim "pvc-7mfqq" May 22 01:46:36.576: INFO: Deleting PersistentVolumeClaim "pvc-skb8m" May 22 01:46:36.579: INFO: Deleting PersistentVolumeClaim "pvc-jzj42" May 22 01:46:36.582: INFO: Deleting pod pod-176c9043-50f6-4264-9f78-fc63846f4518 May 22 01:46:36.586: INFO: Deleting PersistentVolumeClaim "pvc-bb5l6" May 22 01:46:36.590: INFO: Deleting PersistentVolumeClaim "pvc-rbfv5" May 22 01:46:36.593: INFO: Deleting PersistentVolumeClaim "pvc-fxjg9" May 22 01:46:36.597: INFO: Deleting pod pod-8062d881-108f-4ef9-944f-1bfd6918c57a May 22 01:46:36.600: INFO: Deleting PersistentVolumeClaim "pvc-25d6m" May 22 01:46:36.604: INFO: Deleting PersistentVolumeClaim "pvc-g5c62" May 22 01:46:36.607: INFO: Deleting PersistentVolumeClaim "pvc-lk67f" May 22 01:46:36.610: INFO: Deleting pod pod-0084f020-1258-4a68-95a8-4b283dd90314 May 22 01:46:36.614: INFO: Deleting PersistentVolumeClaim "pvc-5f5b4" May 22 01:46:36.617: INFO: Deleting PersistentVolumeClaim "pvc-b779q" May 22 01:46:36.620: INFO: Deleting PersistentVolumeClaim "pvc-kkldz" May 22 01:46:36.624: INFO: Deleting pod pod-d57e13e9-7cfe-4177-9144-404170d6bfef May 22 01:46:36.628: INFO: Deleting PersistentVolumeClaim "pvc-dtf2m" May 22 01:46:36.631: INFO: Deleting PersistentVolumeClaim "pvc-ndb65" May 22 01:46:36.634: INFO: Deleting PersistentVolumeClaim "pvc-t62qw" [AfterEach] Stress with local volumes [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:505 STEP: Stop and wait for recycle goroutine to finish STEP: Clean all PVs STEP: Cleaning up 10 local volumes on node "node1" STEP: Cleaning up PVC and PV May 22 01:46:36.638: INFO: pvc is nil May 22 01:46:36.638: INFO: Deleting PersistentVolume "local-pvwwcmg" STEP: Cleaning up PVC and PV May 22 01:46:36.641: INFO: pvc is nil May 22 01:46:36.641: INFO: Deleting PersistentVolume "local-pvv7zgn" STEP: Cleaning up PVC and PV May 22 01:46:36.644: INFO: pvc is nil May 22 01:46:36.644: INFO: Deleting PersistentVolume "local-pvj5zcz" STEP: Cleaning up PVC and PV May 22 01:46:36.648: INFO: pvc is nil May 22 01:46:36.648: INFO: Deleting PersistentVolume "local-pvxnj7p" STEP: Cleaning up PVC and PV May 22 01:46:36.652: INFO: pvc is nil May 22 01:46:36.652: INFO: Deleting PersistentVolume "local-pvrm9x8" STEP: Cleaning up PVC and PV May 22 01:46:36.655: INFO: pvc is nil May 22 01:46:36.655: INFO: Deleting PersistentVolume "local-pvggrng" STEP: Cleaning up PVC and PV May 22 01:46:36.659: INFO: pvc is nil May 22 01:46:36.659: INFO: Deleting PersistentVolume "local-pvtm42p" STEP: Cleaning up PVC and PV May 22 01:46:36.662: INFO: pvc is nil May 22 01:46:36.662: INFO: Deleting PersistentVolume "local-pvvwqj2" STEP: Cleaning up PVC and PV May 22 01:46:36.666: INFO: pvc is nil May 22 01:46:36.666: INFO: Deleting PersistentVolume "local-pvvjppv" 
STEP: Cleaning up PVC and PV May 22 01:46:36.670: INFO: pvc is nil May 22 01:46:36.670: INFO: Deleting PersistentVolume "local-pvks9jz" STEP: Unmount tmpfs mount point on node "node1" at path "/tmp/local-volume-test-f001620c-1906-4175-b782-3808f430cc26" May 22 01:46:36.673: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-f001620c-1906-4175-b782-3808f430cc26"] Namespace:persistent-local-volumes-test-602 PodName:hostexec-node1-jhb47 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true} May 22 01:46:36.673: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory May 22 01:46:36.828: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-f001620c-1906-4175-b782-3808f430cc26] Namespace:persistent-local-volumes-test-602 PodName:hostexec-node1-jhb47 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true} May 22 01:46:36.828: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "node1" at path "/tmp/local-volume-test-6069bc3c-9234-4082-af26-8e8c309eea35" May 22 01:46:36.955: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-6069bc3c-9234-4082-af26-8e8c309eea35"] Namespace:persistent-local-volumes-test-602 PodName:hostexec-node1-jhb47 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true} May 22 01:46:36.955: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory May 22 01:46:37.328: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-6069bc3c-9234-4082-af26-8e8c309eea35] Namespace:persistent-local-volumes-test-602 PodName:hostexec-node1-jhb47 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true} May 22 01:46:37.328: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "node1" at path "/tmp/local-volume-test-0a23e19e-b899-467d-ab7f-345cd908a81c" May 22 01:46:37.470: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-0a23e19e-b899-467d-ab7f-345cd908a81c"] Namespace:persistent-local-volumes-test-602 PodName:hostexec-node1-jhb47 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true} May 22 01:46:37.470: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory May 22 01:46:37.880: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-0a23e19e-b899-467d-ab7f-345cd908a81c] Namespace:persistent-local-volumes-test-602 PodName:hostexec-node1-jhb47 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true} May 22 01:46:37.880: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "node1" at path "/tmp/local-volume-test-a8926d56-5c96-4885-9c7e-534e41960bf6" May 22 01:46:38.027: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-a8926d56-5c96-4885-9c7e-534e41960bf6"] Namespace:persistent-local-volumes-test-602 PodName:hostexec-node1-jhb47 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true} May 22 01:46:38.027: INFO: >>> kubeConfig: /root/.kube/config 
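
Each of these setup and teardown commands runs inside a privileged hostexec pod (hostexec-node1-jhb47, hostexec-node2-65rwt) built on the agnhost image; nsenter --mount=/rootfs/proc/1/ns/mnt switches into the mount namespace of the node's PID 1 via the host filesystem bind-mounted at /rootfs, so the tmpfs mounts land on the node itself rather than in the pod. A rough sketch of the mount/unmount pair, assuming it runs directly on the node as root (the pod and nsenter indirection are elided, and the helper names are hypothetical):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // run executes a shell script the way the hostexec pod does, but
    // locally; the real framework wraps the script in nsenter.
    func run(script string) error {
    	out, err := exec.Command("sh", "-c", script).CombinedOutput()
    	if err != nil {
    		return fmt.Errorf("%q failed: %v: %s", script, err, out)
    	}
    	return nil
    }

    // setupTmpfs creates a 10 MiB tmpfs at dir; the "tmpfs-<dir>" device
    // name matches the log and is purely cosmetic.
    func setupTmpfs(dir string) error {
    	return run(fmt.Sprintf(
    		`mkdir -p %q && mount -t tmpfs -o size=10m tmpfs-%q %q`,
    		dir, dir, dir))
    }

    // teardownTmpfs is the inverse pair seen in the cleanup steps here.
    func teardownTmpfs(dir string) error {
    	return run(fmt.Sprintf(`umount %q && rm -r %q`, dir, dir))
    }

    func main() {
    	dir := "/tmp/local-volume-test-example" // hypothetical path
    	if err := setupTmpfs(dir); err != nil {
    		fmt.Println(err)
    		return
    	}
    	defer teardownTmpfs(dir)
    	fmt.Println("mounted 10m tmpfs at", dir)
    }
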
STEP: Removing the test directory May 22 01:46:38.260: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-a8926d56-5c96-4885-9c7e-534e41960bf6] Namespace:persistent-local-volumes-test-602 PodName:hostexec-node1-jhb47 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true} May 22 01:46:38.260: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "node1" at path "/tmp/local-volume-test-a7d6bd09-f83c-40cc-98de-e09ec6f5311c" May 22 01:46:38.519: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-a7d6bd09-f83c-40cc-98de-e09ec6f5311c"] Namespace:persistent-local-volumes-test-602 PodName:hostexec-node1-jhb47 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true} May 22 01:46:38.519: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory May 22 01:46:38.656: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-a7d6bd09-f83c-40cc-98de-e09ec6f5311c] Namespace:persistent-local-volumes-test-602 PodName:hostexec-node1-jhb47 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true} May 22 01:46:38.656: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "node1" at path "/tmp/local-volume-test-db1da846-fc32-4f30-9ace-a1fca31930ff" May 22 01:46:38.762: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-db1da846-fc32-4f30-9ace-a1fca31930ff"] Namespace:persistent-local-volumes-test-602 PodName:hostexec-node1-jhb47 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true} May 22 01:46:38.762: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory May 22 01:46:38.880: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-db1da846-fc32-4f30-9ace-a1fca31930ff] Namespace:persistent-local-volumes-test-602 PodName:hostexec-node1-jhb47 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true} May 22 01:46:38.880: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "node1" at path "/tmp/local-volume-test-ed985144-8f5b-4ea2-9fb3-4fa29f37e90f" May 22 01:46:38.989: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-ed985144-8f5b-4ea2-9fb3-4fa29f37e90f"] Namespace:persistent-local-volumes-test-602 PodName:hostexec-node1-jhb47 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true} May 22 01:46:38.989: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory May 22 01:46:39.109: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-ed985144-8f5b-4ea2-9fb3-4fa29f37e90f] Namespace:persistent-local-volumes-test-602 PodName:hostexec-node1-jhb47 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true} May 22 01:46:39.109: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "node1" at path "/tmp/local-volume-test-d240aa4a-256f-49ec-bd8d-3b4459f3ced8" May 22 01:46:39.223: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount 
"/tmp/local-volume-test-d240aa4a-256f-49ec-bd8d-3b4459f3ced8"] Namespace:persistent-local-volumes-test-602 PodName:hostexec-node1-jhb47 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true} May 22 01:46:39.223: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory May 22 01:46:39.339: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-d240aa4a-256f-49ec-bd8d-3b4459f3ced8] Namespace:persistent-local-volumes-test-602 PodName:hostexec-node1-jhb47 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true} May 22 01:46:39.339: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "node1" at path "/tmp/local-volume-test-708f9be1-1744-4448-beb3-a8f217621e44" May 22 01:46:39.443: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-708f9be1-1744-4448-beb3-a8f217621e44"] Namespace:persistent-local-volumes-test-602 PodName:hostexec-node1-jhb47 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true} May 22 01:46:39.443: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory May 22 01:46:39.559: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-708f9be1-1744-4448-beb3-a8f217621e44] Namespace:persistent-local-volumes-test-602 PodName:hostexec-node1-jhb47 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true} May 22 01:46:39.559: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "node1" at path "/tmp/local-volume-test-e335eed1-119b-49eb-867c-1c38ad6bc2f1" May 22 01:46:39.679: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-e335eed1-119b-49eb-867c-1c38ad6bc2f1"] Namespace:persistent-local-volumes-test-602 PodName:hostexec-node1-jhb47 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true} May 22 01:46:39.679: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory May 22 01:46:39.792: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-e335eed1-119b-49eb-867c-1c38ad6bc2f1] Namespace:persistent-local-volumes-test-602 PodName:hostexec-node1-jhb47 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true} May 22 01:46:39.792: INFO: >>> kubeConfig: /root/.kube/config STEP: Cleaning up 10 local volumes on node "node2" STEP: Cleaning up PVC and PV May 22 01:46:39.905: INFO: pvc is nil May 22 01:46:39.905: INFO: Deleting PersistentVolume "local-pvdglx9" STEP: Cleaning up PVC and PV May 22 01:46:39.908: INFO: pvc is nil May 22 01:46:39.908: INFO: Deleting PersistentVolume "local-pvxz7pz" STEP: Cleaning up PVC and PV May 22 01:46:39.912: INFO: pvc is nil May 22 01:46:39.912: INFO: Deleting PersistentVolume "local-pvzfhnx" STEP: Cleaning up PVC and PV May 22 01:46:39.916: INFO: pvc is nil May 22 01:46:39.916: INFO: Deleting PersistentVolume "local-pvxclbj" STEP: Cleaning up PVC and PV May 22 01:46:39.920: INFO: pvc is nil May 22 01:46:39.920: INFO: Deleting PersistentVolume "local-pvf6v5v" STEP: Cleaning up PVC and PV May 22 01:46:39.924: INFO: pvc is nil May 22 01:46:39.924: INFO: Deleting PersistentVolume "local-pv97p2l" STEP: Cleaning up PVC 
and PV May 22 01:46:39.927: INFO: pvc is nil May 22 01:46:39.927: INFO: Deleting PersistentVolume "local-pvhwkwb" STEP: Cleaning up PVC and PV May 22 01:46:39.930: INFO: pvc is nil May 22 01:46:39.930: INFO: Deleting PersistentVolume "local-pvmcvsm" STEP: Cleaning up PVC and PV May 22 01:46:39.934: INFO: pvc is nil May 22 01:46:39.934: INFO: Deleting PersistentVolume "local-pvdfhzl" STEP: Cleaning up PVC and PV May 22 01:46:39.937: INFO: pvc is nil May 22 01:46:39.937: INFO: Deleting PersistentVolume "local-pvvtmpk" STEP: Unmount tmpfs mount point on node "node2" at path "/tmp/local-volume-test-c5454b59-415e-4053-baf4-4184b0582a30" May 22 01:46:39.940: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-c5454b59-415e-4053-baf4-4184b0582a30"] Namespace:persistent-local-volumes-test-602 PodName:hostexec-node2-65rwt ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true} May 22 01:46:39.940: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory May 22 01:46:40.060: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-c5454b59-415e-4053-baf4-4184b0582a30] Namespace:persistent-local-volumes-test-602 PodName:hostexec-node2-65rwt ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true} May 22 01:46:40.060: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "node2" at path "/tmp/local-volume-test-4c111f06-501c-4542-9e8a-4914525980bb" May 22 01:46:40.166: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-4c111f06-501c-4542-9e8a-4914525980bb"] Namespace:persistent-local-volumes-test-602 PodName:hostexec-node2-65rwt ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true} May 22 01:46:40.166: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory May 22 01:46:40.283: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-4c111f06-501c-4542-9e8a-4914525980bb] Namespace:persistent-local-volumes-test-602 PodName:hostexec-node2-65rwt ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true} May 22 01:46:40.283: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "node2" at path "/tmp/local-volume-test-c2e34d90-efbb-47ef-9a61-780df224a62a" May 22 01:46:40.398: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-c2e34d90-efbb-47ef-9a61-780df224a62a"] Namespace:persistent-local-volumes-test-602 PodName:hostexec-node2-65rwt ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true} May 22 01:46:40.398: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory May 22 01:46:40.516: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-c2e34d90-efbb-47ef-9a61-780df224a62a] Namespace:persistent-local-volumes-test-602 PodName:hostexec-node2-65rwt ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true} May 22 01:46:40.516: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "node2" at path "/tmp/local-volume-test-f9255f6b-0d18-4158-aa6a-07f4d8349875" May 22 01:46:40.634: 
INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-f9255f6b-0d18-4158-aa6a-07f4d8349875"] Namespace:persistent-local-volumes-test-602 PodName:hostexec-node2-65rwt ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true} May 22 01:46:40.634: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory May 22 01:46:40.750: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-f9255f6b-0d18-4158-aa6a-07f4d8349875] Namespace:persistent-local-volumes-test-602 PodName:hostexec-node2-65rwt ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true} May 22 01:46:40.750: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "node2" at path "/tmp/local-volume-test-4bbea01b-78db-4a90-b1d8-ecd9abd0762b" May 22 01:46:40.854: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-4bbea01b-78db-4a90-b1d8-ecd9abd0762b"] Namespace:persistent-local-volumes-test-602 PodName:hostexec-node2-65rwt ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true} May 22 01:46:40.854: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory May 22 01:46:40.982: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-4bbea01b-78db-4a90-b1d8-ecd9abd0762b] Namespace:persistent-local-volumes-test-602 PodName:hostexec-node2-65rwt ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true} May 22 01:46:40.982: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "node2" at path "/tmp/local-volume-test-49729843-3ad7-4816-9b5e-3b79b8abc41a" May 22 01:46:41.088: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-49729843-3ad7-4816-9b5e-3b79b8abc41a"] Namespace:persistent-local-volumes-test-602 PodName:hostexec-node2-65rwt ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true} May 22 01:46:41.088: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory May 22 01:46:41.208: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-49729843-3ad7-4816-9b5e-3b79b8abc41a] Namespace:persistent-local-volumes-test-602 PodName:hostexec-node2-65rwt ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true} May 22 01:46:41.209: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "node2" at path "/tmp/local-volume-test-adf07b00-2efb-4dc6-82a5-8c1e94c44908" May 22 01:46:41.312: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-adf07b00-2efb-4dc6-82a5-8c1e94c44908"] Namespace:persistent-local-volumes-test-602 PodName:hostexec-node2-65rwt ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true} May 22 01:46:41.312: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory May 22 01:46:41.433: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-adf07b00-2efb-4dc6-82a5-8c1e94c44908] Namespace:persistent-local-volumes-test-602 PodName:hostexec-node2-65rwt 
ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true} May 22 01:46:41.433: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "node2" at path "/tmp/local-volume-test-83c2b6ba-1045-4b8e-a574-008ec51646bb" May 22 01:46:41.543: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-83c2b6ba-1045-4b8e-a574-008ec51646bb"] Namespace:persistent-local-volumes-test-602 PodName:hostexec-node2-65rwt ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true} May 22 01:46:41.543: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory May 22 01:46:41.655: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-83c2b6ba-1045-4b8e-a574-008ec51646bb] Namespace:persistent-local-volumes-test-602 PodName:hostexec-node2-65rwt ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true} May 22 01:46:41.655: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "node2" at path "/tmp/local-volume-test-08be4617-9ab1-49fd-8643-35560aaa1b67" May 22 01:46:41.773: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-08be4617-9ab1-49fd-8643-35560aaa1b67"] Namespace:persistent-local-volumes-test-602 PodName:hostexec-node2-65rwt ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true} May 22 01:46:41.773: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory May 22 01:46:41.901: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-08be4617-9ab1-49fd-8643-35560aaa1b67] Namespace:persistent-local-volumes-test-602 PodName:hostexec-node2-65rwt ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true} May 22 01:46:41.901: INFO: >>> kubeConfig: /root/.kube/config STEP: Unmount tmpfs mount point on node "node2" at path "/tmp/local-volume-test-e5c92b25-2af5-4ee6-aebd-7e551d6bc5e8" May 22 01:46:42.004: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/tmp/local-volume-test-e5c92b25-2af5-4ee6-aebd-7e551d6bc5e8"] Namespace:persistent-local-volumes-test-602 PodName:hostexec-node2-65rwt ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true} May 22 01:46:42.004: INFO: >>> kubeConfig: /root/.kube/config STEP: Removing the test directory May 22 01:46:42.117: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-volume-test-e5c92b25-2af5-4ee6-aebd-7e551d6bc5e8] Namespace:persistent-local-volumes-test-602 PodName:hostexec-node2-65rwt ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true} May 22 01:46:42.117: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-storage] PersistentVolumes-local /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 STEP: Collecting events from namespace "persistent-local-volumes-test-602". STEP: Found 72 events. 
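
Context for the dump that follows: the stress spec created 20 local PVs (the 10 tmpfs mounts per node set up above) and then 7 pods, each claiming 3 PVCs, so the 21 claims oversubscribe the 20 PVs and the recycler goroutine must free volumes as pods finish. "Waiting for all pods to complete successfully" polls for up to 5m0s and gives up with the "timed out waiting for the condition" error seen in the FAIL above. A simplified, dependency-free stand-in for that poll loop (the framework uses its own wait helpers; this only shows the shape of the logic):

    package main

    import (
    	"errors"
    	"fmt"
    	"time"
    )

    // errWaitTimeout mirrors the error text in the FAIL above.
    var errWaitTimeout = errors.New("timed out waiting for the condition")

    // pollImmediate checks cond right away, then every interval, until it
    // returns true, returns an error, or the timeout elapses.
    func pollImmediate(interval, timeout time.Duration, cond func() (bool, error)) error {
    	deadline := time.Now().Add(timeout)
    	for {
    		done, err := cond()
    		if err != nil {
    			return err
    		}
    		if done {
    			return nil
    		}
    		if time.Now().After(deadline) {
    			return errWaitTimeout
    		}
    		time.Sleep(interval)
    	}
    }

    func main() {
    	cond := func() (bool, error) {
    		// The test would check that all 7 pods reached phase
    		// Succeeded; never succeeding here shows the timeout path.
    		return false, nil
    	}
    	// The spec uses 5m0s; a short timeout keeps the demo quick.
    	err := pollImmediate(200*time.Millisecond, 2*time.Second, cond)
    	fmt.Println(err) // timed out waiting for the condition
    }

The 72 events below show why the pods never completed.
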
May 22 01:46:42.231: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for hostexec-node1-jhb47: { } Scheduled: Successfully assigned persistent-local-volumes-test-602/hostexec-node1-jhb47 to node1 May 22 01:46:42.231: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for hostexec-node2-65rwt: { } Scheduled: Successfully assigned persistent-local-volumes-test-602/hostexec-node2-65rwt to node2 May 22 01:46:42.231: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for pod-0084f020-1258-4a68-95a8-4b283dd90314: { } Scheduled: Successfully assigned persistent-local-volumes-test-602/pod-0084f020-1258-4a68-95a8-4b283dd90314 to node2 May 22 01:46:42.231: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for pod-176c9043-50f6-4264-9f78-fc63846f4518: { } Scheduled: Successfully assigned persistent-local-volumes-test-602/pod-176c9043-50f6-4264-9f78-fc63846f4518 to node2 May 22 01:46:42.231: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for pod-6216f1db-6681-43ef-9439-1c13c24b2cb4: { } Scheduled: Successfully assigned persistent-local-volumes-test-602/pod-6216f1db-6681-43ef-9439-1c13c24b2cb4 to node1 May 22 01:46:42.231: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for pod-8062d881-108f-4ef9-944f-1bfd6918c57a: { } Scheduled: Successfully assigned persistent-local-volumes-test-602/pod-8062d881-108f-4ef9-944f-1bfd6918c57a to node2 May 22 01:46:42.231: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for pod-8bc65a05-64cb-4966-b2a7-e4b3db430b88: { } FailedScheduling: skip schedule deleting pod: persistent-local-volumes-test-602/pod-8bc65a05-64cb-4966-b2a7-e4b3db430b88 May 22 01:46:42.231: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for pod-8bc65a05-64cb-4966-b2a7-e4b3db430b88: { } FailedScheduling: 0/5 nodes are available: 2 node(s) didn't find available persistent volumes to bind, 3 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate. May 22 01:46:42.231: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for pod-8bc65a05-64cb-4966-b2a7-e4b3db430b88: { } FailedScheduling: 0/5 nodes are available: 2 node(s) didn't find available persistent volumes to bind, 3 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate. 
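
Two distinct failures are visible in these events. The FailedScheduling entries above are the expected oversubscription case: with 21 claims against 20 PVs, pod-8bc65a05 found no bindable volumes on the 2 schedulable nodes (the 3 masters are tainted), because the pods holding the PVs never completed and the recycler had nothing to free. The actual root cause is in the entries that follow: every scheduled pod hit Docker Hub's anonymous pull rate limit ("toomanyrequests") pulling busybox:1.29, went into ImagePullBackOff, and never ran. The remaining quota can be checked against Docker Hub's rate-limit preview repository; a hedged sketch, with endpoint and header names as documented by Docker around the time of these logs:

    package main

    import (
    	"encoding/json"
    	"fmt"
    	"net/http"
    )

    func main() {
    	// 1. Fetch an anonymous pull token for the preview repository.
    	resp, err := http.Get("https://auth.docker.io/token?service=registry.docker.io&scope=repository:ratelimitpreview/test:pull")
    	if err != nil {
    		panic(err)
    	}
    	defer resp.Body.Close()
    	var tok struct {
    		Token string `json:"token"`
    	}
    	if err := json.NewDecoder(resp.Body).Decode(&tok); err != nil {
    		panic(err)
    	}

    	// 2. HEAD the manifest; the rate-limit headers ride on the
    	// response, and per Docker's docs a HEAD does not consume a pull.
    	req, _ := http.NewRequest(http.MethodHead,
    		"https://registry-1.docker.io/v2/ratelimitpreview/test/manifests/latest", nil)
    	req.Header.Set("Authorization", "Bearer "+tok.Token)
    	res, err := http.DefaultClient.Do(req)
    	if err != nil {
    		panic(err)
    	}
    	defer res.Body.Close()
    	fmt.Println("ratelimit-limit:    ", res.Header.Get("RateLimit-Limit"))
    	fmt.Println("ratelimit-remaining:", res.Header.Get("RateLimit-Remaining"))
    }

Authenticating the nodes' container runtime against Docker Hub, or mirroring busybox into a registry that is not rate limited, would avoid this class of flake entirely.
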
May 22 01:46:42.231: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for pod-d483119e-2137-4ef3-9ec0-04935c4a44ae: { } Scheduled: Successfully assigned persistent-local-volumes-test-602/pod-d483119e-2137-4ef3-9ec0-04935c4a44ae to node1 May 22 01:46:42.231: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for pod-d57e13e9-7cfe-4177-9144-404170d6bfef: { } Scheduled: Successfully assigned persistent-local-volumes-test-602/pod-d57e13e9-7cfe-4177-9144-404170d6bfef to node1 May 22 01:46:42.231: INFO: At 2021-05-22 01:41:28 +0000 UTC - event for hostexec-node1-jhb47: {kubelet node1} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.20" in 520.397247ms May 22 01:46:42.231: INFO: At 2021-05-22 01:41:28 +0000 UTC - event for hostexec-node1-jhb47: {kubelet node1} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.20" May 22 01:46:42.231: INFO: At 2021-05-22 01:41:29 +0000 UTC - event for hostexec-node1-jhb47: {kubelet node1} Started: Started container agnhost-container May 22 01:46:42.231: INFO: At 2021-05-22 01:41:29 +0000 UTC - event for hostexec-node1-jhb47: {kubelet node1} Created: Created container agnhost-container May 22 01:46:42.231: INFO: At 2021-05-22 01:41:33 +0000 UTC - event for hostexec-node2-65rwt: {kubelet node2} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.20" May 22 01:46:42.231: INFO: At 2021-05-22 01:41:34 +0000 UTC - event for hostexec-node2-65rwt: {kubelet node2} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.20" in 500.564614ms May 22 01:46:42.231: INFO: At 2021-05-22 01:41:34 +0000 UTC - event for hostexec-node2-65rwt: {kubelet node2} Started: Started container agnhost-container May 22 01:46:42.231: INFO: At 2021-05-22 01:41:34 +0000 UTC - event for hostexec-node2-65rwt: {kubelet node2} Created: Created container agnhost-container May 22 01:46:42.231: INFO: At 2021-05-22 01:41:36 +0000 UTC - event for pvc-25d6m: {persistentvolume-controller } WaitForFirstConsumer: waiting for first consumer to be created before binding May 22 01:46:42.231: INFO: At 2021-05-22 01:41:36 +0000 UTC - event for pvc-bb5l6: {persistentvolume-controller } WaitForFirstConsumer: waiting for first consumer to be created before binding May 22 01:46:42.231: INFO: At 2021-05-22 01:41:36 +0000 UTC - event for pvc-fxjg9: {persistentvolume-controller } WaitForFirstConsumer: waiting for first consumer to be created before binding May 22 01:46:42.231: INFO: At 2021-05-22 01:41:36 +0000 UTC - event for pvc-g5c62: {persistentvolume-controller } WaitForFirstConsumer: waiting for first consumer to be created before binding May 22 01:46:42.231: INFO: At 2021-05-22 01:41:36 +0000 UTC - event for pvc-lk67f: {persistentvolume-controller } WaitForPodScheduled: waiting for pod pod-8062d881-108f-4ef9-944f-1bfd6918c57a to be scheduled May 22 01:46:42.231: INFO: At 2021-05-22 01:41:36 +0000 UTC - event for pvc-rbfv5: {persistentvolume-controller } WaitForFirstConsumer: waiting for first consumer to be created before binding May 22 01:46:42.231: INFO: At 2021-05-22 01:41:38 +0000 UTC - event for pvc-7mfqq: {persistentvolume-controller } WaitForPodScheduled: waiting for pod pod-8bc65a05-64cb-4966-b2a7-e4b3db430b88 to be scheduled May 22 01:46:42.231: INFO: At 2021-05-22 01:41:38 +0000 UTC - event for pvc-jzj42: {persistentvolume-controller } WaitForPodScheduled: waiting for pod pod-8bc65a05-64cb-4966-b2a7-e4b3db430b88 to be scheduled May 22 01:46:42.231: INFO: At 2021-05-22 01:41:38 +0000 UTC - event for pvc-skb8m: {persistentvolume-controller } 
WaitForPodScheduled: waiting for pod pod-8bc65a05-64cb-4966-b2a7-e4b3db430b88 to be scheduled May 22 01:46:42.231: INFO: At 2021-05-22 01:41:39 +0000 UTC - event for pod-8062d881-108f-4ef9-944f-1bfd6918c57a: {kubelet node2} Pulling: Pulling image "docker.io/library/busybox:1.29" May 22 01:46:42.231: INFO: At 2021-05-22 01:41:39 +0000 UTC - event for pod-8062d881-108f-4ef9-944f-1bfd6918c57a: {multus } AddedInterface: Add eth0 [10.244.4.120/24] May 22 01:46:42.231: INFO: At 2021-05-22 01:41:40 +0000 UTC - event for pod-176c9043-50f6-4264-9f78-fc63846f4518: {kubelet node2} Pulling: Pulling image "docker.io/library/busybox:1.29" May 22 01:46:42.231: INFO: At 2021-05-22 01:41:40 +0000 UTC - event for pod-176c9043-50f6-4264-9f78-fc63846f4518: {multus } AddedInterface: Add eth0 [10.244.4.121/24] May 22 01:46:42.231: INFO: At 2021-05-22 01:41:40 +0000 UTC - event for pod-8062d881-108f-4ef9-944f-1bfd6918c57a: {kubelet node2} Failed: Error: ErrImagePull May 22 01:46:42.231: INFO: At 2021-05-22 01:41:40 +0000 UTC - event for pod-8062d881-108f-4ef9-944f-1bfd6918c57a: {kubelet node2} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit May 22 01:46:42.231: INFO: At 2021-05-22 01:41:40 +0000 UTC - event for pod-d57e13e9-7cfe-4177-9144-404170d6bfef: {multus } AddedInterface: Add eth0 [10.244.3.237/24] May 22 01:46:42.231: INFO: At 2021-05-22 01:41:40 +0000 UTC - event for pod-d57e13e9-7cfe-4177-9144-404170d6bfef: {kubelet node1} Pulling: Pulling image "docker.io/library/busybox:1.29" May 22 01:46:42.232: INFO: At 2021-05-22 01:41:41 +0000 UTC - event for pod-176c9043-50f6-4264-9f78-fc63846f4518: {kubelet node2} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit May 22 01:46:42.232: INFO: At 2021-05-22 01:41:41 +0000 UTC - event for pod-176c9043-50f6-4264-9f78-fc63846f4518: {kubelet node2} Failed: Error: ErrImagePull May 22 01:46:42.232: INFO: At 2021-05-22 01:41:41 +0000 UTC - event for pod-8062d881-108f-4ef9-944f-1bfd6918c57a: {kubelet node2} SandboxChanged: Pod sandbox changed, it will be killed and re-created. May 22 01:46:42.232: INFO: At 2021-05-22 01:41:41 +0000 UTC - event for pod-d483119e-2137-4ef3-9ec0-04935c4a44ae: {multus } AddedInterface: Add eth0 [10.244.3.238/24] May 22 01:46:42.232: INFO: At 2021-05-22 01:41:41 +0000 UTC - event for pod-d483119e-2137-4ef3-9ec0-04935c4a44ae: {kubelet node1} Pulling: Pulling image "docker.io/library/busybox:1.29" May 22 01:46:42.232: INFO: At 2021-05-22 01:41:41 +0000 UTC - event for pod-d57e13e9-7cfe-4177-9144-404170d6bfef: {kubelet node1} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. 
You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit May 22 01:46:42.232: INFO: At 2021-05-22 01:41:41 +0000 UTC - event for pod-d57e13e9-7cfe-4177-9144-404170d6bfef: {kubelet node1} Failed: Error: ErrImagePull May 22 01:46:42.232: INFO: At 2021-05-22 01:41:42 +0000 UTC - event for pod-0084f020-1258-4a68-95a8-4b283dd90314: {kubelet node2} Pulling: Pulling image "docker.io/library/busybox:1.29" May 22 01:46:42.232: INFO: At 2021-05-22 01:41:42 +0000 UTC - event for pod-0084f020-1258-4a68-95a8-4b283dd90314: {multus } AddedInterface: Add eth0 [10.244.4.122/24] May 22 01:46:42.232: INFO: At 2021-05-22 01:41:42 +0000 UTC - event for pod-176c9043-50f6-4264-9f78-fc63846f4518: {kubelet node2} SandboxChanged: Pod sandbox changed, it will be killed and re-created. May 22 01:46:42.232: INFO: At 2021-05-22 01:41:42 +0000 UTC - event for pod-6216f1db-6681-43ef-9439-1c13c24b2cb4: {multus } AddedInterface: Add eth0 [10.244.3.239/24] May 22 01:46:42.232: INFO: At 2021-05-22 01:41:42 +0000 UTC - event for pod-6216f1db-6681-43ef-9439-1c13c24b2cb4: {kubelet node1} Pulling: Pulling image "docker.io/library/busybox:1.29" May 22 01:46:42.232: INFO: At 2021-05-22 01:41:42 +0000 UTC - event for pod-d483119e-2137-4ef3-9ec0-04935c4a44ae: {kubelet node1} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit May 22 01:46:42.232: INFO: At 2021-05-22 01:41:42 +0000 UTC - event for pod-d483119e-2137-4ef3-9ec0-04935c4a44ae: {kubelet node1} Failed: Error: ErrImagePull May 22 01:46:42.232: INFO: At 2021-05-22 01:41:42 +0000 UTC - event for pod-d57e13e9-7cfe-4177-9144-404170d6bfef: {kubelet node1} BackOff: Back-off pulling image "docker.io/library/busybox:1.29" May 22 01:46:42.232: INFO: At 2021-05-22 01:41:42 +0000 UTC - event for pod-d57e13e9-7cfe-4177-9144-404170d6bfef: {kubelet node1} Failed: Error: ImagePullBackOff May 22 01:46:42.232: INFO: At 2021-05-22 01:41:43 +0000 UTC - event for pod-0084f020-1258-4a68-95a8-4b283dd90314: {kubelet node2} Failed: Error: ErrImagePull May 22 01:46:42.232: INFO: At 2021-05-22 01:41:43 +0000 UTC - event for pod-0084f020-1258-4a68-95a8-4b283dd90314: {kubelet node2} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. 
You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit May 22 01:46:42.232: INFO: At 2021-05-22 01:41:43 +0000 UTC - event for pod-d483119e-2137-4ef3-9ec0-04935c4a44ae: {kubelet node1} Failed: Error: ImagePullBackOff May 22 01:46:42.232: INFO: At 2021-05-22 01:41:43 +0000 UTC - event for pod-d483119e-2137-4ef3-9ec0-04935c4a44ae: {kubelet node1} BackOff: Back-off pulling image "docker.io/library/busybox:1.29" May 22 01:46:42.232: INFO: At 2021-05-22 01:41:44 +0000 UTC - event for pod-0084f020-1258-4a68-95a8-4b283dd90314: {kubelet node2} Failed: Error: ImagePullBackOff May 22 01:46:42.232: INFO: At 2021-05-22 01:41:44 +0000 UTC - event for pod-0084f020-1258-4a68-95a8-4b283dd90314: {kubelet node2} BackOff: Back-off pulling image "docker.io/library/busybox:1.29" May 22 01:46:42.232: INFO: At 2021-05-22 01:41:44 +0000 UTC - event for pod-176c9043-50f6-4264-9f78-fc63846f4518: {multus } AddedInterface: Add eth0 [10.244.4.124/24] May 22 01:46:42.232: INFO: At 2021-05-22 01:41:44 +0000 UTC - event for pod-176c9043-50f6-4264-9f78-fc63846f4518: {kubelet node2} Failed: Error: ImagePullBackOff May 22 01:46:42.232: INFO: At 2021-05-22 01:41:44 +0000 UTC - event for pod-176c9043-50f6-4264-9f78-fc63846f4518: {kubelet node2} BackOff: Back-off pulling image "docker.io/library/busybox:1.29" May 22 01:46:42.232: INFO: At 2021-05-22 01:41:44 +0000 UTC - event for pod-6216f1db-6681-43ef-9439-1c13c24b2cb4: {kubelet node1} Failed: Error: ErrImagePull May 22 01:46:42.232: INFO: At 2021-05-22 01:41:44 +0000 UTC - event for pod-6216f1db-6681-43ef-9439-1c13c24b2cb4: {kubelet node1} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit May 22 01:46:42.232: INFO: At 2021-05-22 01:41:44 +0000 UTC - event for pod-8062d881-108f-4ef9-944f-1bfd6918c57a: {kubelet node2} BackOff: Back-off pulling image "docker.io/library/busybox:1.29" May 22 01:46:42.232: INFO: At 2021-05-22 01:41:44 +0000 UTC - event for pod-8062d881-108f-4ef9-944f-1bfd6918c57a: {multus } AddedInterface: Add eth0 [10.244.4.123/24] May 22 01:46:42.232: INFO: At 2021-05-22 01:41:44 +0000 UTC - event for pod-8062d881-108f-4ef9-944f-1bfd6918c57a: {kubelet node2} Failed: Error: ImagePullBackOff May 22 01:46:42.232: INFO: At 2021-05-22 01:41:45 +0000 UTC - event for pod-6216f1db-6681-43ef-9439-1c13c24b2cb4: {kubelet node1} BackOff: Back-off pulling image "docker.io/library/busybox:1.29" May 22 01:46:42.232: INFO: At 2021-05-22 01:41:45 +0000 UTC - event for pod-6216f1db-6681-43ef-9439-1c13c24b2cb4: {kubelet node1} Failed: Error: ImagePullBackOff May 22 01:46:42.232: INFO: At 2021-05-22 01:42:23 +0000 UTC - event for pod-8062d881-108f-4ef9-944f-1bfd6918c57a: {kubelet node2} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
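------------------------------
The burst of Pulling / Failed / BackOff events above is the actual failure mode of this run: every pull of "docker.io/library/busybox:1.29" is rejected by Docker Hub's anonymous pull rate limit (the toomanyrequests responses), the kubelet reports that as ErrImagePull, and the repeated failures drive each pod into ImagePullBackOff, so the test pods never start. A minimal client-go sketch of the usual remediation, authenticating pulls through an imagePullSecret attached to the namespace's default ServiceAccount, is below; the credentials, secret name, and namespace are placeholders rather than values from this run.

// docker_pull_auth.go: a sketch of attaching Docker Hub credentials to a
// namespace's default ServiceAccount so pods pull with authenticated limits.
// The user/token, secret name, and namespace are hypothetical placeholders.
package main

import (
	"context"
	"encoding/base64"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ns := "default" // placeholder namespace

	// Registry login encoded the way .dockerconfigjson expects it.
	auth := base64.StdEncoding.EncodeToString([]byte("user:token")) // placeholder credentials
	dockerCfg := fmt.Sprintf(`{"auths":{"https://index.docker.io/v1/":{"auth":%q}}}`, auth)

	sec := &corev1.Secret{
		ObjectMeta: metav1.ObjectMeta{Name: "dockerhub-cred", Namespace: ns},
		Type:       corev1.SecretTypeDockerConfigJson,
		Data:       map[string][]byte{corev1.DockerConfigJsonKey: []byte(dockerCfg)},
	}
	if _, err := cs.CoreV1().Secrets(ns).Create(context.TODO(), sec, metav1.CreateOptions{}); err != nil {
		panic(err)
	}

	// Reference the secret from the default ServiceAccount so every pod in the
	// namespace pulls with these credentials instead of the anonymous quota.
	sa, err := cs.CoreV1().ServiceAccounts(ns).Get(context.TODO(), "default", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	sa.ImagePullSecrets = append(sa.ImagePullSecrets, corev1.LocalObjectReference{Name: "dockerhub-cred"})
	if _, err := cs.CoreV1().ServiceAccounts(ns).Update(context.TODO(), sa, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
}

Pods created after this update pull under the authenticated (higher) quota; mirroring the test images into a local registry, as this cluster already does for some images at localhost:30500, avoids Docker Hub entirely.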
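The three WaitForFirstConsumer events that follow are, unlike the pull failures, expected behavior: each pvc-* claim uses a StorageClass whose volumeBindingMode is WaitForFirstConsumer, so the persistentvolume-controller deliberately leaves the claim unbound until a pod that consumes it is scheduled, letting binding take node topology into account. A sketch of such a StorageClass; the name and the no-provisioner setup (common for the local volumes such storage tests exercise) are assumptions, not values read from this cluster:

// A StorageClass with deferred binding; "local-wffc" and the no-provisioner
// configuration are illustrative placeholders.
package main

import (
	"fmt"

	storagev1 "k8s.io/api/storage/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	mode := storagev1.VolumeBindingWaitForFirstConsumer
	sc := storagev1.StorageClass{
		ObjectMeta:        metav1.ObjectMeta{Name: "local-wffc"},
		Provisioner:       "kubernetes.io/no-provisioner",
		VolumeBindingMode: &mode,
	}
	fmt.Printf("%s binds with mode %s\n", sc.Name, *sc.VolumeBindingMode)
}
------------------------------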
May 22 01:46:42.232: INFO: At 2021-05-22 01:46:36 +0000 UTC - event for pvc-7mfqq: {persistentvolume-controller } WaitForFirstConsumer: waiting for first consumer to be created before binding May 22 01:46:42.232: INFO: At 2021-05-22 01:46:36 +0000 UTC - event for pvc-jzj42: {persistentvolume-controller } WaitForFirstConsumer: waiting for first consumer to be created before binding May 22 01:46:42.232: INFO: At 2021-05-22 01:46:36 +0000 UTC - event for pvc-skb8m: {persistentvolume-controller } WaitForFirstConsumer: waiting for first consumer to be created before binding May 22 01:46:42.235: INFO: POD NODE PHASE GRACE CONDITIONS May 22 01:46:42.235: INFO: hostexec-node1-jhb47 node1 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-22 01:41:27 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-05-22 01:41:30 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-05-22 01:41:30 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-22 01:41:27 +0000 UTC }] May 22 01:46:42.235: INFO: hostexec-node2-65rwt node2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-22 01:41:33 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-05-22 01:41:35 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-05-22 01:41:35 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-22 01:41:33 +0000 UTC }] May 22 01:46:42.235: INFO: May 22 01:46:42.240: INFO: Logging node info for node master1 May 22 01:46:42.242: INFO: Node Info: &Node{ObjectMeta:{master1 /api/v1/nodes/master1 18cf10c1-c08b-4d36-bbe6-5aa86f02296e 167587 0 2021-05-21 19:55:07 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master1 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"d2:e0:bb:d8:54:80"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.202 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/master.version:v0.7.0 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2021-05-21 19:55:07 +0000 UTC FieldsV1
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {kubeadm Update v1 2021-05-21 19:55:08 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/master":{}}}}} {flanneld Update v1 2021-05-21 19:57:39 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-05-21 19:57:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}} {nfd-master Update v1 2021-05-21 20:04:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/master.version":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234763776 0} {} 196518324Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324599808 0} {} 195629492Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-21 19:59:55 +0000 
UTC,LastTransitionTime:2021-05-21 19:59:55 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-22 01:46:35 +0000 UTC,LastTransitionTime:2021-05-21 19:55:07 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-22 01:46:35 +0000 UTC,LastTransitionTime:2021-05-21 19:55:07 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-22 01:46:35 +0000 UTC,LastTransitionTime:2021-05-21 19:55:07 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-22 01:46:35 +0000 UTC,LastTransitionTime:2021-05-21 19:59:16 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.202,},NodeAddress{Type:Hostname,Address:master1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:0b916e2f445c4d05b6a9058a788d9410,SystemUUID:00ACFB60-0631-E711-906E-0017A4403562,BootID:ab86499b-409f-44c9-86ff-e9bc113e4112,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:726676532,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f kubernetesui/dashboard-amd64:v2.1.0],SizeBytes:225733746,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0 k8s.gcr.io/e2e-test-images/agnhost:2.20],SizeBytes:113869866,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[gcr.io/k8s-staging-nfd/node-feature-discovery@sha256:5d116c2c340be665a2c8adc9aca7f91396bd5cbde4add4fdc8dab95d8db43425 gcr.io/k8s-staging-nfd/node-feature-discovery:v0.7.0],SizeBytes:108309584,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 
quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:73ca82b4ce829766d4f1f10947c3a338888f876fbed0540dc849c89ff256e90c k8s.gcr.io/coredns:1.7.0],SizeBytes:45227747,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[registry@sha256:1cd9409a311350c3072fe510b52046f104416376c126a479cef9a4dfe692cf57 registry:2.7.0],SizeBytes:24191168,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[localhost:30500/tas-controller@sha256:f013255695f4515c5b21b11281c7e0fb491082d15ec5a96adb8217e015a9c422 tas-controller:latest localhost:30500/tas-controller:0.1],SizeBytes:22922439,},ContainerImage{Names:[nginx@sha256:a97eb9ecc708c8aa715ccfb5e9338f5456e4b65575daf304f108301f3b497314 nginx:1.19.2-alpine],SizeBytes:22052669,},ContainerImage{Names:[localhost:30500/tas-extender@sha256:50746f542c19fda01d88ae124ce58c5a326dad7cd24d3c2d19fdf959cc7f0c49 tas-extender:latest localhost:30500/tas-extender:0.1],SizeBytes:21320903,},ContainerImage{Names:[@ :],SizeBytes:5577654,},ContainerImage{Names:[alpine@sha256:c0e9560cda118f9ec63ddefb4a173a2b2a0347082d7dff7dc14272e7841a5b5a alpine:3.12.1],SizeBytes:5573013,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 22 01:46:42.243: INFO: Logging kubelet events for node master1 May 22 01:46:42.245: INFO: Logging pods the kubelet thinks is on node master1 May 22 01:46:42.259: INFO: kube-proxy-zv2rb started at 2021-05-21 19:57:00 +0000 UTC (0+1 container statuses recorded) May 22 01:46:42.259: INFO: Container kube-proxy ready: true, restart count 2 May 22 01:46:42.259: INFO: kube-flannel-5lkd2 started at 2021-05-21 19:57:34 +0000 UTC (1+1 container statuses recorded) May 22 01:46:42.259: INFO: Init container install-cni ready: true, restart count 0 May 22 01:46:42.259: INFO: Container kube-flannel ready: true, restart count 2 May 22 01:46:42.259: INFO: coredns-7677f9bb54-wl7h4 started at 2021-05-22 00:44:35 +0000 UTC (0+1 container statuses recorded) May 22 01:46:42.259: INFO: Container coredns ready: true, restart count 0 May 22 01:46:42.259: INFO: kube-scheduler-master1 started at 2021-05-21 19:59:16 +0000 UTC (0+1 container statuses recorded) May 22 01:46:42.259: INFO: Container kube-scheduler ready: true, restart count 0 May 22 01:46:42.259: INFO: kube-apiserver-master1 started at 2021-05-21 20:03:10 +0000 UTC (0+1 container 
statuses recorded) May 22 01:46:42.259: INFO: Container kube-apiserver ready: true, restart count 0 May 22 01:46:42.259: INFO: kube-controller-manager-master1 started at 2021-05-21 19:59:16 +0000 UTC (0+1 container statuses recorded) May 22 01:46:42.259: INFO: Container kube-controller-manager ready: true, restart count 3 May 22 01:46:42.259: INFO: kube-multus-ds-amd64-z8khx started at 2021-05-21 19:57:42 +0000 UTC (0+1 container statuses recorded) May 22 01:46:42.259: INFO: Container kube-multus ready: true, restart count 1 May 22 01:46:42.259: INFO: docker-registry-docker-registry-56cbc7bc58-mft7s started at 2021-05-21 20:00:26 +0000 UTC (0+2 container statuses recorded) May 22 01:46:42.259: INFO: Container docker-registry ready: true, restart count 0 May 22 01:46:42.259: INFO: Container nginx ready: true, restart count 0 May 22 01:46:42.259: INFO: node-feature-discovery-controller-5bf5c49849-ktcdq started at 2021-05-21 20:03:57 +0000 UTC (0+1 container statuses recorded) May 22 01:46:42.259: INFO: Container nfd-controller ready: true, restart count 0 May 22 01:46:42.259: INFO: node-exporter-m7jht started at 2021-05-21 20:07:54 +0000 UTC (0+2 container statuses recorded) May 22 01:46:42.259: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 22 01:46:42.259: INFO: Container node-exporter ready: true, restart count 0 W0522 01:46:42.270925 21 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. May 22 01:46:42.297: INFO: Latency metrics for node master1 May 22 01:46:42.297: INFO: Logging node info for node master2 May 22 01:46:42.301: INFO: Node Info: &Node{ObjectMeta:{master2 /api/v1/nodes/master2 d439093b-ce44-47e7-8b41-5739b7c49ca5 167698 0 2021-05-21 19:55:46 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master2 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"62:5a:fb:0b:f8:b0"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.203 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2021-05-21 19:55:46 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {kubeadm Update v1 2021-05-21 19:55:47 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/master":{}}}}} {flanneld Update v1 2021-05-21 19:57:39 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-05-21 19:57:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234763776 0} {} 196518324Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324599808 0} {} 195629492Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-21 20:00:49 +0000 UTC,LastTransitionTime:2021-05-21 20:00:49 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this 
node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-22 01:46:38 +0000 UTC,LastTransitionTime:2021-05-21 19:55:46 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-22 01:46:38 +0000 UTC,LastTransitionTime:2021-05-21 19:55:46 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-22 01:46:38 +0000 UTC,LastTransitionTime:2021-05-21 19:55:46 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-22 01:46:38 +0000 UTC,LastTransitionTime:2021-05-21 19:57:41 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.203,},NodeAddress{Type:Hostname,Address:master2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:495d1cf47bdb4c7e982c636c95de5648,SystemUUID:00A0DE53-E51D-E711-906E-0017A4403562,BootID:cca785c0-d61e-403e-91fd-d0d4f3fa573c,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:726676532,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f kubernetesui/dashboard-amd64:v2.1.0],SizeBytes:225733746,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0 k8s.gcr.io/e2e-test-images/agnhost:2.20],SizeBytes:113869866,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b 
k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:73ca82b4ce829766d4f1f10947c3a338888f876fbed0540dc849c89ff256e90c k8s.gcr.io/coredns:1.7.0],SizeBytes:45227747,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 22 01:46:42.301: INFO: Logging kubelet events for node master2 May 22 01:46:42.303: INFO: Logging pods the kubelet thinks is on node master2 May 22 01:46:42.318: INFO: kube-apiserver-master2 started at 2021-05-21 20:03:00 +0000 UTC (0+1 container statuses recorded) May 22 01:46:42.319: INFO: Container kube-apiserver ready: true, restart count 0 May 22 01:46:42.319: INFO: kube-proxy-shdjd started at 2021-05-21 19:57:00 +0000 UTC (0+1 container statuses recorded) May 22 01:46:42.319: INFO: Container kube-proxy ready: true, restart count 1 May 22 01:46:42.319: INFO: kube-flannel-tnf5x started at 2021-05-21 19:57:34 +0000 UTC (1+1 container statuses recorded) May 22 01:46:42.319: INFO: Init container install-cni ready: true, restart count 0 May 22 01:46:42.319: INFO: Container kube-flannel ready: true, restart count 1 May 22 01:46:42.319: INFO: kube-multus-ds-amd64-lwzkr started at 2021-05-21 19:57:42 +0000 UTC (0+1 container statuses recorded) May 22 01:46:42.319: INFO: Container kube-multus ready: true, restart count 1 May 22 01:46:42.319: INFO: dns-autoscaler-5b7b5c9b6f-dvlvw started at 2021-05-21 19:58:05 +0000 UTC (0+1 container statuses recorded) May 22 01:46:42.319: INFO: Container autoscaler ready: true, restart count 2 May 22 01:46:42.319: INFO: node-exporter-q52l5 started at 2021-05-21 20:07:54 +0000 UTC (0+2 container statuses recorded) May 22 01:46:42.319: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 22 01:46:42.319: INFO: Container node-exporter ready: true, restart count 0 May 22 01:46:42.319: INFO: kube-controller-manager-master2 started at 2021-05-21 19:59:58 +0000 UTC (0+1 container statuses recorded) May 22 01:46:42.319: INFO: Container kube-controller-manager ready: true, restart count 2 May 22 01:46:42.319: INFO: kube-scheduler-master2 started at 2021-05-21 20:00:08 +0000 UTC (0+1 container statuses recorded) May 22 01:46:42.319: INFO: Container kube-scheduler ready: true, restart count 2 W0522 01:46:42.331764 21 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. 
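------------------------------
Each per-node block in this dump follows the same pattern: a Node Info record (a Go-struct print of the complete v1.Node, whose useful fields are the node-role.kubernetes.io/master:NoSchedule taint that keeps test pods off the masters, the MemoryPressure/DiskPressure/PIDPressure/Ready conditions, and the cached image list) followed by the pods the kubelet is running there. The W0522 metrics_grabber warnings are benign: with no external client interface the framework simply skips grabbing ClusterAutoscaler metrics. A client-go sketch, not the framework's own code, that reproduces the informative parts of such a dump, assuming the suite's /root/.kube/config:

// node_dump_sketch.go: fetch one node, print its taints and Ready condition,
// then list the pods bound to it. "master2" stands in for any node name.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	node, err := cs.CoreV1().Nodes().Get(context.TODO(), "master2", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// The masters in this log carry node-role.kubernetes.io/master:NoSchedule,
	// which is why the busybox test pods could only land on node1 and node2.
	for _, t := range node.Spec.Taints {
		fmt.Printf("taint: %s=%s:%s\n", t.Key, t.Value, t.Effect)
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			fmt.Printf("%s Ready=%s (%s)\n", node.Name, c.Status, c.Reason)
		}
	}

	// Pods bound to this node, across all namespaces; the spec.nodeName field
	// selector is filtered server-side by the API server.
	pods, err := cs.CoreV1().Pods(metav1.NamespaceAll).List(context.TODO(), metav1.ListOptions{
		FieldSelector: "spec.nodeName=" + node.Name,
	})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("%s/%s phase=%s\n", p.Namespace, p.Name, p.Status.Phase)
	}
}
------------------------------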
May 22 01:46:42.364: INFO: Latency metrics for node master2 May 22 01:46:42.364: INFO: Logging node info for node master3 May 22 01:46:42.367: INFO: Node Info: &Node{ObjectMeta:{master3 /api/v1/nodes/master3 23066049-f3f1-42a7-9cea-19e20ed4ccec 167585 0 2021-05-21 19:55:56 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master3 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[flannel.alpha.coreos.com/backend-data:{"VtepMAC":"ae:a6:e4:e7:ba:02"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.204 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2021-05-21 19:55:56 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {kubeadm Update v1 2021-05-21 19:55:57 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/master":{}}}}} {flanneld Update v1 2021-05-21 19:57:39 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-05-21 19:57:41 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}},"f:taints":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234767872 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324603904 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-21 20:00:41 +0000 UTC,LastTransitionTime:2021-05-21 20:00:41 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-22 01:46:35 +0000 UTC,LastTransitionTime:2021-05-21 19:55:56 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-22 01:46:35 +0000 UTC,LastTransitionTime:2021-05-21 19:55:56 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-22 01:46:35 +0000 UTC,LastTransitionTime:2021-05-21 19:55:56 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-22 01:46:35 +0000 UTC,LastTransitionTime:2021-05-21 19:57:41 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.204,},NodeAddress{Type:Hostname,Address:master3,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:660844f5cb0944868621042562b928e3,SystemUUID:008B1444-141E-E711-906E-0017A4403562,BootID:02066ff3-4833-4130-ba94-138c581e4cc0,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:726676532,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f kubernetesui/dashboard-amd64:v2.1.0],SizeBytes:225733746,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e 
k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0 k8s.gcr.io/e2e-test-images/agnhost:2.20],SizeBytes:113869866,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:73ca82b4ce829766d4f1f10947c3a338888f876fbed0540dc849c89ff256e90c k8s.gcr.io/coredns:1.7.0],SizeBytes:45227747,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 22 01:46:42.368: INFO: Logging kubelet events for node master3 May 22 01:46:42.370: INFO: Logging pods the kubelet thinks is on node master3 May 22 01:46:42.385: INFO: kube-controller-manager-master3 started at 2021-05-21 20:03:00 +0000 UTC (0+1 container statuses recorded) May 22 01:46:42.385: INFO: Container kube-controller-manager ready: true, restart count 1 May 22 01:46:42.385: INFO: kube-scheduler-master3 started at 2021-05-21 20:03:10 +0000 UTC (0+1 container statuses recorded) May 22 01:46:42.385: INFO: Container kube-scheduler ready: true, restart count 1 May 22 01:46:42.385: INFO: kube-apiserver-master3 started at 2021-05-21 20:03:20 +0000 UTC (0+1 container statuses recorded) May 22 01:46:42.385: INFO: Container kube-apiserver ready: true, restart count 0 May 22 01:46:42.385: INFO: kube-flannel-8sd6n started at 2021-05-21 19:57:34 +0000 UTC (1+1 container statuses recorded) May 22 01:46:42.385: INFO: Init container install-cni ready: true, restart count 0 May 22 01:46:42.385: INFO: Container kube-flannel ready: true, restart count 1 May 22 01:46:42.385: INFO: kube-multus-ds-amd64-zgdbl started at 2021-05-21 19:57:42 +0000 
UTC (0+1 container statuses recorded) May 22 01:46:42.385: INFO: Container kube-multus ready: true, restart count 1 May 22 01:46:42.385: INFO: coredns-7677f9bb54-7jdbv started at 2021-05-22 00:44:35 +0000 UTC (0+1 container statuses recorded) May 22 01:46:42.385: INFO: Container coredns ready: true, restart count 0 May 22 01:46:42.385: INFO: kube-proxy-hwwxt started at 2021-05-21 19:57:00 +0000 UTC (0+1 container statuses recorded) May 22 01:46:42.385: INFO: Container kube-proxy ready: true, restart count 2 May 22 01:46:42.385: INFO: node-exporter-s74rx started at 2021-05-21 20:07:54 +0000 UTC (0+2 container statuses recorded) May 22 01:46:42.385: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 22 01:46:42.385: INFO: Container node-exporter ready: true, restart count 0 W0522 01:46:42.398251 21 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. May 22 01:46:42.428: INFO: Latency metrics for node master3 May 22 01:46:42.428: INFO: Logging node info for node node1 May 22 01:46:42.431: INFO: Node Info: &Node{ObjectMeta:{node1 /api/v1/nodes/node1 62cfecb3-af08-42a4-ab85-25a317061b61 167729 0 2021-05-21 19:56:58 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.25.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:amd64 kubernetes.io/hostname:node1 kubernetes.io/os:linux] 
map[csi.volume.kubernetes.io/nodeid:{"csi-mock-csi-mock-volumes-1072":"csi-mock-csi-mock-volumes-1072","csi-mock-csi-mock-volumes-1813":"csi-mock-csi-mock-volumes-1813","csi-mock-csi-mock-volumes-3021":"csi-mock-csi-mock-volumes-3021","csi-mock-csi-mock-volumes-3986":"csi-mock-csi-mock-volumes-3986","csi-mock-csi-mock-volumes-4120":"csi-mock-csi-mock-volumes-4120","csi-mock-csi-mock-volumes-5362":"csi-mock-csi-mock-volumes-5362","csi-mock-csi-mock-volumes-7364":"csi-mock-csi-mock-volumes-7364","csi-mock-csi-mock-volumes-8090":"csi-mock-csi-mock-volumes-8090","csi-mock-csi-mock-volumes-9489":"csi-mock-csi-mock-volumes-9489"} flannel.alpha.coreos.com/backend-data:{"VtepMAC":"22:2e:2b:06:83:4a"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.207 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-hardware_multithreading,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.7.0 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-05-21 19:56:58 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2021-05-21 19:57:39 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2021-05-21 20:04:07 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2021-05-21 20:06:20 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {e2e.test Update v1 2021-05-22 01:15:35 +0000 UTC FieldsV1 {"f:status":{"f:capacity":{"f:example.com/fakePTSRes":{}}}}} {kube-controller-manager Update v1 2021-05-22 01:27:14 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.3.0/24\"":{}}}}} {kubelet Update v1 2021-05-22 01:27:38 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:csi.volume.kubernetes.io/nodeid":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:io.kubernetes.storage.mock/node":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cmk.intel.com/exclusive-cores":{},"f:cpu":{},"f:ephemeral-storage":{},"f:example.com/fakePTSRes":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:intel.com/intel_sriov_netdevice":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:intel.com/intel_sriov_netdevice":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.3.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},example.com/fakePTSRes: {{10 0} {} 10 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269628928 0} {} 196552372Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},example.com/fakePTSRes: {{10 0} {} 10 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884628480 0} {} 174692020Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-21 20:00:36 +0000 UTC,LastTransitionTime:2021-05-21 20:00:36 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-22 01:46:40 +0000 UTC,LastTransitionTime:2021-05-21 19:56:58 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-22 01:46:40 +0000 UTC,LastTransitionTime:2021-05-21 19:56:58 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk 
pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-22 01:46:40 +0000 UTC,LastTransitionTime:2021-05-21 19:56:58 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-22 01:46:40 +0000 UTC,LastTransitionTime:2021-05-21 19:57:40 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.207,},NodeAddress{Type:Hostname,Address:node1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:94b7bb79c41d4a0492f63fb5fb3c5cc0,SystemUUID:00CDA902-D022-E711-906E-0017A4403562,BootID:1e94d819-8d68-4948-b171-5fd2d8fc7ff5,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[localhost:30500/barometer-collectd@sha256:2b114e08442070e7232fcffc4cb89529bd5c9effe733ed690277a33772bf2d00 localhost:30500/barometer-collectd:stable],SizeBytes:1464382814,},ContainerImage{Names:[@ :],SizeBytes:1002487865,},ContainerImage{Names:[opnfv/barometer-collectd@sha256:ed5c574f653e2a39e784ff322033a2319aafde7366c803a88f20f7a2a8bc1efb opnfv/barometer-collectd:stable],SizeBytes:825413035,},ContainerImage{Names:[localhost:30500/cmk@sha256:1d865c665dfeeec5a879dca7b9945cc49f55f10921b4e729f0da0cdec7dedbf7 cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:726676532,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[golang@sha256:1636899c10870ab66c48d960a9df620f4f9e86a0c72fbacf36032d27404e7e6c golang:alpine3.12],SizeBytes:301156062,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2 k8s.gcr.io/etcd:3.4.13-0],SizeBytes:253392289,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:3af248961c56916aeca8eb4000c15d6cf6a69641ea92f0540865bb37b495932f kubernetesui/dashboard-amd64:v2.1.0],SizeBytes:225733746,},ContainerImage{Names:[grafana/grafana@sha256:89304bc2335f4976618548d7b93d165ed67369d3a051d2f627fc4e0aa3d0aff1 grafana/grafana:7.1.0],SizeBytes:179601493,},ContainerImage{Names:[quay.io/prometheus/prometheus@sha256:d4ba4dd1a9ebb90916d0bfed3c204adcb118ed24546bf8dd2e6b30fc0fd2009e quay.io/prometheus/prometheus:v2.20.0],SizeBytes:144886595,},ContainerImage{Names:[nginx@sha256:df13abe416e37eb3db4722840dd479b00ba193ac6606e7902331dcea50f4f1f2 nginx:1.19],SizeBytes:133122553,},ContainerImage{Names:[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0 
k8s.gcr.io/e2e-test-images/agnhost:2.20],SizeBytes:113869866,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[gcr.io/k8s-staging-nfd/node-feature-discovery@sha256:5d116c2c340be665a2c8adc9aca7f91396bd5cbde4add4fdc8dab95d8db43425 gcr.io/k8s-staging-nfd/node-feature-discovery:v0.7.0],SizeBytes:108309584,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:ff02aacd9766d597883fabafc7ad604c719a57611db1bcc1564c69a45b000a55 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.17],SizeBytes:60684726,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[directxman12/k8s-prometheus-adapter-amd64@sha256:b63dc612e3cb73f79d2401a4516f794f9f0a83002600ca72e675e41baecff437 directxman12/k8s-prometheus-adapter-amd64:v0.6.0],SizeBytes:53267842,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:78e3393f5fd5ff6c1e5dada2478cfa456fb7164929e573cf9a87bf6532730679 k8s.gcr.io/sig-storage/csi-provisioner:v1.6.0],SizeBytes:48281550,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:2ffa647e8107cfd39e5f464e738dce014c9f5e51b108da36c3ab621048d0bbab k8s.gcr.io/sig-storage/csi-attacher:v2.2.0],SizeBytes:46131354,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:6c6a0332693a7c456378f6abd2bb40611826c1e1a733cadbdae2daab3125b71c k8s.gcr.io/sig-storage/csi-resizer:v0.5.0],SizeBytes:46041582,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:a4dc6d912ce1a8dd4c3a51b1cfb52454080ed36db95bf824895d5ecb7175199f nfvpe/sriov-device-plugin:latest localhost:30500/sriov-device-plugin:v3.3.1],SizeBytes:44392673,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:9d07c391aeb1a9d02eb4343c113ed01825227c70c32b3cae861711f90191b0fd quay.io/coreos/kube-rbac-proxy:v0.4.1],SizeBytes:41317870,},ContainerImage{Names:[quay.io/coreos/prometheus-operator@sha256:a54e806fb27d2fb0251da4f3b2a3bb5320759af63a54a755788304775f2384a7 quay.io/coreos/prometheus-operator:v0.40.0],SizeBytes:38238457,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[localhost:30500/tas-controller@sha256:f013255695f4515c5b21b11281c7e0fb491082d15ec5a96adb8217e015a9c422 localhost:30500/tas-controller:0.1],SizeBytes:22922439,},ContainerImage{Names:[localhost:30500/tas-extender@sha256:50746f542c19fda01d88ae124ce58c5a326dad7cd24d3c2d19fdf959cc7f0c49 
localhost:30500/tas-extender:0.1],SizeBytes:21320903,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/regression-issue-74839-amd64@sha256:3b36bd80b97c532a774e7f6246797b8575d97037982f353476c703ba6686c75c gcr.io/kubernetes-e2e-test-images/regression-issue-74839-amd64:1.0],SizeBytes:19227369,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:0b4273abac4c241fa3d70aaf52b0d79a133d2737081f4a5c5dea4949f6c45dc3 k8s.gcr.io/sig-storage/mock-driver:v3.1.0],SizeBytes:18687618,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:9622c6a6dac7499a055a382930f4de82905a3c5735c0753f7094115c9c871309 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v1.3.0],SizeBytes:16322467,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[quay.io/coreos/prometheus-config-reloader@sha256:c679a143b24b7731ad1577a9865aa3805426cbf1b25e30807b951dff68466ffd quay.io/coreos/prometheus-config-reloader:v0.40.0],SizeBytes:10131705,},ContainerImage{Names:[jimmidyson/configmap-reload@sha256:d107c7a235c266273b1c3502a391fec374430e5625539403d0de797fa9c556a2 jimmidyson/configmap-reload:v0.3.0],SizeBytes:9700438,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[alpine@sha256:36553b10a4947067b9fbb7d532951066293a68eae893beba1d9235f7d11a20ad alpine:3.12],SizeBytes:5581415,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:4753501,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
May 22 01:46:42.432: INFO: Logging kubelet events for node node1
May 22 01:46:42.434: INFO: Logging pods the kubelet thinks is on node node1
May 22 01:46:42.455: INFO: kube-multus-ds-amd64-wlmhr started at 2021-05-21 19:57:42 +0000 UTC (0+1 container statuses recorded)
May 22 01:46:42.455: INFO: Container kube-multus ready: true, restart count 1
May 22 01:46:42.455: INFO: prometheus-operator-5bb8cb9d8f-mzlrf started at 2021-05-21 20:07:47 +0000 UTC (0+2 container statuses recorded)
May 22 01:46:42.455: INFO: Container kube-rbac-proxy ready: true, restart count 0
May 22 01:46:42.455: INFO: Container prometheus-operator ready: true, restart count 0
May 22 01:46:42.455: INFO: node-exporter-l5k2r started at 2021-05-21 20:07:54 +0000 UTC (0+2 container statuses recorded)
May 22 01:46:42.455: INFO: Container kube-rbac-proxy ready: true, restart count 0
May 22 01:46:42.455: INFO: Container node-exporter ready: true, restart count 0
May 22 01:46:42.455: INFO: tas-telemetry-aware-scheduling-575ccbc9d4-r8d7k started at 2021-05-22 00:30:47 +0000 UTC (0+2 container statuses recorded)
May 22 01:46:42.455: INFO: Container tas-controller ready: true, restart count 0
May 22 01:46:42.455: INFO: Container tas-extender ready: true, restart count 0
May 22 01:46:42.455: INFO: prometheus-k8s-0 started at 2021-05-21 20:08:06 +0000 UTC (0+5 container statuses recorded)
May 22 01:46:42.455: INFO: Container custom-metrics-apiserver ready: true, restart count 0
May 22 01:46:42.455: INFO: Container grafana ready: true, restart count 0
May 22 01:46:42.455: INFO: Container prometheus ready: true, restart count 1
May 22 01:46:42.455: INFO: Container prometheus-config-reloader ready: true, restart count 0
May 22 01:46:42.455: INFO: Container rules-configmap-reloader ready: true, restart count 0
May 22 01:46:42.455: INFO: kube-flannel-k6mr4 started at 2021-05-21 19:57:34 +0000 UTC (1+1 container statuses recorded)
May 22 01:46:42.455: INFO: Init container install-cni ready: true, restart count 1
May 22 01:46:42.455: INFO: Container kube-flannel ready: true, restart count 1
May 22 01:46:42.455: INFO: collectd-mc5kl started at 2021-05-21 20:13:40 +0000 UTC (0+3 container statuses recorded)
May 22 01:46:42.455: INFO: Container collectd ready: true, restart count 0
May 22 01:46:42.455: INFO: Container collectd-exporter ready: true, restart count 0
May 22 01:46:42.455: INFO: Container rbac-proxy ready: true, restart count 0
May 22 01:46:42.455: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-czhnm started at 2021-05-21 20:04:29 +0000 UTC (0+1 container statuses recorded)
May 22 01:46:42.455: INFO: Container kube-sriovdp ready: true, restart count 0
May 22 01:46:42.455: INFO: kubernetes-dashboard-86c6f9df5b-8rsws started at 2021-05-21 19:58:07 +0000 UTC (0+1 container statuses recorded)
May 22 01:46:42.455: INFO: Container kubernetes-dashboard ready: true, restart count 1
May 22 01:46:42.455: INFO: kubernetes-metrics-scraper-678c97765c-nnrtl started at 2021-05-21 19:58:07 +0000 UTC (0+1 container statuses recorded)
May 22 01:46:42.455: INFO: Container kubernetes-metrics-scraper ready: true, restart count 2
May 22 01:46:42.455: INFO: nginx-proxy-node1 started at 2021-05-21 20:03:00 +0000 UTC (0+1 container statuses recorded)
May 22 01:46:42.455: INFO: Container nginx-proxy ready: true, restart count 1
May 22 01:46:42.455: INFO: cmk-h8jxp started at 2021-05-21 20:07:00 +0000 UTC (0+2 container statuses recorded)
May 22 01:46:42.455: INFO: Container nodereport ready: true, restart count 0
May 22 01:46:42.455: INFO: Container reconcile ready: true, restart count 0
May 22 01:46:42.455: INFO: hostexec-node1-jhb47 started at 2021-05-22 01:41:27 +0000 UTC (0+1 container statuses recorded)
May 22 01:46:42.455: INFO: Container agnhost-container ready: true, restart count 0
May 22 01:46:42.455: INFO: cmk-webhook-6c9d5f8578-8pz6w started at 2021-05-21 20:07:00 +0000 UTC (0+1 container statuses recorded)
May 22 01:46:42.455: INFO: Container cmk-webhook ready: true, restart count 0
May 22 01:46:42.455: INFO: kube-proxy-h5k9s started at 2021-05-21 19:57:00 +0000 UTC (0+1 container statuses recorded)
May 22 01:46:42.455: INFO: Container kube-proxy ready: true, restart count 1
May 22 01:46:42.455: INFO: cmk-init-discover-node1-48g7j started at 2021-05-21 20:06:17 +0000 UTC (0+3 container statuses recorded)
May 22 01:46:42.455: INFO: Container discover ready: false, restart count 0
May 22 01:46:42.455: INFO: Container init ready: false, restart count 0
May 22 01:46:42.455: INFO: Container install ready: false, restart count 0
May 22 01:46:42.455: INFO: node-feature-discovery-worker-lh5hz started at 2021-05-21 20:03:47 +0000 UTC (0+1 container statuses recorded)
May 22 01:46:42.455: INFO: Container nfd-worker ready: true, restart count 0
W0522 01:46:42.466334 21 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
May 22 01:46:42.511: INFO: Latency metrics for node node1
May 22 01:46:42.511: INFO: Logging node info for node node2
May 22 01:46:42.514: INFO: Node Info: &Node{ObjectMeta:{node2 /api/v1/nodes/node2 3f64ce47-e96b-43b8-9c91-df57a4e26826 167734 0 2021-05-21 19:56:58 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.25.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node2 kubernetes.io/os:linux] map[csi.volume.kubernetes.io/nodeid:{"csi-mock-csi-mock-volumes-1700":"csi-mock-csi-mock-volumes-1700","csi-mock-csi-mock-volumes-4959":"csi-mock-csi-mock-volumes-4959","csi-mock-csi-mock-volumes-5873":"csi-mock-csi-mock-volumes-5873","csi-mock-csi-mock-volumes-6723":"csi-mock-csi-mock-volumes-6723","csi-mock-csi-mock-volumes-6884":"csi-mock-csi-mock-volumes-6884","csi-mock-csi-mock-volumes-7303":"csi-mock-csi-mock-volumes-7303","csi-mock-csi-mock-volumes-8793":"csi-mock-csi-mock-volumes-8793","csi-mock-csi-mock-volumes-9199":"csi-mock-csi-mock-volumes-9199"} flannel.alpha.coreos.com/backend-data:{"VtepMAC":"4e:d8:e9:66:bc:b7"} flannel.alpha.coreos.com/backend-type:vxlan flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.208 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: 
nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-hardware_multithreading,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.7.0 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-05-21 19:56:58 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2021-05-21 19:57:39 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2021-05-21 20:04:09 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage
-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2021-05-21 20:06:43 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {e2e.test Update v1 2021-05-22 01:16:03 +0000 UTC FieldsV1 {"f:status":{"f:capacity":{"f:example.com/fakePTSRes":{},"f:example.com/fakecpu":{}}}}} {kube-controller-manager Update v1 2021-05-22 01:26:45 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.4.0/24\"":{}}}}} {kubelet Update v1 2021-05-22 01:26:56 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:csi.volume.kubernetes.io/nodeid":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cmk.intel.com/exclusive-cores":{},"f:cpu":{},"f:ephemeral-storage":{},"f:example.com/fakePTSRes":{},"f:example.com/fakecpu":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:intel.com/intel_sriov_netdevice":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:intel.com/intel_sriov_netdevice":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.4.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},example.com/fakePTSRes: {{10 0} {} 10 DecimalSI},example.com/fakecpu: {{1 3} {} 1k DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269633024 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},example.com/fakePTSRes: {{10 0} {} 10 DecimalSI},example.com/fakecpu: {{1 3} {} 1k DecimalSI},hugepages-1Gi: 
{{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884632576 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-05-21 20:00:39 +0000 UTC,LastTransitionTime:2021-05-21 20:00:39 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-22 01:46:40 +0000 UTC,LastTransitionTime:2021-05-21 19:56:58 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-22 01:46:40 +0000 UTC,LastTransitionTime:2021-05-21 19:56:58 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-22 01:46:40 +0000 UTC,LastTransitionTime:2021-05-21 19:56:58 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-22 01:46:40 +0000 UTC,LastTransitionTime:2021-05-21 19:57:40 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.208,},NodeAddress{Type:Hostname,Address:node2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:2aa9b8566664435b84c4146a11c874db,SystemUUID:80B3CD56-852F-E711-906E-0017A4403562,BootID:befe5c4e-169e-4c36-9e45-742bb80d4660,KernelVersion:3.10.0-1160.25.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://19.3.14,KubeletVersion:v1.19.8,KubeProxyVersion:v1.19.8,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[localhost:30500/barometer-collectd@sha256:2b114e08442070e7232fcffc4cb89529bd5c9effe733ed690277a33772bf2d00 localhost:30500/barometer-collectd:stable],SizeBytes:1464382814,},ContainerImage{Names:[localhost:30500/cmk@sha256:1d865c665dfeeec5a879dca7b9945cc49f55f10921b4e729f0da0cdec7dedbf7 localhost:30500/cmk:v1.5.1],SizeBytes:726676532,},ContainerImage{Names:[cmk:v1.5.1],SizeBytes:726676532,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:195659796,},ContainerImage{Names:[nginx@sha256:df13abe416e37eb3db4722840dd479b00ba193ac6606e7902331dcea50f4f1f2 nginx:1.19],SizeBytes:133122553,},ContainerImage{Names:[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:82e0ce4e1d08f3749d05c584fd60986197bfcdf9ce71d4666c71674221d53135 k8s.gcr.io/kube-apiserver:v1.19.8],SizeBytes:118813022,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:8ed30419d9cf8965854f9ed501159e15deb30c42c3d2a60a278ae169320d140e 
k8s.gcr.io/kube-proxy:v1.19.8],SizeBytes:117674285,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0 k8s.gcr.io/e2e-test-images/agnhost:2.20],SizeBytes:113869866,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:2769005fb667dbb936009894d01fe35f5ce1bce45eee80a9ce3c139b9be4080e k8s.gcr.io/kube-controller-manager:v1.19.8],SizeBytes:110805342,},ContainerImage{Names:[gcr.io/k8s-staging-nfd/node-feature-discovery@sha256:5d116c2c340be665a2c8adc9aca7f91396bd5cbde4add4fdc8dab95d8db43425 gcr.io/k8s-staging-nfd/node-feature-discovery:v0.7.0],SizeBytes:108309584,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel@sha256:ac5322604bcab484955e6dbc507f45a906bde79046667322e3918a8578ab08c8 quay.io/coreos/flannel:v0.13.0 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:78e3393f5fd5ff6c1e5dada2478cfa456fb7164929e573cf9a87bf6532730679 k8s.gcr.io/sig-storage/csi-provisioner:v1.6.0],SizeBytes:48281550,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:bb66135ce9a25ac405e43bbae6a2ac766e0efcac0a6a73ef9d1fbb4cf4732c9b k8s.gcr.io/kube-scheduler:v1.19.8],SizeBytes:46510430,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:2ffa647e8107cfd39e5f464e738dce014c9f5e51b108da36c3ab621048d0bbab k8s.gcr.io/sig-storage/csi-attacher:v2.2.0],SizeBytes:46131354,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:6c6a0332693a7c456378f6abd2bb40611826c1e1a733cadbdae2daab3125b71c k8s.gcr.io/sig-storage/csi-resizer:v0.5.0],SizeBytes:46041582,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:a4dc6d912ce1a8dd4c3a51b1cfb52454080ed36db95bf824895d5ecb7175199f localhost:30500/sriov-device-plugin:v3.3.1],SizeBytes:44392673,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonroot@sha256:4bd7ae247de5c988700233c5a4b55e804ffe90f8c66ae64853f1dae37b847213 gcr.io/kubernetes-e2e-test-images/nonroot:1.0],SizeBytes:42321438,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:a2f29256e53cc3e0b64d7a472512600b2e9410347d53cdc85b49f659c17e02ee quay.io/prometheus/node-exporter:v0.18.1],SizeBytes:22933477,},ContainerImage{Names:[localhost:30500/tas-controller@sha256:f013255695f4515c5b21b11281c7e0fb491082d15ec5a96adb8217e015a9c422 localhost:30500/tas-controller:0.1],SizeBytes:22922439,},ContainerImage{Names:[localhost:30500/tas-extender@sha256:50746f542c19fda01d88ae124ce58c5a326dad7cd24d3c2d19fdf959cc7f0c49 localhost:30500/tas-extender:0.1],SizeBytes:21320903,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:0b4273abac4c241fa3d70aaf52b0d79a133d2737081f4a5c5dea4949f6c45dc3 k8s.gcr.io/sig-storage/mock-driver:v3.1.0],SizeBytes:18687618,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:9622c6a6dac7499a055a382930f4de82905a3c5735c0753f7094115c9c871309 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v1.3.0],SizeBytes:16322467,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 
nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:6757579,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:4753501,},ContainerImage{Names:[gcr.io/authenticated-image-pulling/alpine@sha256:7ff177862cb50c602bfe81f805969412e619c054a2bbead977d0c276988aa4a0 gcr.io/authenticated-image-pulling/alpine:3.7],SizeBytes:4206620,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
May 22 01:46:42.516: INFO: Logging kubelet events for node node2
May 22 01:46:42.519: INFO: Logging pods the kubelet thinks is on node node2
May 22 01:46:42.531: INFO: collectd-rkmjk started at 2021-05-22 00:31:19 +0000 UTC (0+3 container statuses recorded)
May 22 01:46:42.531: INFO: Container collectd ready: true, restart count 0
May 22 01:46:42.531: INFO: Container collectd-exporter ready: true, restart count 0
May 22 01:46:42.531: INFO: Container rbac-proxy ready: true, restart count 0
May 22 01:46:42.531: INFO: node-exporter-jctsz started at 2021-05-22 00:30:49 +0000 UTC (0+2 container statuses recorded)
May 22 01:46:42.531: INFO: Container kube-rbac-proxy ready: true, restart count 0
May 22 01:46:42.531: INFO: Container node-exporter ready: true, restart count 0
May 22 01:46:42.531: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-jkg9k started at 2021-05-22 00:30:58 +0000 UTC (0+1 container statuses recorded)
May 22 01:46:42.531: INFO: Container kube-sriovdp ready: true, restart count 0
May 22 01:46:42.531: INFO: nginx-proxy-node2 started at 2021-05-21 20:03:00 +0000 UTC (0+1 container statuses recorded)
May 22 01:46:42.531: INFO: Container nginx-proxy ready: true, restart count 2
May 22 01:46:42.531: INFO: node-feature-discovery-worker-z827f started at 2021-05-22 00:30:50 +0000 UTC (0+1 container statuses recorded)
May 22 01:46:42.531: INFO: Container nfd-worker ready: true, restart count 0
May 22 01:46:42.531: INFO: kube-proxy-q57hf started at 2021-05-21 19:57:00 +0000 UTC (0+1 container statuses recorded)
May 22 01:46:42.531: INFO: Container kube-proxy ready: true, restart count 2
May 22 01:46:42.531: INFO: hostexec-node2-65rwt started at 2021-05-22 01:41:33 +0000 UTC (0+1 container statuses recorded)
May 22 01:46:42.531: INFO: Container agnhost-container ready: true, restart count 0
May 22 01:46:42.531: INFO: cmk-xtrv9 started at 2021-05-22 00:30:51 +0000 UTC (0+2 container statuses recorded)
May 22 01:46:42.531: INFO: Container nodereport ready: true, restart count 0
May 22 01:46:42.531: INFO: Container reconcile ready: true, restart count 0
May 22 01:46:42.531: INFO: kube-flannel-5p7gq started at 2021-05-21 19:57:34 +0000 UTC (1+1 container statuses recorded)
May 22 01:46:42.531: INFO: Init container install-cni ready: true, restart count 2
May 22 01:46:42.531: INFO: Container kube-flannel ready: true, restart count 2
May 22 01:46:42.531: INFO: kube-multus-ds-amd64-6q46t started at 2021-05-21 19:57:42 +0000 UTC (0+1 container statuses recorded)
May 22 01:46:42.531: INFO: Container kube-multus ready: true, restart count 1
W0522 01:46:42.541038 21 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
May 22 01:46:42.574: INFO: Latency metrics for node node2
May 22 01:46:42.574: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "persistent-local-volumes-test-602" for this suite.

• Failure [314.735 seconds]
[sig-storage] PersistentVolumes-local
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Stress with local volumes [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:427
    should be able to process many pods and reuse local volumes [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:517

    May 22 01:46:36.532: some pods failed to complete within 5m0s
    Unexpected error:
        <*errors.errorString | 0xc0003001f0>: {
            s: "timed out waiting for the condition",
        }
        timed out waiting for the condition
    occurred

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:610
------------------------------
{"msg":"FAILED [sig-storage] PersistentVolumes-local Stress with local volumes [Serial] should be able to process many pods and reuse local volumes","total":17,"completed":0,"skipped":5306,"failed":2,"failures":["[sig-storage] PersistentVolumes-local Pods sharing a single local PV [Serial] all pods should be running","[sig-storage] PersistentVolumes-local Stress with local volumes [Serial] should be able to process many pods and reuse local volumes"]}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] [Serial] Volume metrics should create volume metrics with the correct PVC ref
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:203
[BeforeEach] [sig-storage] [Serial] Volume metrics
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 22 01:46:42.582: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pv
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] [Serial] Volume metrics
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:56
May 22 01:46:42.601: INFO: Only supported for providers [gce gke aws] (not skeleton)
[AfterEach] [sig-storage] [Serial] Volume metrics
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 22 01:46:42.602: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pv-7786" for this suite.
[AfterEach] [sig-storage] [Serial] Volume metrics
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:81
S [SKIPPING] in Spec Setup (BeforeEach) [0.028 seconds]
[sig-storage] [Serial] Volume metrics
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  should create volume metrics with the correct PVC ref [BeforeEach]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:203

  Only supported for providers [gce gke aws] (not skeleton)

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:60
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] [Serial] Volume metrics should create volume metrics in Volume Manager
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:291
[BeforeEach] [sig-storage] [Serial] Volume metrics
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 22 01:46:42.612: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pv
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] [Serial] Volume metrics
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:56
May 22 01:46:42.629: INFO: Only supported for providers [gce gke aws] (not skeleton)
[AfterEach] [sig-storage] [Serial] Volume metrics
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 22 01:46:42.630: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pv-4719" for this suite.
[AfterEach] [sig-storage] [Serial] Volume metrics
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:81
S [SKIPPING] in Spec Setup (BeforeEach) [0.026 seconds]
[sig-storage] [Serial] Volume metrics
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  should create volume metrics in Volume Manager [BeforeEach]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:291

  Only supported for providers [gce gke aws] (not skeleton)

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:60
------------------------------
SS
May 22 01:46:42.638: INFO: Running AfterSuite actions on all nodes
May 22 01:46:42.638: INFO: Running AfterSuite actions on node 1
May 22 01:46:42.639: INFO: Skipping dumping logs from cluster

JUnit report was created: /home/opnfv/functest/results/sig_storage_serial/junit_01.xml
{"msg":"Test Suite completed","total":17,"completed":0,"skipped":5482,"failed":2,"failures":["[sig-storage] PersistentVolumes-local Pods sharing a single local PV [Serial] all pods should be running","[sig-storage] PersistentVolumes-local Stress with local volumes [Serial] should be able to process many pods and reuse local volumes"]}

Summarizing 2 Failures:

[Fail] [sig-storage] PersistentVolumes-local Pods sharing a single local PV [Serial] [It] all pods should be running
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:683

[Fail] [sig-storage] PersistentVolumes-local Stress with local volumes [Serial] [It] should be able to process many pods and reuse local volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:610

Ran 2 of 5484 Specs in 689.328 seconds
FAIL! -- 0 Passed | 2 Failed | 0 Pending | 5482 Skipped
--- FAIL: TestE2E (689.44s)
FAIL

Ginkgo ran 1 suite in 11m30.588345405s
Test Suite Failed